The Truth Behind Signatures on AI-Generated Art
Why a Signature on AI Art Isn’t a Sign of Theft
--
The results from generative art models, colloquially known as “AI art,” have been making waves on social media and in the art community. Many people are upset, asserting that these models steal the work of human artists. One of the biggest misconceptions, often cited as ‘proof’ of that theft, is the presence of a distorted signature in the bottom corner of an image. In this post, I’d like to clear up this and other misconceptions about how AI art is made and explain why it’s not as sinister as it may seem.
The key thing to understand is that when you see AI art with a signature, it’s not because the model is copying that artist’s work. It’s because the model has learned that when people use prompts like “by [famous artist],” they probably want the output to resemble something that artist might have created, and famous artwork is almost always signed. With prompts like “in the style of [famous artist],” you rarely see signatures, because the model treats that phrasing as a request for the artist’s style rather than for a finished, signed painting.
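To make that concrete, here is a rough sketch of how you might compare the two phrasings yourself using the open-source diffusers library and a public Stable Diffusion checkpoint. The model ID, artist name, prompts, and file names are all placeholders for illustration, not a prescription.

```python
from diffusers import StableDiffusionPipeline
import torch

# Illustrative sketch only: model ID, artist, and prompts are placeholders.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

artist = "Claude Monet"  # stand-in for any famous artist

# "by [artist]" asks for something that artist might have produced --
# and finished paintings by famous artists are usually signed.
image_by = pipe(f"a castle at sunset by {artist}").images[0]

# "in the style of [artist]" reads more like a style cue,
# so signature-like marks show up far less often.
image_style = pipe(f"a castle at sunset in the style of {artist}").images[0]

image_by.save("by_artist.png")
image_style.save("in_style_of.png")
```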
It’s also important to understand that these models are not capable of copying and pasting parts of existing artwork.
Generative AI models (Stable Diffusion, Midjourney, DALL·E 2) are trained on vast amounts of art and other visual data, much as a human artist is exposed to different concepts and styles throughout their life.
So how exactly does the model create an image from all of that trained knowledge? It actually starts with random noise, essentially a grid of random pixel values. The model then applies a process called “diffusion” (more precisely, iterative denoising), using what it learned during training to gradually transform the noise into an image over dozens or hundreds of iterations (steps). At each step, a neural network built from operations such as convolutions and upsampling predicts how the pixel values should be adjusted, and a little more noise is removed. Each iteration brings the image closer to the desired outcome until the scheduled number of steps is complete and the model outputs the finished image.
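If it helps to see the shape of that loop, here is a deliberately simplified sketch in Python. It is not a real diffusion model, and the “prediction” is just a flat placeholder image, but it shows the structure: start from random noise and nudge the pixel values a little closer to the prediction on every step.

```python
import numpy as np

# Toy illustration of iterative denoising -- NOT a real trained model.
rng = np.random.default_rng(0)

steps = 50                                  # real models run dozens to hundreds of steps
noise = rng.standard_normal((64, 64, 3))    # step 0: pure random pixel values
prediction = np.zeros((64, 64, 3))          # placeholder for what a trained network would predict

image = noise.copy()
for step in range(1, steps + 1):
    blend = step / steps                              # how far along the schedule we are
    image = (1 - blend) * noise + blend * prediction  # each step moves a bit closer to the prediction

# After the final step, `image` has been transformed from pure noise
# into the "finished" output.
```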
Imagine I asked you to draw a dragon, something you’ve obviously never seen in person and have only been exposed to through the art and interpretation of others. You would have to rely on your imagination and the knowledge you have about dragons from movies, books, and other media. You might think about what features a dragon usually has, such as wings, scales, and fire-breathing abilities. You might reference other artworks you’ve seen of creatures with similar features to dragons, such as birds or reptiles. You might also use your imagination to develop unique features your dragon could have.
The final result would be a unique representation of a dragon inspired by previous artwork but not a copy of any one dragon you’ve seen. Similarly, when AI generates art, it doesn’t copy a specific image it has seen; it creates something new based on the data it has been trained on and a healthy dose of randomness (noise).
I think it’s important to understand how this technology works before forming opinions about what it is or is not capable of doing. Hopefully, this post provides a useful surface-level overview for anyone concerned that these generative models are outright stealing art.
Does knowing that these models can’t copy/paste and that they start with random noise and iterate from there make you feel better or not? Let me know in the comments below.