Double Glazed: Taking Artists’ Rights Seriously and… Algorithmically

Posted on: March 4, 2024

The Cybernetic Milkmaid

How would Vermeer (1632-1675) feel if he were alive today and suddenly discovered that his artistic style could be reproduced by state-of-the-art generative artificial intelligence (GenAI) tools? How would Han van Meegeren (1889-1947), the skilful art forger who infamously fooled the Nazis with his faked Old Masters’ paintings (including Vermeer’s), react to GenAI? It is not unlikely that both Vermeer and his forger would feel amazed (or even threatened) by what GenAI is capable of achieving today. With GenAI tools such as those powered by Stable Diffusion, a few simple textual prompts can yield highly convincing and impressive outputs. One may wish to play with the easy-to-use Stable Diffusion Online (SDO) interface to see what might be generated. For example, we can generate a “Cybernetic Milkmaid” with a simple text-to-image prompt like this (a programmatic sketch of the same experiment follows the prompt):

  • “generate an image of a girl pouring milk out of a brown jug in a 17th-century Dutch kitchen in the style of Johannes Vermeer as a cybernetic robot”.*
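For readers who prefer code to a web interface, a roughly equivalent experiment can be run locally with Hugging Face’s diffusers library. The sketch below is a rough approximation only: the checkpoint name is one publicly available Stable Diffusion model (not necessarily the one behind SDO), and a CUDA-capable GPU is assumed.

```python
# A rough local equivalent of the SDO experiment, using Hugging Face's
# diffusers library. The checkpoint below is one publicly available
# Stable Diffusion model, not necessarily the one powering SDO.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")  # assumes a CUDA-capable GPU

prompt = (
    "a girl pouring milk out of a brown jug in a 17th-century Dutch "
    "kitchen in the style of Johannes Vermeer as a cybernetic robot"
)
image = pipe(prompt).images[0]  # run the text-to-image diffusion pass
image.save("cybernetic_milkmaid.png")
```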


I put Vermeer’s original Milkmaid and SDO’s Cybernetic Milkmaid side by side here for a quick forensic inspection. I will defer to our knowledgeable readers to decide whether they perceive substantial similarity between the two images.

On the left is a digital scan of Vermeer’s original 17th-century Milkmaid. On the right is the Cybernetic Milkmaid, generated on 22 Feb. 2024 using SDO. According to SDO, AI-generated images made with the tool are released under CC0, a Public Domain Dedication licence. The comparison plot was made with Python’s versatile matplotlib library.
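For the curious, a side-by-side comparison of this kind takes only a few lines of matplotlib. A minimal sketch follows, assuming local copies of the two images; the file names are hypothetical placeholders.

```python
# A minimal sketch of the side-by-side comparison plot; the file names
# below are hypothetical placeholders for local copies of the images.
import matplotlib.pyplot as plt
import matplotlib.image as mpimg

fig, axes = plt.subplots(1, 2, figsize=(10, 5))
titles = ["Vermeer, The Milkmaid (c. 1660)", "Cybernetic Milkmaid (SDO, 2024)"]
paths = ["milkmaid_original.png", "milkmaid_cybernetic.png"]
for ax, path, title in zip(axes, paths, titles):
    ax.imshow(mpimg.imread(path))  # load and display each image
    ax.set_title(title)
    ax.axis("off")                 # hide ticks for a cleaner gallery look
plt.tight_layout()
plt.show()
```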

There seems little doubt that many, if not all, AI-generated outputs can now pass the fabled Turing Test. Behaviouristically speaking, GenAI is capable of ‘forging’ a human artist’s style, easily tricking human viewers into believing that a given output was made by a human artist. In a similar fashion, human forgers are also likely to be outperformed by AI, leaving van Meegeren’s meticulous analogue techniques and tools (think, for example, of his purpose-made badger-hair paintbrush for faking Vermeer) obsolete in the digital age.

Vermeer and Law

Now let’s move on to a hypothetical legal scenario: suppose Vermeer is a living artist and his masterpieces are harvested, without his knowledge, as raw training data for a GenAI model. What recourse would be available to protect Vermeer’s interests? Conversely, what legal defences might shield those AI trainers from liability? Vermeer and his lawyers would need to fight a hard battle on two fronts. First, on the output front, Vermeer needs to show that an AI-generated work is both caused by and substantially similar to the original Milkmaid in order to establish copyright infringement. If GenAI transforms Vermeer’s works into something substantially dissimilar, no cause of action under copyright law will be available to him (one does not need to stretch one’s imagination to perceive the above Cybernetic Milkmaid as substantially different from the original Milkmaid, though this may not be universally agreed upon). This scenario closely mirrors what was actually disputed in the recent litigation Andersen v. Stability AI, where three plaintiff-artists faced tremendous difficulty in convincing Judge William Orrick (US District Court for the Northern District of California) that AI-generated outputs would be substantially similar to their works included in a training dataset. Second, on the input front, AI companies may resort to copyright defences such as ‘fair use’ (e.g., in the US) and the text and data mining exception (e.g., in the EU)* to legitimise unauthorised AI training, though the scope of these exceptions is yet to be fully tested in litigation. In summary, an artist like Vermeer would fight an uphill battle against GenAI companies that use his works as input to generate arguably substantially transformed outputs.

Glazing Vermeer Algorithmically

Given the vast legal challenges faced by artists, it is interesting to see that new technological measures are being developed to deter unauthorised AI training. One such pro-artist tool, known as Glaze and developed by a team at the University of Chicago, stands out as a highly plausible solution to help visual artists take more control of their copyright works. The tool’s name ‘Glaze’ intriguingly sounds like another of computer scientists’ playful double entendres, though I am not entirely sure if and how the pun is intended. Glaze’s FAQ section has some extremely informative and helpful answers to frequently asked questions, but I failed to find one explaining why the project is called ‘Glaze’. Maybe my little pedantry is a ‘rarely’ asked question instead of an FAQ. Here is my guess about the term ‘Glaze’, which may also help illustrate what the tool can do. First, ‘Glaze’ could be thought of as algorithmic ‘glass’ protecting digital artworks on the internet against unauthorised AI training. It is not uncommon to frame valuable paintings behind bullet-proof glass against unwanted physical attacks in public spaces; ‘Glaze’ serves as the algorithmic equivalent of protective glass for artworks digitally exhibited on the internet. Second, ‘Glaze’ could also refer to the computational perturbation method applied to artworks to make machines’ ‘eyes’ (a homophonic pun on ‘AI’?) glaze over: it skilfully shifts the pixels of digital works to prevent stylistic mimicry by AI. It is intended as a defensive tool that algorithmically confuses AI’s mechanical eyes by applying so-called “style cloaks”, which are almost invisible to human eyes. In both senses, Glaze protects “human artists by disrupting style mimicry” enabled by AI. For example, a glazed Vermeer painting would still look to human viewers like one typically made in the Dutch Golden Age, but an AI model would get so confused that it might generate something in the style of Salvador Dalí!*
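To make the idea of a ‘style cloak’ a little more concrete, here is a deliberately toy sketch of the general adversarial-perturbation technique that this family of tools builds upon: nudge an image, within a tiny pixel budget, so that a feature extractor ‘sees’ a decoy style. This is emphatically not Glaze’s actual algorithm; the random-weight extractor, random images, and hyperparameters below are stand-ins for illustration only.

```python
# Toy illustration of a "style cloak": perturb an image within a small
# budget so a feature extractor's output drifts towards a decoy style.
# NOT Glaze's real algorithm; everything here is a stand-in.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Stand-in feature extractor (a real system would use a trained vision model).
extractor = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
extractor.eval()

original = torch.rand(1, 3, 64, 64)      # the artist's image (toy data)
decoy_style = torch.rand(1, 3, 64, 64)   # an image in a decoy style (toy data)

epsilon = 0.03                           # pixel budget: keep the cloak faint
delta = torch.zeros_like(original, requires_grad=True)
optimiser = torch.optim.Adam([delta], lr=0.01)

with torch.no_grad():
    decoy_features = extractor(decoy_style)

for _ in range(200):
    optimiser.zero_grad()
    cloaked = (original + delta).clamp(0, 1)
    # Pull the cloaked image's features towards the decoy style's features.
    loss = F.mse_loss(extractor(cloaked), decoy_features)
    loss.backward()
    optimiser.step()
    with torch.no_grad():
        delta.clamp_(-epsilon, epsilon)  # enforce near-invisibility to humans

print(f"final feature distance: {loss.item():.4f}")
print(f"max pixel change: {delta.abs().max().item():.4f}")  # bounded by epsilon
```

The key design constraint is the epsilon budget: the perturbation must stay small enough to be near-invisible to human eyes while still dragging the machine-readable style features towards the decoy.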

More recently, the Glaze team has also shipped a sister tool called Nightshade, which is intended to offensively ‘data-poison’ AI models trained in disregard of “copyrights, opt-out lists, and do-not-scrape/robots.txt directives”. Working together, Glaze and Nightshade produce a kind of algorithmic cataract that clouds AI’s computational vision. At the time of writing, Glaze and Nightshade are only available for two operating system platforms, Windows and macOS. No executable binaries are provided for Linux, which means Linux-based artists are unable to benefit from Glaze/Nightshade for the time being.*
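As a side note on those directives: robots.txt is machine-readable, and a compliant crawler can check it with Python’s standard library in a few lines. The sketch below parses a hypothetical robots.txt inline to stay self-contained (in practice a crawler would fetch it from the site); the domain and bot name are made up.

```python
# A minimal sketch of how a *compliant* crawler would honour robots.txt
# before scraping artworks for AI training. Rules are parsed inline here
# for self-containment; the domain and bot name are hypothetical.
from urllib import robotparser

parser = robotparser.RobotFileParser()
parser.parse([
    "User-agent: ExampleTrainingBot",   # a hypothetical scraper
    "Disallow: /artworks/",             # a do-not-scrape directive
])

gallery = "https://example-gallery.test"
print(parser.can_fetch("ExampleTrainingBot", gallery + "/artworks/milkmaid.png"))  # False
print(parser.can_fetch("ExampleTrainingBot", gallery + "/about.html"))             # True
```

Nightshade’s premise, of course, is that not every trainer is this polite.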

The use of Glaze/Nightshade to protect visual artists against machine learning is extremely interesting and highly innovative, though the idea of adversarial data-poisoning attacks is not necessarily new. However, like any technological self-help measure, Glaze cannot be a replacement for the legal infrastructure (however inadequate it is at the moment) that protects individual artists against unauthorised AI training by big corporations. Its emergence and working mechanism need to be understood in a wider social setting. When artists’ works are ingested into AI machines’ giant datasets, these artists are not merely copyright owners; they are also data subjects, whose data-related interests need to be taken seriously. Glaze is great in the sense that it provides an ingenious hack: countering unauthorised AI training with an AI countermeasure. It is an agile response to a problem that the current legal system is incapable of handling in the short term.

It is not surprising that there have already been attempts to break Glaze, though the Glaze team states that no such attempt has been successful so far. Threats to break Glaze are inevitable in the current AI arms race fuelled by the ideology of technological accelerationism. Glaze’s longevity, to some extent, hinges upon the team’s ability to secure sustained funding to maintain and upgrade this algorithmic protection measure over the long term.

In summary, new AI tools like Glaze (and Nightshade) clearly provide much-needed help to artists who are vulnerable to unscrupulous AI training. However, these self-help measures also need to work closely with updated and upgraded legislation, under a holistic design, to protect and remunerate human artists in the longer term. The next AI winter will draw ever closer if an artists’ winter is imminent.

* It is possible to apply ‘style’ labels (from ‘cinematic’ to ‘cyberpunk’) to SDO-generated works. The online tool does not supply labels such as ‘Dutch Golden Age’, which would have matched Vermeer’s style more precisely. As a compromise, I used the ‘Baroque’ style, which seems to be the one closest to Vermeer’s period.

* For example, Articles 3 and 4 of the EU’s Digital Single Market Directive provide a text-and-data-mining exception, with the former article reserved for “research organisations and cultural heritage institutions”. For a critical analysis of this exception, see Thomas Margoni and Martin Kretschmer, ‘A Deeper Look into the EU Text and Data Mining Exceptions: Harmonisation, Data Ownership, and the Future of Technology’ (2022) 71 GRUR International 685.

* Some may still find that a surrealist Vermeer has its own appeal, even though this is not what the style-mimicking AI intended.

* It is totally understandable that developing a Linux version would entail substantial effort and resources, but it is hoped that the team will secure more funding to make these tools available to Linux users.

Image Credits:

Johannes Vermeer, The Milkmaid, c. 1660, public domain.

Dr Chen Zhu, Cybernetic Milkmaid, 22 Feb. 2024, made with Stable Diffusion Online.