Pending AI Suits in the US: Frivolous or Crucial?
Posted on: June 9, 2023 by Stephanie Drawdy
The US Senate Judiciary Committee’s Subcommittee on Privacy, Technology, and the Law held a hearing in May 2023 on “Oversight of AI: Rules for Artificial Intelligence”. During that hearing, OpenAI CEO Sam Altman testified about the myriad issues raised by generative AI. Certainly content creators/owners “need to benefit from this technology”, Altman offered. But his response to the question of whether his company had been sued was telling: yes, OpenAI has been sued, Altman affirmed, for “like pretty frivolous things like I think happens to any company”.
What sort of ‘company’ might Altman be describing that’s subject to trivial suits? One that uses protected material without permission? We’ve recently seen how that’s worked out for the Warhol Foundation (for an arguably borderline ‘fair’ use) so how might that go for OpenAI?
The following is a survey of the IP suit currently faced by OpenAI over alleged unauthorised use of programmer code, along with two other AI-related US suits. It also examines how tech leaders like Altman and Stability AI CEO Emad Mostaque tout pro-creator policies while simultaneously pushing back against creators who stand up with claims that their content has been hijacked.
Open-Source Programmers’ Suit – Northern District of CA
In November 2022, anonymous open-source programmers filed a putative class action over what they describe as “a brave new world of software piracy”.
Their allegations are against OpenAI, developer platform GitHub, Inc. and Microsoft Corporation, and centre on the unauthorised use, since OpenAI’s founding in 2015, of licensed material to train Copilot – the cloud-based AI tool that is a “joint venture” of OpenAI and GitHub – and OpenAI’s Codex, which is “an integral piece of Copilot”. Plaintiffs allege Copilot “ignores, violates, and removes the Licenses offered by thousands – possibly millions – of software developers” and that its output is “often a near-identical reproduction of code from the training data”.
If plaintiffs prevail, relief could include an injunction, actual or statutory damages, and punitive damages. Based on “the 1.2 million Copilot users” reported in mid-2022 by Microsoft, plaintiffs postulate that statutory damages could surpass $9 billion under § 1202 of the Digital Millennium Copyright Act (DMCA) alone, which allows “not less than $2,500 or more than $25,000” per violation.
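The plaintiffs’ headline number can be reproduced with a quick back-of-envelope calculation – assuming, and this is an assumption not quoted above, that the complaint counts three distinct § 1202 violations per user (removal of attribution, of the copyright notice and of the licence terms):

```python
# Back-of-envelope check of the plaintiffs' $9 billion figure.
# Assumption: three distinct § 1202 violations per user; the exact
# multiplier is not quoted in this article.
users = 1_200_000          # Copilot users reported by Microsoft, mid-2022
violations_per_user = 3    # assumed multiplier
min_statutory = 2_500      # DMCA § 1202 statutory minimum per violation

total = users * violations_per_user * min_statutory
print(f"${total:,}")       # → $9,000,000,000
```

On those assumptions, the statutory floor alone reaches the $9 billion the plaintiffs postulate, before any actual or punitive damages are considered.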
In early May 2023, the district court issued an order that granted defendants’ motions to dismiss plaintiffs’ claims for civil conspiracy and declaratory relief with prejudice but dismissed many other claims with leave to amend, including plaintiffs’ claims for violation of the DMCA and tortious interference in a contractual relationship (against all defendants); fraud and violation of GitHub’s privacy policy and terms of service (against GitHub); and false designation of origin – reverse passing off, unjust enrichment, unfair competition, violation of the California Consumer Privacy Act and negligent handling of personal data (against GitHub and OpenAI).
Artists’ Suit – Northern District of CA
Filing in the same venue as the software developers referenced above and with the same counsel, a group of disgruntled artists were next to file a putative class action over what they viewed as egregious AI training. The defendants named are Stability AI (both the UK and Delaware corporations), DeviantArt, Inc. and Midjourney, Inc.
The crux of the artists’ suit is this: Stability’s software library, known as Stable Diffusion, was trained on the works of the artists who would form the class, as exemplified by the class representatives, each of whom alleges her works were included in defendants’ training data: Sarah Andersen (over 200 works), Kelly McKernan (over 30 works) and Karla Ortiz (approximately 12 works). AI-image software products by Stability (DreamStudio), DeviantArt (DreamUp) and Midjourney (whose product also goes by Midjourney) are alleged to rely on Stable Diffusion “to produce images” based on that training data without those artists’ permission.
The January 2023 complaint includes claims for copyright infringement, DMCA violation, right of publicity violation and unfair competition; and the relief sought includes injunction, statutory or actual damages as well as profits and punitive damages.
In its motion to dismiss, Stability describes the plaintiff-artists as having a “fundamental misunderstanding of Stability AI’s technology”, disavows their description of Stable Diffusion as a “collage tool” and hotly contests their allegations. Stable Diffusion, Stability argues, instead creates “entirely new and unique images utilizing simple word prompts”; thus, Stability AI contends, it “enables creation; it is not a copyright infringer”.
A hearing is scheduled for mid-July on defendants’ pending motions to strike and dismiss the artists’ claims.
Despite standing as the self-proclaimed “AI by the people for the people”, Stability does not appear to be acquiescing to the people who form this class – or those who comprise Getty Images.
Stock Image Behemoth’s Suit – District of Delaware
In January 2023, as Getty Images announced its suit against Stability AI in London’s High Court of Justice, the Stable Diffusion startup’s CEO Emad Mostaque tweeted that he believed the training data used by Stability to be “ethically, morally and legally sourced and used”. In contrast, Getty’s press release stated its suit was prompted by Stability’s decision “to ignore viable licensing options and long‑standing legal protections in pursuit of their stand‑alone commercial interests.” Which is true? It is now the task of both UK and US courts to decide.
In February 2023, Getty Images filed its US suit against Stability that, if successful, promises to bring in damages in the trillion range. Getty represents that “over 12 million links to images and their associated text and metadata on [Getty] websites” have been identified in datasets used to train Stable Diffusion, that those links include Getty works made-for-hire, acquisitions from third parties with assigned copyrights and/or licensed works and that Stability was “well aware” it was scraping protected material as part of its “infringing scheme”.
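The “trillion range” claim can likewise be sanity-checked with rough arithmetic – assuming, as a hedge, that Getty seeks the statutory ceiling for wilful copyright infringement of $150,000 per work (17 U.S.C. § 504(c)(2)); the complaint’s own arithmetic is not quoted in this article:

```python
# Rough illustration of how Getty's US claim could reach the trillions.
# Assumption: per-work statutory ceiling of $150,000 for wilful
# infringement; Getty's actual damages theory is not quoted here.
works = 12_000_000       # links to Getty images identified in training datasets
max_statutory = 150_000  # per-work ceiling for wilful infringement

print(f"${works * max_statutory:,}")   # → $1,800,000,000,000
```

On those assumptions, the theoretical statutory maximum is $1.8 trillion – which illustrates why the suit, if successful, could carry damages of that order.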
Getty’s US claims include copyright infringement, providing false copyright management information and removal/alteration of copyright management information, unfair competition and infringement/dilution of trademark (which could warrant statutory damages as high as “$2,000,000 per counterfeit mark used”).
In May 2023, Stability filed a motion to dismiss for lack of personal jurisdiction, failure to join Stability UK and failure to state a claim upon which relief can be granted; alternatively, Stability requested transfer to the Northern District of California.
The Unmapped Legal Landscape of AI
As the courts work through the dockets of these cases (and likely others soon to come) and the legislature (as well as the US Copyright Office) scrambles to adapt to revelations about generative AI’s capabilities, we are left in the interim with concerning questions. Are the laws currently in place able to address the issues these cases raise? Some – like Midjourney Founder and CEO David Holz – say no. If there is an absence of applicable laws, what’s to stop unrestrained theft from those who hold less power than AI mega-entities like OpenAI (valued at $27 to $29 billion) and Stability AI (said to be “seeking to raise money at a valuation of about $4 billion”)? And do we as a society want such entities, wielding the force of generative AI, to be led by those who preen moralistically about protecting rights while unabashedly trivialising infringement claims?