Anthropic Defends AI Training Practices, Cites Robust Safeguards Against Music Publishers' Injunction Request

By Marcus Bennett

December 24, 2024 at 11:16 PM

Anthropic is opposing a preliminary injunction request from major music publishers who claim the company's Claude chatbot infringes their copyrighted works. The publishers seek to have their content removed from Claude's training data and to block copyrighted lyrics from appearing in Claude's outputs.

In its recent refiling, Anthropic maintains several key arguments:

  • Using copyrighted works to train Large Language Models (LLMs) constitutes fair use
  • Monetary damages would sufficiently compensate publishers if they win the case
  • Claude's training process uses "trillions of tiny textual data points" with some copyrighted works included
  • The company has implemented "a broad array of safeguards" to prevent reproducing copyrighted works

Anthropic emphasizes that using song lyrics as part of a massive training dataset is "transformative" under the fair use doctrine. The company also notes that the research cited in the publishers' filing predated Claude's commercial release by nearly a year.

Co-founder Jared Kaplan provided a supporting declaration detailing his credentials and the specifics of Claude's training process. Anthropic argues there is "no reasonable expectation" of continued copyright infringement, given the safeguards it has implemented.

The case (5:24-cv-03811) remains ongoing, with reports suggesting a significant portion may be dismissed in the near future.
