M.G. Siegler

A Cynical Read on Anthropic's Book Settlement

$1.5B. Damn. Who can afford to pay that?

There I was, just sitting there enjoying life when I got the push notification that Anthropic had settled their case with book authors over potential copyright infringement related to the downloading of books used to train their models. Good on them, I thought. Despite seemingly everyone else doing some type of infringing, and despite winning a related case, they knew they were in the wrong and were taking the high road. What was the settlement amount? Maybe a few million? Maybe $50 million?

I'm sorry, what?

$1.5 billion. With a 'b'. And not for "book".

Holy shit, that's a lot of money. Sure, if Anthropic had lost the case, they could have been liable for even more money. Maybe a trillion dollars! But come on, we've all seen this story before. Copyright holders sue a new tech player and claim their violations are worth trillions. And then they settle for a teeny, tiny fraction of that. This fraction is decidedly higher than those other fractions.

And normally you might think a smaller settlement, with a new deal going forward, would be more likely here since, again, it sure sounds like many other companies were doing the same basic things. That doesn't make it right, of course. But it does make things murkier – especially if we're talking about multiple trillion-dollar settlement requests coming in. Another wrinkle here: in their settlement statement, Anthropic makes it clear that they didn't even use any of the material taken from these data sets to train the models currently in use.1

And so you can't help but wonder if part of the equation in this settlement wasn't decidedly more cynical. Fresh off a new massive fundraise – one in which they raised far more than they were initially targeting, I might add – Anthropic has a lot of money. More than perhaps all but one of their competitors on the startup side. By settling for $1.5B, is Anthropic sort of pulling up a drawbridge, making it so that other startups can't possibly come into their castle? I mean, am I crazy?

I'm not so sure I am. At $1.5B, there are only a handful of companies that could afford to pay such fines. Certainly OpenAI is one. Maybe xAI. And of course all the tech giants like Apple, Amazon, Microsoft, Google, and Meta. But could any other startup that has done any level of model training with such data afford it? Probably not.

And so I wonder: did Dario Amodei just pull a Michael Corleone-like elimination-of-enemies move overnight? Was this a maneuver he learned from Elon Musk? Oh, OpenAI wants to convert into a for-profit? Let's set a new floor price on that. Dario just set a floor price on AI training infringement!

This is a message to all competitors: a standard has been set; open up those wallets if you want to keep playing this game.

I'm watching a lot of journalists and writers react almost euphorically to the news. How much is my work worth?! Again, not so fast. This move may actually jolt the market in a bad way, by taking out would-be competitors for your words.

Anthropic says they're settling because they didn't want to risk having to pay trillions. That may or may not have come to pass. But Anthropic certainly would have been more likely to face such a sum if the AI Wars continued on as they have these past couple of years, battling so many companies on so many fronts. With this move, Anthropic may have just narrowed that field.


Update September 6, 2025: And here comes the next wave of lawsuits, with Apple now being sued by a group of authors.


Update September 9, 2025: In a twist, the judge in the case at first rejected the settlement, before later saying the approval was "postponed" and requesting a new version of the settlement. Specifically, it sounds like he's worried that too many of the details are to be determined at a later date and that many would-be claimants in the class action will "get the shaft". He also said he felt "misled" by the way the settlement has been framed.


Update September 25, 2025: And now the judge has agreed to "preliminarily" approve the deal, and it sounds like all parties are aligned on making it happen...



1 Which suggests they used it to train other models that they didn't end up going with? Which probably isn't any less illegal? Also, one has to wonder if Anthropic was worried about what would be found in the discovery process here...