- cross-posted to:
- technology@lemmy.zip
A number of suits have been filed regarding the use of copyrighted material during training of AI systems. But the Times' suit goes well beyond that to show how the material ingested during training can come back out during use. "Defendants’ GenAI tools can generate output that recites Times content verbatim, closely summarizes it, and mimics its expressive style, as demonstrated by scores of examples," the suit alleges.
The suit alleges—and we were able to verify—that it's comically easy to get GPT-powered systems to offer up content that is normally protected by the Times' paywall. The suit shows a number of examples of GPT-4 reproducing large sections of articles nearly verbatim.
The suit includes screenshots of ChatGPT being given the title of a piece at The New York Times and asked for the first paragraph, which it delivers. Getting the ensuing text is apparently as simple as repeatedly asking for the next paragraph.
The suit is dismissive of attempts to justify this as a form of fair use. "Publicly, Defendants insist that their conduct is protected as 'fair use' because their unlicensed use of copyrighted content to train GenAI models serves a new 'transformative' purpose," the suit notes. "But there is nothing 'transformative' about using The Times’s content without payment to create products that substitute for The Times and steal audiences away from it."
The suit seeks nothing less than the destruction of any GPT instances that the parties have trained using material from the Times, along with the destruction of the datasets that were used for the training. It also asks for a permanent injunction to prevent similar conduct in the future. The Times also wants money, lots and lots of money: "statutory damages, compensatory damages, restitution, disgorgement, and any other relief that may be permitted by law or equity."
Is this just posturing before the Times has a large round of layoffs and gets an OpenAI subscription?
Cynicism aside, I would love it if this actually hurts AI.
Consider this capitalist rivalry: OpenAI is basically undermining the writing sector. To cut costs, the NYT would eventually have to adopt this too, but it basically cheapens the content produced. It's mass-produced stuff that will eventually just be really dull. Sports Illustrated just recently fired their CEO over having already published AI-generated stuff.
Buzzfeed tried to replace their news department with ChatGPT and failed; they shut down the division instead. Even though Buzzfeed was already creating nothing original and just publishing mashups of shit from other sources, it was still too complicated for the dumbass plagiarism algorithms, which are basically incapable of producing anything that humans find interesting for more than a few seconds and a couple of "oohs and ahs". LOL.