- cross-posted to:
- futurology@futurology.today
It has finally happened...not surprised though.
It's not any different from how it already was. Initially the GenAI models were all trained on masses of unlicensed data, including data from Reddit. The problem is that some companies, like The New York Times, are suing over their data being used to train LLMs. So in response, companies like OpenAI are now trying to reach partnerships that basically license the use of the data (which they already had). This also means they'll continue to have access to that data going forward, as long as the partnership is in place. Meanwhile, companies without a partnership could start banning scraping activity or updating their terms to forbid training AI on their data.
Overall these partnerships are a good thing. Licensed training data is good. But from a privacy standpoint, the AI models were already trained on Reddit data. This just formalizes the relationship.
I like how they monetized their API and data on the grounds that they didn't want it used to train AI models, and now they're selling user data to OpenAI for millions.
Ain't you glad you gave Reddit content for free and they're reselling it for millions?
This is not so bad. Reddit is crawling with bot spam, and that will only increase as users leave the platform every time it pulls a stunt to pump the stock price. The ratio of real to fake content will keep shrinking, which will poison the training pipelines. It's a great experiment for testing model collapse in real time, really.