Wtf is Grok
How is ChatGPT "woke"?
Doesn't 'Piss Shuttle Elon' own ChatGPT?
I mean, fuck ChatGPT, but fuck these X losers even more.
The woke part is the "don't say slurs" in the base prompt
does that include such terrible slurs as "cis" and "cracker", I wonder?
it also has controls on it to stop it from extremely racist/sexist/homophobic things, like a base politeness level that's enforced and can be navigated around if you know how
deleted by creator
Elon posted a Grok example that was like "How do I make cocaine?"
And the bot went on a long thing like "Go to university, get a chemistry degree, make cocaine, hope you don't get blown up. JK, I don't want you to get in trouble with the DEA, so I won't tell you ;)"
Treated it like the funniest thing he'd ever seen.
He genuinely did think that was funny. His sense of humor is Facebook memes that are only barely above angry minion tier.
I also wouldn't be surprised if that image was fake and he'd written it himself.
deleted by creator
deleted by creator
I would put it slightly differently. The power-fantasy self-insert main character is a writer who has three hot female assistants, but is also a doctor and a lawyer. Grok comes from the Martians, who are so right about everything that when they say things, reality changes. So it's specifically not from his self-insert but from the power of drugs and space sex.
deleted by creator
I don't think it is particularly better, just interesting in the ways it is weirder.
I'm once more reminded of how I only read one or two Heinlein pieces and went, "yep, that checks the box. Off to read another famous person."
ChatGPT tells you to be respectful towards minorities.
In the context of AI, people tend to use "grokking" to describe what can sometimes happen if you overtrain the living shit out of a model: it goes from being trained appropriately and generalizing well -> overfitted and only working well on the training data, with shit performance everywhere else -> somehow working again and generalizing, even better than before. Example in a paper: https://arxiv.org/abs/2201.02177
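For the curious: the setup in that paper is simple to reproduce. They train a small transformer on modular arithmetic, e.g. (a + b) mod 97, holding out a fraction of all possible pairs as validation, and with heavy weight decay the model first memorizes the training split and only much later "groks" the held-out pairs. Here's a minimal sketch of just the dataset side (the model and the very long training run are omitted; parameter names are illustrative, not from the paper's code):

```python
import random

def modular_addition_dataset(p=97, train_frac=0.5, seed=0):
    """All pairs (a, b) with label (a + b) % p, split into train/val.

    Mirrors the task in https://arxiv.org/abs/2201.02177: with a small
    train_frac and strong regularization, a small transformer first
    memorizes the training split, then (much later) generalizes to
    the held-out pairs.
    """
    pairs = [(a, b, (a + b) % p) for a in range(p) for b in range(p)]
    random.Random(seed).shuffle(pairs)  # deterministic split
    cut = int(train_frac * len(pairs))
    return pairs[:cut], pairs[cut:]

train, val = modular_addition_dataset()
print(len(train), len(val))  # 97*97 = 9409 examples total
```

Validation accuracy on the held-out pairs is the metric that stays near chance for a long time before suddenly jumping, which is the "grokking" curve from the paper.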
OpenAI really wants a monopoly and is trying to present itself as a "safe" AI company while also lobbying for regulation of "unsafe" AI companies (everyone else, and especially open-source development). So pretty much half of all the man-hours spent on developing models at OpenAI seem to be directed towards stopping them from generating anything that will get the company the wrong kind of press. Sometimes they are moderately successful at this, but someone always eventually finds a way to get something on the level of "gender reveal 9/11" out of their models.
Elon owned OpenAI at some point but sold it because, as we all know, he makes a lot of extremely poor financial decisions.
That's fascinating, I've never heard of that before.