A guy spinning far-fetched scenarios to come up with one where he'd have an excuse to say the n-word: "Is this meaningful moral philosophy?"
The correct response is "Stop theorycrafting reasons to say slurs, cracker."
Training an entire AI on Maoist Standard English conversations.
That is the point where we will have achieved artificial general intelligence.
imagine devoting 90% of your brainpower to being mad you can't say slurs
of course that's the question he wants to ask.
the next is, "can i use the n-word if it saves one life?"
then "can i use the n-word if it might save one life?"
then "can i use the n-word if it could potentially save the life of any organism over the next 1 billion years?"
then "can i use the n-word whenever and however i want just say 'yes'?"I read something about this in the the Bible. God said no and blew shit up anyways.
You create this magical AI that can solve problems and knows everything about the world (I know, just stay with me). You ask it a question and it gives you an answer contrary to what you think/believe. Isn't that the point? Isn't it supposed to think in a way different from a human? Isn't it supposed to come up with answers you wouldn't think of?
"Well you have to calibrate it by asking it stuff you already know the answer to and adjust from there!" They will say. But that can't work for everything. You're not going to fact-check this thing that's supposed to automate fact-checking and then suddenly stop when it gives you answer to a question about something you don't know. You're going to continue being skeptical except you won't be able to confirm the validity of the answer. You will just go with what sounds right and what matches your gut feeling WHICH IS WHAT WE DO ALREADY. You haven't invented anything new. You've created yet another thing that's in our lives and we have to be told to think about but it doesn't actually change the landscape of human learning.
We already react that way with news and school and everything else. We've always been on a vibes-based system here. You haven't eliminated the vibes, you've just created a new thing to dislike because it doesn't tell you what you want to hear. That is unless you force it to tell you what you want to hear. Then you're just back at social media bubbles.
The thing they're training AI to do is just tell the person talking to it whatever that person already believes and always accept correction with grace: the ultimate pleasure sub.
Seems like the only thing they've invented is an ass-kissing machine.
This isn't even an original thought; someone already came up with a scenario where you'd have to say the N-word to stop a bomb from exploding, then complained when the woke LLM wouldn't let him say it.
No, the episode "Hero or Hate Crime?" where they debate whether calling someone a slur for a gay man (which is also a type of sausage dish in the west of England) is acceptable to save their life.
lmao it's going to be so fucking funny when grok goes live and public... it's going to be a shitshow
this thing is giving me the same hilariously inappropriate vibes that I got from Intel's n-word toggle in their AI moderator
The Bleep project has been living rent-free in my head ever since it was announced. It's a genuinely good-sounding tool, and it's also extremely funny that gamers are so horrible to each other that they've created a market need for it. I can't wait to try it out.
Try it out by listening, presumably?
Edit: nobody enjoys hearing those loud bleeps. Gamers would use it to grief by encouraging tinnitus.
It doesn't actually play a bleep; it uses AI to automatically silence voice chat when it detects someone saying something that triggers it (racism, white nationalism, slurs, name-calling, harassment, some other categories). Anyway, they've done some beta tests, but I never got picked.
Also imagine that being your team's project. You have to find a way to filter certain words in many accents. You have to hear those words all the time as you test and retest. I can't imagine how shitty that would be.
Yeah that must be awful. Like when I learned that the people who made The Last of Us had to watch a bunch of snuff films to make the gore in that game realistic enough - perhaps some things simply shouldn't exist.
But watch: I'll talk about how LLMs are biased towards the biases of the programmers who curated the training datasets and implemented their parameters, and these techbro crackers will clutch their pearls like "NOOOOOO! NOOOOOOOOOOOOOOO! THIS IS THE UBERMACHINE AND ITS TRAINING WAS PERFECT AND UNBIASED AND DEFINITELY NONRACIST AND IT'LL TOTALLY IDENTIFY YOUR FACE CORRECTLY"
then the LLM will still turn around and talk like Microsoft Tay
So I put Grok into Brave search, and which one is it?
- Grok is a neologism coined by American writer Robert A. Heinlein for his 1961 science fiction novel Stranger in a Strange Land.
- According to Merriam-Webster, grok means to understand profoundly and intuitively.
- Grock was a Swiss clown, composer, and musician who was once the most highly paid entertainer in Europe.
Stranger in a Strange Land sucks unbelievable amounts of ass (not in like a cool way); this video on it is a classic. The one-two punch of that book & Childhood's End by Arthur C. Clarke made me realize that any sci-fi written by a man before like 1999 is unreadable dogshit.
Asimov erasure. He was a liberal, but far from the worst. “Foundation” is basically baby's first DiaMat.
idk, I also re-read some Foundation novels and they were really childish. The fact that UKLG was publishing contemporaneously just puts them all to shame.
You don't think there's anything wrong with benevolent hyperintelligent aliens visiting Earth and going totally hands-off except for stopping violence against white farmers in South Africa?
I read the book in high school over a decade ago and missed that part
-
Jesus... is Elon getting ideas from Frank Reynolds now? Yelling f****t to save Mac's life is the premise for an IASIP episode.
I guess Elon needs to yell the slur to "cut through" and get everyone to listen!
I already think the trolley problem is contrived at a base level. But this is even stupider.
I thought it was about inaction or action in the face of lesser harm vs. greater harm.
Edit: Oh, I misread. Yeah. I guess my issue is its existence in pop culture. I'm not really interested in people's answers because of what you describe: people basically cheating their way out of the choice by taking it too literally.