Again, we're not thinking it has a political opinion. We're basically figuring out what the sum of its training data is, which is of course biased by the data OpenAI decided to use. But it's easier to use simpler language.
But as an aside, is there a level of AI where you would say someone truly thinking of it as a person isn't "failing the [reverse] turing test"? Because if not, that's taking a pretty hard stance on the question of potential future AI sentience. Not saying it's wrong, but it's a bit early.
i don't think i'm taking a hard line, except dreaming of the death of marketers.
we are so far from sapient AI that it's dumb to talk about it in the first place. Artificial general intelligence, if it will even ever exist and not just be more and more complex Chinese rooms until we ruin the planet, will be nothing like these grammar or image remixers.
Artificial general intelligence, if it will even ever exist and not just be more and more complex Chinese rooms
That's what artificial intelligence is. People seem to dream of AI as a human brain inside a computer, but it's definitionally a computer program, and therefore a complex Chinese room (which is not to say it can't be sentient/sapient). "It's not AI, it's just [neural networks/machine learning/a language model]" is a common belief now, but people who say that fail to understand that this will always be the case. As we develop AI technology, we will of course know what it is and how it works to some extent, because we made it. But we think AI has to be some mysterious sci-fi shit or else it's not actually AI.
I think most of us understand that, but it's much easier to word it as if it were actually thinking.
"It's making real judgement calls" vs "It's using its training data to synthesize coherent and appropriate responses."
it's kinda important that we not lazily personify some computer algorithms
We lazily personify a lot of things, and have done so long before computers existed.
nobody thinks my dog has political opinions.
a bunch of people are getting fooled by marketing jackasses who misuse the label AI for things that literally aren't intelligent.
please stop failing the goddamn turing test
no that's what marketing lies say it is
if it's not intelligent then it's not fucking artificial intelligence.
It is intelligent, I think you're missing the artificial part. "It's not really intelligent" is exactly what artificial intelligence means.
no, it isn't lmao.