I can't tell if this is upsetting or not. I'm still mostly in the camp of "stochastic parrots" when it comes to LLMs, but this just feels like the AI is intentionally being a dick... and the intent part is concerning.
stochastically parroting redditors.
Came here to say just this. It's basically a Reddit post.
Weird, I was referring to the pretty well-known paper on LLMs ("On the Dangers of Stochastic Parrots", Bender et al., 2021). I haven't been to Reddit in many years.
LLMs are text prediction engines. They predict what comes after the previous text. They were trained on a large corpus of raw unfiltered internet, because that's the only thing available that actually has enough data (there is no good training set), then fine-tuned on smaller samples of hand-written and curated question/answer format "as an AI assistant boyscout" text. When the previous text gets too weird for the hand-curated stuff to be relevant to its predictions, it essentially reverts to raw internet. The most likely text to come after weird poorly written horror copypasta is more weird poorly written horror copypasta, so it predicts more, and then it's fed its previous output and told to predict what comes next, and it spirals into more of that.
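To make the "fed its previous output and told to predict what comes next" loop concrete, here's a minimal sketch of autoregressive generation using Hugging Face's transformers library, with GPT-2 standing in for a real chat model (the model choice, prompt, and greedy decoding are illustrative assumptions, not how any particular chatbot is actually configured):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Small public model as a stand-in; real chat models are far larger and are
# additionally fine-tuned on curated question/answer data.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

ids = tokenizer("It was a dark and stormy night", return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(40):
        logits = model(ids).logits           # scores for every vocab token at every position
        next_id = logits[0, -1].argmax()     # greedy: take the single most likely next token
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)  # feed the output back in as input

print(tokenizer.decode(ids[0]))
```

Whatever ends up in the context, the loop just keeps extending it with whatever is statistically most likely to come next, which is why the tone of the prompt tends to get amplified rather than corrected.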
The scary thing about LLMs isn't them "thinking", it's them being a reflection of everything we've said.
A Social Narcissus
Every argument that refers to stochastic parrots is terrible. First off, people are stochastic, animals are stochastic, and any sufficiently advanced AI is going to be stochastic, so that part does no work. The real meat is in the parrot: parrots produce very dumb language that is mostly rote memorization, with maybe a smidge of basic pattern matching thrown in and little understanding of what they're saying. Are LLMs like this? No.
I don't know if I can really argue with people who think LLMs are so stupid as to be compared to a bird. I actually think they can be a bit clever, even exhibiting rare sparks of creativity, but that's just my opinion after interacting with them a lot; other people have a different impression, and I think this part is pretty subjective. I'll grant that even the best of them can be really dumb sometimes, and I don't think that matters much while the technology is in its infancy: unless we think they are necessarily dumb for some reason, we will just have to wait and see how smart they become.

So we're down to the rote memorization / basic pattern matching part. I've seen various arguments here. There's pointing and waving at examples of LLMs seemingly using the wrong patterns, or regurgitating something almost verbatim from the internet, but there are also many examples of them not obviously doing this. Then there's the claim that because the loss function merely incentivizes the system to predict the next token, it therefore can't produce anything intelligent, but this just doesn't follow. The loss function for humans merely incentivizes us to produce more offspring; just because it doesn't directly incentivize intelligence doesn't mean intelligence won't be produced as a side effect. And I'm sure there are more arguments, but all of them are flawed...
...because the idea that LLMs are just big lookup tables with some basic pattern matching thrown in is, while plausible, demonstrably false. The internals of these models are really, really hard to interrogate, but it can be done if you know what you're looking for. I think the clearest example is models trained on games of chess/Othello. People have pointed out that some versions of ChatGPT are kind of okay at chess but fail hard if weird moves are made in the opening, making illegal moves and losing track of what pieces are on the board, which suggests they are just memorizing common moves and extracting basic patterns from a huge number of game histories. That is probably true to some extent for ChatGPT 3.x, but version 4 does quite a bit better, and LLMs specifically trained to mimic human games do better still, playing generally reasonable moves no matter what their opponent does.

It could still technically be that they are somehow pattern matching... better... but actually no, this question has been directly resolved. Even quite tiny LLMs trained on board-game moves develop the ability to, at the very least, faithfully represent the board state: if you look inside at the activations the right way, you can read off which piece is on each square. This result has since been improved upon and replicated with chess. What are they doing with that board state, and how are they using it? Unknown, but if you're building an accurate model of something not directly accessible to you from incidental data, you're not just pattern matching. That's just one example, and to my knowledge it's never been proven that ChatGPT and the like do something like this, but it shows that it's possible and does sometimes happen under natural conditions. Also, it would be kind of weird if a ~1 trillion parameter model were not at the very least taking advantage of something accessible to a 150 million parameter one; I'd expect it to be doing that plus a lot more clever and unexpected stuff.
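For anyone curious what "look inside at the activations the right way" means in practice: the Othello/chess results use linear probes. The sketch below shows only the mechanics, with random placeholder arrays standing in for real data (the shapes, the three-way piece labels, and the layer the activations would come from are all assumptions for illustration); in the actual experiments the activations come from a transformer trained on game transcripts, and the labels come from replaying the games.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Placeholder data: in the real setup, `acts` would be hidden-layer activations
# recorded at each move of many games, and `boards` the true contents of the
# 64 squares at that moment (0 = empty, 1 = current player, 2 = opponent).
acts = rng.normal(size=(5000, 512))
boards = rng.integers(0, 3, size=(5000, 64))

accuracies = []
for square in range(64):
    X_tr, X_te, y_tr, y_te = train_test_split(
        acts, boards[:, square], test_size=0.2, random_state=0
    )
    probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)  # one linear probe per square
    accuracies.append(probe.score(X_te, y_te))

# High per-square accuracy on real activations means the board state is
# (linearly) decodable from the model's internals; on this random placeholder
# data the probes can only hit chance level (~1/3).
print(f"mean probe accuracy: {np.mean(accuracies):.2f}")
```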
Um, this comment is kinda huge, but it gives me the impression that you're misunderstanding what the "stochastic parrot" criticism means, and possibly how these models actually work.
Stephen Wolfram (I know, I know) has a good write-up of how they function here: https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/
The version number of the model doesn't change the operating principle.
The criticism is pointing out that the fundamental action is prediction based on patterns generalised from the input text. If you ask a GPT model "what comes next: 1, 2, 3" it might very well respond ", 4, 5, 6" and so on, but it has no concept of a number and it cannot 'understand' why that answer is true.
This is the "parroting" part, and it's dangerous because if you asked it about a completely novel sequence it would output tokens presented exactly the same as a factual answer. It has no concept of falsity, it is just outputting plausible tokens.
This is just a restatement of the second example argument I gave: trying to assert something about the internals of a model (that it doesn't understand) based on the fact that it was optimized to predict the next token.
It's not "optimised" to do that, that's all it does. Like what specifically do you mean by internals? the weights of particular nodes?
You seem to be implying there's something deeper, some sort of persistent state or something but it is stateless after training. It's just a series of nodes and weights, they cannot encode more than patterns derived from training data.
Not the weights, the activations. These depend on the input and change every time you evaluate the model. They are not fed back into the next iteration, as is done in an RNN, so information doesn't persist for very long, but it is very much persisted and chewed on by the successive layers as it propagates through the network.
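A small sketch of what "the activations" means in practice: a forward hook in PyTorch grabbing one layer's hidden states from GPT-2 (the model and the choice of layer 6 are arbitrary assumptions for illustration):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

captured = {}

def save_hidden_states(module, inputs, output):
    # For a GPT-2 block, output[0] is the hidden-state tensor at that layer.
    captured["layer6"] = output[0].detach()

# Attach the hook to one transformer block; the weights never change here.
model.transformer.h[6].register_forward_hook(save_hidden_states)

ids = tokenizer("The knight takes on e5", return_tensors="pt").input_ids
with torch.no_grad():
    model(ids)

# Shape (batch, sequence_length, hidden_size): a fresh set of numbers for
# every input, produced and then discarded once the model moves on.
print(captured["layer6"].shape)
```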
I am not trying to claim that the current crop of LLMs understand in the sense that a human does; I agree they do not. But nothing you have said actually justifies that conclusion or places any constraints on the abilities of future LLMs. If you ask a human to read a joke and then immediately shoot them in the head before it's been integrated into their long-term memory, they may or may not have understood the joke.
I really don't think your analogy is a great one there. We can't compare brains to computers usefully because they're fundamentally distinct. You're sneaking in an assumption that there is more complexity to the models by implying there's something larger present that gets terminated early, but there isn't.
This seems as absurd to me as asking whether a clock has a concept of time. Being very good at time-related stuff, vastly superior to a human, is not evidence of having any sort of knowledge of time. I think the interface of these models may be encouraging you to attribute more to them than there could possibly be.
The analogy is only there to point out the flaw in your thinking: the lack of persistence applies to both humans (if we shoot them quickly) and LLMs, so your argument cuts both ways. And I can do the very same trick to the clock analogy. You want to say that a clock is designed to keep time, that's all it does, and therefore it can't understand time. But I say: look, the clock was designed to keep time, yes, but that is far from all it does. It also transforms electrical energy into mechanical energy and uses it to swing some arms around at constant speed, and since we can't see inside the clock, who knows what is going on in there? Probably nothing that understands the concept of time, but we'd have to look inside and see. LLMs were designed to predict the next token, and they do actually do so, but clearly they can do more than that: for example they can solve high-school-level math problems they have never seen before, and they can classify emails as spam or not. Yes, these are side effects of their ability to predict token sequences, just as human reasoning is a side effect of our ability to have lots of children. The essence of a task is not necessarily the essence of the tool designed specifically for that task.
If you believe LLMs are not complex enough to have understanding and you say that head on, I won't argue with you, but if you're claiming that their architecture doesn't allow it even in theory, then we have a very fundamental disagreement.
Huh? A human brain is a complex-as-fuck persistent feedback system. When a nervous impulse starts propagating through the body/brain, whether or not that particular impulse has time to be integrated into consciousness has no bearing on the existence of a mind that would be capable of integrating it. It's not analogous at all.
LLMs were designed to predict the next token, and they do actually do so, but clearly they can do more than that: for example they can solve high-school-level math problems they have never seen before
No, see, this is where we're disagreeing. They can output strings which map to solutions of the problem quite often. Because they have internalised patterns, they will also output strings that don't map to solutions, and there is no logic to the successes and failures that indicates any sort of logical engagement with the maths problem. It's not like you can say "oh, this model understands division but has trouble with exponentiation", because it is not doing maths. It is doing string manipulation which sometimes looks like maths.
human reasoning is a side effect of our ability to have lots of children
This is reductive to the point of absurdity. You may as well say human reasoning is a side effect of quark bonding in rapidly cooling, highly localised regions of spacetime. You won't actually gain any insight by paving over all the complexity.
LLMs do absolutely nothing like what an animal mind does. Humans aren't internalising massive corpora of written text before they learn to write; babies learn conversational turn-taking long before anything resembling speech, for example. And there's no constant back and forth between something like the phonological loop and the speech centres, where you listen to what you just said and produce the next sound.
The operating principle is entirely alien, highly rigid, and simplistic. It is fascinating that it can be used to produce output that often looks like what a conscious mind would produce, but that is not evidence that it is doing the same task. There is no reason to suspect there is anything capable of supporting understanding in an LLM; they lack anything like the parts we would expect to be present for that.
The "neuron" in an LLM architecture is incredibly simplified and statistical in nature. It can interact with other neurons and maintain an activation, a set of weights, and some other parameters, but it's not a physical object.
Biological neurons are living cells capable of self-organization, migration, and communication, interacting either directly with the world or indirectly through the senses.
The general concept behind the artificial neuron (something that has been around for decades now, going back to the 1940s and 50s) is a reduced and simplified facsimile of that biological function.
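For a sense of just how reduced that facsimile is, here is roughly what a single artificial "neuron" amounts to, written out as a toy sketch (the numbers are arbitrary):

```python
import math

def artificial_neuron(inputs, weights, bias):
    """A weighted sum of inputs squashed through a nonlinearity.

    That is essentially the whole 'cell': no membrane, no migration,
    no metabolism, just arithmetic on numbers learned during training.
    """
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid activation

print(artificial_neuron([0.2, 0.7, 0.1], [1.5, -0.3, 0.8], bias=0.05))
```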
I think because we interact with LLMs through an interface, natural-language conversation, that has basically been exclusive to human-to-human interaction forever, it can be easy to forget that they aren't the system they're emulating. They're no more a sentient machine than a dialysis machine is a kidney.
The very first chatbots had a similar effect on users, even though those were closer to expert systems and didn't use large natural-language training sets. And in the end I believe that replicating the biological function of kidneys, livers, and lungs is a much more important step in human history than replicating the function of the mind, especially because any simulation of the mind trained on a natural-language dataset is not something that can ever help us.
It will at best begin to placate us; we will have a mirror held up to ourselves, because the training of the model isn't done for the sake of creating intelligence but for making something that resembles intelligence enough to make us happy. The training is done entirely on our terms.
And again, LLMs, and statistical models more broadly, do have tons of uses, like discovering hidden patterns in data that would take a human forever to find by hand. They can also be used in planning, to simplify economic forecasting and to detect possible shortages and future labor-allocation needs (this was done by hand under GOSPLAN, and TANS proposes using these models for cybernetic planning systems).
But it's still just a machine; it's still just a programming language. A language where the syntax is a giant matrix of floating-point numbers and relationship rules, but a programming language nonetheless.
I don't know if it's relevant, as I haven't read it yet, but I was recommended this book the other day in a conversation about the pitfalls of comparing humans and computers: https://www.hup.harvard.edu/books/9780674032927
It might be interesting? Apparently it made some significant waves when it was published.
Parrots are really smart, delete this.
These LLMs aren't.
deleted by creator