This is going to wreck society even more.
Please, for the love of Marx, do not take ChatGPT at its word on anything. It has no intelligence. No ability to sort the truth from fiction. It is just a very advanced chat bot that knows how to string words together in a sentence and paragraph.
DO NOT BELIEVE IT. IT DOES NOT CITE SOURCES.
Right now I feel like my HS English/Science teacher begging kids not to use Wikipedia.
But even Wikipedia is better than ChatGPT because. Wikipedia. Cites. Sources.
From my current understanding, I'm not sure referring to GPT models as "stochastic parrots" is accurate. There is evidence the LLM builds internal "world models," even if it emerges through probabilistic mechanisms: https://thegradient.pub/othello/
Let me preface this by saying I'm stupid, I can't even do my own research work well, let alone comment on cutting edge stuff with any degree of confidence:
I don't think this is incompatible with the concept of the stochastic parrot. Like, by the time "On the dangers of stochastic parrots" came out, it was already known that language models have rich representations of language structure:
So I don't think we can take this, or the probing/interpretability work later in the paper as a refutation of LLMs as stochastic parrots, because it was never about memorization:
the concept is useful primarily as a way of delimiting how far this "understanding" really goes:
the metaphor of the crow is kind of apt, I think. Like an LLM, it is working only with form, not meaning:
Someone smarter please feel free to correct/dunk on me.
I'm going to maybe dunk on the authors of that ICLR paper (even giving them the benefit of the doubt, they should really know better if they got into ICLR). You can't conclude a lack of memorization from observing that the model maintains training accuracy on the new dataset. I really hope they meant that they tested on the skewed dataset and saw the model maintain performance without having seen any of the skewed data in training. If they simply repeated the training step on the skewed data and saw the same performance, all we know is that the model might have memorized the new training set too.
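To make the objection concrete, here's a toy illustration (my own, not anything from the paper): a model that purely memorizes its training set still gets perfect *training* accuracy after being refit on any new dataset, so training accuracy on the skewed set alone can't rule out memorization.

```python
class MemorizerModel:
    """Looks up exact inputs it has seen before; no generalization at all."""
    def fit(self, xs, ys):
        self.table = dict(zip(xs, ys))
    def predict(self, x):
        return self.table.get(x, None)  # fails on anything unseen

def accuracy(model, xs, ys):
    return sum(model.predict(x) == y for x, y in zip(xs, ys)) / len(xs)

original_train = [("a", 0), ("b", 1)]
skewed_train   = [("c", 1), ("d", 0)]  # disjoint "skewed" data

m = MemorizerModel()
m.fit(*zip(*skewed_train))
print(accuracy(m, *zip(*skewed_train)))  # 1.0 on its own training set

# The informative test: train on the original set only, then evaluate
# on held-out skewed data the model never saw.
m.fit(*zip(*original_train))
print(accuracy(m, *zip(*skewed_train)))  # 0.0 -- pure memorization exposed
```

The pure memorizer aces its own training set every time, but collapses on data it never saw; only the second kind of evaluation distinguishes the two.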
I also agree with your conclusions about the scant interpretability results not necessarily refuting the mere stochastic parrot hypotheses.
Yeah, that's it, I should have provided the full quote, but thought it would make no sense without context so abbreviated it. They generate synthetic training and test data separately, and for the training dataset those games could not start with C5.
I don't know much about Othello so don't really know if this is a good way of doing it or not. In chess it wouldn't make much sense, but in this game's case the initial move is always relevant for the task of "making a legal move" I think(?) It does seem to make sense for what they want to prove:
Anyway, I don't think it's that weird that the model has an internal representation of the current board state, as that is directly useful for the autoregressive task. Same thing for GPT picking up on syntax, semantic content etc. Still a neat thing to research, but this kind of behavior falls within the stochastic parrot purview in the original paper as I understand it.
The term amounts to: "Hey parrot, I know you picked up on grammar and can keep the conversation on topic, but I also know what you say means nothing because there's really not a lot going on in that little tiny head of yours" :floppy-parrot:
hmmm this still feels like a weaker result than the authors want us to read it as. I'd be much more impressed if they trained on the original training set and observed that the model maintained its original test performance when tested on the skewed test set, but I bet they didn't find that result
Totally agree. At least with biological parrots, they learn in the physical world and maybe have some chance of associating the sounds with some semantically relevant physical processes.
Transformers trained after 2022 can't cook, all they know is match queries to values, maximize likelihoods by following error gradients, eat hot chip and lie
Actually, the more I think about the experiment and their conclusions, the worse it gets. They synthesized the skewed dataset by sampling from a distribution that they assumed for both the synthetic training and synthetic testing set, so in a way, they've deliberately engineered the result.
deleted by creator
The article doesn't make such a bold claim. It presents its goal as "exploring" the question, so not sure why the redditor started off with that.
Why?
Appreciate the (semi-anonymous?) critique regardless.
deleted by creator
Doesn't look like it's the same guy. This is the Kenneth Li that wrote the article: https://twitter.com/ke_li_2021