This is going to wreck society even more.
Please, for the love of Marx, do not take ChatGPT at its word on anything. It has no intelligence and no ability to sort truth from fiction. It is just a very advanced chatbot that knows how to string words together into sentences and paragraphs.
DO NOT BELIEVE IT. IT DOES NOT CITE SOURCES.
I feel like my HS English/Science teacher begging kids not to use Wikipedia right now.
But even Wikipedia is better than ChatGPT because. Wikipedia. Cites. Sources.
I'm going to maybe dunk on the authors of that ICLR paper (even giving them the benefit of the doubt, they should really know better if they got into ICLR). You can't conclude a lack of memorization from the observation that the model in question maintains its training accuracy on a holdout set. I really hope they meant that they tested on the skewed dataset and saw that the model maintained performance without having seen any of the skewed data in training. If they simply repeated the training step on the skewed data and saw the same performance, all we know is that the model might have memorized the new training set.
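To spell out the distinction I mean, here's a minimal sketch of the two protocols. Every name in it (`Model`, `sample_skewed_game`, `accuracy`) is a hypothetical stand-in, not anything from the paper's code:

```python
# Sketch of the two evaluation protocols being conflated. All names are
# made-up stand-ins, not the paper's actual code.

def weak_protocol(Model, sample_skewed_game):
    """Retrain on skewed games, then evaluate on a skewed holdout.

    Train and holdout come from the same synthetic generator, so good
    holdout accuracy is also consistent with the model just memorizing
    the surface statistics of the new training set.
    """
    train_games = [sample_skewed_game() for _ in range(100_000)]
    holdout_games = [sample_skewed_game() for _ in range(10_000)]
    model = Model()
    model.fit(train_games)
    return model.accuracy(holdout_games)


def strong_protocol(model_trained_on_original, sample_skewed_game):
    """Evaluate the original model on skewed games it never trained on.

    Maintained accuracy here would actually count against pure
    memorization, because nothing from this distribution appeared in
    training.
    """
    skewed_test = [sample_skewed_game() for _ in range(10_000)]
    return model_trained_on_original.accuracy(skewed_test)
```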
I also agree with your conclusions about the scant interpretability results not necessarily refuting the mere stochastic parrot hypotheses.
Yeah, that's it. I should have provided the full quote, but thought it would make no sense without context, so I abbreviated it. They generate synthetic training and test data separately, and the training-set games could not start with C5.
I don't know much about Othello, so I don't really know whether this is a good way of doing it. In chess it wouldn't make much sense, but in this game's case the initial move is always relevant for the task of "making a legal move", I think(?). It does seem to make sense for what they want to prove.
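For what it's worth, my mental model of the split is roughly the toy sketch below. `random_legal_game` is a made-up placeholder for whatever synthetic game generator they actually use, and the only part I'm sure about is the training-side restriction:

```python
def make_datasets(random_legal_game, n_train, n_test):
    """Toy sketch of the split as I read it.

    `random_legal_game` is a hypothetical stand-in that returns a game
    as a list of move strings. Training games are filtered so none of
    them open with C5; how the test side is sampled is my guess (here,
    unrestricted).
    """
    train, test = [], []
    while len(train) < n_train:
        game = random_legal_game()
        if game[0] != "C5":          # training games never open with C5
            train.append(game)
    while len(test) < n_test:
        test.append(random_legal_game())   # test sampled without the restriction (assumption)
    return train, test
```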
Anyway, I don't think it's that weird that the model has an internal representation of the current board state, as that is directly useful for the autoregressive task. Same thing for GPT picking up on syntax, semantic content, etc. Still a neat thing to research, but this kind of behavior falls within the stochastic parrot purview in the original paper as I understand it.
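For context, "has an internal representation of the board state" usually cashes out as a probing result: train a small classifier to read the board off the model's hidden activations. A generic sketch of that idea (not their exact setup, and `HIDDEN_DIM` is a made-up width):

```python
import torch
import torch.nn as nn

# Generic board-state probe, not the paper's actual setup. Idea: take
# the transformer's hidden state after each move and train a small
# classifier to predict the occupancy (empty/black/white) of every
# square. If a simple probe decodes the board well, the board state is
# recoverable from the activations.

N_SQUARES = 64     # 8x8 Othello board
N_STATES = 3       # empty / black / white
HIDDEN_DIM = 512   # hypothetical model width

probe = nn.Linear(HIDDEN_DIM, N_SQUARES * N_STATES)
optimizer = torch.optim.Adam(probe.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def probe_step(hidden_states, board_labels):
    """hidden_states: (batch, HIDDEN_DIM) activations at some layer.
    board_labels: (batch, N_SQUARES) ints in {0, 1, 2}."""
    logits = probe(hidden_states).view(-1, N_SQUARES, N_STATES)
    loss = loss_fn(logits.permute(0, 2, 1), board_labels)  # (batch, classes, squares)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

A linear probe is the simplest version; swapping in a small MLP tests whether the board is only nonlinearly decodable from the activations.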
The "stochastic parrot" term amounts to: "Hey parrot, I know you picked up on grammar and can keep the conversation on topic, but I also know what you say means nothing because there's really not a lot going on in that little tiny head of yours" :floppy-parrot:
Actually, the more I think about the experiment and their conclusions, the worse it gets. They synthesized the skewed dataset by sampling from a distribution they themselves assumed, for both the synthetic training and the synthetic test set, so in a way they've deliberately engineered the result.
hmmm, it still feels like a weaker result than the authors want us to read it as. I'd be much more impressed if they had trained on the original training set and shown that the model maintains its original test-set performance when evaluated on the skewed test set, but I bet they didn't find that result.
Totally agree. At least with biological parrots, they learn in the physical world and maybe have some chance of associating the sounds with some semantically relevant physical processes.
Transformers trained after 2022 can't cook, all they know is match queries to values, maximize likelihoods by following error gradients, eat hot chip and lie