This is going to wreck society even more.

Please, for the love of Marx, do not take ChatGPT at its word on anything. It has no intelligence. No ability to sort truth from fiction. It is just a very advanced chat bot that knows how to string words together into sentences and paragraphs.

DO NOT BELIEVE IT. IT DOES NOT CITE SOURCES.

I feel like my HS English/Science teacher begging kids to not use Wikipedia, right now.

But even Wikipedia is better than ChatGPT because. Wikipedia. Cites. Sources.

  • dat_math [they/them]
    ·
    2 years ago

A potential explanation for these results may be that Othello-GPT is simply memorizing all possible transcripts. To test for this possibility, we created a skewed dataset of 20 million games to replace the training set of synthetic dataset[…] Othello-GPT trained on the skewed dataset still yields an error rate of 0.02%. Since Othello-GPT has seen none of these test sequences before, pure sequence memorization cannot explain its performance.

I'm going to maybe dunk on the authors of that ICLR paper (even giving them the benefit of the doubt, they should really know better if they got into ICLR). You can't conclude a lack of memorization from the observation that the model in question maintains training accuracy on a holdout set. I really hope they meant to say that they tested on the skewed dataset and saw that the model maintained performance without having seen any of the skewed data in training. However, if they simply repeated the training step on the skewed data and saw the same performance, all we know is that the model might have memorized the new training set.
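    To make the worry concrete, here's a toy sketch (my own illustration, not from the paper): a pure lookup-table "memorizer" is perfect on any prefix it saw in training but has literally nothing to say about sequences outside its training set. So the control only rules out memorization if the test sequences genuinely never appeared in training:

```python
# Toy illustration: a pure sequence memorizer. It stores every
# (prefix -> next move) pair it saw in training and can only
# replay exact matches.
class Memorizer:
    def __init__(self, train_games):
        self.table = {}
        for game in train_games:
            for i in range(1, len(game)):
                self.table[tuple(game[:i])] = game[i]

    def predict(self, prefix):
        # Returns None on any prefix never seen in training.
        return self.table.get(tuple(prefix))

train = [["D6", "C4", "E3"], ["F4", "E3", "C4"]]
model = Memorizer(train)

# Perfect on training prefixes...
assert model.predict(["D6", "C4"]) == "E3"
# ...but clueless on a genuinely unseen opening, e.g. games starting C5.
assert model.predict(["C5", "D6"]) is None
```

    If the skewed test set overlaps the skewed training set, even this thing would look fine.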

    I also agree with your conclusions about the scant interpretability results not necessarily refuting the mere stochastic parrot hypotheses.

    • BabaIsPissed [he/him]
      ·
      2 years ago

      really hope they meant to say that they tested on the skewed dataset and saw that the model maintained performance (without seeing any of the skewed data in training).

      Yeah, that's it. I should have provided the full quote, but thought it would make no sense without context, so I abbreviated it. They generate the synthetic training and test data separately, and for the training dataset those games could not start with C5.

      A potential explanation for these results may be that Othello-GPT is simply memorizing all possible transcripts. To test for this possibility, we created a skewed dataset of 20 million games to replace the training set of synthetic dataset. At the beginning of every game, there are four possible opening moves: C5, D6, E3 and F4. This means the lowest layer of the game tree (first move) has four nodes (the four possible opening moves). For our skewed dataset, we truncate one of these nodes (C5), which is equivalent to removing a quarter of the whole game tree. Othello-GPT trained on the skewed dataset still yields an error rate of 0.02%. Since Othello-GPT has seen none of these test sequences before, pure sequence memorization cannot explain its performance.

      I don't know much about Othello, so I can't really tell if this is a good way of doing it or not. In chess it wouldn't make much sense, but in this game's case the initial move is always relevant to the task of "making a legal move", I think(?). It does seem to make sense for what they want to prove:
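      If I'm reading the construction right, the split amounts to something like this (toy sketch, my own illustration; `sample_game` is a dummy stand-in with no real Othello rules):

```python
import random

# The four legal opening moves, per the paper's quote.
OPENINGS = ["C5", "D6", "E3", "F4"]

def sample_game(rng, length=10):
    # Dummy stand-in for a real Othello game generator: an opening
    # move plus random (not necessarily legal) follow-up squares.
    moves = [rng.choice(OPENINGS)]
    moves += [f"{rng.choice('ABCDEFGH')}{rng.randrange(1, 9)}"
              for _ in range(length - 1)]
    return moves

def skewed_split(n_games, held_out="C5", seed=0):
    # Training set: truncate the held-out branch of the game tree
    # entirely; test set: only games opening with the held-out move,
    # so no test sequence can appear verbatim in training.
    rng = random.Random(seed)
    games = [sample_game(rng) for _ in range(n_games)]
    train = [g for g in games if g[0] != held_out]
    test = [g for g in games if g[0] == held_out]
    return train, test

train, test = skewed_split(1000)
assert all(g[0] != "C5" for g in train)
assert all(g[0] == "C5" for g in test)
```

      Since the first move is in every sequence, dropping the C5 branch guarantees the test sequences are novel at the sequence level, even if some later board states overlap.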

      Note that even the truncated game tree may include some board states in the test dataset, since different move sequences can lead to the same board state. However, our goal is to prevent memorization of input data; the network only sees moves, and never sees board state directly.

      Anyway, I don't think it's that weird that the model has an internal representation of the current board state, as that is directly useful for the autoregressive task. Same thing for GPT picking up on syntax, semantic content etc. Still a neat thing to research, but this kind of behavior falls within the stochastic parrot purview in the original paper as I understand it.

      The term amounts to: "Hey parrot, I know you picked up on grammar and can keep the conversation on topic, but I also know what you say means nothing because there's really not a lot going on in that little tiny head of yours" :floppy-parrot:

      • dat_math [they/them]
        ·
        2 years ago

        Actually, the more I think about the experiment and their conclusions, the worse it gets. They synthesized the skewed dataset by sampling from the same assumed distribution that generated both the synthetic training and synthetic test sets, so in a way, they've deliberately engineered the result.

      • dat_math [they/them]
        ·
        2 years ago

        They generate synthetic training and test data separately, and for the training dataset those games could not start with C5.

        Hmmm, it still feels like a weaker result than the authors want us to read it as. I'd be much more impressed if they trained on the original training set and observed that the model maintained its original test-set performance when tested on the skewed test set, but I bet they didn't find that result.

        Anyway, I don’t think it’s that weird that the model has an internal representation of the current board state, as that is directly useful for the autoregressive task. Same thing for GPT picking up on syntax, semantic content etc. Still a neat thing to research, but this kind of behavior falls within the stochastic parrot purview in the original paper as I understand it.

        Totally agree. At least with biological parrots, they learn in the physical world and maybe have some chance of associating the sounds with some semantically relevant physical processes.

        Transformers trained after 2022 can't cook, all they know is match queries to values, maximize likelihoods by following error gradients, eat hot chip and lie