My summary: we need to democratize all powerful institutions like yesterday. Seriously y'all we're running out of time

  • dualmindblade [he/him] (hexagon) · 2 years ago

    It's hard to say how close we are to the theoretical limit for these low-prior models, which make virtually no assumptions about the data; the transformer was a big leap forward in efficiency, so further improvement isn't out of the question. But if you want a machine that just learns human languages and literally nothing else, obviously there's room for improvement. Like, GPT-3 was designed for language, but it can just as well learn to generate images or audio or whatever you throw at it, as long as you encode that data as a heap of tokens (toy sketch of what that looks like below). We already know that these models transfer what they've learned from one language to another: if you only have a few hundred pages of Mandarin, the models will do very poorly, but add a few terabytes of English to the training data and they will learn the Mandarin much, much better.

    As far as general-purpose learning is concerned, there are impressive examples of few-shot learning in a lot of the big language model research, and of course AlphaZero used no human training data at all to become superhuman at Go, or put another way, it generated its own training data by playing millions of games against itself (tiny toy version of that loop below too). So the idea that AI is merely parroting by detecting patterns in mountains of human-generated data is kind of dead, and I'd expect that to become much more obviously so rather soon.

    As for compute, I wouldn't expect you to be able to train anything with the capability of these large transformers on a laptop any time soon, if ever. But they can already run on your laptop (slowly), and the few-shot learning capabilities they've picked up will of course be carried along, so you might possibly be able to run software that can learn a new skill or even an entire language.
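
    To make the "heap of tokens" point concrete, here's a minimal Python sketch, roughly in the spirit of Image GPT (the function names and bin sizes are made up for illustration): an image is just quantized and flattened into a 1-D sequence of integers, and from the model's point of view that sequence is no different from a sentence of word pieces.

```python
import numpy as np

def image_to_tokens(image, levels=16):
    """Quantize a grayscale image (H, W) with values in 0..255 into `levels`
    bins, then flatten it row by row into a 1-D token sequence."""
    bins = np.clip((image.astype(np.float32) / 256.0) * levels, 0, levels - 1)
    return bins.astype(np.int64).ravel().tolist()

def tokens_to_image(tokens, height, width, levels=16):
    """Invert the mapping (lossily): tokens back to approximate pixel values."""
    arr = np.array(tokens, dtype=np.float32).reshape(height, width)
    return ((arr + 0.5) * (256.0 / levels)).astype(np.uint8)

# A 2x3 toy "image" becomes a 6-token sequence an autoregressive model could
# be trained on, exactly like a sentence of word pieces.
toy = np.array([[0, 128, 255], [64, 192, 32]], dtype=np.uint8)
print(image_to_tokens(toy))  # [0, 8, 15, 4, 12, 2]
```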
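
    And here's the self-play idea shrunk down to a toy game with a random policy (everything here is invented for illustration; the real AlphaZero picks moves with a neural network plus tree search): no human data goes in, and the games the program plays against itself become its training set.

```python
import random

def play_one_game():
    """Toy 'game': players alternately add 1-3 to a running total; whoever
    reaches 10 or more first wins. Stands in for Go purely for illustration."""
    total, player, history = 0, 0, []
    while total < 10:
        move = random.randint(1, 3)  # a real system would query its network here
        history.append((total, player, move))
        total += move
        player = 1 - player
    winner = 1 - player  # the player who just moved is the winner
    return [(state, move, 1 if p == winner else -1) for state, p, move in history]

# Self-play loop: every iteration generates fresh (state, move, outcome) examples
# to train on, no external dataset required.
dataset = []
for _ in range(1000):
    dataset.extend(play_one_game())
print(len(dataset), dataset[:3])
```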
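
    As for running something on a laptop today, here's roughly what few-shot prompting looks like with the Hugging Face transformers library on a model small enough for a CPU. I'm using "gpt2" purely as a stand-in because it's tiny, so don't expect good answers from it; the point is that the "training data" for the new skill is just a handful of examples in the prompt.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")  # fits in laptop RAM, runs on CPU (slowly)

# The few-shot "training set" lives entirely in the prompt.
prompt = (
    "English: cat -> French: chat\n"
    "English: dog -> French: chien\n"
    "English: bird -> French:"
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=5, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```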