https://transformer-circuits.pub/2024/scaling-monosemanticity/index.html


  • dualmindblade [he/him]
    hexagon
    ·
    7 months ago

    It really is. Another thing I find remarkable is that all the magic vectors (features) were produced automatically, without ever looking at the model's actual output: only activations from a middle layer of the network were used, together with a loss function that is purely geometric in nature. The training process has no idea what the various features it is discovering mean.
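    To make the "purely geometric" point concrete, here is a minimal sketch of a sparse-autoencoder-style loss of the kind the paper trains on residual-stream activations. All sizes, initializations, and the `l1_coeff` value are illustrative toy choices, not Anthropic's actual setup; the point is only that the loss is built from distances and magnitudes of activation vectors, with no reference to model outputs or token meanings.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    d_model, d_features, n = 16, 64, 8  # toy sizes, illustrative only

    # Toy "middle layer" activations standing in for the real model's.
    acts = rng.normal(size=(n, d_model))

    # Randomly initialized encoder/decoder weights (untrained, for illustration).
    W_enc = rng.normal(size=(d_model, d_features)) * 0.1
    b_enc = np.zeros(d_features)
    W_dec = rng.normal(size=(d_features, d_model)) * 0.1
    b_dec = np.zeros(d_model)

    def sae_loss(x, l1_coeff=1e-3):
        # Encoder: ReLU yields sparse, non-negative feature activations.
        f = np.maximum(x @ W_enc + b_enc, 0.0)
        # Decoder: reconstruct the original activation vector from features.
        x_hat = f @ W_dec + b_dec
        # The loss is purely geometric: reconstruction error plus an L1
        # sparsity penalty. Nothing here refers to the model's outputs
        # or to what any feature "means".
        recon = np.mean(np.sum((x - x_hat) ** 2, axis=-1))
        sparsity = np.mean(np.sum(np.abs(f), axis=-1))
        return recon + l1_coeff * sparsity

    loss = sae_loss(acts)
    print(float(loss))
    ```

    Minimizing this over real activations is what produces the feature directions; their interpretations are only assigned afterwards, by humans inspecting what makes each feature fire.
    
    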

    And the fact that this works seems to confirm, or at least come close to confirming, a non-trivial fact about how transformers do what they do. I always like to point out that we know more about the workings of the human brain than we do about the neural networks we have ourselves created. That's probably still true, but this makes me optimistic we'll at least clear that very low bar in the near future.