• autismdragon [he/him, they/them]
    ·
    1 year ago

    neural networks actually pretty much never completely "forget" anything they've been trained on,

    Autism moment, but this reminds me of how ChatGPT is, last I checked, utterly convinced that Greg Buis from Survivor Borneo died in either a motorcycle or hiking accident. The man is alive and well, and no similar incident has happened to anyone else on Survivor. I have no clue how it got this idea. I have to assume something like that happened to someone with a similar name that I just can't find anywhere?

    Two other Survivor Borneo contestants have died, but they both died of old age, not accidents. And ChatGPT doesn't say any other contestants from that season are dead.

    • drhead [he/him]
      ·
      1 year ago

      I had to try this, and it doesn't seem to think that anymore, so I guess it got fixed.

      To clarify, you can still overwrite old knowledge, so maybe one of their finetune runs had info that reinforced that he is still alive. It's just that if you go a long time without training a model on something specific, it won't forget it, though it might get "noisy" (I don't know exactly how this plays out in text models, I mainly do image models). Like if you train a model on a dataset like LAION, which is just a huge pile of random images from across the internet, and then train it for a while exclusively on something specific like anime pictures, the resulting model will still be able to make "photorealistic" content, or content of subjects that aren't in the more recent dataset, though the results might be somewhat degraded. (Toy sketch of the effect below.)
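
      To make the "degraded but not erased" point concrete, here's a made-up toy sketch in PyTorch (nothing like an actual diffusion fine-tune, and the names like `make_batch` are just mine): pretrain a tiny network on a broad input range, fine-tune it only on a narrow slice with no replay of the old data, and then check how well it still handles the broad range.

      ```python
      # Toy illustration of partial forgetting: broad "pretraining", then a
      # narrow "fine-tune" with no replay of the old data.
      import torch
      import torch.nn as nn

      torch.manual_seed(0)

      def make_batch(lo, hi, n=256):
          """Sample x uniformly from [lo, hi]; the target is sin(3x)."""
          x = torch.rand(n, 1) * (hi - lo) + lo
          return x, torch.sin(3 * x)

      model = nn.Sequential(
          nn.Linear(1, 64), nn.Tanh(),
          nn.Linear(64, 64), nn.Tanh(),
          nn.Linear(64, 1),
      )
      loss_fn = nn.MSELoss()

      # "Pretraining": the full input range, standing in for a broad dataset like LAION.
      opt = torch.optim.Adam(model.parameters(), lr=1e-3)
      for _ in range(3000):
          x, y = make_batch(-3.0, 3.0)
          opt.zero_grad()
          loss_fn(model(x), y).backward()
          opt.step()

      x_eval, y_eval = make_batch(-3.0, 3.0, n=2000)
      with torch.no_grad():
          print("broad-range error after pretraining:",
                loss_fn(model(x_eval), y_eval).item())

      # "Fine-tuning": only a narrow slice of inputs, standing in for training
      # exclusively on something specific like anime pictures, with no old data mixed in.
      opt = torch.optim.Adam(model.parameters(), lr=1e-4)
      for _ in range(3000):
          x, y = make_batch(0.0, 1.0)
          opt.zero_grad()
          loss_fn(model(x), y).backward()
          opt.step()

      with torch.no_grad():
          print("broad-range error after narrow fine-tune:",
                loss_fn(model(x_eval), y_eval).item())
      # The second number is usually worse than the first, but how much worse
      # depends on the fine-tune length and learning rate -- the old behaviour
      # tends to come out degraded ("noisy") rather than gone.
      ```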