• Rojo27 [he/him]
    ·
    1 year ago

    This is the cultural Marxism chuds warn everyone about all the time.

  • AlicePraxis
    ·
    edit-2
    4 months ago

    deleted by creator

  • socialnuju [she/her]
    ·
    1 year ago

    Is it just me or does Karl Marx look like the lovechild of Marx and Taika Waititi?

  • Othello
    ·
    edit-2
    25 days ago

    deleted by creator

    • Budwig_v_1337hoven [he/him]
      ·
      1 year ago

      a M̷̮͂Á̷̼O̶͉͂V̷̭̚E̴͕̓L̴͈̕ ̴͇̕S̸̫̾Ť̵̫U̸͍͆I̸̹̅D̶́ͅL̷͓̎Ś̶̤ ̵̗̂ production

  • JohnBrownsBussy2 [she/her, they/them]
    ·
    1 year ago

    This issue is interesting: it was noted that this particular Captain Marvel pose shows up many times in at least one key AI dataset. Since the variants aren't technically duplicates (different posters or promo images), deduplication doesn't filter them out, but because the central figure is identical in so many of these images, overfitting/memorization is pretty likely.

    We don't know much about DALL-E 3 architecture-wise (it has an LLM text encoder and it's almost certainly a latent diffusion model), but presumably it's a pretty big model, so that can also increase the likelihood of overfitting.
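The dedup point above can be sketched in code: exact byte comparison treats every poster variant as a distinct image, while even a crude perceptual hash groups them. This is a minimal sketch under stated assumptions; the tiny 4x4 grayscale grids stand in for real promo images, and `average_hash` here is a toy, not whatever pipeline the actual dataset used.

```python
# Sketch: why exact-match dedup misses near-duplicate posters.
# Assumption: 4x4 grayscale grids stand in for real promo images;
# average_hash is a minimal perceptual hash, purely illustrative.

def average_hash(img):
    """One bit per pixel: set if the pixel is brighter than the mean."""
    flat = [p for row in img for p in row]
    mean = sum(flat) / len(flat)
    return tuple(p > mean for p in flat)

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return sum(x != y for x, y in zip(a, b))

poster = [
    [200, 200, 10, 10],
    [200, 200, 10, 10],
    [10, 10, 10, 10],
    [10, 10, 10, 10],
]
# Same central figure, slightly brighter scan from a different promo image:
promo = [[p + 5 for p in row] for row in poster]

print(poster == promo)  # False: exact dedup sees two "different" images
print(hamming(average_hash(poster), average_hash(promo)))  # 0: same figure
```

So a training set can contain dozens of "unique" files that are, to the model, the same picture, which is exactly the setup where memorization of that picture gets likely.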

    • dualmindblade [he/him]
      hexagon
      ·
      1 year ago

      Interesting. Just a clarification: overfitting and memorization are not quite the same thing, to my understanding. Overfitting is when a model memorizes rather than generalizing, but very large models can and will do both. If you ask an image generator for "a reproduction of Starry Night by van Gogh hanging on the wall", or an LLM to complete "to be or not to be, that is _", you are referring to something very specific that you'd like reproduced exactly. If the model outputs what you wanted, you would call that memorization but not overfitting. Still, you may want to suppress memorization, and you certainly don't want overfitting.

      Side note: massively overparameterized models are better at both memorization and generalization, and are naturally resistant to overfitting as I define it. That last part would have surprised early ML researchers, since they had noticed the opposite trend, but the trend reverses when you go large enough. Also, such models will sometimes memorize on a single pass through the data, even if there's no duplication, which is quite remarkable.
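The memorization-vs-overfitting distinction can be sketched with a toy lookup "model" (an assumption standing in for a real network, not how any image generator works): it recalls training points exactly, which is memorization, yet still answers sensibly off the training set, so exact recall by itself isn't overfitting.

```python
# Toy illustration of memorization vs. overfitting.
# Assumption: a 1-nearest-neighbour lookup stands in for a big model;
# the training data samples y = x**2.

train = {0.0: 0.0, 1.0: 1.0, 2.0: 4.0, 3.0: 9.0}

def one_nn(x):
    """Return the label of the closest memorized training input."""
    nearest = min(train, key=lambda t: abs(t - x))
    return train[nearest]

# Exact recall of a training point: memorization, not overfitting by itself.
print(one_nn(2.0))  # 4.0

# A rough but sensible answer on an unseen input: it also generalizes.
print(abs(one_nn(1.9) - 1.9 ** 2) < 1.0)  # True (predicts 4.0 vs. true 3.61)
```

Overfitting would be the case where the second property fails: training points recalled perfectly while unseen inputs get wild answers.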

      • JohnBrownsBussy2 [she/her, they/them]
        ·
        1 year ago

        That's a fair interpretation, although I still consider it a failure state. These models shouldn't be used as storage/retrieval tools.