:yea:

The context of the reddit thread was discussing how to best make money from AI generators btw

  • drhead [he/him]
    ·
    2 years ago

    Actually, this change wasn't even intentional. It was an indirect result of switching to a different CLIP model (the part that converts words into something the model understands) that doesn't have as much in it but is open source.

    The change they made that was intentional, and that people complained about, was filtering the dataset for NSFW imagery. Which is a good thing (even if people are going to finetune it back in anyway). But they did it extremely aggressively, flagging anything with a LAION-NSFW score >0.1. You're supposed to use that filter at >0.99, which covers nearly all NSFW content; below that it's almost all false positives. The 0.1 threshold filtered out 6% of the dataset, and from my experience the model just doesn't generate pictures of people as well as I remember it doing (all of my Stalin photos look wrong on 2.0), so the filtering may be affecting that. Last I heard they acknowledged that >0.1 was a mistake and are using 0.99 for future models.
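
    The threshold difference the comment describes can be sketched in a few lines. This is a hypothetical illustration, not LAION's actual pipeline: the field name `punsafe` and the example rows are made up, but the logic shows why a >0.1 cutoff discards far more (mostly safe) images than a >0.99 cutoff.

    ```python
    # Hypothetical sketch of threshold-based NSFW filtering, as described above.
    # "punsafe" and the example rows are illustrative, not real LAION data.

    def filter_nsfw(rows, threshold):
        """Keep rows whose NSFW score does not exceed the threshold
        (i.e. drop anything flagged with score > threshold)."""
        return [r for r in rows if r["punsafe"] <= threshold]

    rows = [
        {"url": "a", "punsafe": 0.02},   # clearly safe
        {"url": "b", "punsafe": 0.15},   # likely a false positive at >0.1
        {"url": "c", "punsafe": 0.995},  # almost certainly NSFW
    ]

    # Aggressive cutoff (>0.1) drops b and c; conservative (>0.99) drops only c.
    print(len(filter_nsfw(rows, 0.1)))   # 1
    print(len(filter_nsfw(rows, 0.99)))  # 2
    ```

    With a borderline score like 0.15, the aggressive cutoff throws the image away while the conservative one keeps it — multiplied across billions of images, that's where the 6% loss comes from.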