  • drhead [he/him]
    ·
    9 months ago

    I wouldn't be confident about that. People training a LoRA are usually training the text encoder as well as the U-Net that does the actual diffusion. If you feed the model images that visually look like cats, are labeled "a picture of a cat", but that the text encoder has been perturbed into treating as "a picture of a dog" (the part that Nightshade does), you would in theory be reinforcing what pictures of cats look like to the text encoder, and it would end up moving the vectors for "picture of a cat" and "picture of a dog" until they are well clear of each other. Nightshade essentially relies on being able to line the U-Net up with the wrong spots in the text encoder, which shouldn't happen if the text encoder is allowed to move as well.
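
    To make the intuition concrete, here's a toy sketch (not the real Stable Diffusion training loop, and none of these module names are diffusers APIs): a stand-in "text encoder" whose "cat" and "dog" embeddings start out nearly identical, as a poisoned prior would leave them, plus a tiny "U-Net" conditioning head. Putting both modules' parameters in one optimizer, the way LoRA training with the text encoder unfrozen does, lets the caption embeddings drift apart as training matches each caption to what its images actually look like:

    ```python
    # Toy illustration of why letting the text encoder update alongside
    # the U-Net can undo poisoned caption alignment. All shapes and
    # module names here are made up for the sketch.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Stand-ins: a 2-token "text encoder" and a tiny "U-Net" conditioning head.
    text_encoder = nn.Embedding(2, 4)   # token 0 = "cat", token 1 = "dog"
    unet_head = nn.Linear(4, 4)         # text embedding -> predicted image feature

    # Simulate a poisoned prior: "cat" and "dog" start almost on top of each other.
    with torch.no_grad():
        text_encoder.weight[1] = text_encoder.weight[0] + 0.01

    # One optimizer over BOTH modules, as when the text encoder is trained too.
    opt = torch.optim.Adam(
        list(text_encoder.parameters()) + list(unet_head.parameters()), lr=0.1
    )

    cat_feat = torch.tensor([1.0, 0.0, 0.0, 0.0])  # what cat images "look like"
    dog_feat = torch.tensor([0.0, 1.0, 0.0, 0.0])  # what dog images "look like"
    targets = torch.stack([cat_feat, dog_feat])

    for _ in range(200):
        opt.zero_grad()
        pred = unet_head(text_encoder(torch.tensor([0, 1])))
        loss = ((pred - targets) ** 2).mean()
        loss.backward()
        opt.step()

    # Because the two captions are paired with different-looking images, their
    # embeddings receive different gradients and separate during training.
    gap = (text_encoder.weight[0] - text_encoder.weight[1]).norm().item()
    print(f"embedding distance after training: {gap:.3f}")
    ```

    The point of the sketch: with the text encoder frozen, the optimizer could only try to make one linear head produce different outputs from near-identical inputs; once the embeddings themselves are trainable, the cheapest way to reduce the loss is to move "cat" and "dog" apart, which is exactly the failure mode for the poisoning.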