"Glaze protects our work by ruining AI generative output. Not only should we protect our work with Glaze, we should avoid mentioning that it's protected so they keep feeding it into the machine, effectively poisoning the AI :)"
Chris Shehan (@ChrisShehanArt), March 17, 2023
Check it out if you do art: https://glaze.cs.uchicago.edu/
But AI can be trained to look for an obvious watermark. A glazed image seems harder to identify, or at least it's harder for a person to tell the computer which artwork hasn't been glazed.
A deep learning model can detect a watermark, but is it worth the effort to remove the watermark, or easier to just leave the image out of the training set? It's not a perfect solution, and someone really determined could get around it, but most of the time it will be more trouble than it's worth.
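Not anyone's actual pipeline, but a rough sketch of what that "leave it out of the training set" step could look like, assuming you already have some watermark classifier (the detect_watermark stub and the 0.5 threshold are made up for illustration):

```python
from pathlib import Path

def detect_watermark(image_path: Path) -> float:
    """Hypothetical detector: in practice this would be a small trained
    classifier returning a watermark-confidence score in [0, 1]."""
    return 0.0  # placeholder so the sketch runs; swap in a real model

def filter_training_set(image_dir: str, threshold: float = 0.5) -> list[Path]:
    """Keep only the images the detector thinks are watermark-free.

    Dropping flagged images is usually cheaper than trying to scrub the
    watermark out, which is the trade-off described above."""
    kept = []
    for path in Path(image_dir).glob("*.png"):
        if detect_watermark(path) < threshold:  # low score = probably clean
            kept.append(path)
    return kept
```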
They can be, but that apparently didn't stop Stable Diffusion from fucking it up.