"Glaze protects our work by ruining ai generative output. Not only should we protect our work with Glaze, we should avoid mentioning that it's protected so they keep feeding it into the machine, effectively poisoning the AI :)" — 🏳️🌈Chris Shehan 👓🔪 (@ChrisShehanArt), March 17, 2023
Check it out if you do art: https://glaze.cs.uchicago.edu/
A deep learning model can be trained to look for a watermark, but then the question becomes whether it's worth the effort to remove the mark or simply leave the image out of the training set. It's not a perfect defense, and someone truly determined could get around it, but much of the time it will be more trouble than it's worth.
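As a rough sketch of what that filtering step might look like from the scraper's side: run a binary watermark classifier over each scraped image and just drop anything it flags. Everything here is hypothetical, not a real published pipeline: the `watermark_detector.pt` weights, the `scraped_images` folder, and the choice of a ResNet-18 as the detector are all stand-ins.

```python
# Sketch: filtering watermarked images out of a training set, assuming you
# already have a binary watermark/no-watermark classifier (hypothetical
# weights -- nothing like this ships with torchvision).
from pathlib import Path

import torch
from PIL import Image
from torchvision import models, transforms

# Hypothetical: a ResNet-18 fine-tuned with two output classes,
# 0 = clean, 1 = watermarked. The weights file is an assumption.
detector = models.resnet18(num_classes=2)
detector.load_state_dict(torch.load("watermark_detector.pt"))
detector.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def looks_watermarked(path: Path, threshold: float = 0.5) -> bool:
    """Return True if the detector thinks the image carries a watermark."""
    img = Image.open(path).convert("RGB")
    batch = preprocess(img).unsqueeze(0)           # shape: (1, 3, 224, 224)
    with torch.no_grad():
        probs = torch.softmax(detector(batch), dim=1)
    return probs[0, 1].item() > threshold          # class 1 = "watermarked"

# Skipping a flagged image is cheap; removing the mark is not.
clean_set = [p for p in Path("scraped_images").glob("*.jpg")
             if not looks_watermarked(p)]
print(f"kept {len(clean_set)} images")
```

The point of the sketch is the economics: dropping a flagged image costs almost nothing, while trying to scrub the mark out of it is the part that's usually more trouble than it's worth.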