Glaze protects our work by ruining ai generative output. Not only should we protect our work with Glaze, we should avoid mentioning that it’s protected so they keep feeding it into the machine, effectively poisoning the AI :). pic.twitter.com/Bd6xvf3JWS
— 🏳️🌈Chris Shehan 👓🔪 (@ChrisShehanArt) March 17, 2023
Check it out if you do art: https://glaze.cs.uchicago.edu/
They mention that the image cloak is supposed to be resilient to edits of the image, but I'd be really surprised if it could survive having a photo of a screen taken like a boomer who doesn't know how to screenshot.
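The intuition behind my skepticism is easy to demo. Here's a toy numpy sketch (mine, nothing to do with Glaze's actual math): add a small high-frequency perturbation to a fake "image," then run it through a crude low-pass transformation like downscaling, standing in for the blur you'd get from photographing a screen, and see how much of the perturbation survives.

```python
# Toy experiment: does a small high-frequency "cloak" survive a low-pass
# transformation? This is a stand-in for re-photographing a screen, NOT
# Glaze's actual cloaking algorithm.
import numpy as np

rng = np.random.default_rng(0)

base = rng.uniform(0, 255, size=(64, 64))         # stand-in "artwork"
perturbation = rng.normal(0, 2.0, size=(64, 64))  # small, high-frequency cloak
cloaked = base + perturbation

def downscale_upscale(img, factor=4):
    """Block-average down by `factor`, then repeat back up: a crude low-pass."""
    h, w = img.shape
    small = img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
    return np.repeat(np.repeat(small, factor, axis=0), factor, axis=1)

# How much of the cloak is left after the transformation?
surviving = downscale_upscale(cloaked) - downscale_upscale(base)
print(f"original cloak energy:  {np.abs(perturbation).mean():.3f}")
print(f"surviving cloak energy: {np.abs(surviving).mean():.3f}")
```

On my toy setup the surviving perturbation is a small fraction of the original, which is why I'd expect anything pixel-level to get badly mangled by a camera. Glaze's perturbations are presumably structured rather than random noise, so take this as intuition only.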
I peeked at the stable diffusion subreddit to see reactions, and people don't really seem to care. Apparently they also took some code from an AI project in violation of the GPL, so, uhhhh. There are also lots of people saying it doesn't really work and/or destroys image quality, but I'm not invested enough to verify any of that.
It's pretty easy to accidentally violate open source licenses.
Oh I'm sure. Given the mission of Glaze, though, it's an especially bad look to use code without proper credit and disclosure. You would think they'd be extra invested in making sure they're not swiping anyone else's work.
I also find it funny that they say they "reused" code in that tweet while their white paper refers to the use of art in training AI models as "stealing" and "plagiarizing".