You just have to fuzz your eyes a little bit. Or send me a gpu.
deleted by creator
Not sure yet, gotta see what Colab can handle. Each panel is 32x32px, but that was basically just so I could finish before going to sleep last night. I'm going to do some optimization today and see if I can get the resolution up, but honestly I kind of like the low-res look.
deleted by creator
The original panels are 175x200px, and downscaled to 32x32 the model took probably 2 hours to train. I wrote the project from scratch, so it's not impossible that if we fed it into StyleGAN or something we might get something approaching readable text. But will Garfield be as dummy thicc as he is in that last panel?
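For a rough idea of what that downscaling step looks like, here's a minimal nearest-neighbor resize in NumPy. This is an illustrative sketch, not the project's actual preprocessing code (which could just as well use Pillow's resize):

```python
import numpy as np

def downscale(panel, size=32):
    """Nearest-neighbor downscale of an (H, W) grayscale panel to (size, size)."""
    h, w = panel.shape
    # Pick evenly spaced source rows/columns for each output pixel.
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return panel[rows][:, cols]

# A 200x175 panel (height x width) becomes 32x32.
panel = np.random.rand(200, 175)
small = downscale(panel)
print(small.shape)  # (32, 32)
```

Color panels would just carry an extra channel axis, and a proper pipeline would probably use an antialiasing resize instead of nearest-neighbor.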
deleted by creator
https://imgur.com/a/fCDd9LO there is a good Gon in the third panel of this one
deleted by creator
Do you have any of this up on GitHub? One of my friends really loves Garfield (the cat, not so much the comic lol), and I'd love to send her some ML-generated Garfields.
Or even if you could just point me in the right direction on replicating the project myself? Like, have you found any good introductions to this I could reference?
Don't want to link my GitHub because it's under my real name. I can send you the code if you like, or point you in the right direction. How much coding experience do you have?
Understandable. If you could just point me in the right direction, I guess? I have a decent amount of coding experience, but I'm a bit rusty. I do some stuff with Python and Bash for work, and used to work with C, but haven't in years.
Also, I have a single RTX 2070 Super. Is that powerful enough to even run stuff like this? I worry I might not have the right hardware.
https://machinelearningmastery.com/what-are-generative-adversarial-networks-gans/ https://machinelearningmastery.com/how-to-develop-a-generative-adversarial-network-for-a-cifar-10-small-object-photographs-from-scratch/
I would start with these two links. If you understand Python syntax then you're most of the way there. The first link basically just describes the concepts behind adversarial networks, and the second actually has code that should run fine on a 2070, if you want to use that as a springboard.

If you run into hardware limitations, Google Colab is basically a free Python notebook that does all the GPU processing on their machines, with some limitations (how long your sessions can last, etc.) that can be removed for $10 a month. The code on that page should run fine on the free tier, though.

Otherwise, just search YouTube for GANs and watch talks and stuff. The Computerphile video is a good place to start for someone who already knows a bit of math and programming: https://www.youtube.com/watch?v=Sw9r8CL98N0
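As a taste of the adversarial setup those links cover: training pits two losses against each other. A minimal NumPy sketch of the standard GAN losses (this is illustrative, not code from either tutorial):

```python
import numpy as np

def d_loss(d_real, d_fake):
    # Discriminator: push scores on real images toward 1
    # and scores on generated images toward 0.
    return -np.mean(np.log(d_real) + np.log(1.0 - d_fake))

def g_loss(d_fake):
    # Non-saturating generator loss: push the discriminator's
    # scores on generated images toward 1 (i.e., fool it).
    return -np.mean(np.log(d_fake))

# Early on, the discriminator spots fakes easily (low scores)...
early = g_loss(np.array([0.05, 0.10]))
# ...and as the generator improves, its loss drops.
late = g_loss(np.array([0.60, 0.70]))
print(early > late)  # True
```

The two networks are trained in alternation: one step minimizing `d_loss` over the discriminator's weights, then one step minimizing `g_loss` over the generator's weights.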