AI couldn’t do this a year ago; it required hardware that was supercomputer levels of expensive to even create something like this. IMO development was actually held back by crypto and covid-19. Now AI is the #1 focus of the techbros and it isn’t going to slow down. This shit is going to put so many journalists, artists, and even programmers out of work. I don’t know how else to explain this: HUMANITY LITERALLY CREATED AI THIS YEAR. WE MADE FUCKING SKYNET!

You want to talk about technological progress? This shit mogs fusion, it mogs the vaccines, it mogs whatever dumb space colonization shit we did. We made fucking AI! I bet we will have sentient AI in our lifetime. And what are we going to do with this stuff? Porn, lots of porn. Deepfakes of celebrities and politicians sacrificing children to Moloch, dead actors starring in new movies, a new album by Tupac, fake war footage, fake everything.

Have you ever heard about how a monkey could write Shakespeare given enough time? We have fucking done that; we pressed random buttons enough times that we ended up with something legible. Now we turn memes into real people.

We need a Butlerian Jihad or some shit.

  • drhead [he/him]
    ·
    2 years ago

    Nah, we didn't create this in a year. People have worked on this since the 1960s. Results got significantly better around 2014 when people switched to using GANs, and the widespread release of latent diffusion models in the past year (which people have also been working on since 2015 or so) was another huge change. These models also aren't close to being AI in the sci-fi sense; "AI" is basically just a marketing term here, and it's more accurate (or at least less misleading) to call it machine learning or deep learning.

    It's likely that people are overestimating how much this will put people out of work. These models can't really create anything novel without a lot of manual guidance, which is a fundamental limitation for now, and it's not certain when (or whether) it will be overcome. Mostly they combine existing concepts well, and as you get more specific they make more and more mistakes: text gets incoherent and inaccurate, images with too much in the prompt come out badly distorted if they follow the whole prompt at all, and code that's more complicated than what you'd find in documentation samples is unlikely to run.

    There is quite a lot it can do to let someone already in one of these positions do more. AI upscalers are amazing; people have been using them for years now to restore old videos with great results (they were also trained on unlicensed works, and yet nobody complains about that). You can also get a lot more out of AI image generators if you actually have some amount of art skill -- if you can compose a scene or draw hands better than the AI, you can sketch those things out and let it fill in the details, making up for its deficiencies.

    As far as text models go, I don't think they're at the same point of progression as image models, and I suspect we may need a breakthrough on the level of latent diffusion superseding GANs. GPT-3 is pretty great compared to other text models, but its hardware requirements are astronomical. Half the reason AI image generation is so widespread right now is that you can run or train it on widely available consumer hardware, while the hardware needed to run GPT-3 (if the model weights were even public) costs as much as a luxury car. You probably won't see anything too flashy until (and unless) that is solved.