Stumbled onto this shitty game trailer on YouTube and instantly clocked it as AI art

Can't really put into words why exactly they're so instantly recognisable

Something about the weird sharp lines and doll-like faces

  • invalidusernamelol [he/him]
    ·
    3 days ago

    Because each diffusion image is built on uniform noise, they come out almost perfectly balanced.

    Meaning the average of all the colour values in the image comes out near-perfect grey.

    This also applies to shape: shapes and their distribution tend to be very balanced, symmetrical, and uniform.
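
    One way to sanity-check the grey-average claim (my own sketch, not from any source in this thread): the Gaussian noise a diffusion model starts from has a per-channel mean of zero, which lands at mid-grey once values are rescaled to pixel range:

    ```python
    import numpy as np

    # Illustration only: the starting noise of a diffusion model is standard
    # Gaussian, so its mean is ~0 -- which maps to mid-grey when you rescale
    # values from [-1, 1] onto [0, 255].
    rng = np.random.default_rng(0)
    noise = rng.standard_normal((512, 512, 3))  # H x W x RGB

    print(noise.mean(axis=(0, 1)))  # each channel's mean is close to 0

    grey_level = (noise.mean() + 1.0) * 127.5  # map [-1, 1] -> [0, 255]
    print(grey_level)  # close to 127.5, i.e. mid-grey
    ```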

      • drhead [he/him]
        ·
        edit-2
        3 days ago

        Well, it was true for the first big models. The most recent generation of models doesn't have this problem.

        Earlier models like Stable Diffusion 1.5 worked on noise (ϵ) prediction. All diffusion models are trained to predict where the noise is in an image, given images with varying levels of noise in them; you can then sample from the model using a solver to get a coherent image in a smaller number of steps. With ϵ as the prediction target, you're obviously not going to learn anything by trying to predict which part of pure noise is noise, because the entire image is noise. During sampling, the model will (correctly) predict on the first step that the pure-noise input is pure noise, and removing that noise gives you a black image.

        To prevent this, people trained models with a non-zero SNR at the highest noise timestep. That way, they are telling the model that there is something actually meaningful in the random noise we're giving it. But since the noise we give it is always uniform, this ends up biasing the model towards making images with average brightness. The parts of the initial noise that it retains (remember, we're no longer asking it to remove all of the noise; we're lying to it and telling it some of it is actually signal) usually also end up causing unusual artifacting. An easy test for these issues is to prompt "a solid black background" -- early models will usually output neutral grey squares or greyscale geometric patterns.
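
        The forward (noising) process being described can be sketched in a few lines (my own illustration using the standard DDPM parameterisation, not code from any real model): x_t = sqrt(abar_t)·x0 + sqrt(1 − abar_t)·ϵ, with SNR(t) = abar_t / (1 − abar_t). If abar at the final timestep isn't exactly zero, the "pure noise" the model trains on still carries a trace of the real image:

        ```python
        import numpy as np

        # Sketch of the DDPM forward (noising) process. Schedule values here
        # are for illustration; real models use a fixed beta schedule.
        def noised_sample(x0, eps, alpha_bar):
            # x_t = sqrt(abar)*x0 + sqrt(1-abar)*eps
            return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps

        def snr(alpha_bar):
            return alpha_bar / (1.0 - alpha_bar)

        rng = np.random.default_rng(0)
        x0 = rng.uniform(-1, 1, size=(64, 64, 3))  # stand-in "clean image"
        eps = rng.standard_normal(x0.shape)

        # Non-zero terminal SNR (abar = 0.0047, roughly SD 1.5's last step):
        # the "fully noised" training input still holds a sliver of signal.
        x_T_leaky = noised_sample(x0, eps, 0.0047)
        print(snr(0.0047))  # small but non-zero

        # Zero terminal SNR: x_T is exactly eps -- pure noise, no signal.
        x_T_clean = noised_sample(x0, eps, 0.0)
        print(np.allclose(x_T_clean, eps))  # True
        ```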

        One of the early hacks for the average-brightness issue was training with a random channelwise offset added to the noise, and models like Stable Diffusion XL used this method. It allowed models to make very dark and very light images, but also often made images come out too dark or too light; it's possible you saw some of these about a year into the AI craze, when this was the latest fad.

        The proper solution came with ByteDance's paper ( https://arxiv.org/pdf/2305.08891 ), which showed a method for training with an SNR of zero at the highest noise timestep. The main change is that instead of predicting noise (ϵ), the model predicts velocity (v), a weighted combination of predicting the noise and predicting the original sample x0. With that, at the highest noise timestep the sampler will predict the dataset mean (which manifests as an incredibly blurry mess in the vague shape of whatever you're trying to make an image of).

        People didn't actually implement this as-is for any new foundation model; most of what I saw of it was independent researchers running finetune projects, apparently because it was taking too much trial and error for larger companies to make it work well. Actually, that isn't entirely true: people working on video models adopted it more quickly, because the artifacts from residual noise get very bad once you add a time dimension. A couple of groups also made SDXL clones using this method.
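
        The v-prediction target from that paper is simple to write down (my own sketch, using the usual parameterisation v = sqrt(abar)·ϵ − sqrt(1 − abar)·x0). The point is that at zero terminal SNR the target degenerates to −x0: the model is asked for the image itself, which from pure noise collapses to the dataset mean:

        ```python
        import numpy as np

        # Sketch of the v-prediction target:
        #   v = sqrt(abar)*eps - sqrt(1-abar)*x0
        # a weighted mix of "predict the noise" and "predict the sample".
        def v_target(x0, eps, alpha_bar):
            return np.sqrt(alpha_bar) * eps - np.sqrt(1.0 - alpha_bar) * x0

        rng = np.random.default_rng(0)
        x0 = rng.uniform(-1, 1, size=(8, 8, 3))
        eps = rng.standard_normal(x0.shape)

        # At no noise (abar = 1) the target is exactly the noise...
        print(np.allclose(v_target(x0, eps, 1.0), eps))   # True
        # ...and at zero terminal SNR (abar = 0) it is -x0: the model must
        # predict the sample itself -- the "blurry dataset mean" above.
        print(np.allclose(v_target(x0, eps, 0.0), -x0))   # True
        ```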

        The latest fad is rectified flow, which is a very different process from diffusion. The diffusion process is described by a stochastic differential equation (SDE), which adds some randomness and essentially follows a meandering path from input noise to the resulting image. The rectified flow process is an ordinary differential equation (ODE), which (ideally) follows a straight-line path from the input noise to the image, and can actually be run either forwards or backwards (since it's an ODE). Flux (the model used for Twitter's AI stuff) and Stable Diffusion 3/3.5 both use rectified flow. They don't have the average-brightness issue at all, because it makes zero mathematical or practical sense for the end point to be anything but pure noise.

        I've also heard people say that rectified flow doesn't typically show the same uniform level of detail that a few people in this thread have mentioned. I haven't really looked into that myself, but I would be cautious about using uniform detail as a litmus test for that reason.
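
        The straight-line idea is easy to write down (illustration only, heavily simplified from what Flux/SD3 actually train): the rectified-flow path is a linear interpolation x_t = (1 − t)·x0 + t·ϵ, the velocity target ϵ − x0 is constant along it, and the endpoint t = 1 is pure noise by construction:

        ```python
        import numpy as np

        # Sketch of the rectified-flow formulation: a straight path from data
        # to noise with a constant velocity target. Simplified illustration.
        def interpolate(x0, noise, t):
            # x_t = (1 - t) * x0 + t * noise -- a straight line, not a wander
            return (1.0 - t) * x0 + t * noise

        def velocity_target(x0, noise):
            # dx_t/dt = noise - x0, the same at every t on the straight path
            return noise - x0

        rng = np.random.default_rng(0)
        x0 = rng.uniform(-1, 1, size=(8, 8, 3))
        noise = rng.standard_normal(x0.shape)

        # At t = 1 the path sits exactly at pure noise: no leaked signal,
        # hence no average-brightness bias to begin with.
        print(np.allclose(interpolate(x0, noise, 1.0), noise))  # True

        # Running the ODE backwards: one Euler step from t=1 to t=0 with the
        # true velocity lands exactly on x0 (the path is genuinely straight).
        recovered = interpolate(x0, noise, 1.0) - velocity_target(x0, noise)
        print(np.allclose(recovered, x0))  # True
        ```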

        • invalidusernamelol [he/him]
          ·
          3 days ago

          At this point, uniform detail seems to only be an issue with the lower-quality local models. A fun thing I've also noticed outside that: if you ask it for art of a band and it puts text on the kick drum, it's almost always the Beatles font.

        • invalidusernamelol [he/him]
          ·
          3 days ago

          Thanks for confirming that I'm not totally insane lol. I know a lot of the lighter models still do this, and they're very obvious.

      • invalidusernamelol [he/him]
        ·
        3 days ago

        My bad, I'm probably working on outdated information. I think it was a Computerphile video where they showed how diffusion images tend towards a uniform intensity on all colour channels due to starting out as noise.

        That all goes out the window, of course, if the input is not pure white noise.