• drhead [he/him]
    1 year ago

    It's already that way, from what I can tell.

    AI classifier models are garbage. Most of them are only really good at identifying images processed through a specific model's autoencoder; as long as nobody deliberately masks that signature (which is possible), they have a fairly high recall rate on those. But they have MASSIVE false positive rates, with a variety of known and unknown triggers. In particular, I've seen a lot of flagged images which on closer inspection looked plausibly real once you consider how fucking awful the postprocessing on some cameras can be.
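To put some back-of-envelope numbers on why a high false positive rate sinks these detectors even when recall looks good: it's just Bayes' rule. The specific figures below are made up for illustration, not measurements of any real classifier.

```python
def precision(recall: float, fpr: float, prevalence: float) -> float:
    """Fraction of images flagged as 'AI' that actually are AI (Bayes' rule)."""
    true_pos = recall * prevalence          # AI images correctly flagged
    false_pos = fpr * (1 - prevalence)      # real photos wrongly flagged
    return true_pos / (true_pos + false_pos)

# Hypothetical detector: catches 95% of unmasked AI images (high recall)
# but also flags 20% of real photos, and only 5% of images people run
# through it are actually AI-generated.
p = precision(recall=0.95, fpr=0.20, prevalence=0.05)
print(f"{p:.0%} of flagged images are actually AI")  # prints "20% of flagged images are actually AI"
```

So with those (invented) numbers, four out of five "AI-generated" flags are wrong, which matches the experience of seeing plausibly real photos get flagged constantly.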

    And it's not even images that would make sense to AI-generate that people are pulling this on. I would expect the AI-generated card to get pulled on propaganda images of something incredibly damning yet also hard to disprove. But most of the "AI-generated" claims I see are over images that don't really prove the claim the propagandist is trying to make, or that don't even show anything particularly abnormal. That's more than just falsely assuming something is fake; it's outright failing to understand how propaganda works in the first place, which is a much more serious problem.