We've been doing it for genetic modification for ages. If it's possible to stop people from making human-chihuahua baby hybrids en masse, why is it impossible to stop people from culturally devaluing art en masse?

I don't think it's reactionary to have a cultural concern like this, either, especially when the concern boils down to hyper-commodification. I'm not concerned about some abstract "rot" of society, but rather the commodification of art itself.

  • buckykat [none/use name]
    ·
    9 months ago

    A) there's not really much of a profit motive to make human-chihuahua baby hybrids

    B) you can't just make human-chihuahua baby hybrids on a big pile of commodity gaming hardware

    • WithoutFurtherBelay
      hexagon
      ·
      edit-2
      9 months ago

      well, companies would FIND a profit motive if they were allowed to. remove the ability to use AI willy-nilly and you'd remove the profit motive. also, you can grow weed with a few pots of soil, and that didn't stop the law from keeping it illegal and out of mass corporate production

      • dat_math [they/them]
        ·
        9 months ago

        remove the ability to use AI willy-nilly

        How do you propose to do this, mechanically speaking?

        • WithoutFurtherBelay
          hexagon
          ·
          9 months ago

          make it so that art produced by an ai model without significant modification is illegal to use for commercial purposes

          • dat_math [they/them]
            ·
            9 months ago

            make it so

            Right, so I'm asking how to do that, mechanically speaking. We can't build useful general-purpose computers that fundamentally can't run neural networks and other ML models, so how would enforcement operate? We don't have an oracle that can tell us how much human effort went into modifying an AI-derived work, let alone one that can merely classify whether a work was produced by generative ML with high accuracy. So I think trying to repair modern notions of IP law to account for this isn't a dead end so much as an arms race (and kind of an interesting fractal when you think about how most generative ML models are trained by adjusting their parameters to maximize the likelihood that they fool a so-called discriminator model)
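
            For anyone who hasn't seen the discriminator setup in code, here's a minimal GAN training sketch in PyTorch. The toy networks, data, and hyperparameters are all made up for illustration; the point is only that the generator's loss literally rewards fooling the discriminator.

            ```python
            # Minimal GAN training-step sketch (PyTorch). Toy 2-D "samples"
            # stand in for images; every size and rate here is illustrative.
            import torch
            import torch.nn as nn

            G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))  # generator: noise -> sample
            D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))  # discriminator: sample -> logit

            opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
            opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
            bce = nn.BCEWithLogitsLoss()

            real = torch.randn(64, 2) + 3.0  # stand-in for the real data distribution

            for step in range(1000):
                # discriminator: learn to label real samples 1, generated samples 0
                fake = G(torch.randn(64, 8)).detach()
                d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
                opt_d.zero_grad(); d_loss.backward(); opt_d.step()

                # generator: adjust parameters to maximize the chance D calls its output real
                fake = G(torch.randn(64, 8))
                g_loss = bce(D(fake), torch.ones(64, 1))  # i.e. "fool the discriminator"
                opt_g.zero_grad(); g_loss.backward(); opt_g.step()
            ```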

            • WithoutFurtherBelay
              hexagon
              ·
              edit-2
              9 months ago

              Heavily, heavily fine and possibly jail people who break that law? Y’know, the stuff we do when we find someone with 2 ounces of weed on them, but applied to companies instead of random innocent Black people?

              Most of the law is subjective anyways, so just compare the unmodified version and the modified one, and if the modified one is barely recognizable, call it a day
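
              If one did want a mechanical first pass at "barely recognizable", one crude, hypothetical approach is a perceptual hash: shrink both images to tiny grayscale fingerprints and count differing bits. A toy sketch using Pillow; the filenames and the threshold are placeholders, not a proposed legal standard.

              ```python
              # Toy "how different are these two images" check via average hash.
              # Requires Pillow; "original.png"/"modified.png" are placeholder
              # paths, and the 8x8 size and cutoff below are arbitrary.
              from PIL import Image

              def average_hash(path, size=8):
                  img = Image.open(path).convert("L").resize((size, size))
                  pixels = list(img.getdata())
                  mean = sum(pixels) / len(pixels)
                  return [1 if p > mean else 0 for p in pixels]

              def hamming(a, b):
                  return sum(x != y for x, y in zip(a, b))

              dist = hamming(average_hash("original.png"), average_hash("modified.png"))
              print("differing bits:", dist)       # 0 of 64 = near-identical; ~32 = unrelated
              print("barely recognizable?", dist > 20)  # made-up cutoff, illustration only
              ```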

            • drhead [he/him]
              ·
              9 months ago

              and kind of an interesting fractal when you think about how most generative ML models are trained by adjusting their parameters to maximize the likelihood that they fool a so-called discriminator model

              This isn't as common anymore; most modern image models are diffusion models, which don't rely on a discriminator but instead transform noise into an image through an iterative refinement process. GANs are annoying to train and don't work quite as well for image synthesis, but they are still somewhat used as components (like an encoder that transforms an image into a latent image so it's easier to process, then decodes it back at the end, e.g. Stable Diffusion's VAE) or as extra models for other processing (like ESRGAN and its derivatives, which are fairly old at this point and often used for image upscaling or sometimes for removing compression noise). The main force pushing AI model output to be less detectable is that AI models are built to represent the distribution of the dataset they are trained on, and over time better-designed models and training regimes will fit that distribution better, which by definition means outputs become harder to distinguish from the dataset.
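
              For contrast with the GAN sketch above: a diffusion model has no discriminator at all; sampling just runs a learned denoiser over and over. Here's a heavily simplified DDPM-style sampling loop in NumPy, where `predict_noise` is a dummy standing in for the trained network and the schedule values are illustrative:

              ```python
              # Sketch of DDPM-style sampling: start from pure noise and
              # iteratively denoise. `predict_noise` is a placeholder for the
              # trained denoiser eps_theta(x, t); all numbers are toy values.
              import numpy as np

              T = 50
              betas = np.linspace(1e-4, 0.02, T)  # noise schedule
              alphas = 1.0 - betas
              alpha_bars = np.cumprod(alphas)

              def predict_noise(x, t):
                  return x * 0.1  # dummy; a real model is a large neural net

              rng = np.random.default_rng(0)
              x = rng.standard_normal(16)         # start from pure Gaussian noise

              for t in reversed(range(T)):        # the iterative refinement loop
                  eps = predict_noise(x, t)
                  # DDPM reverse-step mean (Ho et al. 2020)
                  x = (x - betas[t] / np.sqrt(1 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
                  if t > 0:
                      x += np.sqrt(betas[t]) * rng.standard_normal(16)  # re-inject noise

              print(x)  # with a trained model, this would be a sample from the data distribution
              ```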

              As far as I have seen, the AI classifier arms race is already very far behind on the classifier side. I have seen far more cases of things like ZeroGPT returning false positives than I have seen true positives that don't include "As a large language model...". I have also seen plenty of instances where people fed a photo of the current conflict in Israel to an AI classifier site and confidently claimed a 97% chance it was AI, when the photo shows no visual signs of being fake and is more likely just a real photo that doesn't show what is claimed. (That says people need to learn more about propaganda in general: the base unit of propaganda is not lies, it is emphasis, which is why in most cases you should be more wary of context than of whether the information is factual.) The fact that people blindly trust AI classifiers is arguably somewhat more damaging right now than generative AI models.
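
              A quick base-rate calculation shows why a classifier's "97%" score shouldn't be read as a 97% chance the photo is fake. Every rate below is an assumption picked for illustration:

              ```python
              # Toy Bayes check: a flag from a decent-looking classifier can still
              # mean a low actual probability of "AI-generated" when real photos
              # dominate. All numbers here are assumptions for illustration.
              p_fake = 0.02  # assumed share of circulating photos that are AI-generated
              tpr = 0.90     # assumed true-positive rate (fakes correctly flagged)
              fpr = 0.10     # assumed false-positive rate (real photos wrongly flagged)

              p_flag = tpr * p_fake + fpr * (1 - p_fake)
              p_fake_given_flag = tpr * p_fake / p_flag
              print(f"P(AI-generated | flagged) = {p_fake_given_flag:.2f}")  # ~0.16, nowhere near 0.97
              ```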

              • dat_math [they/them]
                ·
                9 months ago

                oh huh I guess it has been ages (in research time) since GANs were the hot new algorithm.

                The fact that people blindly trust AI classifiers is arguably somewhat more damaging right now than generative AI models.

                Absolutely agree! I'm dreading the day I have to tell a doctor that I want a proper examination they insist is unnecessary because an ML model decided I'm healthy despite my symptoms

          • buckykat [none/use name]
            ·
            9 months ago

            How do you propose to do this, mechanically speaking? Who would enforce such a law, and how?

  • BodyBySisyphus [he/him]
    ·
    9 months ago

    Weird analogy imho. It's possible to stop people from making human-chihuahua baby hybrids because it's impossible to make human-chihuahua baby hybrids. Limiting ourselves to the realm of the technically feasible, there isn't really anything in place to prevent someone from trying to make glowing E. coli in their garage - heck, there are kits available. Still, the fact that it's expensive to set up, the success rates are often low, and the payoff isn't really that big is enough to keep it from becoming a problem. It's easier to set up a Python environment and buy a gaming PC than it is to maintain bacterial cultures.

  • jaeme
    ·
    9 months ago

    Capitalism has already commodified so many aspects of art production even without large neural nets. Think about the fact that the "industry-accepted" tools are all proprietary and locked behind exorbitant SaaSS/pay-to-rent models (both hardware and software). Neural nets didn't change much; they just made copyright law even more of a failure than it already was.

  • The_Walkening [none/use name]
    ·
    9 months ago

    Noo we need to let tech companies scrape the Internet and include CSAM in their datasets because otherwise something something China