• KnilAdlez [none/use name]
    ·
    3 years ago

    While racial bias in training data is a massive issue, it is really a symptom of a much larger issue with neural networks: we have no idea why they do what they do. There is no reliable way to be certain why the model gives a particular output. Even if you trace the data from one side of the model to the other, no individual parameter necessarily means anything in particular. It is, after all, just a statistical model, prone to the same issues as simple linear regression, except it's no longer as simple as saying correlation doesn't imply causation. Often there are as many variables as there are training examples, if not more, and far more parameters. That means overfitting is a massive issue, and the model cannot be trusted to make inferences on data that falls in any way outside the domain it was trained on. So the question becomes: what are neural networks actually learning? Are they learning at all, or are we just giving them enough randomly initialized parameters that a 'golden ticket' path happens to exist somewhere in the network? All we can really say is that this network, with these weights, hits a (local!) minimum of some loss function.
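
    To make the overfitting point concrete, here's a toy sketch in plain numpy (a neural net isn't a polynomial, but the failure mode is the same idea): give a curve fit as many free parameters as it has data points and it will pass through every training point, then turn into nonsense the moment you step outside the training domain.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # 10 noisy training points from a simple underlying function
    x_train = np.linspace(0, 1, 10)
    y_train = np.sin(2 * np.pi * x_train) + 0.1 * rng.standard_normal(10)

    # A degree-9 polynomial has 10 coefficients: as many parameters as data points
    coeffs = np.polyfit(x_train, y_train, deg=9)

    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    print(f"train MSE: {train_mse:.2e}")   # essentially zero: every training point is hit

    # Step slightly outside the training domain and the "model" is useless
    x_outside = np.array([1.2, 1.5, 2.0])
    print(np.polyval(coeffs, x_outside))   # wild, meaningless values
    ```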

    Deepfakes and the like seem like this crazy technology, but ultimately it's just a parlor trick: the network is literally trained to do this one thing, for one face and one video clip. NNs learn nothing of substance and should NOT be depended on for the safety of any person, and especially not the public as a whole, until these issues are solved.

      • dave297 [none/use name]
        ·
        edit-2
        3 years ago

        yeah machine learning is at the level of being good for sorting data

    • Olredeye [she/her]
      ·
      3 years ago

      If you are using bad feature engineering or you aren't using cross-validation, then yeah, you're probably going to overfit, but that's not really a problem with neural networks themselves. They're not an all purpose tool that can solve all of humanity's problems, but they definitely have their uses.
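
      For anyone curious, cross-validation really is a couple of lines with something like scikit-learn (a rough sketch; the built-in dataset and the logistic regression are just stand-ins): instead of trusting one lucky train/test split, you score the model on several held-out folds and look at the spread.

      ```python
      from sklearn.datasets import load_breast_cancer
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      X, y = load_breast_cancer(return_X_y=True)

      # Scale features, fit a simple classifier, score it on 5 held-out folds
      model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
      scores = cross_val_score(model, X, y, cv=5)

      print(scores)                        # one accuracy per fold
      print(scores.mean(), scores.std())   # a big spread is a warning sign for overfitting
      ```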

      • sooper_dooper_roofer [none/use name]
        ·
        3 years ago

        They’re not an all purpose tool that can solve all of humanity’s problems, but they definitely have their uses.

        the problem is that humans are too stupid to understand this, and will try to use them for everything

      • KnilAdlez [none/use name]
        ·
        3 years ago

        I know there are ways to mitigate overfitting, but I figured explaining them wasn't important for a comment aimed at the layman (I also have my doubts about the real-world effectiveness of some of these techniques, but that's just based on my own experience, so I didn't go into it). Things have gotten better since I first started playing with NNs, but overfitting is still an issue, especially since good, large datasets are hard to come by. The lack of generalization, whether through overfitting or as a natural consequence of how NNs work, is a very large issue if they were to be used in a potentially life-threatening scenario. And again, without the ability to know why a neural network has given its answer, they cannot be trusted with decision-making.
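
        (For what it's worth, the usual mitigations are things like weight decay, dropout, and early stopping on a validation set. A rough PyTorch sketch is below, with made-up data and dimensions just so it runs; none of this touches the deeper problem of not knowing why the network answers the way it does.)

        ```python
        import torch
        import torch.nn as nn
        from torch.utils.data import DataLoader, TensorDataset

        torch.manual_seed(0)

        # Random stand-in data: 200 samples, 20 features, 2 classes
        X = torch.randn(200, 20)
        y = torch.randint(0, 2, (200,))
        train_loader = DataLoader(TensorDataset(X[:150], y[:150]), batch_size=32, shuffle=True)
        val_loader = DataLoader(TensorDataset(X[150:], y[150:]), batch_size=32)

        # Dropout in the model, weight decay (L2 regularization) in the optimizer
        model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Dropout(0.5), nn.Linear(64, 2))
        loss_fn = nn.CrossEntropyLoss()
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)

        best_val, bad_epochs, patience = float("inf"), 0, 5
        for epoch in range(200):
            model.train()
            for xb, yb in train_loader:
                optimizer.zero_grad()
                loss_fn(model(xb), yb).backward()
                optimizer.step()

            # Early stopping: quit once the held-out loss stops improving
            model.eval()
            with torch.no_grad():
                val_loss = sum(loss_fn(model(xb), yb).item() for xb, yb in val_loader)
            if val_loss < best_val:
                best_val, bad_epochs = val_loss, 0
            else:
                bad_epochs += 1
                if bad_epochs >= patience:
                    break
        ```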

  • POKEMONGOTOTHEGULAG [none/use name]
    ·
    edit-2
    3 years ago

    I have to do ML for my bachelor thesis and it's so fucking lame but so easy. Makes me depressed, because I know I'm worth more than just shoving data into a pre-made algorithm, but I also know I probably won't ever amount to anything more.

    Anybody who claims they are doing anything special with AI is a dumb piece of shit not worth your time. AI can fit basically anything without human intervention.
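
    For the record, the "shoving data into a pre-made algorithm" part really is about this much code (a sketch; the built-in digits dataset and the random forest are just examples):

    ```python
    from sklearn.datasets import load_digits
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # The whole "machine learning" step: fit a pre-made algorithm, read off a score
    clf = RandomForestClassifier().fit(X_train, y_train)
    print(clf.score(X_test, y_test))
    ```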

    • RNAi [he/him]
      hexagon
      ·
      3 years ago

      ML is just Statistics 3, which can be really cool and useful.

      AI, on the other hand, is some nightmarish cool shit.

  • shiny [he/him]
    ·
    3 years ago

    ML just solved protein folding. It's a large part of Google's core search. It's how translation works. It beat a top Air Force fighter pilot in dogfight sims a while back, the best Go player at Go, the best chess player at chess. That's important because so many things can be mathematically described as games (as we will start to see with dogfights, kill chains, etc.).

    Knil brings up an often-discussed point about whether ML is just interpolating data or actually ideating, but most of the rest of this thread is smug posturing. Elon Musk levels of talking about things you're unaware of; it's embarrassing and discouraging.

    • RNAi [he/him]
      hexagon
      ·
      3 years ago

      Yes I know, the tweet is still funny

    • 8006 [they/them]
      ·
      3 years ago

      No it didn't, even the wiki shoots that down lol

      • shiny [he/him]
        ·
        3 years ago

        Sure. ML just "made automatic drug discovery using protein structure predictions possible, earning a Nature paper in the process." I oversimplified for effect, but you're right

  • dave297 [none/use name]
    ·
    3 years ago

    This is why a big part of machine learning is making sure you don't use racist data

  • SolidaritySplodarity [they/them]
    ·
    3 years ago

    It's worse than that because most of it fails. You burn down a forest trying to do phrenology but you don't even get to the point where you can claim you did it.

  • StellarTabi [none/use name]
    ·
    3 years ago

    also: https://www.google.com/search?q=machine+learning+horror+images&client=firefox-b-1-d&source=lnms&tbm=isch&sa=X&ved=2ahUKEwiY2frDn97zAhV-RDABHb3cAe0Q_AUoAXoECAEQAw&biw=1061&bih=640&dpr=2