That is genuinely a difficult AI problem. Unlike traditional programming, we can’t specify an AI’s behavior deterministically; we can only train it. It’s less about the inherent maliciousness of rational actors and more about an actor’s indifference to human suffering.
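To make that first point concrete, here's a rough sketch (toy code, made-up data, a stand-in model, nothing from any real system) of the difference between writing a rule and training one:

```python
# A deterministic rule: the behavior is written down and auditable.
def rule_based(amount: float) -> bool:
    """Flag large transactions, per an explicit written policy."""
    return amount > 10_000

# A trained model: the "rule" is whatever the training labels implied.
from sklearn.tree import DecisionTreeClassifier

X = [[500], [20_000], [12_000], [300]]  # hypothetical past transactions
y = [0, 1, 1, 0]                        # labels inherited from past human decisions
model = DecisionTreeClassifier().fit(X, y)

print(rule_based(11_000))         # True, and you can point to the line that says why
print(model.predict([[11_000]]))  # [1], but nobody wrote the threshold it learned
```

Nobody chose the model's cutoff; it was inferred from the labels, which is exactly where human judgment (and human bias) sneaks back in.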
Idk a whole lot about ai, but the thing that seems somewhat concerning to me is that the developers usually seem to have trouble keeping their biases from infiltrating the ai
Like I’ve seen all these weird examples of how bias in the training data led to weird unexpected results, and then I think: it’s probably mostly brainwormed-ass labor aristocracy tech bros making these things, and if one ever does go off the rails it may do some super-AI version of that thing middle class whites do where they assume the black patron at a store works there
At least so far, we're really good at training an AI to do something the way we already do it, but training an AI to do something new or better is much more difficult (outside of a handful of applications like playing classic board games, we haven't got it figured out). That's why the notion of police and court systems using AI is so horrific, because it doesn't just "help overworked judges" or whatever, it permanently codes all of our currently-existing biases into the system while hiding them behind a layer of abstraction.
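Here's a toy sketch of that failure mode (hypothetical data and a stand-in logistic regression, not any real court system): train a model on biased historical decisions and it faithfully reproduces the bias, just hidden behind a probability score:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

risk = rng.normal(size=n)           # a genuinely relevant risk signal
group = rng.integers(0, 2, size=n)  # a protected attribute (0 or 1)

# Simulated historical human decisions that were biased: group 1 was
# denied bail more often at the same underlying risk level.
denied = (risk + 1.5 * group + rng.normal(scale=0.5, size=n)) > 1.0

# "Helping overworked judges": fit a model to the historical labels.
model = LogisticRegression().fit(np.column_stack([risk, group]), denied)

# Same risk, different group -> different score. The bias is now a feature.
same_risk = np.array([[0.0, 0], [0.0, 1]])
print(model.predict_proba(same_risk)[:, 1])  # group 1 gets a much higher denial score
```

The model isn't doing anything new or better; it's an automated, harder-to-question version of the old decisions.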
Yeah, this is the exact shit that worries me. Alongside that abstraction there’s this weirdly prevalent idea that, like, “well if the computer says it, it must be right, right?”
Just seems like the next logical step in putting accountability even further out of reach for the ruling class
The corrosive effect that things like the Facebook algorithm have had on society is a great case study for how hard this problem really is. Not for a second do I believe that Facebook is a benevolent actor, but I also don't think they set out to undermine global civil society. They trained an algorithm to optimize human behavior for engagement with / time spent on Facebook, and the way the algorithm executed that optimization ended up doing a tremendous amount of harm in ways that were difficult (if not impossible) to foresee. That's the whole thing, though: you can train an AI to pursue a neutral-ish (or even good) goal, and the way it pursues that goal might be very dangerous, because AI by design doesn't think like humans do. Figuring out how to do some harm reduction in the design of this stuff is both a difficult and an urgent problem.
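Here's a toy version of that dynamic (made-up content categories and engagement rates, not Facebook's actual system): a bare-bones epsilon-greedy recommender that only optimizes engagement, with no notion of harm anywhere in it, drifts almost all of its traffic toward the most inflammatory option:

```python
import random

# Hypothetical content types and the average engagement each one gets.
engagement_rate = {"friends_photos": 0.10, "local_news": 0.15, "outrage_bait": 0.45}

estimates = {item: 0.0 for item in engagement_rate}
counts = {item: 0 for item in engagement_rate}

for step in range(50_000):
    if random.random() < 0.1:  # explore occasionally
        item = random.choice(list(engagement_rate))
    else:                      # otherwise exploit the best-looking option
        item = max(estimates, key=estimates.get)
    clicked = random.random() < engagement_rate[item]
    counts[item] += 1
    # Incremental running mean of each item's observed engagement.
    estimates[item] += (clicked - estimates[item]) / counts[item]

print(counts)  # nearly all impressions end up on "outrage_bait"
```

The objective it was given is "neutral-ish" on its face; the harm is entirely in how the optimization plays out.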