The machine is constantly learning and evolving; it doesn't need to be taken offline to learn and grow.
You know, I had a few thoughts.
First, if you want to keep it growing in a direction you consider useful, you need humans constantly evaluating its outputs. If it's a black box and we can't untangle its programming, the only way to tweak it is to look at what it's outputting, decide whether that's desirable, and weight the results manually. If it's not being constantly supervised, who knows what it's going to turn into. So you're rate-limited by the number of people in the global south you can hire to read its outputs.
Second, the people evaluating its outputs impose hard limits and biases. If you've got the thing spitting out complex maths or chemical formulas, the only way to train it is to have someone who understands complex maths or chemical formulas evaluate the outputs. If it gets "too smart" and starts outputting things no one can evaluate, you can't falsify the outputs anymore and you've hit an end point. It's also being trained by people with limited knowledge, lots of biases they don't know they have, and a propensity to get things wrong. This has already been a problem: NYC's famous black people oppression computer that supposedly predicted crimes when Bloomberg was mayor, and the other case I heard of was a system in the Nordics that was supposed to assess welfare eligibility. The NYC Crimestat computer was a digital Klansman, and the Nordic welfare computer caused all kinds of problems due to biases on the part of the programmers. Now we're all excited about AIs that aren't even programmed; they're generating their own incomprehensible code that's shaped by the biases of the bazinga techbros training them.
Third, if you don't know how it's generating its outputs, you have no idea what outputs it will generate in the future. Like yeah, you can test it an arbitrarily high number of times and say "oh, it's correct 99.x% of the time," but as the stakes get higher and the operations become more complex, that tricky little x% is going to get more and more problematic. For one, it's still running on a digital computer, so it's still deterministic, but we've apparently already hit a point where the code is no longer human-interpretable, so you can't debug it. If it starts doing something undesirable, all you can do is boot an earlier back-up and try to train it again. Second, because you can't debug it, you have no idea what it will do when it hits an error or something. That's fine if it's running the voice lines for an NPC, but a big problem if it's controlling the RCS on a rocket re-entry. We're already at the point where high-tech stuff blows up because there are so many lines of spaghetti code that no one knows what will happen when it's all put to work. Now you're hooking complex systems up to a black box controller and just hoping it won't throw an error or do something unexpected, because testing it is, at best, very difficult.
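To put rough numbers on that tricky little x% (a back-of-the-envelope sketch with made-up accuracy figures and decision counts, not data from any real system): even at 99.99% per-decision accuracy, the failures pile up fast once the thing is making millions of calls.

```python
# Back-of-the-envelope: how a "99.x% correct" black box scales with volume.
# Accuracy figures and decision counts are made up purely for illustration.

def expected_failures(accuracy: float, decisions: int) -> float:
    """Expected number of wrong outputs given a per-decision accuracy."""
    return (1.0 - accuracy) * decisions

for accuracy in (0.99, 0.999, 0.9999):
    for decisions in (1_000, 1_000_000, 100_000_000):
        print(f"accuracy={accuracy:.2%}  decisions={decisions:>11,}  "
              f"expected failures={expected_failures(accuracy, decisions):,.0f}")
```

A few thousand bad outputs is a shrug when it's NPC dialogue; when it's the RCS on a re-entry it only takes one, and you can't open the box to find out which one it will be.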
You know, I had another thought:
With great intelligence comes great insanity.
There's apparently a pretty strong correlation between doing really well on "intelligence" tests and having a diagnosable mental illness. I've heard that really smart people are also more susceptible to certain kinds of delusions, because being real good at pattern matching doesn't mean the patterns you're noticing are significant, or even really there. But the thinking goes that "smart" people are better at coming up with arguments to support their false beliefs and at finding things they think are evidence for those beliefs, so delusion in "smart" people might be harder to counter than delusion in less "smart" people.
(unitary intelligence isn't real; kill the IQ test in your head)
deleted by creator
Not off the top of my head, it's just something I remember reading in passing. Maybe try Google Scholar and see if there's anything about correlations between mental illness and intelligence test scoring.
This is probably my biggest beef with it. GIGO: garbage in, garbage out, I think.
Legit point, and related to point two as well...
Shouldn't this be dead simple? The law sets the requirements for welfare; the machine looks at your income or whatever and checks whether it's within those requirements.
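(For what it's worth, the "dead simple" version would look something like the sketch below. The thresholds and field names are purely hypothetical, not pulled from any real welfare law; the point is just that every rule is readable and auditable.)

```python
# A deliberately transparent eligibility check: every rule is visible and auditable.
# Thresholds and fields are hypothetical, not taken from any real welfare statute.

from dataclasses import dataclass

@dataclass
class Applicant:
    monthly_income: float
    household_size: int
    assets: float

def is_eligible(a: Applicant) -> bool:
    # Hypothetical statutory income limit that scales with household size.
    income_limit = 1_200 + 400 * (a.household_size - 1)
    return a.monthly_income <= income_limit and a.assets <= 5_000

print(is_eligible(Applicant(monthly_income=1_500, household_size=2, assets=2_000)))  # True
print(is_eligible(Applicant(monthly_income=2_500, household_size=1, assets=2_000)))  # False
```

The real systems apparently weren't built like this, which is presumably where the trouble started.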
It's discussed in Weapons of Math Destruction by Cathy O'Neil. I'm afraid I can't remember the details, but Weapons of Math Destruction is more or less the real-world "Don't Create the Torment Nexus" for these "AI" shitasses.
added to my reading list :stalin-approval:
You can never have too many reasons to hate Bloomberg. Well, I guess antisemitism would be one too many, but aside from that specific exception you can never have too many.