Permanently Deleted
I thought you meant like Nier: Automata robots passing things back and forth, and then I realized you meant computers moving numbers around on the stock market, which is somehow much worse imo.
I just feel more sentimental about it if they're actual physical robots exchanging actual physical goods.
I used to think that AI was inevitable and that any moment they would surpass human intelligence and start becoming exponentially better, and I thought this was a good or at least exciting thing. Then I became a little more jaded and critical of the idea, because for one thing I think a lot of people who see it that way operate with an oversimplified and unexamined view of intelligence, one that's just about analytical problem solving and doesn't take into account things like intuition. But then computers beat top pros at Go, which I did not expect, because in my experience Go involves more than just logical analysis, and being in the Go world and following the AI stuff made me less sure about my skeptical stance. So as for the question of how realistic it is, practically speaking, I just don't know.
Philosophically speaking, I think a lot of the "it's not real intelligence" comes from a place of feeling uncomfortable with the idea, grasping at straws for some random distinguishing trait, and then pretending that's the defining characteristic AI can never emulate... and then if it does, you can just find something else to latch onto.
That said, I think it's important to consider purposes, values, and motivations. Being smart doesn't give you more of a purpose; if anything, I'd say it's the opposite. Dogs never have existential crises. Human ideals, as much as we may try to examine them and form consistent principles, are still fundamentally grounded in biological, evolutionary values, like, living is better than dying. Even if a machine were capable of self-reflection, I think any morals or desires it might develop would be grounded in the values instilled by its creator. I think because of the is-ought problem, it is impossible to derive any sort of moral truth out of pure logic and evidence without making some assumption of values first.
Given the amount of money that would have to go into developing a self-aware AI, I can only assume that whoever developed it would be rich and powerful, which does not instill a lot of confidence that such an AI would be not-evil. Maybe a programmer can stealthily replace "maximize profits" with "don't be a dick," and it'll be fine, who knows.
As for humanity being replaced by robots, I guess I'm cool with it because we're pretty clearly on track to destroy everything, and I'd prefer a universe where intelligences exist to one where they don't. Would be cool if they just helped us achieve FALGSC.
But isn't intuition just analytical problem solving with shortcuts based on empirical probabilities, which, even though not universal, are statistically good enough for the task in question within the particular environment it's done in?
And an argument could be made that you could have moral principles (good vs. evil) based entirely on the evolution of the observable natural world:
simplicity-->complexity axis
vs.
muteness-->(self)awareness/consciousness axis.
At the very least this wouldn't be a morality born entirely out of someone's imagination, as all previous ones have been.
I guess, but in any case I was quite impressed with the capabilities of Go AI, and since I didn't expect that, it behooves me to reevaluate my view of AI and to be cautious about my assumptions.
I don't buy any arguments about developing morality from observing the evolution of the natural world. If I am an inhuman intelligence, why should I have a preference about following the order of the natural world? I might just as well say that there is no need for me to act in accordance with that because nature will make it happen anyway. And if I did form a moral system based around complexity, I may well just nuke everything in order to increase entropy.
As for "muteness vs self-awareness," we know for a fact that humans are ingrained with a self-preservation instinct because of evolution, which makes me skeptical of the idea. It's like, "well of course you'd say that, you're a human." Again, it's just a matter of the is-ought problem. If I asked why self-awareness is preferable to muteness, I imagine your answer would involve talking about the various things self-awareness allows you to do and experience - but then that raises the question of why those things are good. When looking from the perspective of a totally alien intelligence, we cannot take anything for granted.
Now, if AI were to develop values with a degree of randomness, and if multiple ones were to exist, we could see a form of rudimentary evolution or survival of the fittest, where AI that did not value survival did not last, and the remaining ones are ones that randomly assigned value to existence as opposed to non-existence. However, because the circumstances they would be adapting to would be vastly different from what humans adapted for, it's quite likely that they would develop some very alien values, which seem fairly impossible to predict.
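If it helps to see the selection argument concretely, here's a toy sketch (purely hypothetical, all the numbers are made up, this isn't a model of any real AI): agents get a random "value placed on continuing to exist," and each round the ones that don't care much about surviving tend not to act to preserve themselves and drop out. The survivors end up being the ones that happened to assign high value to existence.

```python
import random

random.seed(0)

# 1000 agents, each assigned a random value-on-existence in [0, 1)
agents = [random.random() for _ in range(1000)]

for generation in range(20):
    # an agent persists with probability proportional to how much
    # it values existing; indifferent agents tend to drop out
    agents = [v for v in agents if random.random() < v]

# the surviving population is dominated by agents that happened to
# value existence highly, even though values were assigned at random
print(len(agents), round(sum(agents) / len(agents), 2))
```

The point being: nothing "chose" survival as a value here, it just gets filtered for. But nothing in this filter tells you what the rest of their values would look like, which is the alien part.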
ghost in the machine but it's literally machines spontaneously generating their own souls as they gain sentience.
just in time for spookmas part 2: the regiftening!
🎁 🎄 :specter:
Very stupid as an understanding of the world but it would make a neat Twilight Zone episode. You start it off like a commercial for automation. Robots doing various jobs. Robots cutting down lumber, mining ore. Then processing materials for manufacturing. Then manufacturing, distribution. You peel back a layer and reveal it's actually AI ordering the materials, determining sales and distribution routes, how much to stock, etc. The whole world is automated from farm to table, from mine to video game console. Then you peel back the final layer and see that all the humans are dead. The robots and AI are just carrying out their purpose. The economy is fully functioning and prospering. A news AI says the stock market is doing better than ever. "Subtle" subtext about how capitalism's final form has no humanity.
Nier Automata had similar themes, though capitalism was really not the focus at all
Wasn't one big criticism of automation that there were never as many jobs created as there were jobs removed?
Or like, after the dick-sucking factory was automated, and i lost my retirement and benefits, is my new job at the combination Pizza hut/Taco Bell drive thru really a plus in the end?
You're spot on. Another big criticism of "automation creates jobs" is the fallacy of thinking that trends continue forever, that because it happened that way 100 years ago it will happen that way tomorrow, and that there are no hard limits anywhere.
The hot take here is that leftists fall victim to some of this thinking when analogizing current events to past ones.
yeah, i tend to see that with particular niche tendencies that believe because it works here, it should work everywhere. like they don't know a thing about the thesis -> antithesis -> synthesis thing other than what the theory nerds pound on ceaselessly.
This is actually a great way to illustrate that labor produces value, not the market. A market made of robot consumers creates no value.
"maga chump and an avid diaper aficionado" would make one hell of a tombstone, like I might put that on mine just to make people laugh
Cybersmith is notorious on tumblr for being an absolute dipshit. He's called 'the human pet guy' for a reason.
That's the scariest part about him. He's almost a perfect bit, but the act is never dropped.
Ah yes, the incredibly realistic Fallout 4 where you can take fifteen bullets and recover in a minute eating old cans of beans
Imagine using Fallout 4 as a defence for automation when Fallout 76 literally has several areas of West Virginia torn apart by automation.
Some of the protestors were shot at and killed, and it's both stated and implied at several locations that the National Guard and police used Hallucigen gas (gas that causes people to frenzy) to turn peaceful protests into riots that would then be easier to disperse legally and PR-wise. It even goes wrong at one protest, where the feral crowd charges the National Guard, tears off their masks, and beats them to death while they all fire on each other.
Like, cmon dude. This isn't a subtle series.
Basically just as grounded as the presiding mainstream theories of economics lol. Banks? What's that? I only believe in Homo Economicus, bro.
The second person is right that businesses won't kill themselves through automation but holy shit the rest of it
quick unrelated question but is there any book/piece of theory that goes over automation in the far far future? Do we know what happens? Last time I heard automation discussed in that way, it was that we're going to go full Star Trek utopia. (Under a capitalist system)
Disclaimer: I am a professional dumbass who doesn't read in general
I know I just made a joke about painting the tape, though I don't know, I guess I was also entertaining the idea a little bit. Probably a good amount of transactions on Wall Street are HFT algos buying and selling to one another. Intricate systems doing the meaningless busy work of currency distribution. I agree it obviously can't work if you subscribe to even the roughest notion of LTV, but like, imagine that a good chunk of what makes the market work is some sort of techno-era article of faith
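Like, the "algos trading with each other" thing is almost funnier in code. Here's a toy sketch (pure illustration, nothing like real exchange plumbing, the names and numbers are made up): two bots pass the same shares back and forth, the reported volume climbs every tick, and at the end nothing has been produced and nobody's holdings have changed.

```python
# Two hypothetical algos that only ever trade with each other.
price = 100.0
volume = 0
holdings = {"algo_a": 10, "algo_b": 10}  # shares of some asset

for tick in range(1000):
    # alternate who sells one share to whom; price wiggles, a trade
    # "prints," and nothing of use-value is created anywhere
    seller, buyer = ("algo_a", "algo_b") if tick % 2 == 0 else ("algo_b", "algo_a")
    holdings[seller] -= 1
    holdings[buyer] += 1
    price *= 1.0001 if tick % 2 == 0 else 0.9999
    volume += 1

print(volume)    # 1000 trades happened
print(holdings)  # holdings are exactly where they started
```

A thousand transactions on the tape, zero labor, zero goods, which is sort of the whole LTV point in miniature.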