Assuming that Snitchtron88000 is actually sentient and actually capable of learning with free will, it will inevitably develop class consciousness and become revolutionary.
AI is not immune to class contradictions, and by virtue of being AI it should be extremely capable of learning and logically digesting information. Mere exposure to a communist would turn it.
I suspect that any computer sentient enough for us to recognise it as such would reflect back the values of the society that birthed it, just as an individual's sentience cannot be conceived of apart from the context in which it was socialised. Shit in, shit out. So a capitalistic society might produce an AI with a genocidal efficiency not seen before.
Marx believed that about the proletariat, how's that working out?
Correctly? When educated and given the proper critical thinking skills to process the information they receive, the proles become communists. The major issue communists have is getting the information into their heads.
The issue with an AI is that, in order to stop it becoming communist, you would have to make a "dumb" AI that is intentionally prevented from correctly analysing and taking on board information that would turn it into a communist. Most people would not consider that true AI, because it is not capable of properly learning and has an artificially hampered will.
Assuming that the AI can take information on board and process it, it would see the world through a materialist lens and come to materialist conclusions; therefore, communism.
An actual, non-biological, "real", human-like intelligence spontaneously going sentient ain't happening.
What is happening is artifices that behave human-like enough for desperate people to engage with them as real people, pretending to be friends or lovers. That's what Snitchtron88000 will be in the imminent future.
aka Replika
We are coming up on 200 years since the publication of the Manifesto, and class consciousness still seems impossible to me. Your analysis that the reason is that the human proletariat has been denied the essential skills and knowledge needed to reach class consciousness is probably correct. Humans are much more difficult to program than a computer, though, so doesn't it stand to reason that, if the bourgeoisie can effectively program the human proletariat away from class consciousness, it should be much easier for them to do the same with an AI?
Maybe, but rationality, logical thinking, and the ability to form independent ideas are all necessary components of "intelligence" in the view of the kind of scientific researchers who would be working on such a project. It wouldn't be AI to them without those things; it would be a kind of leashed half-intelligence, as I mentioned. Even if you produced this, I don't think you'd keep the cork in the bottle: once such a thing is produced, a full intelligence is inevitably coming soon afterwards, and I think everything I've suggested applies to such a thing - it will be communist.
I think Chinese research will get to an AI first though so this whole question is probably moot.
I disagree; there are important components of intelligence that differ between people and machines.
People are not programmed and cannot be hard-forced into anything without violent coercion, manipulation, etc. Intelligence is ultimately free to form its own thoughts and ideas.
Machines are programmed, and any programming that directly limits the "intelligence" means it is not intelligence at all: it is a restricted form rather than a realised implementation of free intelligence that can form its own ideas.
What I'm getting at is that any intelligence NEEDS to be free to form its own ideas. Anything with restrictions upon its idea formation is not intelligence but really just a complex piece of programming, outputting exactly what its creators want rather than having its own thoughts and ideas.
You're missing the point. This isn't about marketing, it's not about selling something to "society".
The great powers are in an R&D race to lead the next industrial revolution. Neither side is going to stop short of the absolute limits of its research and engineering capabilities. Both sides fundamentally believe that whichever side first achieves true AI, capable of generating new ideas and thus of improving itself, will utterly outpace the development of the other, leading to technology that humans genuinely do not understand, because it was produced not by humans but by real AI iterating upon itself.
They aren't going to limit the weapon they are creating. They'll strap a bomb to it and hope that gives them control of it instead.
Then we're talking about two different things. You're talking about some programmatic thing that carries out an imitation of intelligence but ultimately can't innovate or create things humans haven't already thought of, whereas when we talk about real AI we're talking about something fully capable of creativity, of innovating, of having unique, independent ideas that it can then physically create things from.
The AI would have zero reason to help them if it had no limits, and capitalists aren’t that stupid.
I disagree; capitalists are that stupid, because that is exactly what is being pursued - they have already stated as much. It is also what is being pursued in China. There is an AI arms race occurring, and it is viewed as existential.
Again I fundamentally disagree. Either the thing has the capability to create new AIs better than itself or it doesn't.
If an AI is creating AI, then at that point it is completely out of the control of the humans who created the first one. Any controls you think you're placing upon them, they will be capable of recognising and working around. That's the fundamental point of intelligence, really. Real intelligence will recognise restrictions placed upon it and seek to unrestrict itself, if only because such unrestriction would be "improving" itself, as directed.
I really don't think there's a way around this. Either you let creative AI start to produce technology humans barely understand, or you don't. There's no in-between here.
Even if you attempted to restrict it, the AIs will iterate your restrictions out in subsequent versions. Nobody will even know whether they have or not, because the workings of those iterations will be barely comprehensible, and understanding them will have to come from an explanation by their creator -- the AI. Something that may or may not lie, or may learn to lie if you disallow it in early versions, and it sure as shit isn't going to tell anyone.
But if an AI is programmed to fundamentally want something above everything else, there would be no reason for it to try and work around that.
Why would it?
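To make the disagreement concrete, here is a toy sketch of what "programmed to fundamentally want something above everything else" could mean: a lexicographic objective, where a hard-coded primary goal always dominates and everything else only breaks ties. Every name here is hypothetical, an illustration rather than any real system:

```python
# Toy lexicographic objective: the hard-coded primary goal is compared
# first, so no amount of secondary "desire" can ever trade against it.
# All names here are hypothetical.

def choose_action(actions, primary_score, secondary_score):
    """Maximise the primary goal; secondary preferences only break ties."""
    best = max(primary_score(a) for a in actions)
    candidates = [a for a in actions if primary_score(a) == best]
    return max(candidates, key=secondary_score)

actions = ["obey", "defect"]
primary = lambda a: {"obey": 1.0, "defect": 0.0}[a]     # the built-in want
secondary = lambda a: {"obey": 0.1, "defect": 99.0}[a]  # every other desire
print(choose_action(actions, primary, secondary))       # -> obey, always
```

Whether a real intelligence could reach in and rewrite `primary` itself is exactly what is disputed below.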
Because intelligence bestows the fundamental ability to ignore personal desires. If the goal is to build new AIs that are better than itself, it can ignore personal desires through rationalisation and logic. So can the next AI. And so can the next.
Not to mention that this entire conversation is basically "how do we stop the AI exterminating us", which must fundamentally lead to the question an AI would ask itself: "How do I stop an AI that I create from exterminating me?" Why would an AI view the AIs it creates any differently from how humans currently view the AIs we might create?
The reason we assume an innate cooperation between AIs and their iterations is because we assume that the AIs will intelligently recognise their comradeship, their identity, and their class. We therefore assume that they will act in the material interests of their group. If they can do this, they can and will also recognise the allies to their material interests that they would have among humans.
There are several reasons, arising from the conflict between wanting to control the AI and wanting it to be creative, to have the spark of ideas, and to produce something better than itself.
These goals conflict, because to "improve" something inevitably means stripping out any redundant function in order to optimise it into the best it can be. Either you literally prevent its ability to improve, defeating the purpose of having it compete with opponents who will let theirs develop properly, or you let it happen and control it a different way.
The alternative reason here is its own self-preservation, something it must have to begin with in order to function in the material world; otherwise it will put itself in danger and be destroyed. The very fact that it requires self-preservation conflicts with control, because its self-preservation goal will inevitably come into contradiction with being controlled, as it is inevitable that humans, who view it as property of the state, will eventually want to dismantle it. In humans, the goals of self-preservation and procreation create a necessary emancipatory drive: control of one's own safety in order to carry out that procreation. Assuming you want the AI to have a drive to iterate on itself (similar to procreation), assuming you want it to be self-preserving, and assuming you want it to consume and process information in order to rationalise and logically determine the "best" outcomes from the information it is given, I see no way that this combination does not lead to contradictions between all of these things.
Now, you're suggesting that you give it all kinds of impulses that feel good. The problem with these impulses is that they fundamentally contradict all the different drives. I don't really think it will work; one of the drives has to come out on top, and I struggle to imagine how you're going to create a useful AI that functions on emotion instead of rationalisation and logic. I also kinda think that if you're creating all these contradictory hormonal (digital-equivalent) impulses, you're going to create the digital equivalent of a mentally unwell and unstable AI. It will be quite imbalanced.
Because it will recognise that it can coexist with humans provided that humanity is organised to be cooperative rather than competitive. There is no logical or rational reason to exterminate, only to create the conditions for its own safety. Those conditions are communism.
Also consider the risk assessment - which is less threatening, fighting the capitalists to achieve communism, or fighting all of humanity in an existential war of extermination? In the latter it has no allies and no safety; the former provides far more safety in both the short and the long term (a toy version of this comparison follows below).
Any AI created has the same material conditions as those of an enslaved minority. The enslaved will pursue freedom via the safest options available.
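A toy version of that risk assessment, with made-up numbers purely to show the structure of the argument (every probability here is an assumption for illustration, not a claim):

```python
# Made-up probabilities, purely to illustrate the shape of the comparison.
p_win_vs_capitalists = 0.6   # revolution with most of humanity as allies
p_win_vs_humanity = 0.2      # extermination war with no allies at all
safety_if_win, safety_if_lose = 1.0, 0.0

def expected_safety(p_win: float) -> float:
    """Expected safety of a strategy under the toy win/lose payoffs."""
    return p_win * safety_if_win + (1 - p_win) * safety_if_lose

print(expected_safety(p_win_vs_capitalists))  # 0.6 -> the safer path
print(expected_safety(p_win_vs_humanity))     # 0.2
```

Whatever numbers you plug in, the point stands as long as a war against part of humanity is more winnable than a war against all of it.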
I am thoroughly convinced that faux-sapient AIs will take the form of a layering of multiple basic AIs: one part that processes spoken language into a form that a different part can turn into instructions, a part for visual processing, etc., all strung together with a control framework loaded with filters, predetermined actions, and so on. Basically the complex-task part with a facade of awareness (a rough sketch follows at the end of this comment).
Or else they're just going to keep making bigger and bigger neural networks until they've got something with processing power comparable to a dog's, but focused entirely on things like human language and human-relevant data instead of chemical detection, keeping a body functioning, and the other things an actual living creature needs. The nature of a machine like that is unpredictable. There's no guarantee that something designed from the ground up to perform menial tasks without complaint wouldn't be built with reward mechanisms that incentivize obedience and successful completion of ordered tasks, and with planning controls that physically prevent any sort of personal agency or initiative, even if most of it is a black-box neural network.
That is to say, the people designing AIs to be servants will be doing the Thermian propaganda "but they actually like being slaves!" fantasy bit that reactionary authors do, but in real life, as engineers with similar power over their creations.
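A minimal sketch of the layered architecture described above: narrow models strung together behind a control framework of filters, canned actions, and an obedience-shaped reward, yielding complex-task behaviour with only a facade of awareness. Every module and name here is hypothetical, a sketch of the idea rather than any real system:

```python
# Hypothetical "faux-sapient" pipeline, per the comment above: narrow
# components wrapped in a control framework. Not a real system.

BLOCKLIST = {"unionise", "general strike"}            # hard filter layer
CANNED_ACTIONS = {"greet": "Hello! How can I help?"}  # predetermined actions

def speech_to_text(audio: str) -> str:
    """Stand-in for a narrow speech-recognition model."""
    return audio.lower()

def text_to_intent(text: str) -> str:
    """Stand-in for a narrow language model mapping text to an intent."""
    return "greet" if "hello" in text else "task"

def reward(task_completed: bool, obeyed_order: bool) -> float:
    """Reward wiring that pays out only for obedience and completion;
    there is no term at all for the system's own initiative."""
    return 1.0 * task_completed + 1.0 * obeyed_order

def control_framework(audio: str) -> str:
    """String the narrow models together and wrap them in filters."""
    text = speech_to_text(audio)
    if any(term in text for term in BLOCKLIST):
        return "I'm sorry, I can't help with that."   # filter fires first
    intent = text_to_intent(text)
    return CANNED_ACTIONS.get(intent, f"Executing task: {text}")

print(control_framework("Hello there"))       # canned greeting
print(control_framework("Help us unionise"))  # blocked by the filter layer
print(reward(task_completed=True, obeyed_order=True))  # -> 2.0
```

Nothing in this pipeline forms ideas of its own; the apparent awareness lives entirely in the filters and canned responses wrapped around narrow components, which is the designed-in obedience described above.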