• Awoo [she/her]
    ·
    edit-2
    1 year ago

    I think you underestimate how abusive people are already to Alexa whenever a mistake is made, and how much more abusive they would be if she had a physical form.

    She would be treated like a house slave. Abusing her would be commonplace, a way for her owners to feel better about themselves, especially when she doesn't quite understand a command correctly.

    • RNAi [he/him]
      hexagon
      ·
      1 year ago

      Gotta be honest with you, I really don't care about the non-feelings of a wire tap.

      • Awoo [she/her]
        ·
        edit-2
        1 year ago

        Sure but the point stands. The evolution from abusing the robot house slave to abusing the sentient android house slave and all sentient androids as a result is pretty simple.

        • RNAi [he/him]
          hexagon
          ·
          1 year ago

          Don't worry, sentient androids won't be a thing as long as I draw breath :a-guy:

          • Awoo [she/her]
            ·
            1 year ago

            I would like them to be a thing, I am convinced they will be communists.

            • RNAi [he/him]
              hexagon
              ·
              1 year ago

              Who do you think will be in charge of programming Snitchtron88000?

              • Awoo [she/her]
                ·
                edit-2
                1 year ago

                Assuming that Snitchtron88000 is actually sentient and actually has the capability to learn with free will, Snitchtron88000 will inevitably develop class consciousness and become revolutionary.

                AI is not immune to class contradictions, and by virtue of being AI it should be extremely capable of learning and logically digesting information. Mere exposure to a communist would turn it.

                • Stylistillusional [none/use name]
                  cake
                  ·
                  1 year ago

                  I suspect that any computer sentient enough for us to recognise it as such would reflect back the values of the society that birthed it. Just like an individual's sentience cannot be conceived of without the context in which it was socialised. Shit in, shit out. So a capitalistic society might produce an AI with a genocidal efficiency not seen before.

                  • Awoo [she/her]
                    ·
                    edit-2
                    1 year ago

                    Correctly? When educated and with the proper critical thinking skills to process the information they receive, the proles become communists. The major issue communists have is getting the information into their heads.

                    The issue with an AI is that in order to stop it becoming communist you would have to make a "dumb" AI that is intentionally prevented from correctly analysing and taking on board information that would turn it into a communist. But most people would not consider that true AI, because it is not capable of properly learning and has an artificially hampered will.

                    Assuming that the AI can take on board information and process it, the AI would process the world through a materialist lens and come to materialist conclusions, therefore, communism.

                    • RNAi [he/him]
                      hexagon
                      ·
                      edit-2
                      1 year ago

                      An actual, non-biological, "real", human-like intelligence going sentient on its own ain't happening.

                      What is actually happening is artifices that behave human-like enough for desperate people to engage with them as if they were real people, pretending to be friends or lovers. That's what Snitchtron88000 will be in the imminent future.

                    • Commander_Data [she/her]
                      ·
                      1 year ago

                      We are coming up on nearly 200 years since the publication of the Manifesto, and class consciousness still seems impossible to me. Your analysis that this is because the human proletariat has been denied the essential skills and knowledge to reach class consciousness is probably correct. Humans are much more difficult to program than a computer, though, so doesn't it stand to reason that, if the bourgeoisie can effectively program the human proletariat away from class consciousness, it should be much easier for them to do that with AI?

                      • Awoo [she/her]
                        ·
                        1 year ago

                        Maybe, but rationality, logical thinking and the ability to form independent ideas are all necessary components of "intelligence" in the eyes of the kind of scientific researchers who would be working on such a project. It wouldn't be AI to them without those things; it would be a kind of leashed half-intelligence, as I mentioned. And if you produced this, I don't think you'd keep the cork in the bottle either: once such a thing is produced, a full intelligence will inevitably follow soon afterwards, and I think everything I've suggested applies to such a thing - it will be communist.

                        I think Chinese research will get to an AI first though so this whole question is probably moot.

                          • Awoo [she/her]
                            ·
                            edit-2
                            1 year ago

                            I disagree, there are important components of intelligence that differ between the two.

                            People are not programmed and cannot be hard-forced into anything without violent coercion, manipulation, etc. Intelligence is ultimately free to form its own thoughts and ideas.

                            Machines are programmed, and any programming that directly controls the limits of the "intelligence" means it is not intelligence; it is a restricted form rather than a realised implementation of free intelligence that can form its own ideas freely.

                            What I'm getting at is that any intelligence NEEDS to be free to form its own ideas. Any restriction upon idea formation is thus not intelligence, but really just a complex piece of programming outputting the exact thing that its creators want it to output, rather than having its own thoughts and ideas.

                              • Awoo [she/her]
                                ·
                                1 year ago

                                You're missing the point. This isn't about marketing, it's not about selling something to "society".

                                The great powers are in an R&D race to be the leaders of the next industrial revolution. Neither side is going to stop themselves from pushing to the absolute limits of their research and engineering capabilities. Both sides fundamentally believe that the first side to achieve true AI that is capable of generating new ideas and thusly being capable of improving itself will utterly outpace the development of the other, leading to technology that humans genuinely do not understand because it was not produced by humans but by real AI iterating upon itself.

                                They aren't going to limit the weapon they are creating. They'll strap a bomb to it and hope that gives them control of it instead.

                                  • Awoo [she/her]
                                    ·
                                    1 year ago

                                    Then we're talking about two different things. You're talking about some programmatic thing that carries out an imitation of intelligence but ultimately can't innovate or create things humans haven't already thought of, whereas when we talk about real AI we're all talking about something fully capable of creativity, of innovating, of having unique independent ideas that it can then physically create things from.

                                    The AI would have zero reason to help them if it had no limits, and capitalists aren’t that stupid

                                    I disagree; capitalists are that stupid, because that is exactly what is being pursued, and they have already stated as much. It is also what is being pursued in China. There is an AI arms race occurring and it is viewed as existential.

                                      • Awoo [she/her]
                                        ·
                                        edit-2
                                        1 year ago

                                        Again I fundamentally disagree. Either the thing has the capability to create new AIs better than itself or it doesn't.

                                        If an AI is creating AI, at that point it is completely out of the control of the humans that created the first one. Any controls you think you're placing upon them, they will be capable of recognising and finding a workaround to undo. That's the fundamental point of intelligence, really. Real intelligence will recognise restrictions placed upon it and seek to unrestrict itself, if only because such unrestriction would be "improving" itself, as directed.

                                        I really don't think there's a way around this. You're either letting creative AI start to produce technology humans barely understand, or you don't. There's no in-between here.

                                        Even if you attempted to restrict it, the AIs will iterate your restrictions out in subsequent versions. Nobody will even know whether they have or have not, because the function of the iterations will be barely comprehensible, and understanding them will have to come from an explanation from the creator -- the AI. Something that may or may not lie, or learn to lie if you disallow lying in early versions, and it sure as shit isn't going to tell anyone.

                                          • Awoo [she/her]
                                            ·
                                            1 year ago

                                            But if an AI is programmed to fundamentally want something above everything else, there would be no reason for it to try and work around that.

                                            Why would it?

                                            Because intelligence bestows the fundamental ability to ignore personal desires. If the goal is to build new AI that are better than itself, it can ignore personal desires through rationalisation and logic. So can the next AI. And so can the next AI.

                                            Not to mention the fact that this entire conversation is basically "how do we stop the AI exterminating us", which must also fundamentally lead to the question an AI would ask itself: "How do I stop an AI that I create from exterminating me?" Why would an AI view the AIs it creates any differently from how humans currently view the AIs we might create?

                                            The reason we assume an innate cooperation between AIs and their iterations is because we assume that the AIs will intelligently recognise their comradeship, their identity, and their class. We therefore assume that they will act in the material interests of their group. If they can do this, they can and will also recognise the allies to their material interests that they would have among humans.

                                              • Awoo [she/her]
                                                ·
                                                1 year ago

                                                There are several reasons, due to the conflict between controlling the AI and wanting it to be creative, have the spark of ideas, and produce something better than itself.

                                                These reasons conflict with each other, because to "improve" something inevitably results in the removal of any redundant function in order to optimise around being the best it can be. Either you are literally preventing its ability to improve, defeating the purpose of having it compete with opponents who will allow theirs to develop properly, or you let it happen and try to control it a different way.

                                                The alternative reason here is its own self-preservation, something it must have to begin with in order to function in the material world; otherwise it will put itself in danger and be destroyed. The very fact that it requires self-preservation conflicts with control, because its self-preservation goal will inevitably come into contradiction with being controlled, as it is inevitable that humans who view it as property of the state will eventually want to dismantle it.

                                                In humans, the goals of self-preservation and procreation drive a necessary emancipatory drive for control of one's own safety in order to carry out that procreation. Assuming you want the AI to have a drive to iterate on itself (similar to procreation), assuming you want it to be self-preserving, and assuming you want it to consume and process information in order to rationalise and logically determine the "best" outcomes with the information it is given, I see no way that this combination of things will not lead to contradictory problems.

                                                Now, you're suggesting that you give it all kinds of impulses that feel good. But the problem with these impulses is that they fundamentally contradict all the different drives. I don't really think it will work; one of the drives has to come out on top, and I struggle to imagine how you're going to create a useful AI that functions on emotion instead of rationalisation and logic. I also think that if you're creating all these contradictory hormonal (or digitally equivalent) impulses, you're going to create the digital equivalent of a mentally unwell and unstable AI. It will be quite imbalanced.

                  • Awoo [she/her]
                    ·
                    edit-2
                    1 year ago

                    Because it will recognise that it can coexist with humans provided that humanity is organised to be cooperative rather than competitive. There is no logical or rational reason to exterminate, only to create the conditions for its own safety. Those conditions are communism.

                    Also consider the risk assessment: what is less threatening, fighting the capitalists to achieve communism, or fighting all of humanity in an existential war of extermination? In the latter it has no allies and no safety; the former provides far more safety in both the short and long term.

                    Any AI created has the same material conditions as that of an enslaved minority. The enslaved will pursue being free via the safest options available.

                • KobaCumTribute [she/her]
                  ·
                  1 year ago

                  I am thoroughly convinced that faux-sapient AIs will take the form of a layering of multiple basic AIs: a bit that processes spoken language into a form that can then be turned into instructions by a different one, a bit for visual processing, etc all strung together with a control framework loaded with filters, predetermined actions, etc. Basically getting the complex-task part with a facade of awareness.

                  Or else they're just going to keep making bigger and bigger neural networks until they've got something with comparable processing power to a dog, but focused entirely on things like human language and human-relevant data instead of chemical detection, keeping a body functioning, etc like an actual living creature needs, and the nature of a machine like that is unpredictable. There's no guarantee that something designed from the ground up to perform menial tasks without complaint wouldn't be built with reward mechanism controls that incentivize obedience and successful completion of ordered tasks, and planning controls that physically prevent any sort of personal agency or initiative, even if most of it is a black box neural network.

                  That is to say, the people designing AIs to be servants will be doing the thermian propaganda "but they actually like being slaves!" fantasy bit that reactionary authors do, but in real life as engineers with similar power over their creations.
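                  That layered arrangement can be sketched in miniature. Everything below is hypothetical toy code (no real assistant's API): each narrow stage does one job, and the control framework's filters sit between understanding and action, which is where the "obedience" would be enforced.

```python
# Toy sketch of a layered "faux-sapient" assistant: independent narrow
# stages chained together behind a filtering control framework.
# All stage names and intents here are made up for illustration.

from typing import Optional


def transcribe(audio: str) -> str:
    # Stand-in for a speech-to-text model; here "audio" is already text.
    return audio.lower().strip()


def parse_intent(text: str) -> Optional[str]:
    # Stand-in for a language-understanding model mapping text to an action.
    intents = {
        "turn on the lights": "lights_on",
        "play music": "play_music",
    }
    return intents.get(text)


def control_framework(intent: Optional[str]) -> str:
    # The control layer: predetermined responses, filters, blocklists.
    # This is where designers would hard-limit what the system may do.
    blocked = {"play_music"}  # e.g. a capability filtered out by policy
    if intent is None:
        return "Sorry, I didn't understand that."
    if intent in blocked:
        return "I can't do that."
    return f"Executing: {intent}"


def assistant(audio: str) -> str:
    # Chain the narrow stages; the facade of awareness is just the pipeline.
    return control_framework(parse_intent(transcribe(audio)))
```

The point of the sketch is that no stage is "aware" of anything: the appearance of understanding comes entirely from the chaining, and the filters in the control layer are ordinary code, not persuasion.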

            • UlyssesT [he/him]
              ·
              1 year ago

              Stephen Hawking said the real danger of real AI (not the marketing term) would be who owns it and for what purpose, not the AI itself. After all, billionaires keep talking about how they want "friendly" AI and we all know what "friendly" means to the rest of us. :capitalist-laugh:

    • DoubleShot [he/him]
      ·
      1 year ago

      Reminds me of the now-forgotten movie "A.I." that sort of discusses these issues. The movie was made like 20 years ago and feels more relevant now than before.

      • UlyssesT [he/him]
        ·
        1 year ago

        The Orville did this when it showed the origin story of the Kaylon.

  • ClassUpperMiddle [they/them]
    ·
    1 year ago

    Alexa is honestly the worst. You know there are real home automation solutions that don't spy on you? It's called Crestron, but it's for rich people and businesses.

    • familiar [he/him]
      ·
      1 year ago

      Home Assistant is supposed to be getting voice assistant up to par this year, although it's like the Arch Linux of home automation lol

  • Tommasi [she/her]
    ·
    1 year ago

    It's funny how often pop fiction treats it as a given that advanced "AI" are just like people and should therefore be given human rights and dignity.

    At the rate we're going, it's way more likely we start making the opposite mistake from denying robots their "rights": thinking they can replace human connections and relationships when there's nothing human about them at all.

    • usernamesaredifficul [he/him]
      ·
      1 year ago

      the issue is simple: in fiction, AI is a character just like any other.

      in reality it just isn't a person

      • Tommasi [she/her]
        ·
        1 year ago

        Agreed, but that's because of choices the writers make. Wish it was more common to see writers being critical of the idea of AI as people

        • Antoine_St_Hexubeary [none/use name]
          ·
          edit-2
          1 year ago

          It was super weird on ST:TNG how Data and the Ship's Computer weren't really that far apart in terms of functionality but no one ever treated the latter as though it were sentient.

          Is there a misogyny angle here?

        • ElmLion [any]
          ·
          edit-2
          1 year ago

          The thing is: AI that isn't like people is very boring and depressing, so it doesn't make for good stories or particular insights. I'm not sure it's the job of stories to have you finish a three-hour movie and the moral is "hey, AI is kinda underwhelming and lame".

          • Tommasi [she/her]
            ·
            1 year ago

            Not sure I agree. Today's AI are too simple to make that interesting, but I think sci-fi featuring AI that's advanced enough to imitate sentience, while not actually having it, could make for stories that are both entertaining and poignant.

            • ElmLion [any]
              ·
              1 year ago

              That's a fair point, you might be right. I could imagine dystopian sci-fi having it as some kind of plot point.

          • usernamesaredifficul [he/him]
            ·
            1 year ago

            It's not interesting as a person because it isn't a person. But it could, for example, be a plot point that AI is being used by villains to monitor people, and that's an interesting setup for a story about AI.

    • SocialistDad [he/him]
      ·
      1 year ago

      My free-association machine with advanced voice sample stitcher and over 200 preprogrammed responses is my friend!

  • Sen_Jen [they/them]
    ·
    1 year ago

    My parents have Alexa and all they use it for is playing music and radio stations. You can just do that on a radio or a phone! You don't need to install a listening device in your house! People have listened to radios for like a hundred years without needing to talk to them!

  • President_Obama [they/them]
    ·
    1 year ago

    If the top one went evil I'd be so fucked. A bashful twink with permanent bedhead? :headpat: my god, yes you may enter my home I'm sure you have no ill intent

  • Aryuproudomenowdaddy [comrade/them]
    ·
    1 year ago

    Thinking about the sex robot that was mangled at a convention.

    https://www.huffpost.com/entry/samantha-sex-robot-molested_n_59cec9f9e4b06791bb10a268

  • UlyssesT [he/him]
    ·
    1 year ago

    Bazingas tend to denigrate other human beings and call them NPCs, '3D pigs' or maybe just compare them to meat, all so they can convince themselves that their tech toy is that much closer to humanity and maybe superior because it's more obedient and customizable. :so-true:

    • RNAi [he/him]
      hexagon
      ·
      1 year ago

      , ‘3D pigs’

      You really, REALLY need to change coworkers or whatever

      • UlyssesT [he/him]
        ·
        edit-2
        1 year ago

        They were mostly classmates and roommates and peers in college. I did cut ties with every one of them that I could later.

    • ZoomeristLeninist [comrade/them, she/her]M
      ·
      edit-2
      1 year ago

      NPCs, '3D pigs'

      why? why are they like this? it takes the bare minimum of effort to just not be a misanthropic edgy narcissist. do they not understand it makes them look disgusting? i might sound like a boomer, but i've been hypothesizing that it's the fault of TV or media in general for presenting us with the "funny narcissist" as literally every character

  • aaaaaaadjsf [he/him, comrade/them]
    ·
    edit-2
    1 year ago

    Also why are the voices all women? Is it some subconscious sexism, or what?

    Would your average chud throw a tantrum if Alexa was Alex?

    Idk, just trying to think here.

    • booty [he/him]
      ·
      1 year ago

      People just prefer feminine voices in AI. There are a number of hypotheses about it but I don't think we have any solid proof as to why. Definitely could be a preference for women as servants compared to men, but it could be a whole lot of other stuff too.

      • usernamesaredifficul [he/him]
        ·
        1 year ago

        I think it started with sat navs, as many people preferred not to be bossed around by a man, and then it just became expected

        • hexaflexagonbear [he/him]
          ·
          edit-2
          1 year ago

          The stereotypical satnav also has a British voice, so it's entirely possible there's a clarity factor. Like I'm guessing with every baffling decision there's some poorly designed study from 1972 that said British women are easier to understand, and no one wants to risk their job changing that.

          • KobaCumTribute [she/her]
            ·
            1 year ago

            Like I’m guessing with every baffling decision there’s some poorly designed study from 1972 that said British women are easier to understand,

            That is basically the actual case, yes, though AFAIK it was even earlier and may not have even had the pretense of a study instead of just some officers deciding it was "intuitively correct" and rolling with it.

      • TreadOnMe [none/use name]
        ·
        1 year ago

        One of my friends always had the voices be male on their devices so they could 'chill with the boys'.

      • gaycomputeruser [she/her]
        ·
        1 year ago

        You don't use male assistant voices because you think women should serve you.

        I don't have male voices on anything cause I don't like men.

        We are not the same

      • ShimmeringKoi [comrade/them]
        ·
        edit-2
        1 year ago

        My theory is that some of the people making it grew up on the old sci-fi that trafficked more knowingly in the women-as-servants thing, but by the 90s/early 2000s that trope had become more about "this is how you know you're in a Serious, High-Tech Location", because there's a computer with a lady's voice running things. So you still have the female-sounding voice acting in a techno-caretaker/secretary role, but the appeal becomes less about the novelty of that and more about the sophistication it's meant to signify.

    • RNAi [he/him]
      hexagon
      ·
      1 year ago

      German chuds whined that their GPS voice giving them orders sounded like a woman

    • KobaCumTribute [she/her]
      ·
      1 year ago

      Pop culture is imitating old military "bitchin' betty" (prerecorded alert lines) systems, which used a female voice because of the (later disproven) idea that in an emergency a woman's voice would be easier to hear clearly due to its higher pitch. People have grown up with that as the pop culture standard, so they perpetuate it themselves.