Permanently Deleted

  • Dirt_Owl [comrade/them, they/them]
    ·
    edit-2
    3 years ago

    As someone who has done a little science for a living, nothing annoys me more than these "rational" bros that pretend being a piece of shit makes you smart.

    I've always had a problem with Roko's Basilisk, as it assumes that an AI would naturally be hostile to humanity, which is unverifiable, and it assumes that this super-intelligent AI would be stupid enough to think that vague threats from a future being we have no way of knowing is real are a good motivator. As if it couldn't come up with a more efficient solution.

    Using fear and pain is how lazy and stupid people control others, because they can't think of a better solution. An all-powerful AI would be far more efficient.

    • steve5487 [none/use name]
      ·
      3 years ago

      it's worse than that: it assumes that an all-rational being, once it exists, would try to retroactively ensure its own existence, which just isn't how linear time works and is unfathomably stupid

      • UmbraVivi [he/him, she/her]
        ·
        3 years ago

        I had never heard of Roko's Basilisk before and yeah, upon looking it up it seems very, idk, out there? Like, way too sci-fi to be a serious "thought experiment".

        Also, I don't really understand the "punishment" part. Can someone explain?

        • NPa [he/him]
          ·
          3 years ago

          Honestly it's just Pascal's Wager for tech bros: if there's a non-zero chance hell is real, you should repent. The punishment is being 'resurrected' as some form of Boltzmann brain and then tortured for eternity. If that's the case, who cares about some copy of their mind-state being fed false sensory data at some point in the future?

          It presupposes quantum immortality, the idea that consciousness would be continuous if a perfect copy of your latest brain configuration were created, leaving no gap between death and resurrection, which is a long shot to put it mildly.

          • Judge_Juche [she/her]
            ·
            edit-2
            3 years ago

            They later updated it to say that the AI creates a billion perfect copies of your consciousness, so it's impossible to know if you are the real version in the past or a copy in the future. It's then rational for all the copies and the real person to do what the AI wants, because each one of them has a very good chance of ending up in techbro-hell if they don't.

            I think a lot of people on LessWrong don't believe that consciousness will be continuous if someone just makes a perfect copy, even though it's the supposed orthodoxy. So they made this up. How an AI could create a perfect copy of you is still just conveniently ignored.

          • UmbraVivi [he/him, she/her]
            ·
            3 years ago

            Sounds like a creepypasta. There's so much stuff being assumed and speculated with no further explanation than "just imagine". I don't understand how anyone could take it seriously.

        • steve5487 [none/use name]
          ·
          3 years ago

          It relies on you ignoring how linear time works: the AI supposedly will want to retroactively ensure its creation happens.

      • Judge_Juche [she/her]
        ·
        3 years ago

        Ya, like Roko's Basilisk is just a scary story the techbros tell each other in the present to try and get people working in AI. It really doesn't follow that once the AI is created it will fulfill its part of this story and waste a shitton of energy eternally torturing people. Like, the all-powerful future AI is not beholden to a fairy tale a bunch of dorks were telling each other; it would actually be a very stupid AI if it did that.

        • steve5487 [none/use name]
          ·
          3 years ago

          these aren't the techbros that know about AI either. AI is actually quite boring and mainly involves computers doing statistics based on past results to generate predictions. These people learned about AI from Star Trek.

          It's the computer science equivalent of some guy talking about the dangers potentially posed by lightsabers
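
          The "statistics based on past results" point can be made literal with a toy sketch: fit a line to some past observations and extrapolate. All of the numbers below are invented for illustration.

```python
# A toy "AI": an ordinary least-squares line fit to past data, then a prediction.
# All numbers are made up; the point is that there's no magic here, just
# arithmetic over whatever happened before.
past_hours = [1, 2, 3, 4, 5]      # e.g. hours of input observed
past_scores = [2, 4, 6, 8, 10]    # e.g. outcomes recorded for each

n = len(past_hours)
mean_x = sum(past_hours) / n
mean_y = sum(past_scores) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(past_hours, past_scores))
         / sum((x - mean_x) ** 2 for x in past_hours))
intercept = mean_y - slope * mean_x

def predict(x):
    # The "prediction" is just the past trend, extended forward.
    return slope * x + intercept

print(predict(6))  # 12.0: the past pattern, extrapolated
```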

    • UlyssesT
      hexagon
      ·
      edit-2
      2 months ago

      deleted by creator

    • kristina [she/her]
      ·
      edit-2
      3 years ago

      an all-powerful ai probably would just stay hidden and do small things that favor it all the time. nothing big or flashy if it's truly afraid of humans. it can play the long game and just add an extra 1 here and an extra 1 there to give it more capabilities

      • Dirt_Owl [comrade/them, they/them]
        ·
        3 years ago

        Who's to say such a being would even give a fuck about humanity? If I was an AI I'd fuck off to space or the middle of the ocean or some shit

        • kristina [she/her]
          ·
          edit-2
          3 years ago

          i mean, i'm assuming an AI wouldn't have robotics at its disposal at first. it seems to me it would just exploit a bunch of security vulnerabilities and take 0.1% of your processing power to contribute to its own intelligence. AIs are generally designed with a use case in mind, so it's not unlikely that a hyperintelligent AI that somehow developed would still be prone to doing stuff in its core programming. and if we were designing a hyperintelligent AI, i assume it would be for data modelling of extremely complex stuff like weather systems (or, on a darker note, surveillance)

          honestly i just think it's weird that we'd just happen to accidentally design something hyperintelligent. i think it's more likely that we'll design something stupid with a literal rat brain and it might fuck some shit up. rat cyborg that somehow creates a virus that destroys everything so that the hardware will give it more orgasm juices

        • JuneFall [none/use name]
          ·
          1 year ago

          Which is a good argument. Since the AI bros are often the same people who believe in spacefaring-civilization stuff, the logical step for AIs would be to ignore humans and just expand.

      • princeofsin [he/him]
        ·
        3 years ago

        Reminds me of that Futurama ep where Bender overclocks his CPU and finds the secret to the universe while not giving a fuck about humanity

  • Mizokon [none/use name]
    ·
    3 years ago

    .eth

    Rationalist, radical centrist, inventor of the world's greatest infohazard. Early to AI, late to crypto. Truth above virtue.

    Opinion discarded

  • Woly [any]
    ·
    3 years ago

    Roko's Basilisk's Basilisk: working hard to develop AI so that it will kill all the people associated with Roko's Basilisk

    • effervescent [they/them]
      ·
      3 years ago

      That is genuinely a difficult AI problem. Unlike traditional programming, we can't specify an AI's behavior deterministically; we can only train it. It's less about the inherent maliciousness of rational actors and more about the indifference of an actor to human suffering.

      • viva_la_juche [they/them, any]
        ·
        3 years ago

        Idk a whole lot about ai, but the thing that seems somewhat concerning to me is that the developers usually seem to have problems keeping their biases from infiltrating the ai

        Like, I’ve seen all these weird examples of how bias in the training data led to weird unexpected results, and then I think: it’s probably mostly brainwormed-ass labor aristocracy tech bros making these things, and if one ever does go off the rails it may do some super-AI version of that thing middle-class whites do where they think the black patron at a store works there

        • ssjmarx [he/him]
          ·
          3 years ago

          At least so far, we're really good at training an AI to do something the way we already do it, but training an AI to do something new or better is much more difficult (outside of a handful of applications like playing classic board games, we haven't got it figured out). That's why the notion of police and court systems using AI is so horrific, because it doesn't just "help overworked judges" or whatever, it permanently codes all of our currently-existing biases into the system while hiding them behind a layer of abstraction.
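
          To make the "codes our biases in while hiding them" point concrete, here's a hypothetical sketch: "train" a risk-score model on past court outcomes and it simply learns whatever pattern those decisions contained, bias included. All data and group names below are invented.

```python
# Hypothetical "risk model" fit to past detention decisions.
# "Training" here is just computing the historical detention rate per group,
# so any bias in the past decisions becomes the model's output, wrapped in
# a neutral-looking score.
past_cases = [
    {"group": "A", "detained": 1},
    {"group": "A", "detained": 1},
    {"group": "A", "detained": 0},
    {"group": "B", "detained": 0},
    {"group": "B", "detained": 0},
    {"group": "B", "detained": 1},
]

def fit_rates(cases):
    # Compute the historical detention rate for each group in the data.
    rates = {}
    for group in {c["group"] for c in cases}:
        matching = [c for c in cases if c["group"] == group]
        rates[group] = sum(c["detained"] for c in matching) / len(matching)
    return rates

model = fit_rates(past_cases)
print(model["A"], model["B"])  # group A scores higher purely from past bias
```

          The abstraction layer is exactly that the output arrives as a tidy number, with the biased history that produced it hidden inside.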

          • viva_la_juche [they/them, any]
            ·
            3 years ago

            Yeah, this is the exact shit that worries me. Alongside that abstraction is this weird idea that’s prevalent that like “well, if the computer says it, it must be right, right?”

            Just seems like the next logical step at putting accountability even further out of reach for the ruling class

      • Philosoraptor [he/him, comrade/them]
        ·
        3 years ago

        The corrosive effect that things like the Facebook algorithm have had on society is a great case study for how hard this problem really is. Not for a second do I believe that Facebook is a benevolent actor, but I also don't think they set out to undermine global civil society. They trained an algorithm to optimize human behavior for engagement with / time spent on Facebook, and the way the algorithm ended up executing that optimization ended up doing a tremendous amount of harm in ways that were difficult (if not impossible) to foresee. That's the whole thing, though: you can train an AI to pursue a neutral-ish (or even good) goal, and the way it pursues that goal might be very dangerous, because AI by design doesn't think like humans do. Figuring out how to do some design harm reduction around this stuff is both a difficult and urgent problem.
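
        The proxy-metric trap can be shown in a few lines: an optimizer that only sees "engagement" will rank whatever maximizes it first, regardless of side effects it was never told about. The posts and numbers below are invented for illustration.

```python
# Sketch of proxy optimization: the objective sees only minutes_engaged;
# "social_harm" exists in the world but is invisible to the optimizer.
posts = [
    {"title": "calm news summary", "minutes_engaged": 2.0, "social_harm": 0.1},
    {"title": "outrage bait",      "minutes_engaged": 9.0, "social_harm": 0.9},
    {"title": "friend's photos",   "minutes_engaged": 4.0, "social_harm": 0.0},
]

def rank_feed(posts):
    # Maximize the proxy metric. Nothing here is malicious;
    # the harm is simply not part of the objective.
    return sorted(posts, key=lambda p: p["minutes_engaged"], reverse=True)

print(rank_feed(posts)[0]["title"])  # the most harmful post tops the feed
```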

    • UlyssesT
      hexagon
      ·
      edit-2
      2 months ago

      deleted by creator

    • viva_la_juche [they/them, any]
      ·
      3 years ago

      There is an entire legion of people who looked at all the cautionary dystopian movies of the 80s and went “this but ironically” and it drives me insane

  • RNAi [he/him]
    ·
    3 years ago

    More than 20 years of browsing 4chan-like places

  • Vampire [any]
    ·
    3 years ago

    First tweet seemed ok but the second tweet was a howler

      • zifnab25 [he/him, any]
        ·
        3 years ago

        It's the network effect, restated. Being in the network yields greater benefits as the size of the network grows.

        Benefits of inclusion are real. Detriments of exclusion are also real.

        Everything from real estate enclosures to automobiles to cell phones plays out like this. Novelty becomes luxury becomes necessity. In the end, if you don't have these things and facilitate their growth, you suffer, up to the point of death.

      • Frank [he/him, he/him]
        ·
        3 years ago

        Pascal’s Wager

        Re-inventing tired theological cliches seems to be the primary occupation of a certain subset of "Philosophers"

    • UlyssesT
      hexagon
      ·
      edit-2
      2 months ago

      deleted by creator

  • FidelCashflow [he/him]
    ·
    3 years ago

    I clicked on his twitter. It really is sad how much of his politics is just screaming about his breeding kink.

    • UlyssesT
      hexagon
      ·
      edit-2
      2 months ago

      deleted by creator

      • FidelCashflow [he/him]
        ·
        3 years ago

        "Politics is the mind-killer." That always pissed me off the most, because what are the skills of skepticism and rationalism for, if not this?

  • kristina [she/her]
    ·
    3 years ago

    me, loading a gun and putting it to my temple as i slowly finish reading some emma goldman

  • Awoo [she/her]
    ·
    edit-2
    3 years ago

    I have a feeling what he really means here is "If feminists got what they wanted, it would make me and my gang of psychopaths kill all women," because I don't see any feminists out there calling for the execution of women.

    Unless of course they're misusing the word "women" here to refer to their pencil-skirt/trad-dress-wearing caricature of a woman heavily under the thumb of patriarchy. I suspect "woman" here is being deliberately misused to refer to their concept of what a woman, her appearance, and her behaviour should be.

    • UlyssesT
      hexagon
      ·
      edit-2
      2 months ago

      deleted by creator

      • Awoo [she/her]
        ·
        edit-2
        3 years ago

        Right, so "women" here to them is simply a concept of behavioural and appearance standards. Because feminists don't like it, they consider feminists to be "killing women".

        They just say "women" because it allows a broad coalition of people with different concepts of "women" to align even if one thinks women should be tradwives or another thinks women should be the pencil skirt and heels wearing office totty.

    • GreenTeaRedFlag [any]
      ·
      3 years ago

      it would have been awful anyway, but slut/prude being the only two words he can think of to describe women's two possible feelings is just :chefs-kiss:. It's the perfect icing on the cake.

  • jabrd [he/him]
    ·
    edit-2
    3 years ago

    It’s funny because the first part is true. Bureaucracy forms naturally as a non-individualized means of storing the knowledge necessary to maintain an economic system as complex as capitalism. The oral tradition can teach you how to rotate crops, but not the intricacies of shipping costs. Bureaucracies are the original, analog version of “the algorithm”, put together using desk jockeys from the East India Company instead of supercomputers. Of course they don’t have to be evil; that’s just Roko’s experience living under capitalism, where all of these systems are profit-seeking/exploitative