https://subscriber.politicopro.com/article/eenews/2023/07/06/a-faster-supercomputer-will-help-scientists-assess-the-risk-of-controlling-sunlight-00104815

  • UlyssesT [he/him]
    ·
    edit-2
    1 year ago

    I wouldn't worry about that; the people who talk about friendly vs unfriendly AI are all very stupid and none of what they say is grounded in reality.

    A lot of them are billionaires, and even if they aren't very smart, they do have power and connections and can do destructively arrogant and ignorant shit with it.

    It doesn't even have to be "true" AI. See my-hero's "TruthGPT" project, which is just a butchered ChatGPT that is edgier and talks more like 4chan. That can go worse places, especially because contemporary society by and large still thinks that chatbots, as they are, are a good substitute for thinking people in arbitrating decisions. "TruthGPT" could quite easily be applied to criminal profiling, airport security screenings, or just filling reddit-logo with more hate, and it doesn't even take any science fiction elements, just time.

    • usernamesaredifficul [he/him]
      ·
      1 year ago

      The bottleneck with creating something like ChatGPT is data collection for training. ChatGPT cost half a billion dollars to make.

      I would assume you don't get much more data for $1 billion than for $0.5 billion, and you get diminishing returns, so I doubt we will see many improvements in generative large language models until they find new, better sources of training data, which is more an organisational problem than a technical one.

      AI for criminal profiling would be a nightmare, especially if you just gave it details of convicted criminals, as that would basically produce an institutional racism machine.

      sorry, lots of diverse elements to respond to here
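
      The "institutional racism machine" point can be sketched in a few lines. This is a hypothetical toy simulation, not any real system: the group names, offense rate, and enforcement rates are all made up for illustration. Both groups here offend at exactly the same true rate; one is simply policed twice as heavily, so a model that learns "risk" from conviction records scores that group as roughly twice as risky.

```python
# Toy simulation: equal true behaviour, unequal enforcement.
# All numbers are invented for illustration only.
import random

random.seed(0)
TRUE_OFFENSE_RATE = 0.05            # identical for both groups
ENFORCEMENT = {"A": 0.2, "B": 0.4}  # group B is policed twice as hard

def conviction_rate(group, n=100_000):
    """Fraction of the group that ends up with a conviction record."""
    convicted = 0
    for _ in range(n):
        offended = random.random() < TRUE_OFFENSE_RATE
        caught = random.random() < ENFORCEMENT[group]
        if offended and caught:
            convicted += 1
    return convicted / n

# A naive frequency model "learns" risk as the observed conviction rate,
# so it inherits the enforcement bias, not the underlying behaviour.
risk = {g: conviction_rate(g) for g in ("A", "B")}
print(risk)
```

      The "model" here is just per-group conviction frequency, but the same failure appears in any classifier trained on those labels: it recovers who gets convicted, not who offends.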

      • UlyssesT [he/him]
        ·
        1 year ago

        Impoverished people who often systemically suffer from racism as part of what impoverishes them can, and in some ways already do, suffer further from machine learning technology pressed against them.

        The "nonpolitical" framing I brought up elsewhere as a common techbro conceit is applicable here: they can claim (and already do) that it's "just nonpolitical, objective data" saying that poor minorities are poor because they are minorities.

        • usernamesaredifficul [he/him]
          ·
          1 year ago

          Only someone completely boneheaded and ignorant of the nature of statistics would conclude that, because data shows black people are imprisoned more, black people are more criminal. So I am not surprised many tech people think that.

          racism in AI is a real issue that not enough is being done to combat

          • UlyssesT [he/him]
            ·
            1 year ago

            Only someone completely boneheaded and ignorant of the nature of statistics

            many tech people think that

            nicholson-yes

            I actually took a fair number of statistics courses, and tried to explain some basic stuff like what margins of error mean and why there's significance to sample sizes of as few as a thousand, but I've had roommates dismiss data that didn't fit what they already believed, then immediately embrace something else that fit what they believed.

            The most glaring example was "not a racist, but" chudlings trotting out "do you know that us-foreign-policy commit X% of the violent crimes" talking points, and I'd counter, just to fuck with them, with statistics about what percentage of violent crimes are committed by men in general, and they'd go full wojak-nooo about how unfair that is.

            They wanted to be Dwight Schrute-style violent nerd warriors but didn't want to seem like a statistical violence risk. ok