Yeah...

  • Frank [he/him, he/him]
    hexagon
    ·
    1 year ago

    I'm reading this article

    https://link.springer.com/article/10.1007/s13347-023-00621-y

    And I like it because it conforms to my biases - ChatGPT has no cognitive process, it cannot evaluate the semantic content of the text it produces (which I would argue means there is no semantic content. We interpret the text as having meaning, but in truth it is meaningless, pure noise that simply resembles speech and is sometimes "correct" by accident) and it's basically a neat magic trick.

    Also, turns out that ChatGPT's miraculous abilities are partially due to workers in Kenya who were paid a pittance to read the most awful content imaginable and label it. CW: Mention of CSA, SA, violence, trauma, worker exploitation. The man-made horrors are well within my comprehension.

    According to the article, the company was also doing content moderation for Facebook for a while but pulled out, because doing content moderation for Facebook is a one-way ticket to horrific trauma and it doesn't take very long to get there.

    Shiny cool new toys and gross violations of ethics and human dignity foisted on test subjects in Africa. Name a better combination.

    One massive issue I have with trying to make sense of any of this: I don't know enough about how these machines work to assess whether a given article is written by someone capable of giving informed opinions, or if it's written by a bazinga true believer high on their own supply. This is especially problematic with "AI" language models; Bazingas may not actually know a goddamn thing about neuroscience, linguistics, or semantics and are just failing the Turing Test left and right, attributing human-like attributes to a weighted random number generator. And the linguists et al. may not understand what's going on on the math side. Love me some known unknowns and unknown unknowns. This is why I usually don't fuck with Philosophy - You have to spend years studying philosophy to even get to the point where you can evaluate whether a philosopher has something useful to say or is just completely full of shit.

    • DiltoGeggins [none/use name]
      ·
      edit-2
      1 year ago

      BTW, the "G" in GPT simply stands for "generative", it means that the AI "machine", or "model", or "instance", or whatever we want to call it, "software", even... learns as it goes, and incorporates that learning to evolve, basically in real time. In the past, an AI machine would be taught in a lab, and if the maker wanted it to learn and grow, it would have to take the machine offline, and feed it new data, check that it was giving the desired results, and then bring back online. with the latest version of this technology, the machine is constantly learning, and evolving, it doesn't need to be taken offline to learn and grow.

      Which leads to the next point, the real point: this technology may be capable of ending humanity, but probably not any time soon. Looking at an app like chatGPT, it's horribly inaccurate, mistake-prone, full of logical errors. It's not anything you can even count on to write a reasonably succinct article that hits all the salient points, unless it's a really simple topic. It's easy to test my assertion: simply log in to chatGPT and start feeding it prompts. It becomes quite apparent that the technology is nowhere near ready for primetime, at least not for your average, everyday user.

      • Frank [he/him, he/him]
        hexagon
        ·
        edit-2
        1 year ago

        this technology may be capable of ending humanity

        I agree, but it's gonna be because some C-Suite dweeb fires the nuclear engineers tasked with emergency response and replaces them with a madlibs generator, and then a crisis happens...

        And from what I understand, all they "learn" to do is predict what letter goes next. There's still no cognitive process, no manipulation of symbols, no abstraction of concepts. Nothing that a mind does. It's still just a fancy weighted random number generator. The resulting strings of text have no semantic value and only resemble speech. We're interpreting them as having meaning because most people don't look under the hood. It's linguistic Pareidolia. Our highly tuned pattern recognition that we rely on to communicate using speech, body language, and so forth is failing in the face of an object that closely resembles speech.
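
        A toy sketch of what that looks like mechanically - a made-up one-sentence "corpus" and a simple next-character count table, nowhere near what the real models are, but the shape of the loop is the point:

        ```python
        from collections import Counter, defaultdict

        corpus = "the cat sat on the mat. the cat ate."  # toy stand-in for a training set

        # Count which character tends to follow which.
        follows = defaultdict(Counter)
        for a, b in zip(corpus, corpus[1:]):
            follows[a][b] += 1

        def next_char(c):
            # Return the statistically most likely character to follow c.
            return follows[c].most_common(1)[0][0] if follows[c] else " "

        # Generate text by repeatedly appending the most likely next character.
        text = "t"
        for _ in range(20):
            text += next_char(text[-1])
        print(text)  # resembles the corpus; no understanding anywhere in the loop
        ```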

        This is a major plot point in (CW: body horror, violence, profound existential dread, mental illness) Blindsight. Spoilers for Blindsight:

        spoiler

        A linguist spends hours talking with an alien entity before concluding that it has no semantic understanding of its speech. It built a model of human speech by intercepting radio communications and was using that model to "communicate", but it was just mimicking human behavior it had observed. It had no understanding or awareness of why humans were engaging in that behavior or what it did; it just knew that they did it and was mimicking their behavior to get closer to them, the way some predators mimic their prey.

        Looking at an app like chatGPT, it's horribly inaccurate, mistake-prone, full of logical errors.

        I think we need to push back against claiming that it makes mistakes, logical failures, "hallucinations" or "lies". It can't do those things. It's a computer, and computers do exactly what they're told. The problem is that the user often doesn't understand exactly what they're telling the computer to do. People prompt ChatGPT to answer questions because they think it's smart, that it understands what they're typing, that it can think and consider and solve problems. But it doesn't do any of that. It just compares their prompt to its data set and assembles a string of letters that resembles the data set. It's not making mistakes; it's doing exactly what it was designed to do: take an input and produce an output based on the statistical weights of its data set. We're regularly fooling ourselves because the output is in letters, not numbers, and we're attributing meaning to those letters where none exists. If it gets the answer "right" it's pure luck; there just happened to be enough text strings in its training set, and it weighted the values in its set in the right way, for a string of output that happens to resemble a correct answer.

        This is compounded enormously because, as I understand it, the bazingas who designed these things built a complete black box - They have no way of determining why the LLM generated the outputs that it did. Presumably, unless there is some really weird shit going on, those outputs are deterministic - Given the same inputs the machine should produce the same outputs.
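
        The determinism point is easy to see in miniature - toy weights I made up, not a real model, and real deployments add deliberate random sampling on top, which is why the same prompt can come back different:

        ```python
        import random

        # Pretend these are the model's learned "statistical weights" for the next token.
        weights = {"cat": 0.6, "dog": 0.3, "rock": 0.1}

        def greedy(w):
            # No randomness at all: the same input gives the same output, every time.
            return max(w, key=w.get)

        def sampled(w, seed):
            # Randomness is injected on purpose; fixing the seed makes it repeatable again.
            rng = random.Random(seed)
            return rng.choices(list(w), weights=list(w.values()))[0]

        print(greedy(weights))       # always "cat"
        print(sampled(weights, 42))  # same seed -> same "random" pick, every run
        ```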

        • DiltoGeggins [none/use name]
          ·
          1 year ago

          And from what I understand, all they “learn” to do is predict what letter goes next. There’s still no cognitive process, no manipulation of symbols, no abstraction of concepts.

          Fascinating but not surprising I guess

          • Frank [he/him, he/him]
            hexagon
            ·
            1 year ago

            There's a lot of argument about this. I know some people who think it's manipulating concepts, it can abstract ideas, shit like that. But my hard counter is that the image generators can't draw hands. And the reason they can't draw hands is that they're incapable of abstraction. Despite sampling likely millions or hundreds of millions of images of hands the model has no awareness that all of those inputs are part of a class of objects we call "hands", and that most hands have similar attributes.

            We can look at a person with extra fingers, a person with fewer or missing fingers, a monkey, a robot, a crab, a space alien, and a snow man and we'll understand that whatever is at the end of the upper limbs, to a certain degree of difference, is a hand and has the attributes of hand - It manipulates and grasps objects, etc.

            If someone asks us how many fingers are at the end of a hand we know it's five, but we also know that James Doohan, despite having four fingers, still has a hand. "Hand" is an abstract object we can manipulate.

            But the plagiarism machine can't do that. All it does is reproduce variations of its data set with no semantic understanding of that data set. It can't draw hands because in its data set there are countless variations of hands, hands in all shapes, hands in all positions, hands of varying colors. We could look at all of those hands and recognize them as hands, and if asked to draw a hand in the style of X we'd still give it five fingers. If we had more or fewer fingers we'd be doing it on purpose, knowing that we're deviating from the "ideal" hand object we understand.

            But the LLM can't abstract; it can't conceive of "hand". It just looks for statistical weights in its data sets. Since hands are so variable the data set is a mess. There are trends in color, there are trends in lines that we would recognize as fingers. But the LLM just generates statistically likely color values. It doesn't know what hands or fingers are, so it doesn't know that the human prompting it wants a hand with five fingers, etc. It just outputs a string of numbers that are statistically similar to its training set.
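
            Crude illustration of why "statistically likely color values" punishes variable things like hands - this is just a per-pixel average, which is not what the real image models do, but the consistency-vs-variability intuition carries over:

            ```python
            import numpy as np

            rng = np.random.default_rng(0)

            # "Faces": the bright blob is always in the same place, so the average keeps its shape.
            faces = [np.zeros((8, 8)) for _ in range(100)]
            for img in faces:
                img[2:6, 2:6] = 1.0

            # "Hands": the blob lands somewhere different every time, so the average is mush.
            hands = []
            for _ in range(100):
                img = np.zeros((8, 8))
                r, c = rng.integers(0, 5, size=2)
                img[r:r+3, c:c+3] = 1.0
                hands.append(img)

            print(np.mean(faces, axis=0).round(1))  # crisp square: consistent layout survives averaging
            print(np.mean(hands, axis=0).round(1))  # smeared blur: variability washes the shape away
            ```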

            Idk if I'm explaining this well, but to me that inability to draw hands, and it's not just hands, is a silver bullet to the idea that these things think or manipulate symbols. Because it's not just hands, it doesn't recognize anything. When you look at the details of the images, the little things like buttons, jewelry, complex gadgets, they're almost always blobs of noise in roughly the right shape. It has no awareness that it's being asked to draw an abstracted object from a set of objects. It's just reproducing weighted data. It can do faces because there are a vast, vast number of faces in its data set, probably far more than most other objects, and faces are very consistent in their shape and layout. So the probability that whatever nonsense it generates will be interpreted by human observers as a face is pretty high. But when you ask it to do something that isn't as consistently shaped and as massively represented in the set as faces, it chokes.

            The tells I look for for plagiarism machine "art" are generally things like jewelry, buttons, anything that should be symmetrical. They're really bad at symmetry, presumably because they can't abstract and so aren't aware that the buttons on each side of a coat are the same object, or the same class of objects and should be similar in most respects. Jewelry too - It's so varied, and the machine isn't understanding that they're discrete objects made up of smaller objects, so it just outputs a blur that, if you actually look at it, isn't actually jewelry.

            Like maybe I'm wrong, maybe there is some weird totally alien process in there, but whatever it's doing, it's not doing anything like what we do. (Unless I am totally, completely wrong and just don't know enough to know I'm wrong, which would be really annoying).

            • DiltoGeggins [none/use name]
              ·
              1 year ago

              Like that old story about the monkeys: given enough time, and a typewriter with endless ribbon and paper (and bananas, I guess), they'll randomly produce Shakespeare's works. Might take the monkeys 10,000 years, but dangit they'll get it done. And of course, by then we'll have forgotten all context and imagine this could only have been done because they are actually Superior to us, and we will begin worshiping them, with bananas as the main form of adoration.... 🍌🍌🍌🍌🍌

              • Frank [he/him, he/him]
                hexagon
                ·
                1 year ago

                And as near as I can determine, all this system does is give the monkeys a banana when they hit a key that, based on an analysis of Shakespeare, is statistically likely to be next. And eventually the monkeys are trained to assemble words in ways that resemble Shakespeare, but they're still monkeys with no idea what they're doing.

                • DiltoGeggins [none/use name]
                  ·
                  1 year ago

                  We can perhaps hope that eventually they begin to learn from the experience. (in a strictly evolutionary way..) :P

                  • Frank [he/him, he/him]
                    hexagon
                    ·
                    1 year ago

                    I don't think it's possible. The monkeys aren't monkeys, it's a prediction engine that decides what the next token - be it a letter, word, number, whatever - is going to be. There's never any point in that process where it's going to start having self-reference. It's a dead end. They're trying to work backwards from the end point of billions of years of brutal selection to re-create a process they don't understand.

                      • Frank [he/him, he/him]
                        hexagon
                        ·
                        1 year ago

                        Yeah, I was reading a reply where some guy said he could be a Turing machine if he had enough spare sheets of paper to work with, and that's not how human working memory works. If we assume that a cow is a spherical object in a vacuum then sure, buddy, you can simulate a Turing machine. But in the real world your meatsack can only manage so much stuff in your head, and eventually you'd reach a point where you would no longer be able to keep performing the tasks necessary to do your Turing machine thing. That's one of the most important things computers have going - you can store shitloads of information in memory and hard storage without losing track of it.

                • 0karin728 [any]
                  ·
                  1 year ago

                  This is just the whole Chinese room argument; it confuses consciousness with intelligence. Like, you're completely correct, but the capabilities of these things scale with compute used during training, with no sign of diminishing returns any time soon.

                  It could understand Nothing and still outsmart you, because it's good at predicting the next token that corresponds with behavior that would achieve the goals of the system. All without having any internal human-style conscious experience. In the short term this means that essentially every human being with an internet connection now suddenly has access to a genius-level intelligence that never sleeps and does whatever it's told, which has both good and bad implications. In the long term, they could (and likely will) become far more intelligent than humans, which will make them increasingly difficult to control.

                  It doesn't matter if the monkey understands what it's doing if it gets so good at "randomly" hitting the typewriter that businesses hire the monkey instead of you, and then, as the monkey becomes better and better, it starts handing out instructions to produce chemical weapons and other bio warfare agents to randos on the street. We need to take this technology seriously if we're going to prevent Microsoft, OpenAI, Facebook, Google, etc. from accidentally Ending the World with it, or deliberately making the world Worse with it.

                  • Frank [he/him, he/him]
                    hexagon
                    ·
                    1 year ago

                    It's not the Chinese room problem, it's a practical limitation of the ChatGPT plagiarism machines. We're not talking about a thought experiment where the guy in the room has the vast, vast, vast amount of rules needed to respond to any arbitrary input in a way the Chinese speaker will interpret as semantically meaningful output. We're talking about a machine that exists right now, that far from being trained on an ideal, complete model of Chinese is trained on billions and billions of shitposts on the internet.

                    Maybe someone will make a machine like that in the future, but this ain't it. This is a machine that predicts letters, has no ability to manipulate symbols, no semantic understanding, and no way to assess the truth value of its outputs. And for various reasons, including being trained on billions of internet shitposts, it's unlikely to ever develop these things.

                    I'm really not interested in speculation about future potential intelligent systems and AIs. it's boring, it's been done to death, there's nothing new to add. Right now I want to better understand what these things do so I can own my friends who think they're manipulating abstract symbols and understand the semantic value of those symbols.

                    • UlyssesT
                      ·
                      edit-2
                      24 days ago

                      deleted by creator

                    • 0karin728 [any]
                      ·
                      1 year ago

                      Yeah, obviously. Current AI is shit. But it's proof that deep learning scales well enough to perform (or at least somewhat consistently replicate, depending on your outlook) behavior that humans recognize as intelligent.

                      Three years ago these things could barely write coherent sentences; now they can replace a substantial number of human workers. Three years from now? Who the fuck knows - emergent abilities are hard to predict in these models by definition, but new ones Keep Appearing when they train larger and larger ones on higher-quality data. This means large-scale social disruption at best and catastrophe (everything from AI-enabled bioterrorism to AI propaganda-driven fascism) at worst.

                  • UlyssesT
                    ·
                    edit-2
                    24 days ago

                    deleted by creator

                    • 0karin728 [any]
                      ·
                      1 year ago

                      They're starting a dangerous arms race where they release increasingly dangerous and poorly tested AI into the public, while dramatically overselling their safety. Pointing out that this technology is dangerous is the exact opposite of what they want.

                      You're playing into their grift by acting like the entire idea of AI is some bullshit techbro hype cycle, which is exactly what Microsoft, OpenAI, Facebook, etc. want. The more people pay attention and think "hey, maybe we shouldn't be integrating enormous black box neural networks deep in all of our infrastructure and replacing key human workers with them", the more difficult it will be for them to continue doing this.

                      • UlyssesT
                        ·
                        edit-2
                        24 days ago

                        deleted by creator

                        • 0karin728 [any]
                          ·
                          1 year ago

                          What talking points then? I seem to be misunderstanding your criticism (or it's meaninglessly vague, but I'm trying to be charitable). What specifically have I said that you take issue with?

            • UlyssesT
              ·
              edit-2
              24 days ago

              deleted by creator

        • 0karin728 [any]
          ·
          edit-2
          1 year ago

          LLMs definitely are not the Magic that a lot of idiot techbros think they are, but it's a mistake to underestimate the technology because it "only generates the next token". The human brain only generates the next set of neural activations given the previous set of neural activations, and look at how far our intelligence got us.

          The capabilities of these things scale with compute used during training, and some of the largest companies on earth are currently in an arms race to throw more and more compute at them. This Will Probably Not End Well. We went from AI barely being able to form a coherent sentence to AI suddenly being a bioterrorism risk in like 2 years, because a bunch of chemistry papers were in its training data and now it knows how to synthesize novel chemical warfare agents.

          It doesn't matter whether or not the machine understands what it's doing when it's enabling the proliferation of WMDs, or going rogue to achieve some Incoherent goal it extrapolated from its training; you're still Dead at the end.

          • UlyssesT
            ·
            edit-2
            24 days ago

            deleted by creator

          • Frank [he/him, he/him]
            hexagon
            ·
            1 year ago

            The human brain only generates the next set of neural activations given the previous set of neural activations

            :doubt:

            As near as anyone can tell humans are not deterministic, there's a lot more to cognition than neuronal activity, and these "humans are analogous to computers" arguments are enormously reductive at best, but usually just completely wrong.

            Building some novel plague is a possibility, and a good reason to burn all of these things and shoot the idiots who made them.

            • 0karin728 [any]
              ·
              1 year ago

              Yes, obviously it's an oversimplification, but fundamentally every computational system is either Turing complete or it isn't, that's the idea I was getting at. The human brain is not magic, and it's not doing anything that a sophisticated enough algorithm running on a computer couldn't do given sufficient memory and power.

              • UlyssesT
                ·
                edit-2
                24 days ago

                deleted by creator

                • Frank [he/him, he/him]
                  hexagon
                  ·
                  1 year ago

                  Word. I don't see any bits getting flipped in the brain, this "brain as a computer" thing seems pretty sketchy.

                  • 0karin728 [any]
                    ·
                    1 year ago

                    Computational universality has nothing to do with a digital computer flipping bits. It just means that all systems which manipulate information (perform computation), and can do so at a certain level of complexity (there are lots of equivalent ways of formulating it, but the simplest is that they can do integer arithmetic), are exactly equivalent, in that they can all do the same set of computations.

                    It's pretty obvious that the human brain is at least Turing complete, since we can do integer arithmetic. It's also impossible for any computational system to be "more" than Turing complete (whatever that would even mean) since every single algorithm that can be computed in finite time can be expressed in terms of integer arithmetic, which means that a Turing machine could perform it.

                    Obviously the human brain is many, many, many layers of abstraction and is FAR more complicated than modern computers. Plus neurons aren't literally performing a bunch of addition and subtraction operations on data; the point is that whatever they are doing logically must be equivalent to some incomprehensibly vast set of simple arithmetic operations that could be performed by a Turing machine, because if the human brain can do a single thing that a general Turing machine can't, then it would either take infinite time or require infinite resources to do so.
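
                    If you want a concrete feel for "integer arithmetic on a Turing machine", here's a throwaway toy machine that adds two unary numbers (the state names and tape encoding are just made up for the sketch):

                    ```python
                    # Toy Turing machine for unary addition: tape "111+11" -> "11111".
                    # Transition table: (state, symbol) -> (write, move, new_state)
                    rules = {
                        ("scan",  "1"): ("1", +1, "scan"),
                        ("scan",  "+"): ("+", +1, "scan"),
                        ("scan",  " "): (" ", -1, "erase"),  # ran off the end, back up
                        ("erase", "1"): (" ", -1, "fix"),    # erase one trailing 1
                        ("fix",   "1"): ("1", -1, "fix"),
                        ("fix",   "+"): ("1", +1, "done"),   # turn the + into a 1: sum complete
                    }

                    def run(tape_str):
                        tape = dict(enumerate(tape_str))
                        head, state = 0, "scan"
                        while state != "done":
                            write, move, state = rules[(state, tape.get(head, " "))]
                            tape[head] = write
                            head += move
                        return "".join(tape.get(i, " ") for i in range(min(tape), max(tape) + 1)).strip()

                    print(run("111+11"))  # unary 3 + 2 = 5 -> "11111"
                    ```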

                  • UlyssesT
                    ·
                    edit-2
                    24 days ago

                    deleted by creator

                    • 0karin728 [any]
                      ·
                      1 year ago

                      This is why I fucking hate singularity cultist techbros. They convince the entire rest of society that AI is fake or that true AI is impossible or whatever by basically starting a religious cult around it.

                      This is harmful because AI is Incredibly dangerous, and we need people to acknowledge that and start taking action to ensure it's developed safely, so we don't suddenly have capabilities spike by 300% in one month and end up with 30% unemployment, or a super-plague released because chatGPT 5 in 2026 told some idiot how to make flu viruses 10x more transmissible and 10x as deadly or whatever.

                      • UlyssesT
                        ·
                        edit-2
                        24 days ago

                        deleted by creator

                        • 0karin728 [any]
                          ·
                          1 year ago

                          My worry isn't sapient AI, I genuinely do not care whether it's sapient; my worry is that in the short term it will enable people to commit bioterrorism and mass-produce high-quality propaganda, and in the longer term that its capabilities might increase to the point of being difficult to control.

                          This is exactly the shit I'm talking about; you seem to dismiss the entire Idea that AI might outstrip human intelligence (and that this would likely be very bad) out of hand. I think this is a mistake born from not being familiar enough with the field.

                • 0karin728 [any]
                  ·
                  1 year ago

                  Do you even know what the Church-Turing thesis is?

            • UlyssesT
              ·
              edit-2
              24 days ago

              deleted by creator

              • Frank [he/him, he/him]
                hexagon
                ·
                1 year ago

                I want to be a hard determinist but those quantum mechanics assholes say there are truly random events at the quantum level so *shrug*

              • 0karin728 [any]
                ·
                1 year ago

                Ohhhhh, this is why your comment was so rude lmfao. Honestly fair. Sam Harris is a fucking idiot and I'm not a determinist, since quantum events are probably just random (though who the fuck knows tbh.)

                I am a strict materialist, but in more of a "everything can be explained by natural forces and interactions, by definition, because we are made of matter and something that wasn't composed of natural forces and interactions would be completely unobservable and therefore irrelevant" sort of way.

        • Serdan [he/him]
          ·
          1 year ago

          And from what I understand, all they “learn” to do is predict what letter goes next.

          https://thegradient.pub/othello/

          It can be difficult to tease out exactly how a neural network is modeling its training data, but claiming that it's solely predicting the next letter is reductive to the point of being wrong.
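
          The Othello piece is about probing, as I understand it: train a simple classifier on the network's internal activations and see whether some latent structure (the board state, in their case) can be read back out. A toy version of the idea, with fake "activations" standing in for a real model's:

          ```python
          import numpy as np

          rng = np.random.default_rng(1)

          # A latent "board" feature the network might (or might not) be tracking.
          board_feature = rng.integers(0, 2, size=500)   # e.g. "is this square occupied"
          mixing = rng.normal(size=(1, 64))

          # Fake hidden activations: a noisy linear mix that happens to encode the feature.
          activations = board_feature[:, None] * mixing + rng.normal(scale=0.5, size=(500, 64))

          # Linear probe: least-squares map from activations back to the feature.
          w, *_ = np.linalg.lstsq(activations, board_feature, rcond=None)
          pred = (activations @ w > 0.5).astype(int)
          print("probe accuracy:", (pred == board_feature).mean())  # high accuracy => linearly decodable
          ```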

          That aside, I also just think people are being silly. If an AI can write working code (or beat chess grandmasters every time), then obviously something interesting is going on, and protestations that it's not really thinking and reasoning for realz in a real way are just kinda obnoxious.

          • UlyssesT
            ·
            edit-2
            24 days ago

            deleted by creator

      • dat_math [they/them]
        ·
        edit-2
        1 year ago

        it means that the AI “machine”, or “model”, or “instance”, or whatever we want to call it, “software”, even…

        I'm not trying to be a pedant when I say this, but I thought that the term generative means that it produces data, in contrast to a discriminative model, which produces discrete (or even fuzzy) classifications from some data. I also thought the term for what you're describing, where the system learns as it does the generation (or inference, for discriminators), was "online learning".
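
        A tiny sketch of the distinction as I understand it, with made-up 1-D data: the generative model learns the data itself and can produce new samples, while the discriminative one only maps data to labels.

        ```python
        import numpy as np

        rng = np.random.default_rng(0)
        cats = rng.normal(2.0, 1.0, 200)  # feature values for class "cat"
        dogs = rng.normal(5.0, 1.0, 200)  # feature values for class "dog"

        # Generative: model the data itself (a Gaussian per class), then sample new data from it.
        mu_cat, sd_cat = cats.mean(), cats.std()
        print("sampled cat-like values:", rng.normal(mu_cat, sd_cat, 5).round(2))

        # Discriminative: only learn a boundary between the classes, then assign labels to data.
        threshold = (cats.mean() + dogs.mean()) / 2
        label = lambda x: "dog" if x > threshold else "cat"
        print(label(4.7), label(1.9))
        ```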

        • DiltoGeggins [none/use name]
          ·
          1 year ago

          I'm not an expert so I will defer to your definition. I am just going with what chatGPT told me. (no, I am not joking..) cheers :)

          • dat_math [they/them]
            ·
            edit-2
            1 year ago

            please use this as an example of chatGPT's lack of anything even remotely resembling awareness, let alone understanding of the semantics of what it says

      • Findom_DeLuise [she/her, they/them]
        ·
        1 year ago

        For your consideration:
        https://www.lihpao.com/what-cultures-don-t-circumcise/

        There's also a bonus struggle session in the comments section on the fully AI-generated article.

          • Findom_DeLuise [she/her, they/them]
            ·
            edit-2
            1 year ago

            Yep. It's a delightful little microcosm of Everything Wrong With the Internet These Days(TM). It has literal garbage content about a hot-button issue, and bad-faith debatebros going at it in the article's comments like raccoons and possums fighting over the scraps at the bottom of the trashcan of ideology. It belongs in a museum. Oh, and the "author" is obviously some kind of ChatGPT bot that has posted just an ungodly amount of this shit to that site.

          • Findom_DeLuise [she/her, they/them]
            ·
            1 year ago

            I still can't get over the fact that a pro-circumcision keyboard warrior saw that article, looked at the second image, and thought to himself, "Yes! This is the right place to disprove those pseudoscientific intactivists with my facts and logic!"

            On the same website that has this gem of a DIY article, no less:
            https://www.lihpao.com/how-to-make-helicopter-car-at-home/

            Coming down from the trees was a fucking mistake. lol. :monke-beepboop:

            • Frank [he/him, he/him]
              hexagon
              ·
              1 year ago

              I'm pretty convinced people are going to start building bots that just flood entire forums with chatGPT-generated noise, idiot boxes throwing garbage at each other. If you could get around the account registration problem it'd be a great information warfare weapon - train your model on wreckers and shitlibs, inject it into a forum you want shut down, and render it non-functional by spiking it with so much trash the actual humans can't use it.

      • Frank [he/him, he/him]
        hexagon
        ·
        1 year ago

        the machine is constantly learning, and evolving, it doesn’t need to be taken offline to learn and grow.

        You know, I had a few thoughts

        First, if you want to keep it growing in a direction you consider useful, you need to have humans constantly evaluating its outputs. If it's a black box and we can't untangle its programming, the only way to tweak it is to look at what it's outputting, decide if that's desirable, and weight the results manually. If it's not being constantly supervised, who knows what it's going to turn into. So you're rate-limited by the number of people in the global south you can hire to read its outputs.

        Second - the people evaluating its outputs impose hard limits and biases. If you've got the thing spitting out complex maths or chemical formulas, the only way to train it is to have someone who understands complex maths or chemical formulas evaluate the outputs. If it gets "too smart" and starts outputting things no one can evaluate, you can't falsify the outputs anymore and you've hit an end point. It's also being trained by people with limited knowledge, lots of biases they don't know they have, and a propensity to get things wrong. This has already been a problem - NYC's famous black people oppression computer that supposedly predicted crimes when Bloomberg was mayor, and the other case I heard of was some system in the Nordics that was supposed to assess welfare eligibility. The NYC Crimestat computer was a digital Klansman, and the Nordic welfare computer caused all kinds of problems due to biases on the part of the programmers. Now we're all excited about AIs that aren't even programmed; they're generating their own incomprehensible code that is influenced by the biases of the bazinga techbros training them.

        Third - If you don't know how it's generating its outputs, you have no idea what outputs it will generate in the future. Like yeah, you can test it an arbitrarily high number of times and say "Oh, it's correct 99.x% of the time", but as the stakes get higher and the operations become more complex that tricky little x% is going to get more and more problematic. For one - it's still running on a digital computer, so it's still deterministic, but we've apparently already hit a point where the code is no longer human-interpretable, so you can't debug it. If it starts doing something undesirable, all you can do is boot an earlier back-up and try to train it again. Second, when it hits an error or something, you have no idea what it will do. That's fine if it's running the voice lines for an NPC, but a big problem if it's controlling the RCS on a rocket re-entry. We're already at the point where high-tech stuff blows up because there are so many lines of spaghetti code that no one knows what will happen when it's all put to work. Now you're hooking up complex systems to a black box controller and just hoping that it won't throw an error or do something unexpected, because testing it is, at best, very difficult.

        • Frank [he/him, he/him]
          hexagon
          ·
          1 year ago

          You know, I had another thought;

          With great intelligence comes great insanity.

          There's apparently a pretty strong correlation between doing really well on "intelligence" tests and having a diagnosable mental illness. I've heard that really smart people are also more susceptible to certain kinds of delusions, because being real good at pattern matching doesn't mean the patterns you're noticing are significant, or even really there. But the thinking goes that "smart" people are better at coming up with arguments to support their false beliefs and finding things they think are evidence of their false beliefs, so delusion in "smart" people might be harder to counter than delusion in less "smart" people.

          (unitary intelligence isn't real kill the IQ test in your head)

          • tagen
            ·
            edit-2
            1 year ago

            deleted by creator

            • Frank [he/him, he/him]
              hexagon
              ·
              1 year ago

              Not off the top of my head, it's just something I remember reading in passing. Maybe try Google Scholar and see if there's anything about correlations between mental illness and intelligence test scoring.

        • DiltoGeggins [none/use name]
          ·
          1 year ago

          Second - the people evaluating its outputs impose hard limits and biases.

          This is probably my biggest beef with it. GIGO: garbage in, garbage out, I think.

          Third - If you don’t know how it’s generating it’s outputs you have no idea what outputs it will generate in the future.

          Legit point. Related to point two also....

        • ssjmarx [he/him]
          ·
          1 year ago

          assess welfare eligibility

          Shouldn't this be dead simple? The law sets the requirements for welfare, the machine looks at your income or whatever and checks if it's within those requirements.
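
          The naive version really would be a few lines (made-up thresholds, obviously, not whatever rules the Nordic system actually used):

          ```python
          # The "dead simple" version: pure rule-checking, no model anywhere.
          INCOME_LIMIT = 20_000         # made-up annual income threshold
          DEPENDENT_ALLOWANCE = 5_000   # made-up extra allowance per dependent

          def eligible(income, dependents):
              return income <= INCOME_LIMIT + DEPENDENT_ALLOWANCE * dependents

          print(eligible(18_000, 0), eligible(40_000, 2))  # True False
          ```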

          • Frank [he/him, he/him]
            hexagon
            ·
            1 year ago

            It's discussed in Weapons of Math Destruction by Cathy O'Neil. I'm afraid I can't remember the details, but Weapons of Math Destruction is more or less the real world "Don't Create the Torment Nexus" for these "AI" shitasses.

              • Frank [he/him, he/him]
                hexagon
                ·
                1 year ago

                You can never have too many reasons to hate Bloomberg. Well, I guess antisemitism would be one too many, but aside from that specific exception you can never have too many.

      • UlyssesT
        ·
        edit-2
        24 days ago

        deleted by creator