Literally just mainlining marketing material straight into whatever’s left of their rotting brains.

  • Dirt_Owl [comrade/them, they/them]
    ·
    7 months ago

For fuck's sake, it's just an algorithm. It's not capable of becoming sentient.

    Have I lost it or has everyone become an idiot?

    • UlyssesT [he/him]
      ·
      7 months ago

Crude reductionist beliefs such as humans being nothing more than "meat computers" and/or "stochastic parrots" have certainly contributed to the notion that a sufficiently elaborate LLM treat printer would be at least as valid a person as actual living people.

      • daisy
        ·
        7 months ago

        This is verging on a religious debate, but assuming that there's no "spiritual" component to human intelligence and consciousness like a non-localized soul, what else can we be but ultra-complex "meat computers"?

        • oktherebuddy
          ·
          edit-2
          7 months ago

yeah this is knee-jerk anti-technology shite from people here because we live in a society organized along lines where creation of AI would lead to our oppression instead of our liberation. of course making a computer sentient is possible, to believe otherwise is to engage in magical (chauvinistic?) thinking about what constitutes consciousness.

When I watched Blade Runner 2049 I thought it was a bit weird that the human police captain tells Officer K (a replicant) that she's different from him because she has a soul, since sci-fi settings are pretty secular. Turns out this was prophetic and people are more than willing to get all spiritual if it helps them invent reasons to differentiate themselves from the Other.

          • CannotSleep420@lemmygrad.ml
            ·
            7 months ago

            One doesn't need to assert the existence of an immaterial soul to point out that the mechanisms that lead to consciousness are different enough from the mechanisms that make computers work that the former can't just be reduced to an ultra complex form of the latter.

            • oktherebuddy
              ·
              7 months ago

              There isn't a materialist theory of consciousness that doesn't look something like an ultra complex computer. We're talking like an alternative explanation exists but it really does not.

              • CannotSleep420@lemmygrad.ml
                ·
                7 months ago

                In what way does consciousness resemble an ultra complex computer? Nobody has consciousness fully figured out of course, but I would at least expect there to be some relevant parallel between computer hardware and brain hardware if this is the case.

                • drhead [he/him]
                  ·
                  7 months ago

                  What stops me from doing the same thing that neurons do with a sufficiently sized hunk of silicon? Assuming that some amount of abstraction is fine.

If the answer is "nothing", then that demonstrates the point. If you can build an artificial brain that does all of the things a brain does, then there is nothing special about our brains.

                  • Egon [they/them]
                    ·
                    edit-2
                    7 months ago

                    But can you actually build an artificial brain with a hunk of silicon? We don't know enough about brains or consciousness to do that, so the point is kinda moot

                • oktherebuddy
                  ·
                  edit-2
                  7 months ago

                  When people say computer here they mean computation as computer scientists conceive of it. Abstract mathematical operations that can be modeled by boolean circuits or Turing machines, and embodied in physical processes. Computers in the sense you're talking about (computer hardware) are one method of embodying these operations.
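To make that concrete, here's a minimal sketch (Python, purely illustrative, nothing anyone in this thread actually built): the same abstract operation can be defined entirely in terms of NAND, the classic universal gate, independent of whether the NANDs are realized as transistors, relays, or something else.

```python
# An abstract boolean function (XOR) built purely from NAND.
# The abstract circuit is the same no matter what physically embodies it.

def nand(a: bool, b: bool) -> bool:
    return not (a and b)

def xor(a: bool, b: bool) -> bool:
    # Standard 4-NAND construction of XOR.
    c = nand(a, b)
    return nand(nand(a, c), nand(b, c))

if __name__ == "__main__":
    for a in (False, True):
        for b in (False, True):
            print(a, b, xor(a, b))  # matches the XOR truth table
```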

                  • CannotSleep420@lemmygrad.ml
                    ·
                    7 months ago

I probably should have worded my last reply differently, because modeling the human brain with boolean circuits and Turing machines is mainly what I have an issue with. While I'm not particularly knowledgeable on the brain side of things, I can see the resemblance between neurons and logic gates. However, my contention is that the material constraints of how those processes are embodied are going to have a significant effect on how the system works (not to say that you were erasing this effect entirely).

                    I want to say more on the topic, but now that my mind is on it I want to put some time and effort into explaining my thoughts in its own post. I'll @ you in a reply if/when I make the post.

                    • Saeculum [he/him, comrade/them]
                      ·
                      7 months ago

                      However, my contention is that the material constraints of how those processes are embodied are going to have a significant effect on how the system works

                      Sure, but that's no basis to think that a group of logic gates could not eventually be made to emulate a neuron. The neuron has a finite number of things it can do because of the same material constraints, and while one would probably end up larger than the other, increasing the physical distances between the thinking parts, that would surely only limit the speed of an emulated thought rather than its substance?
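As an illustration of the kind of abstraction being argued over, here is a minimal sketch assuming a standard leaky integrate-and-fire model (a deliberately crude idealization; real neurons are far richer, this is just the "some amount of abstraction is fine" case):

```python
# Minimal leaky integrate-and-fire neuron: one common abstraction used
# when people talk about emulating neurons in silicon. Membrane voltage
# leaks toward rest, integrates weighted inputs, and fires on threshold.

def simulate(inputs, weights, threshold=1.0, leak=0.9):
    """Return the spike train produced by a stream of input vectors."""
    v = 0.0          # membrane potential
    spikes = []
    for x in inputs:
        v = leak * v + sum(w * xi for w, xi in zip(weights, x))
        if v >= threshold:
            spikes.append(1)
            v = 0.0  # reset after firing
        else:
            spikes.append(0)
    return spikes

# Two input channels; the second is weighted more strongly.
print(simulate([(1, 0), (1, 0), (0, 1), (0, 0)], weights=(0.4, 0.8)))
```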

                  • silent_water [she/her]
                    ·
                    7 months ago

                    it still remains to be proved that consciousness can be emulated on a Turing machine. that's a huge open problem. you can assume it's true but your results are contingent.

              • WideningGyro [any]
                ·
                7 months ago

                I zoned out on the consciousness debate around 2015, so forgive me if this stuff is now considered outdated, but as I recall those materialist theories of consciousness all run into the hard problem, right? I might be biased in one direction, but I feel like the fact that computational models can't account for lived experience is a pretty good argument against them. Wouldn't it just be more accurate to say that we're missing a good theory of consciousness, at all?

          • VILenin [he/him]
            hexagon
            M
            ·
            7 months ago

            Nobody ever mentioned a “soul” in this conversation until you brought it up to use as an accusation.

            “Computers aren’t sentient” is not a religious belief no matter how hard you try to smear it as such.

            • oktherebuddy
              ·
              7 months ago

              It isn't "Computers aren't sentient", nobody thinks computers are sentient except some weirdos. "Computers can't be sentient", which is what is under discussion, is a much stronger claim.

              • VILenin [he/him]
                hexagon
                M
                ·
                7 months ago

                The claim is that “computers can be sentient”. That is a strong claim and requires equally strong evidence. I’ve found the arguments in support of it lackluster and reductionist for reasons I’ve outlined in other comments. In fact, I find the idea that if we compute hard enough we get sentience borders on a religious belief in extra-physical properties being bestowed upon physical objects once they pass a certain threshold.

                There are people who argue that everything is conscious, even rocks, because everything is ultimately a mechanical process. The base argument is the same, but I have a feeling that most people here would suddenly disagree with them for some reason. Is it “creationism” to find such a hypothesis absurd, or is it vulgar materialism to think it’s correct? You seem to take offense at being called “reductionist” despite engaging in a textbook case of reductionism.

                This doesn’t mean you’re wrong, or that the rock-consciousness people are wrong, it’s just an observation. Any meaningful debate about sentience right now is going to be philosophical. If you want to be scientific the answer is “I don’t know”. I don’t pretend to equate philosophy with science.

                • oktherebuddy
                  ·
                  7 months ago

                  Consciousness isn't an extra-physical property. That's the belief.

                  I don't take offense to being called reductionist, I take offense to reductionism being said pejoratively. Like how creationists say it. It's obvious to me that going deeper, understanding the mechanisms behind things, makes them richer.

                  The thing that makes your argument tricky is we do have evidence now. Computers are unambiguously exhibiting behaviors that resemble behaviors of conscious beings. I don't think that makes them conscious at this time, any more than animals who exhibit interesting behavior, but it shows that this mechanism has legs. If you think LLMs are as good as AI is ever going to get that's just really blinkered.

                  • VILenin [he/him]
                    hexagon
                    M
                    ·
                    7 months ago

I think that AI will get better but its “base” will remain the same. Going deeper to understand the mechanisms is different than just going “it's a mechanism”, which I see a lot of people doing. I think computers can very easily replicate human behaviors and emulate emotions.

                    Obviously creating something sentient is possible since brains evolved. And if we don’t kill ourselves I think it’s very possible that we’ll get there. But I think it will be very different to what we think of as a “computer” and the only similarities they might share could be being electrically powered.

                    At the end of the road we’ll just get to arguing about philosophical zombies and the discussion usually wraps up there.

                    I’d be very happy if it turned out that I’m completely wrong.

                    • oktherebuddy
                      ·
                      7 months ago

                      Okay I think we pretty much agree. I have been thinking about what the next "category" of thing is that might function as a substrate of consciousness. I do think that the software techniques people have come up with in AI research, run on "computers" though they may be, are different enough from what we ordinarily think of as computers (CPU, GPU, fast short-term memory, slow long-term memory, etc.) to be a distinct ontological category. And new hardware is being built to specifically accelerate the sort of operations used in those software techniques. I would accept these things being called something other than a computer, even though they could be simulated on a Turing machine or with boolean circuits, because as you've said that is of limited use - similar to saying that everything is a mechanistic physical process.

                      • VILenin [he/him]
                        hexagon
                        M
                        ·
                        7 months ago

                        Not my autistic ass getting into fights online again... I'm learning my parsing skills and social skills slowly though!

                        But yeah, I just want to know what the AI thinks about communism

          • usernamesaredifficul [he/him]
            ·
            7 months ago

the replicants are people because they are characters written by the author, same as any other.

sentient machines are only science fiction

            • oktherebuddy
              ·
              edit-2
              7 months ago

              wow we can't speculate about things that could exist, only things that do exist. this was written on a communist website btw

            • Saeculum [he/him, comrade/them]
              ·
              7 months ago

By that way of reasoning, the replicants aren't people because they are characters written by the author, same as any other.

              They are as much fiction as sentient machines are science fiction.

              • usernamesaredifficul [he/him]
                ·
                7 months ago

ok sure, my point was that the authors aren't making a point about the nature of machines informed by the limits of machines, and aren't qualified to do so

                saying AI is people because of Data from star trek is like saying there are aliens because you saw a Vulcan on tv in terms of relevance

                • Saeculum [he/him, comrade/them]
                  ·
                  7 months ago

                  That's fair, though taking the idea that AI is people because of Data from Star Trek isn't inherently absurd. If a machine existed that demonstrated all the capabilities and external phenomena as Data in real life, I would want it treated as a person.

                  The authors might be delusional about the capabilities of their machine in particular, but in different physical circumstances to what's most likely happening here, they wouldn't be wrong.

                  • DamarcusArt@lemmygrad.ml
                    ·
                    7 months ago

Sorry to respond to this several-day-old comment, but I think there were quite a few episodes where Data's personhood was directly called into question. It is a tangential point, but I think it is likely that even if we had a robotic Brent Spiner running around, people might still not be 100% convinced that he was truly sapient, and might consider it an incredibly complex mechanical Turk-style trick. It really is hard to tell for sure, even if we did have a "living" AI to examine.

        • Yurt_Owl
          ·
          7 months ago

          Why is the concept of a spirit relevant? Computers and living beings share practically nothing in common

          • oktherebuddy
            ·
            edit-2
            7 months ago

You speak very confidently about two things whose shared boundary has shifted dramatically within the past few decades. I would also like to ask if you actually understand microbiology & how it works, or have even seen a video of ATP synthase in action.

            • VILenin [he/him]
              hexagon
              M
              ·
              7 months ago

Love to see the “umm ackshually scientists keep changing their minds” card on hexbear dot net. Yes, neuroscience could suddenly shift to entirely support your belief, but that's not exactly a stellar argument. I'd love to know how ATP synthase has literally anything to do with proving computational consciousness, other than that it kind of sort of resembles a mechanical thing (because it is mechanical).

Sentience as a physical property does not have to stem from the same processes. Everything in the universe is “mechanical”, so making that observation is meaningless. Everything is a “mechanism”, so everything has that in common. Reducing everything down to its very base definition instead of taking into account what kind of mechanism it is, is literally the very definition of reductionism. You have to look at the wider process that derives from the sum of its mechanical parts, because that's where differences arise. Of course if you strip everything down to its foundation it's going to be the same. Are a door and a movie camera the same thing because they both consist of parts that move?

              • oktherebuddy
                ·
                7 months ago

                I have no idea what you are trying to say. I think you agree consciousness must have a mechanistic/material base, and is some kind of emergent phenomenon, so we probably agree on whatever point you're trying to make. Except I guess you think that even though it's an emergent phenomenon of some mechanistic base, that mechanistic base can't be non-biological. Which is weird.

                • VILenin [he/him]
                  hexagon
                  M
                  ·
                  7 months ago

                  My argument has nothing to do with the fact that computers aren’t biological. I’m saying that the only blueprints for consciousness we have right now are brains. And decidedly not computers, which I have no reason to believe will become sentient if you extrapolate it for some reason. I don’t think the difference between computers and brains is biological, it’s just a difference. If you replicated an entire brain I think it would be sentient even though it wouldn’t be strictly “biological”. I guess you could call that a computer, but then you’re veering into semantics. I’m referring to computers strictly in the way that they are currently built.

                  I think there’s a mechanistic road to sentience, but we know vanishingly little about it. But I think we know more than enough to conclude that computers, as they operate today, will struggle to be anything more than a crude analogy. My point is that artificial sentience needs to be more than just “a mechanism”, because literally everything in the universe is a mechanism. It needs to be a certain kind of mechanism that we don’t understand yet.

            • Yurt_Owl
              ·
              7 months ago

              Go be a computer somewhere else

              • oktherebuddy
                ·
                7 months ago

                Go be spooky somewhere else. Calling things "reductionist" like some kind of creationist.

          • daisy
            ·
            7 months ago

            Let's assume for the moment that there's no such thing as a spirit/soul/ghost/etc. in human beings and other animals, and that everything that makes me "me" is inside my body. If this is the case, computers and living brains do have something fundamental in common. They are both made of matter that obeys the laws of physics. As far as we know, there's no such thing as "living" quarks and electrons that are distinct from "non-living" quarks and electrons.

            • Yurt_Owl
              ·
              7 months ago

              How very crude and reductionist just like the source comment says.

              • daisy
                ·
                edit-2
                7 months ago

                I'm having a hard time understanding your reasoning and perspective on this. My interpretation of your comments is that you believe biological intelligence is a special phenomenon that cannot be understood by the scientific method. If I'm in error, I'd welcome a correction.

                • VILenin [he/him]
                  hexagon
                  M
                  ·
                  7 months ago

                  Biological intelligence is currently not understood. This has nothing to do with distinguishing between “living” and “non-living” matter. Brains and suitcases are also both made of matter. It’s a meaningless observation.

                  The question is what causes sentience. Arguing that brains are computers because they’re both made of matter is a non-sequitur. We don’t even know what mechanism causes sentience so there’s no point in even beginning to make comparisons to a separate mechanism. It plays into a trend of equating the current most popular technology to the brain. There was no basis for it then, and there’s no basis for it now.

                  Nobody here is arguing about what the brain is made of.

            • silent_water [she/her]
              ·
              7 months ago

              this argument fails because you've presupposed that the fundamental model of computation maps neatly onto the emergent processes conducted by brains. that we only have a single model for information processing right now does not mean that only one exists. this is an unsolved problem - you can suppose it's true but that doesn't mean the rest of your argument follows. the supposition requires proof.

        • UlyssesT [he/him]
          ·
          edit-2
          7 months ago

          Please stop doing the heavy lifting for LLM tech companies by implying that any rejection of the "AI" labeling of their products is faith healing, crystal touching, and New Age thinking.

          It is possible, and much more likely, that organic brains can be fully understood eventually but that imitating a performatively loud portion of what those organic brains seem to do with LLMs is not the same thing as a linear replication of the entire process.

        • silent_water [she/her]
          ·
          7 months ago

saying meat computers implies that the computation model fits. it's an ontological assumption that requires evidence. this trend of assuming every complex process is computation blinds us. are chemical processes computation? sometimes and sometimes not! you can't assume that they are and expect to get very far. processing information isn't adequate evidence for the claim.

      • CannotSleep420@lemmygrad.ml
        ·
        7 months ago

        stochastic parrots

        I could have sworn that the whole point of that paper was to point out that LLMs aren't actually intelligent, not that human intelligence is basically an LLM.

        • dat_math [they/them]
          ·
          7 months ago

          I could have sworn that the whole point of that paper was to point out that LLMs aren't actually intelligent, not that human intelligence is basically an LLM.

          big same

    • Nevoic@lemm.ee
      ·
      7 months ago

      I don't know where everyone is getting these in depth understandings of how and when sentience arises. To me, it seems plausible that simply increasing processing power for a sufficiently general algorithm produces sentience. I don't believe in a soul, or that organic matter has special properties that allows sentience to arise.

      I could maybe get behind the idea that LLMs can't be sentient, but you generalized to all algorithms. As if human thought is somehow qualitatively different than a sufficiently advanced algorithm.

      Even if we find the limit to LLMs and figure out that sentience can't arise (I don't know how this would be proven, but let's say it was), you'd still somehow have to prove that algorithms can't produce sentience, and that only the magical fairy dust in our souls produce sentience.

      That's not something that I've bought into yet.

      • TraumaDumpling
        ·
        edit-2
        7 months ago

        so i know a lot of other users will just be dismissive but i like to hone my critical thinking skills, and most people are completely unfamiliar with these advanced concepts, so here's my philosophical examination of the issue.

        the thing is, we don't even know how to prove HUMANS are sentient except by self-reports of our internal subjective experiences.

        so sentience/consciousness as i discuss it here refers primarily to Qualia, or to a being existing in such a state as to experience Qualia. Qualia are the internal, subjective, mental experiences of external, physical phenomena.

here's the task of people that want to prove that the human brain is a meat computer: Explain, in exact detail, how (i.e. the processes by which) Qualia (i.e. internal, subjective, mental experiences) arise from external, objective, physical phenomena.

hint: you can't. the move by physicalist philosophy is simply to deny the existence of qualia, consciousness, and subjective experience altogether as 'illusory' - but illusory to what? an illusion necessarily has an audience, something it is fooling or deceiving. this 'something' would be the 'consciousness' or 'sentience' or, to put it in your oh-so-smug terms, the 'soul' that non-physicalist philosophy might posit. this move by physicalists is therefore syntactically absurd and merely moves the goalpost from 'what are qualia' to 'what are those illusory, deceitful qualia deceiving'. consciousness/sentience/qualia are distinctly not information processing phenomena; they are entirely superfluous to information processing tasks. sentience/consciousness/Qualia is/are not the information processing itself, but the internal, subjective, mental awareness and experience of some of these information processing tasks.

        Consider information processing, and the kinds of information processing that our brains/minds are capable of.

        What about information processing requires an internal, subjective, mental experience? Nothing at all. An information processing system could hypothetically manage all of the tasks of a human's normal activities (moving, eating, speaking, planning, etc.) flawlessly, without having such an internal, subjective, mental experience. (this hypothetical kind of person with no internal experiences is where the term 'philosophical zombie' comes from) There is no reason to assume that an information processing system that contains information about itself would have to be 'aware' of this information in a conscious sense of having an internal, subjective, mental experience of the information, like how a calculator or computer is assumed to perform information processing without any internal subjective mental experiences of its own (independently of the human operators).

        and yet, humans (and likely other kinds of life) do have these strange internal subjective mental phenomena anyway.

        our science has yet to figure out how or why this is, and the usual neuroscience task of merely correlating internal experiences to external brain activity measurements will fundamentally and definitionally never be able to prove causation, even hypothetically.

        so the options we are left with in terms of conclusions to draw are:

        1. all matter contains some kind of (inhuman) sentience, including computers, that can sometimes coalesce into human-like sentience when in certain configurations (animism)
        2. nothing is truly sentient whatsoever and our self reports otherwise are to be ignored and disregarded (self-denying mechanistic physicalist zen nihilism)
        3. there is something special or unique or not entirely understood about biological life (at least human life if not all life with a central nervous system) that produces sentience/consciousness/Qualia ('soul'-ism as you might put it, but no 'soul' is required for this conclusion, it could just as easily be termed 'mystery-ism' or 'unknown-ism')

        And personally the only option i have any disdain for is number 2, as i cannot bring myself to deny the very thing i am constantly and completely immersed inside of/identical with.

        • Saeculum [he/him, comrade/them]
          ·
          7 months ago

here's the task of people that want to prove that the human brain is a meat computer: Explain, in exact detail, how (i.e. the processes by which) Qualia (i.e. internal, subjective, mental experiences) arise from external, objective, physical phenomena.

          hint: you can't.

Why not? I understand that we cannot, at this particular moment, explain every step of the process and how every cause translates to an effect until you have consciousness, but we can point at the results of observation and study, and at less complex systems whose workings we understand better, and say that it's most likely that the human brain functions in the same way, and that these processes produce Qualia.

          It's not absolute proof, but there's nothing wrong with just saying that from what we understand, this is the most likely explanation.

          Unless I'm misunderstanding what you're saying here, why is the idea that it can't be done the takeaway rather than it will take a long time for us to be able to say whether or not it's possible?

          and the usual neuroscience task of merely correlating internal experiences to external brain activity measurements will fundamentally and definitionally never be able to prove causation, even hypothetically.

          Once you believe you understand exactly what external brain activity leads to particular internal experiences, you could surely prove it experimentally by building a system where you can induce that activity and seeing if the system can report back the expected experience (though this might not be possible to do ethically).

          As a final point, surely your own argument above about an illusion requiring an observer rules out concluding anything along the lines of point 2?

          • TraumaDumpling
            ·
            7 months ago

            Why not?

because qualia are fundamentally subjective phenomena, and there is no conceivable way to arrive at subjective phenomena via objective physical quantities/measurements.

            Once you believe you understand exactly what external brain activity leads to particular internal experiences, you could surely prove it experimentally by building a system where you can induce that activity and seeing if the system can report back the expected experience (though this might not be possible to do ethically).

this is not true. for example, take the example of a radio presented to uncontacted people who do not know what a radio is. It would be reasonable for these people to assume that the voices coming from the radio are produced in their entirety inside the radio box/chassis; after all, when you interfere with the internals of the radio, it affects which voices come out and in what quality. and yet, because of a fundamental lack of understanding of the mechanics of the radio, and a lack of knowledge of how radios are used and how radio programs are produced and performed, this is an entirely incorrect assessment of the situation.

in this metaphor, the 'radio' is analogous to the 'brain' or 'body', and the 'voices' or radio programs are the 'consciousness' that is assumed to be coming from inside the box, but is in fact coming from outside the box, from completely invisible waves in the air. the 'uncontacted people' are modern scientists trying to understand that which is unknown to humanity.

this isn't to say that i think the brain is a radio, although that is a fun thought experiment, but to demonstrate why correlation does not, in fact, necessarily imply causation, especially in the case of the neural correlates of consciousness. consciousness definitely impinges upon or depends upon the physical brain, it is in some sense affected by it, no one would argue this point seriously, but to assume a causal relationship is intellectually lazy.

            • Saeculum [he/him, comrade/them]
              ·
              7 months ago

              because qualia are fundamentally a subjective phenomena, and there is no concievable way to arrive at subjective phenomena via objective physical quantitites/measurements.

Having done some quick reading, I can see that qualia are definitionally subjective, but I would question how anyone could assert with any level of confidence that they possess internal mental experiences that "no amount of purely physical information includes", or that such a thing can even exist. Certainly not with enough confidence to structure an argument around. The justification seems to be the idea that because we cannot do something now, that thing cannot be done. I don't find that convincing.

This might be going too far into the analogy, but I think the problem with a comparison to a radio is that if you examine the radio down to its smallest part and then assemble a second radio, that radio will behave in the same way as the first.
Presumably as well, with enough examination, it would come to be understood that the voices coming from the radio are produced somewhere else, and there would be no reason for anyone to think that the voices themselves are appearing from an intangible and inherently subjective origin. If consciousness is essentially a puppeteer for the physical human body, that doesn't preclude consciousness existing physically somewhere else, or the "broadcaster" being something capable of examination and imitation.

The whole argument seems to boil down to "maybe consciousness doesn't work the way science would currently suggest it does", but doesn't present any evidence that consciousness is somehow unsolvable.

              but to assume causal relationship is intellectually lazy.

Instead, assuming that an undetectable, intangible, and fundamentally unprovable mechanism is behind consciousness without proof is worse than lazy, it's magical thinking. While I don't think you could ever prove that that wasn't the case, it should only seriously be entertained once every other option has been thoroughly exhausted.

              (Reading this back, this feels quite confrontational. I don't intend it to be, but I lack the ability to word it in the tone that I would prefer.)

              • TraumaDumpling
                ·
                7 months ago

how anyone could assert with any level of confidence that they possess internal mental experiences that "no amount of purely physical information includes", or that such a thing can even exist.

                The justification seems to be the idea that because we cannot do something now, that thing cannot be done. I don't find that convincing.

it's not just that we cannot do it now, it's that it is literally definitionally impossible, even conceptually, to arrive at or explain subjectivity assuming a physicalist model of the world that specifically excludes it in principle.

the claim is not that consciousness is 'unsolvable', but that it is unsolved, and that it is irreducible to terms of pure information processing. subjectivity is entirely separate from and unnecessary for information processing.

                This might be going too far into the analogy

                correct, it was merely to elucidate the difference between causation and correlation and the scientific method and attitude. the metaphor is not designed to interrogate subjectivity.

                Instead, assuming that an undetectable intangible and fundamentally improvable mechanism is behind consciousness without proof is worse than lazy, it's magical thinking. While I don't think you could ever prove that that wasn't the case, it should only seriously be entertained once every other option has been thoroughly exhausted.

                no, instead one should assume nothing, like a scientist should. you assume that you do not know until you actually do.

to go back to the analogy: you are here like one of the uncontacted people encountering a radio. after much experimentation and analysis, the group has concluded that the voice cannot come from inside but from some as yet unknown source outside, and you call them insane for positing even the hypothetical existence of such a thing, instead assuming it comes from inside in some way we don't yet understand (but which is the assumed teleological inevitability of our current understanding, which obviously never needs to be revised).

                • Saeculum [he/him, comrade/them]
                  ·
                  7 months ago

to go back to the analogy: you are here like one of the uncontacted people encountering a radio. after much experimentation and analysis, the group has concluded that the voice cannot come from inside but from some as yet unknown source outside, and you call them insane for positing even the hypothetical existence of such a thing, instead assuming it comes from inside in some way we don't yet understand

Yet they also seem to be claiming that the source of the voices is not just unknown but unknowable, and they cannot explain even conjecturally how the voices might be transmitted. When there is observable activity inside the radio that might seem to be creating the voices, and our group does not yet understand the details of how it works, focusing on the transmission theory might not be insane, but it's not particularly rational either.

                  • TraumaDumpling
                    ·
                    7 months ago

                    the voices in this analogy are not claimed to be unknowable full stop, merely irreconcilable with some or all of their previous understanding of the world. in non-analogical terms i am not saying we cannot explain subjectivity at all, but that we cannot explain it with our traditional ways of thinking (i am against dualism as much as physicalism). back to the analogy, it may be perfectly 'rational' to dismiss the transmission theory, but it would be rationally incorrect, rationally ignorant, and would prevent exploration of alternative routes of inquiry that could hypothetically lead to the truth.

            • sooper_dooper_roofer [none/use name]
              ·
              edit-2
              7 months ago

If what you're saying is true for human consciousness, though, then it means that there are other undiscovered factors (invisible non-EM airwaves, astrology, aliens, etc.) which influence our mood and state of being. Which I'm not even arguing against, but it would be a revolution in science

              • TraumaDumpling
                ·
                7 months ago

even just something like mental archetypes or cultural tropes are enough to influence our mood and state of being, it doesn't even have to be anything exotic


            • WithoutFurtherBelay
              ·
              edit-2
              7 months ago

              Donald Duck is correct here but also that’s precisely why techbros are so infuriating. They take that conclusion and then use it to disregard everything except the one thing they conveniently think isn’t based on chemicals, like free market capitalism or Eliezer “Christ the Second” Yud

              Dismissing emotions just because they are chemicals is nonsensical. It makes no sense that that alone would invalidate anything whatsoever. But these people think it does because they are conditioned by Protestantism to think that all meaning has to come from a divine and unshakeable authority. That’s why they keep reinventing God, so they have something to channel their legitimate emotions through that their delusional brain can’t invalidate.

              • UlyssesT [he/him]
                ·
                7 months ago

                My issue with, say, "love is chemicals" isn't that the experience of feeling love is neurochemical activity. It's the crude reductionist conclusion of "and therefore it is meaningless just like based Rick Sanchez said, get schwifty!" so-true

                Similarly, I don't hold a position that living brains are impossible to fully understand; it's that there's more left to know and a lot of unknowns left to explore. The implication of some people in this thread is that you must choose between "LLMs are at least as conscious as human beings or are getting there very soon" or "I am a faith healer crystal toucher sprinkled with fairy dust" which is a bullshit false dichotomy.

                • WithoutFurtherBelay
                  ·
                  7 months ago

                  Yes, I agree completely. I had to rewrite my comment multiple times to clarify that, but yeah. Sorry :(

                  • UlyssesT [he/him]
                    ·
                    7 months ago

                    I sort of regret posting that meme because it was more cheeky and silly than an actual position I was taking, myself. The "dae le meat computers" reductionism enjoyer I was replying to (with the "therefore you must believe that LLMs are that close to sapience or else you believe in souls and are living in a demon haunted world unlike my enlightened euphoric Reddit New Atheist self" take) was abrasive enough where I was trying some levity but it didn't go over well.

                    • WithoutFurtherBelay
                      ·
                      7 months ago

                      I understand, either way the meme you posted is funny though because it would piss techbros off

                      • UlyssesT [he/him]
                        ·
                        7 months ago

                        I understand, either way the meme you posted is funny though because it would piss techbros off

                        Judging by the reactions it got, it certainly did. sit-back-and-enjoy

              • sooper_dooper_roofer [none/use name]
                ·
                edit-2
                7 months ago

                He's not though

                life is necessarily more ordered and interesting than dead rocks

                therefore it is a good thing to create more life, both on earth and eventually to turn dead planets life-ful (if this is even possible)

                we are definitely conscious enough to at least massively increase the amount of life on earth (you could easily green all the world's deserts under ecocommunism)

                  • sooper_dooper_roofer [none/use name]
                    ·
                    7 months ago

                    I think enabling mass reproduction of plant species in the Sahara Desert is cool and good

                    (and yes I've done the calculations, no the Sahara doesn't "enable" the Amazon, it's like 3 grains of sand per square foot)

            • Saeculum [he/him, comrade/them]
              ·
              7 months ago

              "All knowledge is unprovable and so nothing can be known" is a more hopeless position than "existence is absurd and meaning has to come from within". I shall both fight and perish.

              • UlyssesT [he/him]
                ·
                edit-2
                7 months ago

                "All knowledge is unprovable and so nothing can be known"

                Silly meme that I had just posted aside, that isn't my actual position and I don't think that is the position others here have taken. I said that there is a lot more left to be known and the current academic leading edge of neuroscience (not tech company marketing hype or pop nihilistic reductionistic Reddit New Atheist takes) backs that up.

                I shall both fight and perish.

From here it just looks like you're touching the computer and doing the heavy lifting for LLM hype marketers.

                • Saeculum [he/him, comrade/them]
                  ·
                  7 months ago

                  and doing the heavy lifting for LLM hype marketers.

                  I'm not fighting for those idiots. We're a long way away from a real machine intelligence.

                  • UlyssesT [he/him]
                    ·
                    7 months ago

                    You may be doing the heavy lifting in an unexamined way because you've been comparing living organic brains to LLMs with the implication that there's no meaningful difference and nothing left out of the comparison except mysticism.

              • GarbageShoot [he/him]
                ·
                7 months ago

                I mean, "meaning has to come from within" is sort of solipsistic but, depending on your definition, completely true.

                The biggest problem with Camus (besides his credulity towards the western press and his lack of commitment to trains, oh and lacking any desire for systemic understanding) is that he views this question in an extremely antisocial manner. Yes, if you want affirmation from rocks and you will kill yourself if you don't get affirmation from rocks, there's not much to do but get some rope. However, it's hard to imagine how differently the rhetorical direction of the Myth of Sisyphus would have gone if he had just considered more seriously the idea of finding meaning in relationships with and impact on others rather than just resenting the trees for not respecting you. Seriously, go and reread it, the idea seems as though it didn't even cross his mind.

                The Myth of Solipsists kelly

        • UlyssesT [he/him]
          ·
          edit-2
          7 months ago

          I think it does a lot of undue (and hopefully unintentional) heavy lifting for tech company hype marketers when someone implies that LLM treat printers might be comparable (or synonymous) to living organic brains because of the product's imitative presentation.

          https://arxiv.org/abs/2311.09247

          • TraumaDumpling
            ·
            7 months ago

            on a related note, dropping this rare banger line from wikipedia:

            Some philosophers of mind, like Daniel Dennett, argue that qualia do not exist. Other philosophers, as well as neuroscientists and neurologists, believe qualia exist and that the desire by some philosophers to disregard qualia is based on an erroneous interpretation of what constitutes science.[2]

            citation text from the wiki page for reference

Damasio, Antonio R. (2000). The Feeling of What Happens: Body and Emotion in the Making of Consciousness. San Diego, CA: Harcourt. ISBN 978-0-15-601075-7.

Edelman, Gerald M.; Gally, Joseph A.; Baars, Bernard J. (2011). "Biology of Consciousness". Frontiers in Psychology. 2 (4): 4. doi:10.3389/fpsyg.2011.00004. PMC 3111444. PMID 21713129.

Edelman, Gerald Maurice (1992). Bright Air, Brilliant Fire: On the Matter of the Mind. New York: BasicBooks. ISBN 978-0-465-00764-6.

Edelman, Gerald M. (2003). "Naturalizing Consciousness: A Theoretical Framework". Proceedings of the National Academy of Sciences of the United States of America. 100 (9): 5520–5524. JSTOR 3139744. PMID 154377.

Koch, Christof (2020). The Feeling of Life Itself: Why Consciousness Is Widespread but Can't Be Computed. Cambridge, MA: The MIT Press. ISBN 978-0-262-53955-5.

Llinás, Rodolfo R. (2002). I of the Vortex: From Neurons to Self. Cambridge, MA: MIT Press. pp. 202–207. ISBN 978-0-262-62163-2.

Oizumi, Masafumi; Albantakis, Larissa; Tononi, Giulio (2014). "From the Phenomenology to the Mechanisms of Consciousness: Integrated Information Theory 3.0". PLOS Computational Biology. 10 (5): e1003588. doi:10.1371/journal.pcbi.1003588. PMC 4014402. PMID 24811198.

Overgaard, M.; Mogensen, J.; Kirkeby-Hinrup, A., eds. (2021). Beyond Neural Correlates of Consciousness. Routledge Taylor & Francis.

Ramachandran, V.; Hirstein, W. (1997). "What Does Implicit Cognition Tell Us About Consciousness?". Consciousness and Cognition. 6 (1): 148. doi:10.1006/ccog.1997.0296.

Tononi, Giulio; Boly, Melanie; Massimini, Marcello; Koch, Christof (2016). "Integrated Information Theory: From Consciousness to Its Physical Substrate". Nature Reviews Neuroscience. 17 (7): 450–461. doi:10.1038/nrn.2016.44. PMID 27225071.

            • WithoutFurtherBelay
              ·
              edit-2
              7 months ago

              > be me
              > literal philosopher of mind
              > experiences things every moment of my life
              > is asked if experiences exist
              > “nah experiences aren’t real”

            • UlyssesT [he/him]
              ·
              7 months ago

              "Because there is disagreement on what consciousness is, it must be an illusion. You do not exist, you are only a weird metaphysical phantasm which is somehow a more grounded and tenable position." oooaaaaaaauhhh

            • Philosoraptor [he/him, comrade/them]
              ·
              7 months ago

              This is a bad summary of Dennett's view, or at least a misleading one. He thinks that 'qualia' as most philosophers of mind define the term doesn't refer to anything, and is just a weasel word obscuring that we really don't have much of an understanding of how brains do the things they do. Qualia get glossed as the "what-it's-like-ness" of experiences (e.g. the particular feeling of seeing the color blue), which isn't wrong, but is only part of the story. 'Qualia' is a technical term in the philosophy of mind literature, and has a lot of properties attached to it (privacy, incorrigibility, ineffability, so on). Dennett argues that qualia in that sense--the philosopher's qualia--is incoherent and internally inconsistent for a variety of reasons. This sometimes gets misrepresented as "Dennett thinks consciousness is an illusion" (a misreading that he, to be fair, could work harder to discourage), but that's not the view. His argument against the philosopher's qualia is pretty compelling, and doesn't imply that people aren't conscious. See "Quining Qualia" for a pretty accessible articulation of the argument.

              • TraumaDumpling
                ·
                7 months ago

i look up 'daniel dennett' and the first ted talk i see is literally titled 'the illusion of consciousness'. i don't know what else to make of that.

                wikipedia defines qualia as "In philosophy of mind, qualia (/ˈkwɑːliə, ˈkweɪ-/; SG: quale /-li/) are defined as instances of subjective, conscious experience. " which is how i have been using the word. i do not care about any other usage.

all of those things you mention - privacy, ineffability, etc - are logical consequences of being a subjective phenomenon.

                i am familiar with quining qualia, i quite dislike it and disagree with its arguments fundamentally. his 'intuition pumps' are frankly nonsense.

                two examples:

the coffee taste and brain surgery experiments claim to show that we cannot tell the difference between our qualia changing and our reflective judgments of and predispositions to those qualia being changed, in an attempt to prove that qualia cannot be directly apprehended by consciousness. in fact, this is quite unrelated to the direct apprehendability in consciousness of qualia. in the brain surgery case, whichever surgery is performed, whether the patient can realize this through introspection or not, there IS a particular quale being experienced, and there is a fact of the matter as to whether or not this quale has changed and as to which of the surgeries was performed, even if the patient's memory has been altered such that they cannot know this - we could even empirically verify which surgery took place! yes, we are not necessarily infallible in our comparison of non-simultaneous Qualia - how does this mean that we do not apprehend the current Quale directly in consciousness? or that we did not apprehend past Qualia? Direct conscious apprehension is not equivalent to accurate memory and consistent disposition/judgment regarding that direct conscious apprehension - these are information processing tasks, not subjectivity or qualia. To be aware of ANY qualitative state is to be aware of your current REAL qualitative state, and the fact that we might misremember it or otherwise interpret it differently in the future (due to neurosurgery or not) makes it no less directly apprehended.

the beer argument is equally spurious - he claims that because our qualia can change in response to environmental stimuli (i.e. we 'acquire a taste' for beer and enjoy it more when we are drunk, or enjoy it by associating it with the positive drunk feelings), qualia are not 'intrinsic' but 'relational'. no one would deny that qualia are part of a causal chain - everything is causal. qualia and consciousness obviously correlate to the physical brain, and are in a causal relationship with it and therefore, less directly, with the wider external world. but the existence of some kind of qualia/subjectivity in a conscious organism is not a relational property - the conscious organism, while conscious, always has qualia and subjectivity of some kind or another, regardless of what environment the consciousness exists in. specific features and minutiae of the subjects of qualia and subjective experience do have a causal relationship with the external world, but again, these are information processing tasks that are affected, not the very subjectivity of the organism. the contents of experience might change, but the fact that the current experiencer (the experiencer in its context) experiences them does not. the apprehended object might change, but the fact that it is being apprehended does not.

        • sooper_dooper_roofer [none/use name]
          ·
          edit-2
          7 months ago

          there is something special or unique or not entirely understood about biological life (at least human life if not all life with a central nervous system) that produces sentience/consciousness/Qualia ('soul'-ism as you might put it, but no 'soul' is required for this conclusion, it could just as easily be termed 'mystery-ism' or 'unknown-ism')

This is just wrong lol, there's nothing magical about vertebrates in comparison to unicellular organisms. Maybe the depth of our emotions is greater, but obviously a paramecium also feels fear and happiness and anticipation, because these are necessary for it to eat and reproduce; it wouldn't do these things if they didn't feel good

          The discrete dividing line is life and non-life (don't @ me about viruses)

          • TraumaDumpling
            ·
            7 months ago

            central nervous systems are so far the only thing we almost universally recognize as producing human-like subjectivity (as our evidence is the self report of humans), so i restricted my argumentation to those parameters. for all i know every quark has a kind of subjectivity associated with it, it could be as fundamental to reality as matter. and for all i know a paramecium responds to its environment with purely unconscious instinct (or if that terminology is inaccurate, biological information processing) without an internal experience. we don't really understand how subjectivity is produced well enough to isolate it for empirical study in humans, let alone mammals, let alone microbes - but i personally think it is plausible that all life if not all matter has some kind of subjectivity.

            • sooper_dooper_roofer [none/use name]
              ·
              7 months ago

              and for all i know a paramecium responds to its environment with purely unconscious instinct (or if that terminology is inaccurate, biological information processing) without an internal experience

              unicellular organisms have been shown to learn. It's literally the same thing as a vertebrate, just less complex

          • appel@whiskers.bim.boats
            ·
            7 months ago

I don't find that obvious at all. I agree there is nothing special dividing vertebrates from unicellular organisms, but I definitely think that some kind of CNS is required for the experience of emotions like fear, happiness etc. I do not see at all how a paramecium could experience something like that. What part of it would experience it? Emotions in humans seem to be characterised by particular patterns of brain activity and concentrations of certain molecules (hormones, etc). I really cannot see how a unicellular organism has any capacity to experience emotions as we do.

I would also argue that there is no dividing line between life and non-life. Whether something is alive or not is quite nebulous and hard to define. As you say, viruses are a good example, but there are many others, e.g. a pregnant mammal. The foetus does not fulfil the classical, basic conditions of life that are taught in school (MRS H GREN, or whatever the acronym is), but does it really make sense to say that it is not alive? How many organisms are there when we look at a pregnant mammal? It is not clear.

            • sooper_dooper_roofer [none/use name]
              ·
              edit-2
              7 months ago

              but I definitely think that some kind of CNS is required for the experience of emotions like fear, happiness etc.

              okay, so when a scallop runs away from you it doesn't feel fear?
              and when a paramecium is being ensnared by a hydra or some weird protist on your microscope slide, and it's struggling to get away, it doesn't feel fear? lol

              Obviously every moving living thing can feel fear, that's why they're moving living things and that's why they run away from predators

              I would also argue that there is no dividing line between life and non-life. Whether something is alive or not is quite nebulous and hard to define

              With a few exceptions like viruses, it's pretty obvious. Rocks don't make more rocks, nor does water

              • appel@whiskers.bim.boats
                ·
                7 months ago

                I'm not sure if scallops can run... But if you mean something like a mollusc, for example a snail, then I think it depends on which organism it is. I think a snail probably does feel fear, yes, in a very primal way. A bivalve like a scallop I'm not so sure, they have very basic nervous systems. An octopus I think is capable of fear and other more advanced emotions too, most likely. However, I think when we ascribe emotions to these animals we are anthropomorphising them. We have no way to know what their experience is like and we are sticking our human labels on them. Especially for a group such as molluscs, which diverged a very long time ago from the lineage that led to us. The feeling of fear, the understanding of danger and need to get away from it could be very primal and exist in many animals, but they may also feel it very differently to how we do. For example ants: I imagine a worker ant does not feel fear for itself but rather for the colony.

                Unicellular organisms mostly move on the basis of concentration gradients, towards food and away from toxic things or predator signals. When one is struggling and being engulfed by a hydra or some other predator, I don't think it feels anything, no. I think it is just trying to move away from the predator because it detects a molecular signal that it is "programmed" to move away from. By programmed I mean that behaviour is encoded in the complex interaction of the many systems that make it up, such as through the concerted action of its receptors, signalling pathways, enzymes, genes etc.

                Rocks and water are not what I was talking about. Take for example cell-free translation systems. These are basically all of the contents of a cell but without any of the membranes. Like empty a cell into a (small) bucket. They still perform all of the biochemical reactions that took place in the normal cell. But they are not in a sac. There is no unified "thing" and it doesn't move. If you did that to a paramecium, could that liquid still feel fear? It can't move away from anything. Is it alive? What makes something alive? Life is ultimately the sum of many complex biochemical reactions, but no one part of it is alive. Enzymes themselves are not alive, surely. One single neuron is not alive.

                If you had a human brain in a jar and, for argument's sake, it could still think as normal. It is intelligent and sentient, but it cannot replicate itself. But a virus, which is still much simpler than the brain in a jar, can. When you say that rocks don't make more rocks, you seem to imply that the quality of life is in replication.

                • sooper_dooper_roofer [none/use name]
                  ·
                  edit-2
                  7 months ago

                  I'm not sure if scallops can run

                  just youtube it, they can
                  and if they can do that, then of course they can feel fear too

                  When one is struggling and being engulfed by a hydra or some other predator, I don't think it feels anything, no.

                  wild

                  I think it is just trying to move away from the predator because it detects a molecular signal that it is "programmed" to move away from.

                  replace the hydra with a tiger and the amoeba with a deer, how is it any different apart from the number of cells? The deer prey could maybe have conscious thoughts/sorrow about its children during the last seconds of its life, but other than that the fear is fundamentally the same, it's just more complex/scaled up

                  By programmed I mean that behaviour is encoded in the complex interaction of the many systems that make it up, such as through the concerted action of it's receptors, signalling pathways, enzymes, genes etc.

                  sure glad we don't have any of those

                  Like empty a cell into a (small) bucket. They still perform all of the biochemical reactions that took place in the normal cell. But they are not in a sack. There is no unified "thing" and it doesn't move. If you did that to a paramecium, could that liquid still feel fear? It cant move away from anything. Is it alive?

                  Uh, I'm not an expert but I would suspect they're in the process of dying if you do that. They just don't die immediately, because nothing does (even a person who gets shot stays alive for a few minutes afterward). Can you feed this cell jelly its normal food and have it sustain itself like usual? If not then I would say it's only alive on a technicality, just like a person who's been shot in the head and can still talk for the next few seconds--they're technically also alive! But the person will die once the last few bits of brain oxygen run out due to the mechanical reality of their heart not beating, and the cell-jelly-in-a-bucket will also die after some time due to the mechanical reality of their vacuoles or whatever not being able to properly absorb food (I'm guessing, anyway. But this isn't really relevant to the central point)

                  If you had a human brain in a jar and, for arguments sake, it could still think as normal. It is intelligent and sentient, but it cannot replicate itself. But a virus, which is still much more simple than the brain in a jar, can. When you say that rocks don't make more rocks, you seem to imply that the quality of life is in replication.

                  This is a disjoint counterexample: the point is not that a brain in a jar can't replicate itself, but that the original organism that brain comes from, can. A man who gets a vasectomy is still alive, because his default state is being able to reproduce.
                  Rocks, however, can NEVER reproduce. There is not A SINGLE rock that can reproduce. Therefore rocks are not alive.

        • Nevoic@lemm.ee
          ·
          edit-2
          7 months ago

          It seems, by your periodically hostile comments ("oh so smug terms the 'soul'"), that you have a disdain for my position, so I assume you think my position is your option 2, but I don't ignore self-reports of sentience. I'm closer to option 1: I see it as plausible that a sufficiently general algorithm could have the same level of sentience as humans.

          The third position strikes me as at least just as ridiculous as the second. Of course we don't totally understand biological life, but just saying there's something "special" is wild. We're a configuration of non-sentient parts that produces sentience. Computers are also a configuration of non-sentient parts. To claim that there's no configuration of silicon that could arrive at sentience but that there is a configuration of carbon that could arrive at sentience is imbuing carbon with properties that seem vastly more complex than the physical reality of carbon would allow.

          • TraumaDumpling
            ·
            7 months ago

            i think it is plausible to replicate consciousness artificially with machines, and even more plausible to replicate every information processing task in a human brain, but i do not think that purely information processing machines like computers, or machines using purely information processing tools like algorithms, will be sufficient hardware or software to produce artificial subjectivity.

            by 'special' i meant not understood. and again, i submit not that it is impossible to make a subjectivity producing object like a brain artificially out of whatever material, but that it is not possible to do so using information processing technologies and theory (as understood in 2023). I don't think artificial subjectivity is impossible, but i think purely algorithmic artificial subjectivity is impossible. I don't think that a purely physicalist worldview of a type that discounts the possibility of subjectivity can ever account for subjectivity. i don't think that subjectivity is explainable in terms of information processing.

            here's a syllogism to sum up my position (i believe i have argued these points sufficiently elsewhere in the thread)

            Premise A: Qualia (subjective experiences) exist (a fact supported by many neuroscientists, as per the wikipedia quote in one of my previous posts)

            Premise B: Qualia, as subjective experiences, are fundamentally irreducible to information processing. (look up the hard problem of consciousness and the philosophical zombie thought experiment)

            Conclusion C: therefore consciousness, which contains (or is identified with or consists of or interacts with or is otherwise related to) Qualia, is irreducible to information processing.

            Conclusion D: therefore the most simplistic of physicalist worldviews (those that deny the existence of Qualia and the concept of subjectivity, like that of Daniel Dennett) can never fully account for consciousness.

            that's it, nothing else i'm trying to say other than that. no mysticism, no woo, no soul, no god, no fairies, nothing to offend your delicate aesthetic sensibilities. just stuff we don't know yet about the brain/mind/universe. no assumptions, just an acknowledgement that we do not have a Unified Theory of Everything and are likely several fundamental paradigm shifts, across many fields of research, away from anything resembling one.

            • spacecadet [he/him]
              ·
              7 months ago

              Little late to the thread but really enjoying your posts. Curious on your thoughts if you don't mind:

              As a philosophy newbie myself, could it be that a lot of this discussion/debate is due to people having no exposure to the metaphysical concepts of objectivity/subjectivity? It seems a big portion of your argument is that people who believe we can achieve ai sentience are already committed to a (leap of faith) absolute belief in the "physicalist" model/understanding of the universe?

              Also regarding the idea of a "Unified Theory of Everything", do you believe in this as a possibility? Is having that as a goal or destination in of itself a representation of a particularly misguided "physicalist" way of thinking that many people are already committed to/trapped within?

              • TraumaDumpling
                ·
                7 months ago

                i don't think it's about a lack of exposure to the concept of subjectivity and objectivity as much as it is a fundamental disbelief in anything approaching metaphysics whatsoever, which yes, stems from the absolute belief in a purely physicalist understanding of the universe. the difference between physicalists and myself is similar to the difference between an atheist and an agnostic. the atheist assumes that there is and can be no god or gods, whereas the agnostic makes no assumptions whatsoever regarding this. the physicalist assumes the ability of their belief system to be refined into perfection without much in the way of fundamental revision, and assumes the nonexistence of any phenomena that cannot be described by physics, whereas i believe that one or several paradigm shifts in philosophy and science and the philosophy of science are necessary to improve our understanding of reality. i do not assume that the physicalist model of the universe is correct or able to be trivially modified to be correct. and when analysis in fact shows the inability of physicalism to explain a phenomenon we all experience every waking moment of our lives, like subjectivity or qualia, i take that as evidence against the model, instead of ignoring it in the hope that someday the model might be trivially revised somehow to account for this fundamental explanatory gap.

                a 'unified theory of everything' may or may not be possible, but it should be especially possible under physicalism - if everything is indeed reducible to physical matter and physical processes, then surely we should eventually be able to describe matter and related physical processes in sufficient detail to describe all of reality, including subjectivity. but i don't think it's necessarily physicalist to believe humans can comprehensively understand existence, for example if subjectivity is fundamental to reality in a way similar to matter, then understanding subjectivity and matter both, and their relationship to one another or to whatever reality they both refer to, could help us understand existence in a more coherent sense.

            • UlyssesT [he/him]
              ·
              7 months ago

              Excellent post. I may bookmark it for later summary use.

            • Nevoic@lemm.ee
              ·
              7 months ago

              Premise B is where you lost me.

              The premise of philosophical zombies is that it's possible for there to be beings with the same information processing capabilities as us without experience. That is, given the same tools and platforms, they would be having just as intricate discussions about the nature of experience and sentience, without having experience or sentience.

              I'm not convinced it's functionally possible to behave the way we behave when talking & describing sentience without being sentient. I think a being that is functionally identical to me except that it lacks experience wouldn't be functionally identical to me, because I wouldn't be interested in sentience if I didn't have it.

              • TraumaDumpling
                ·
                7 months ago

                that's the entire point. if the existence of complex unconscious behaviors (or even just computers and math) proves that information processing can be done without internal subjective experience (if we assume a stone being hit by another stone, for example, is not experiencing subjectivity), and if there is something humans do beyond what is possible for pure information processing, then that is proof that consciousness is fundamentally irreducible to it. if there is something we can do that a philosophical zombie (a person with information processing but not subjectivity) could not, it is because of subjectivity/qualia, not information processing. subjectivity can influence our information processing but is not identical with it.

                • Nevoic@lemm.ee
                  ·
                  7 months ago

                  I think my point didn't exactly get across. I'm not saying philosophical zombies can't exist because subjectivity is something beyond information processing, I'm saying it's plausible that subjectivity is information processing.

                  To say "a person with information processing but not subjectivity" could be like saying "a person with information processing but not logical reasoning".

                  I would argue a person that processes information exactly like me, except that they don't reason logically, wouldn't process information like me. It's not elevating logic beyond information processing, it's a reductio ad absurdum. A person like that cannot exist.

                  I was saying philosophical zombies could be like that, it's possible that they can't exist. By lacking subjectivity they could inherently process information differently.

                  • TraumaDumpling
                    ·
                    7 months ago

                    i know this is necroposting but i have to clarify.

                    one of the major premises of the p-zombie thought experiment is that there is nothing about information processing (AS WE CURRENTLY UNDERSTAND IT***) that entails or necessitates subjectivity. Information processing has zero explanatory ability for subjectivity. You cannot just assume that 'subjectivity is information processing' without proving it somehow, that's not how science or philosophy work. Making a positive claim like 'information theory can account for and explain subjectivity' requires proof. and since no proof has been provided we must assume the negative claim, that subjectivity is not explained by information processing theory. If subjectivity is information processing (the way we currently understand information processing), prove it! Show your work. If you think information theory only needs trivial modifications to account for subjectivity it should be easy to elucidate what kinds of modifications those could be and what kinds of experiments we can conduct to test those modifications.

                    ***For if information processing theory requires substantial revision to account for subjectivity, which i think is at least plausible if not obviously true at this point in history, then the claim that 'subjectivity is information processing' becomes vague and meaningless - we do not know what this hypothetical revised information theory looks like, what it claims and assumes as logical axioms or empirical truths, so making any statements about this hypothetical future information processing theory is completely pointless and meaningless.

                    • Nevoic@lemm.ee
                      ·
                      edit-2
                      7 months ago

                      You had a small fallacy in the middle: when you said "assume the negative claim", you then made a positive claim.

                      "subjectivity is not explained by information processing theory" is a positive claim, but you said it was negative. I know it has the word "not" in it, but positive/negative doesn't have to do with claims for or against existence, it has to do with burden of proof. A negative "claim" isn't actually a claim at all.

                      The negative claim here would be "subjectivity may not be explained by information processing theory". People usually have more understanding about these distinctions in religious contexts:

                      Positive claim: god definitely exists.
                      Positive claim: god definitely doesn't exist.
                      Negative claim: god may or may not exist.

                      The default stance is an atheistic one, but it's not "capital A" atheist (for what it's worth I do make the positive claim against a theological God's existence). Someone who lacks a belief in God is still an atheist (e.g someone who has never even heard of a theological God), but they're not making a positive claim against his existence.

                      So the default stance is "information theory may or may not account for subjectivity", we don't assume it does, but we also don't discount the possibility that it does as necessarily untrue, like you are.

                      If you notice, you made another mistake: you misread what I was saying. I never made a positive claim about subjectivity being information processing. I only alluded to the possibility. You, on the other hand, did make a positive claim about subjectivity definitely not being information processing.

                      • TraumaDumpling
                        ·
                        edit-2
                        7 months ago

                        you are focusing on minor points of rhetoric instead of engaging with my broader point and the relevant LLM discussion. I am in fact assuming the null hypothesis in this argument.

                        first: the null hypothesis is a general statement or default position that there is no relationship between two measured phenomena, or no association among groups.

                        in this case, the phenomena whose relationship is in question are information processing theory and subjectivity.

                        consider Hitchens's Razor, which states that 'what may be asserted without evidence may be dismissed without evidence'

                        even if your specific argument is different, the subject of the OP, with which i presumed you more or less agree, argued that not only can information processing theory account for subjectivity, but that it does, and that LLM chatbots possess such subjectivity. This is asserted without proof, and according to Hitchens's razor I dismiss this pair of theses equally without proof.

                        as to your stance that information processing may or may not account for subjectivity, we can formulate this position as the positive claim that 'information processing may account for subjectivity' without losing any meaning. if nothing else, assume this is the position i am arguing against. i am not opposed to agnosticism on this matter.

                        i offer a syllogism:

                        A. if information processing can account for subjectivity, it would have done so by now - or, if it can account for subjectivity with only trivial modifications, we would have some indication of paths towards such an account.

                        B. we do not, in fact, have such an account within current information theory, or theoretical paths of investigation towards such an account.

                        C. therefore, information theory as it is today, or only trivially modified, does not account for subjectivity.

      • stigsbandit34z [they/them]
        ·
        7 months ago

        I’m no philosopher, but a lot of these questions seem very epistemological and not much different from religious ones (e.g. so what changes if we determine that life is a simulation). Like they’re definitely fun questions, but I just don’t see how they’ll be answered with how much is unknown. We’re talking “how did we get here” type stuff

        I’m not so much concerned with that aspect as I am about the fact that it’s a powerful technology that will be used to oppress shrug-outta-hecks

        • usernamesaredifficul [he/him]
          ·
          7 months ago

          I think it would be far less confusing to call them algorithmic statistical models rather than AI

        • WholeEnchilada [he/him]
          ·
          edit-2
          7 months ago

          Actually, yeah, you're on it. These questions are epistemological. They're also phenomenological. Testing AI is all about seeing how it responds and reacts, just as much as it is about being. It's silly. When it comes to AI right now, existing is measured by reaction, to see if it's imitating a human intelligence. I'm pretty sure "I react therefore I am" was never coined by any great, old philosopher. So, what can we learn from your observation? Nobody knows anything. Or at least, the supposed geniuses who make AI and test it believe that reaction measures intelligence.

        • Nevoic@lemm.ee
          ·
          7 months ago

          Yeah, capitalists will use unreliable tech to replace workers. Even if GPT4 is the end all (there's no indication that it is), that would still displace tons of workers and just result in both worse products for everyone and a worse, more competitive labor market.

          • stigsbandit34z [they/them]
            ·
            7 months ago

            You seem to be getting some mixed replies, but I feel like I know what you’ve been trying to convey with most of your comments.

            A lot of people have been dismissing LLMs as pure marketing hype (and they very well could be) but it doesn’t change the fact that companies will eventually decide that they can be integrated into other business processes once they reach a point of an “acceptable” percent of errors. They are really just statistical models at the end of the day. Right now, no C-suite/executive worth their salt would decide to let something like GPT write emails, craft reports, code/generate scripts, etc. because there is bound to be some nuance it can’t quite grasp. Pragmatically, I view it in the same way as scrap on an assembly line, but we all know damn well that algorithms can perform a CEO’s role just as well as any other computer-based job (I haven’t really thought about how this tech will be used with robotics but I’m sure there are some implications for that too).

            This topic is one that has been deeply fascinating ever since I took an intro cognitive science class on a whim in college lol which is why I have many thoughts (some of which are probably kinda dumb admittedly).

            This also just coincides sooooo well considering the fact that I’m just about to finish Bullshit Jobs and recently read a line about how Graeber describes the internet (an LLM’s training set): “A repository of almost all of human knowledge and cultural achievement.”

      • Tommasi [she/her]
        ·
        7 months ago

        I don't know where everyone is getting these in depth understandings of how and when sentience arises.

        It's exactly the fact that we don't know how sentience forms that makes acting like fucking chatgpt is now on the brink of developing it so ludicrous. Neuroscientists don't even know how it works, so why are these AI hypemen so sure they've got it figured out?

        The only logical answer is that they don't and it's 100% marketing.

        Hoping computer algorithms made in a way that's meant to superficially mimic neural connections will somehow become capable of thinking on their own if they just become powerful enough is a complete shot in the dark.

        • Nevoic@lemm.ee
          ·
          edit-2
          7 months ago

          The philosophy of this question is interesting, but if GPT5 is capable of performing all intelligence-related tasks at an entry level for all jobs, it would not only wipe out a large chunk of the job market, but also stop people from getting to senior positions because the entry level positions would be filled by GPT.

          Capitalists don't have 5-10 years of forethought to see how this would collapse society. Even if GPT5 isn't "thinking", it's actually its capabilities that'll make a material difference. Even if it never gets to the point of advanced human thought, it's already spitting out a bunch of unreliable information. Make it slightly more reliable and it'll be on par with entry-level humans in most fields.

          So I think dismissing it as "just marketing" is too reductive. Even if you think it doesn't deserve rights because it's not sentient, it'll still fundamentally change society.

          • UlyssesT [he/him]
            ·
            7 months ago

            So I think dismissing it as "just marketing" is too reductive.

            And I think buying into the hype enough to say that LLMs are imminently going to match and outpace living organic brains in all of their functions is too credulous.

            it'll still fundamentally change society

            With the current capitalistic system and with who owns that technology and commands it, it's changing it all right, for the worse.

        • archomrade [he/him]@midwest.social
          ·
          7 months ago

          The problem I have with this posture is that it dismisses AI as unimportant, simply because we don't know what we mean when we say we might accidentally make it 'sentient' or whatever the fuck.

          Seems like the only reason anyone is interested in the question of AI sentience is to determine how we should regard it in relation to ourselves, as if we've learned absolutely nothing from several millennia of bigotry and exceptionalism. Shit's different.

          Who the fuck cares if AI is sentient, it can be revolutionary or existential or entirely overrated independent of whether it has feelings or not.

          • Tommasi [she/her]
            ·
            7 months ago

            I don't really mean to say LLMs and similiar technology is unimportant as a whole. What I have a problem with is this kind of Elon Musk style marketing, where company spokespersons and marketing departments make wild, sensationalist claims and hope everyone forgets about it in a few years.

            If LLMs are to be handled in a responsible way, we need to have honest dialogue about what they can and cannot do. The techbro mystification about superintelligence and sentience only obfuscates that.

        • Nevoic@lemm.ee
          ·
          edit-2
          7 months ago

          What assumptions? I was careful to almost universally take a negative stance, not a positive one. The only exception I see is my stance against the existence of the soul. Otherwise there are no assumptions, let alone ones specific to the mind.

          • Budwig_v_1337hoven [he/him]
            ·
            edit-2
            7 months ago

            As if human thought is somehow qualitatively different than a sufficiently advanced algorithm.

            is an incredible claim, loaded with more assumptions than I have space for here. Human thought is a lot more than an algorithm arriving at outputs for inputs. I don't know about you, but I have an actual inner life, emotions, thoughts and dreams that are far removed from a rote, algorithmic processing of information.

            I don't feel like going into more detail now, but if you wanna look at the AI marketing with a bit more of a critical distance, I'd recommend two things here:
            a short read: Language Is a Poor Heuristic For Intelligence
            a listen: We Are Not Software: David Bentley Hart with Acid Horizon

            Edit: also wanna share this piece about generative AI here. The part about trading the meaning of things for the mean of things resonates all throughout these artificial parrots, whether they parrot text or visuals or sound.

            • VILenin [he/him]
              hexagon
              M
              ·
              7 months ago

              I agree; curious to see what hexbears think of my view:

              Firstly there is no “theory of consciousness”. No proposed explanation has ever satisfied that burden of proof, even if they call themselves theories. “Brain = computer” is a retroactively applied analogy, just like everything was pneumatics 100 years ago and everything was wheels 2000 years ago and everything was fire…

              I would think that assuming that if you process hard enough you get sentience is quite a religious belief. There is no basis for this assumption.

              And materialism isn’t the same thing as physicalism. And just because a hypothesis is physical doesn’t mean it’s automatically correct. Not being a religious explanation is like the lowest bar that there’s ever been in history.

              “Sentience is just algorithms” assumes a degree of understanding of the brain that we just don’t have, equates neurons firing to computer processing without reason, and assumes that processing must be the mechanism which leads to sentience without basis.

              We don’t know anything about sentience, so going “well you can’t say it’s not computers” is like going “hypothetically there could be a unicorn that shits out solid gold bars that lives on Pluto.” Like, that’s not how the burden of proof works.

              Not to mention the STEM “philosophy stoopid” dynamics going on here.

              • AssortedBiscuits [they/them]
                ·
                7 months ago

                I think artificial intelligence is possible and has already been done if we're talking about cloning animals. The cloned animal has intelligence and is created through entirely artificial means, so why doesn't this count as artificial intelligence? This means even the phrasing "artificial intelligence" is incomplete, because when people say artificial intelligence, they're not talking about brains artificially grown in vats but extremely advanced non-biological circuitry. I think it's perfectly reasonable to be skeptical about circuitry artificial intelligence or even non-biological artificial intelligence. It's not like there has been any major advancement in the field that has alleviated that skepticism. I believe there's an ideological reason to tunnel vision on circuitry: that solving the problem of artificial intelligence through brains artificially grown in vats would be "cheating" somehow.

                • VILenin [he/him]
                  hexagon
                  M
                  ·
                  7 months ago

                  I think it’s a huge reach to call cloning “AI”. We created a funny way to make a genetically identical copy of an organism that still has to be implanted into a womb. It’s entirely natural and you’re not creating something by copying it. It’s not even remotely close to building a sentient machine from scratch.

                  But semantics aside the question is whether a glorified chatbot is actually sentient, which is what the vast majority of people refer to as “AI”.

            • NewAcctWhoDis [any]
              ·
              7 months ago

              I don't know about you, but I have an actual inner life, emotions, thoughts and dreams that are far removed from a rote, algorithmic processing of information.

              Either redditors don't, or they wish they didn't.

            • Saeculum [he/him, comrade/them]
              ·
              edit-2
              7 months ago

              I don't know about you, but I have an actual inner life, emotions, thoughts and dreams that are far removed from a rote, algorithmic processing of information.

              How do you know?

              How can you know that an inner life, emotions, thoughts and dreams cannot and do not arise from a system of algorithms?

              • TraumaDumpling
                ·
                7 months ago

                because fundamentally subjective phenomena can never be explained entirely in terms of objective physical quantities without losing important aspects of the phenomena.

              • Budwig_v_1337hoven [he/him]
                ·
                edit-2
                7 months ago

                Honestly, at the end of the day I don't know for sure, but I think it's on anyone claiming that it is to provide any proof whatsoever for their assertions. I don't know for sure, but for the time being, I'm operating under the assumption that fancy statistics is insufficient to describe, let alone reconstitute, the entirety of human subjectivity.

            • Nevoic@lemm.ee
              ·
              7 months ago

              Just to be clear, the claim is that human thought is qualitatively different than an algorithm, I just haven't been convinced of the claim. I chose my words incredibly carefully here, this isn't me being pedantic.

              Anyway, I don't know how you've come to the definitive conclusion that somehow emotions aren't information. Or that thoughts and dreams are somehow not outputs of some process.

              Nothing you've outlined is necessarily impossible to derive as an output of some process. It's actually quite possible that they're only derived as an output of some process, unless you think they're spawned into existence without causes, which I think religious people do believe (this is the essence of a free soul). I'm not religious.

              • Budwig_v_1337hoven [he/him]
                ·
                7 months ago

                "some process", sure, but not every process is an algorithm. My digestion is a complex process with outputs, I wouldn't describe it as algorithmic though. You might want to do so, and you probably can, but I'd argue you're just flattening an incredibly complex, species-spanning process into a mathematical representation for ideological reasons at that point.

                • Nevoic@lemm.ee
                  ·
                  edit-2
                  7 months ago

                  The question is whether or not human thought can be represented algorithmically. It seems we agree it's plausible?

                  • Budwig_v_1337hoven [he/him]
                    ·
                    7 months ago

                    Yea, I think we might agree there, but I don't think that supports the original assertion that human thought is nothing but an (exceedingly complex) algorithm. You can also represent human thought as a system of hydraulic pressures; that's what early psychology did, and it's how we got words like repression. But just because you can do that, and maybe even gain some useful knowledge from it, doesn't mean human thought is actually made up of a complex system of pressures/valves - or algorithms. Your map may seem useful, but it ain't the territory, is what I'm trying to get at, I guess.

                    To be clear, I don't think AGI/ASI is an impossible idea, but I'm pretty confident that current approaches will not even get us in the ballpark, because they are fundamentally not the right tool for the job. Any allusion to having built the "almost AGI, swear, we're this close this time" seems, to me, to be little more than marketing hype for silicon valley products and tech stocks. Maybe some day gluing enough of these products together will get you something indiscernible from AGI, but I really do doubt that whole premise. A text transformer won't become sentient just by throwing more text at it and telling it to process, that's just a hand-wavy sci-fi premise at best.

            • CannotSleep420@lemmygrad.ml
              ·
              7 months ago

              The Acid Horizon guest is an unironic god believer and theologian. That is negative credibility for any claim he makes about how reality works.

          • NewAcctWhoDis [any]
            ·
            7 months ago

            An algorithm does not exist as a physical thing. When applied to computers, it's an abstraction over the physical processes taking place as the computer crunches numbers. To me, it's a massive assumption to decide that just because one type of process (neurons) can produce consciousness, so can another (CPUs and their various types of memories), even if they perform the same calculation.

      • Wheaties [comrade/them]
        ·
        7 months ago

        To me, it seems plausible that simply increasing processing power for a sufficiently general algorithm produces sentience. I don't believe in a soul, or that organic matter has special properties that allows sentience to arise.

        this is the popular sentiment with programmers and spectators right now, but even taking all those assumptions as true, it still doesn't mean we are close to anything.

        Consider the complexity of a sentient, multicellular organism. That's trillions of cells all interacting with each other and the environment concurrently. Even if you reduce that down to just the processes within a brain, that's still more things happening in and between those neurons than anything we could realistically model in a programme. Programmers like to reduce that complexity down by only looking at the synaptic connections between neurons, and ignoring everything else the cells are doing.
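
        A rough back-of-envelope makes the scale concrete. Every figure in the sketch below is an order-of-magnitude assumption, not a measurement - and even this synapse-only cartoon already eats an entire exaflop machine:

        ```python
        # All numbers are order-of-magnitude assumptions, not measurements.
        synapses = 1e14           # commonly cited ballpark for the human brain
        avg_spike_rate_hz = 1     # rough average firing rate per neuron
        flops_per_event = 1e4     # guess at the cost of modelling one synaptic event in any detail

        events_per_second = synapses * avg_spike_rate_hz    # ~1e14 synaptic events/s
        flops_needed = events_per_second * flops_per_event  # ~1e18 FLOPS

        exaflop_machine = 1e18  # roughly a top supercomputer
        print(flops_needed / exaflop_machine)  # ~1.0: the whole machine, for synapses alone
        ```

        And that budget covers none of the non-synaptic biology the cells are doing, which was the point.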

      • CannotSleep420@lemmygrad.ml
        ·
        7 months ago

        I could maybe get behind the idea that LLMs can’t be sentient, but you generalized to all algorithms. As if human thought is somehow qualitatively different than a sufficiently advanced algorithm.

        Any algorithm, by definition, has a finite number of specific steps and is made to solve some category of related problems. While humans certainly use algorithms to accomplish tasks sometimes, I don't think something as general as consciousness can be accurately called an algorithm.

        • Saeculum [he/him, comrade/them]
          ·
          7 months ago

          Every human experience is necessarily finite and made up of steps, insofar as you can break down the experience of your mind into discrete thoughts.

          • Wheaties [comrade/them]
            ·
            7 months ago

              That doesn't mean it's algorithmic, though. A whole branch of mathematics (and as a consequence, physics) is non-algorithmic.

            • GalaxyBrain [they/them]
              ·
              7 months ago

                Also, people created math and computers and not vice versa. It's weird to call an organ a 'meat tool' of any sort. Your brain isn't a meat computer, your fingers aren't meat pliers, your liver isn't a meat Brita filter. We make tools based on our meat bits quite often. Computers are the same. Our brains aren't based on computers cause computers are products of our brains meant to do some of the jobs of a brain, so I guess unlike a hammer it's easier to trick yourself into believing it's thinking cause it's a machine made to handle some of the load work of thinking.

        • Nevoic@lemm.ee
          ·
          7 months ago

          It seems you're both implying here that consciousness is necessarily non-algorithmic because it's non-finite, but then also admitting in another comment that all human experience is finite, which would necessarily include consciousness.

          I don't get what your point is here. Is all human experience finite? Are some parts of human experience "non-categorical"? I think you need to clarify here.

          • CannotSleep420@lemmygrad.ml
            ·
            7 months ago

            The steps in an algorithm are also specific and guarantee that you will get the same result every time you follow those steps, provided you're operating on the same data. The result you're pursuing is unambiguous: if you're using Dijkstra you're trying to get the shortest distance between a source node and every other node in a graph, for instance.

            Compare this with consciousness in general: if it is an algorithm, what goal is it being used to achieve? What would the steps even be?

            Regarding the point on finitude, "discrete" might have been a more appropriate word. What I'm trying to get at is that people in this thread are playing so fast and loose with the word "algorithm" that the use of the word becomes incoherent.
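
            To make the determinism point concrete, here's a minimal Dijkstra sketch in Python (the toy graph and its weights are made up for illustration). The steps are fixed, so the same input yields the same distances on every single run:

            ```python
            import heapq

            def dijkstra(graph, source):
                # graph: dict mapping node -> list of (neighbor, weight) pairs
                dist = {node: float("inf") for node in graph}
                dist[source] = 0
                heap = [(0, source)]
                while heap:
                    d, node = heapq.heappop(heap)
                    if d > dist[node]:
                        continue  # stale entry; a shorter path was already found
                    for neighbor, weight in graph[node]:
                        if d + weight < dist[neighbor]:
                            dist[neighbor] = d + weight
                            heapq.heappush(heap, (d + weight, neighbor))
                return dist

            # toy graph, purely illustrative
            graph = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": []}
            print(dijkstra(graph, "a"))  # {'a': 0, 'b': 1, 'c': 3}, every time
            ```

            There's the unambiguous goal and the guaranteed result. I don't see what the analogous problem statement for consciousness would even be.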

            • Nevoic@lemm.ee
              ·
              edit-2
              7 months ago

              So I take it you're not a determinist? That's a whole conversation that's separate from this, but you should know there are a lot of secular people who don't believe in free will (e.g. having a will independent of any causal relationships to physical reality). Secular people are generally deterministic, we believe that wills exist within physical reality, and that they exist in the same cause/effect relationship as everything else.

              With enough information of the present, you could know everything a human will do in their lifetime; there's no will that exists outside of reality that is influencing reality (no will that is "free"). Instead, will is entirely causally linked, like everything else.

              Put another way, you're guaranteed to get the same result every time you put a human in exactly the same situation. Even if there is true chaos in the universe (e.g pure randomness) that's a different situation every time you get a different random result.

              • CannotSleep420@lemmygrad.ml
                ·
                7 months ago

                The rejection of your thesis that consciousness is an algorithm is not a rejection of determimism. I have no doubt that all that exists is only material and the properties that emerge from it. The word algorithm makes no sense without a goal for it to be used to reach. Taking your paragraph about being able to predict everything a human will do in their lifetime with sufficient information (possible in principle, but intractable), what outcome would I be trying to achieve with this information? Is there some clear end state that the consciousness algorithm is optimized to reach?

      • Dirt_Owl [comrade/them, they/them]
        ·
        edit-2
        7 months ago

        Well, my (admittedly postgrad) work with biology gives me the impression that the brain has a lot more parts to consider than just a language-trained machine. Hell, most living creatures don't even have language.

        It just screams of a marketing scam. I'm not against the idea of AI. Although from an ethical standpoint I question bringing life into this world for the purpose of using it like a tool. You know, slavery. But I don't think this is what they're doing. I think they're just trying to sell the next Google AdSense

        • Nevoic@lemm.ee
          ·
          edit-2
          7 months ago

          Notice the distinction in my comments between an LLM and other algorithms, that's a key point that you're ignoring. The idea that other commenters have is that for some reason there is no input that could produce the output of human thought other than the magical fairy dust that exists within our souls. I don't believe this. I think a sufficiently advanced input could arrive at the holistic output of human thought. This doesn't have to be LLMs.

            • Nevoic@lemm.ee
              ·
              edit-2
              7 months ago

              You're missing the forest for the trees. Replace "magical fairy dust" with [insert whatever you think makes organic, carbon-based processing capable of sentience but inorganic silicon-based processing incapable of sentience].

              • UlyssesT [he/him]
                ·
                7 months ago

                You're missing the forest for the trees.

                smuglord

                whatever you think makes organic, carbon-based processing capable of sentience but inorganic silicon-based processing incapable of sentience

                No one I see here took that position. The position being taken is that LLMs are not that and their trajectory isn't really going there no matter how much hype you've bought into out of Reddit New Atheist contrarian knee-jerk desire to stick it to those that you assume believe in "the magical fairy dust that exists within our souls."

          • Philosoraptor [he/him, comrade/them]
            ·
            7 months ago

            I haven't seen anyone here (or basically anyone at all, for that matter) suggest that there's literally no way to create mentality like ours other than being exactly like us. The argument is just that LLMs are not even on the right track to do something like that. The technology is impressive in a lot of ways, but it is in no way comparable to even a rudimentary mind in the sense that people have minds, and there's no amount of tweaking or refining the basic approach that's going to move it in that direction. "Genuine" (in the sense of human-like) AI made from non-human stuff is certainly possible in principle, but LLMs are not even on that trajectory.

            Even setting that aside, I think framing this as an I/O problem elides some really tricky and deep conceptual content, and suggests some fundamental misunderstanding about how complex this problem is. What on Earth does "the output of human thought" mean in this sense? Clearly you don't really mean human thought, because you obviously think whatever "output" you're looking for can be instantiated in non-human systems. It must mean human-like thought, but human-like in what sense? Which features are important to preserve, and which are incidental or parochial to the way humans do human-like thought? How you answer that question greatly influences how you evaluate putative cases of "genuine" AI, and it's possible to build in a great deal of hidden bias if we don't think carefully and deliberately about this. From what I've seen, virtually none of the AI hypers are thinking carefully or deliberately about this.

            • Nevoic@lemm.ee
              ·
              edit-2
              7 months ago

              The top level comment this chain is on specifically reduces GPT by saying it's "just an algorithm", not by saying it's "just an LLM", which is implicitly claiming that no algorithm could match or exceed human capabilities, because they're "just algorithms".

              You can even see this person further explicitly defending this position in other comments, so the mentality you say you haven't seen is literally the basis for this entire thread.

              • UlyssesT [he/him]
                ·
                edit-2
                7 months ago

                The smol bean LLM is unfairly misunderstood sometimes while presently tightening the grip of the surveillance state and denying medical coverage to people while putting artists out of work. I'm sure the billionaires bankrolling it will wipe away those statistically-produced tears with wads of cash, so all will be well.

      • WithoutFurtherBelay
        ·
        7 months ago

        That’s an unfalsifiable belief. “We don’t know how sentience works so they could be sentient” is easily reversed because it’s based entirely on the fact that we can’t technically disprove or prove it.

        • Nevoic@lemm.ee
          ·
          7 months ago

          There's a distinction between unfalsifiable and currently unknown. If we did someday know how sentience worked, my stance would be falsifiable. Currently it's not, and it's fine to admit we don't know. You don't need to take a stance when you lack information.

          • WithoutFurtherBelay
            ·
            7 months ago

            The same could be said to you? Or the people insisting that these AI chatbots are sentient. It’s a blatantly dishonest statement because they don’t actually know. And it’s rather unlikely.

      • UlyssesT [he/him]
        ·
        edit-2
        7 months ago

        "I am a very smart atheist that can not be fooled by fairy tales, therefore LLMs sound like the exact same thing as living brains. I can not be sold a bad bill of goods; my contempt for religion means I believe tech company marketing hype." galaxy-brain

        EDIT: "Also, tech companies are above superstitious beliefs." https://futurism.com/openai-employees-say-firms-chief-scientist-has-been-making-strange-spiritual-claims

        Also, some light reading for those who need it.

        https://arxiv.org/abs/2311.09247

      • sooper_dooper_roofer [none/use name]
        ·
        edit-2
        7 months ago

        To me, it seems plausible that simply increasing processing power for a sufficiently general algorithm produces sentience.

        How is that plausible? The human brain has more processing power than a snake's, which has more power than a bacterium's (equivalent of a) brain. Those two things are still experiencing consciousness/sentience. Bacteria will look out for their own interests; will chatGPT do that? No, chatGPT is a perfect slave, just like every computer program ever written

        chatGPT : freshman-year-"hello world"-program
        human being : amoeba
        (the : symbol means it's being analogized to something)

        a human is a sentience made up of trillions of unicellular consciousnesses.
        chatGPT is a program made up of trillions of data points. But they're still just data points, which have no sentience or consciousness.

        Both are something much greater than the sum of their parts, but in a human's case, those parts were sentient/conscious to begin with. Amoebas will reproduce and kill and eat just like us, our lung cells and nephrons and etc are basically little tiny specialized amoebas. ChatGPT doesn't....do anything, it has no will

    • VILenin [he/him]
      hexagon
      M
      ·
      7 months ago

      Have I lost it

      Well no, owls are smart. But yes, in terms of idiocy, very few go lower than “Silicon Valley techbro”

    • BurgerPunk [he/him, comrade/them]
      ·
      7 months ago

      Have I lost it

      No you haven't. I feel the same way though, since the world has gone mad over it. Reporting on this is just another proof that journalism only exists to make capitalists money. Anything approaching the lib idea of a "free and independent press" would start every article explaining that none of this is AI, that it is not capable of achieving consciousness, and that they are only saying this to create hype

    • SkingradGuard [he/him, comrade/them]
      ·
      7 months ago

      Have I lost it or has everyone become an idiot?

      Brainworms have been amplified and promoted by social media; I don't think you have lost it. This is just the shitty capitalist world we live in.

    • UlyssesT [he/him]
      ·
      7 months ago

      For fucks sake it's just an algorithm. It's not capable of becoming sentient.

      If I call you a meat computer, or a stochastic parrot, or say "ape" enough times, the algorithm will by comparison seem closer to sentient. smuglord

      • CannotSleep420@lemmygrad.ml
        ·
        7 months ago

        Do the silicon valley techbros actually call people "stochastic parrots"? If so, they completely missed the point of that paper.

        • UlyssesT [he/him]
          ·
          7 months ago

          Yes, and they're not the only ones.

          I've gotten into arguments in this local posting universe where that term came up and was applied to the people in it. "Actually, humans are stochastic parrots and consciousness is an illusion, therefore LLMs are actually not far off at all from meeting and exceeding everything that you are" was the summary of that person's argument back then.

  • Marxism-Fennekinism@lemmy.ml
    ·
    edit-2
    7 months ago

    They switched from worshiping Elon Musk to worshiping ChatGPT. There are literally people commenting ChatGPT responses to prompt posts asking for real opinions, and then getting super defensive when they get downvoted and people point out that they didn't come here to read shit from AI.

  • Justice@lemmygrad.ml
    ·
    7 months ago

    I said it at the time when chatGPT came along, and I'll say it now and keep saying it until or unless the android army is built which executes me:

    ChatGPT kinda sucks shit. AI is NOWHERE NEAR what we all (used to?) understand AI to be, i.e. fully sentient, human-equal or better, autonomous, thinking, beings.

    I know the Elons and shit have tried (perhaps successfully) to change the meaning of AI to shit like chatGPT. But, no, I reject that then, now, and forever. Perhaps people have some "real" argument for different types and stages of AI and my only preemptive response to them is basically "keep your industry specific terminology inside your specific industries." The outside world, normal people, understand AI to be Data from Star Trek or the Terminator. Not a fucking glorified Wikipedia prompt. I think this does need to be straightforwardly stated and their statements rejected because... Frankly, they're full of shit and it's annoying.

    • UlyssesT [he/him]
      ·
      7 months ago

      The LLM marketing hype campaign has very successfully changed the overall perceived definition of what "AI" is and what "AI" could be.

      Arguably it makes actual general AI as a concept harder to develop because financing and subsidies will likely keep going downstream toward LLM projects instead of attempts to emulate general intelligence.

      • sooper_dooper_roofer [none/use name]
        ·
        7 months ago

        the average person was always an NPC who goes by optics instead of fundamentals

        "good people" to them means clean, resourced, wealthy, privileged
        "bad people" means poor, distraught, dirty, refugee, etc

        so it only makes sense that an algorithm box with the optics of a real voice, proper english grammar and syntax, would be perceived as "AI"

        • UlyssesT [he/him]
          ·
          edit-2
          7 months ago

          That's very insightful, and you're right. I assume that an upcoming LLM product with a posh British waifu accent politely telling nerds how special they are would likely make fucking bank and maybe even be seen as the first ascended artificial being. soypoint-1 brrrrrrrrrrrr soypoint-2

          EDIT: I'm not wild about calling any human being an "NPC" though, just because that dehumanizing shit is a common techbro and chud concept.

        • silent_water [she/her]
          ·
          edit-2
          7 months ago

          I dislike this framing as it's rather misanthropic and discounts the impact of propaganda. we've been losing the war of position but that doesn't make the average person an NPC. liberalism is like the air - we imbibe it unconsciously. people get on TV and call these algorithms intelligent so people just believe it. when you assume people are incapable of independent thought, you accept that we cannot change their minds. this too is liberal propaganda - that the average person is reactionary, backwards, and only to be controlled.

    • Floey@lemm.ee
      ·
      7 months ago

      AI has been used to describe many other technologies; when those technologies became mature and useful in a domain, though, they stopped being called AI and were given a less vague name.

      Also, gamers use AI to refer to the logic operating NPCs and game master type stuff, no matter how basic it is. Nobody is confused about the infected in L4D being Skynet-level technology; it was never sold as such.

      The difference with this AI push is the amount of venture capital and public outreach. We are being propagandized. To think that wouldn't be the case if they simply used a different word in their commercial ventures is a bit... Idk, silly? Consider the NFT grift: most people didn't have any prior associations with the word nonfungible.

    • zeze@lemm.ee
      ·
      edit-2
      7 months ago

      ChatGPT can analyze obscure memes correctly when I give it the most ambiguous ones I can find.

      Some have taken pictures of blackboards and had it explain all the text and box connections written on the board.

      I've used it to double the speed I do dev work, mostly by having it find and format small bits of code I could find on my own but takes time.

      One team even made a whole game using individual agents to replicate a software development team that codes, analyzes, and then releases games made entirely within the simulation.

      "It's not the full AI we expected" is incredibly inane considering this tech is less than a year old, and is updating every couple weeks. People hyping the technology are thinking about what this will look like after a few years. Apparently the unreleased version is a big enough deal to cause all this drama, and it will be even more unrecognizable in the years to come.

      • AlkaliMarxist
        ·
        7 months ago

        This tech is not less than a year old. The "tech" being used is literally decades old; the specific implementations marketed as LLMs are 3 years old.

        People hyping the technology are looking at the dollar signs that come when you convince a bunch of C-levels that you can solve the unsolvable problem, any day now. LLMs are not, and will never be, AGI.

      • charly4994 [she/her, comrade/them]
        ·
        7 months ago

        ChatGPT does no analysis. It spits words back out based on the prompt it receives, drawing from a giant set of data scraped from every corner of the internet it can find. There is no sentience, there is no consciousness.

        The people that are into this and believe the hype have a lot of crossover with "Effective Altruism" shit. They're all biased and are nerds that think Roko's Basilisk is an actual threat.

        As it currently stands, this technology is trying to run ahead of regulation and in the process threatens the livelihoods of a ton of people. All the actual damaging shit that they're unleashing on the world is cool in their minds, but oh no we've done too many lines at work and it shit out something and now we're all freaked out that maybe it'll kill us. As long as this technology is used to serve the interests of capital, then the only results we'll ever see are them trying to automate the workforce out of existence and into ever more precarious living situations. Insurance is already using these technologies to deny health claims and combined with the apocalyptic levels of surveillance we're subjected to, they'll have all the data they need to dynamically increase your premiums every time you buy a tub of ice cream.

      • Hexagons [e/em/eir]
        ·
        7 months ago

        Where do you get the idea that this tech is less than a year old? Because that's incredibly false. People have been working with neural nets to do language processing for at least a decade, and probably a lot longer than that. The mathematics underlying this stuff is actually incredibly simple and has been known and studied since at least the 90s. Any recent "breakthroughs" are more about computing power than a theoretical shift.
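
        To illustrate how simple: the core operation in every layer of these networks is just a matrix multiply followed by a squashing function. A toy sketch (made-up sizes, nobody's real weights, just the shape of the math):

        ```python
        # One "layer" of a neural net: linear map + pointwise nonlinearity.
        # Toy illustration only; sizes and values are made up.
        import numpy as np

        rng = np.random.default_rng(0)

        def layer(x, W, b):
            return np.tanh(W @ x + b)  # matrix multiply, add bias, squash

        x = rng.normal(size=4)                         # a 4-number input
        W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
        W2, b2 = rng.normal(size=(2, 8)), np.zeros(2)

        out = layer(layer(x, W1, b1), W2, b2)          # two stacked layers
        print(out)  # "training" is just nudging W and b to reduce error
        ```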

        I hate to tell you this, but I think you've bought into marketing hype.

      • Justice@lemmygrad.ml
        ·
        7 months ago

        I never said that stuff like chatGPT is useless.

        I just don't think calling it AI and having Musk and his clowncar of companions run around yelling about the singularity within... wait. I guess it already happened based on Musk's predictions from years ago.

        If people wanna discuss theories and such: have fun. Just don't expect me to give a shit until skynet is looking for John Connor.

      • mittens [he/him]
        ·
        edit-2
        7 months ago

        Perceptrons have existed since the 60s. Surprised you don't know this; it's part of the undergrad CS curriculum. Or at least it is at any decent school.
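
        It even fits in a comment. A hypothetical toy of that 60s algorithm, a perceptron learning AND (numbers and names made up for the sketch):

        ```python
        # Rosenblatt-style perceptron learning the AND function.
        # Toy sketch for illustration, not production code.

        def predict(w, b, x):
            # Fire (output 1) if the weighted sum crosses the threshold.
            return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

        def train(samples, epochs=20, lr=0.1):
            w, b = [0.0, 0.0], 0.0
            for _ in range(epochs):
                for x, target in samples:
                    error = target - predict(w, b, x)
                    # Classic update rule: nudge weights toward the target.
                    w = [wi + lr * error * xi for wi, xi in zip(w, x)]
                    b += lr * error
            return w, b

        AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
        w, b = train(AND)
        print([predict(w, b, x) for x, _ in AND])  # -> [0, 0, 0, 1]
        ```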

      • NuraShiny [any]
        ·
        edit-2
        7 months ago

        LOL you are a muppet. The only people who think this shit is good are either clueless marks, or have money in the game and a product to sell. Which are you? Don't answer that, I can tell.

        This tech is less than a year old, burning billions of dollars, and desperately trying to find people that will pay for it. That is it. Once it becomes clear that it can't make money, it will die. Same shit as NFTs and buttcoin. Running an ad for sex asses won't finance your search engine that talks back in the long term, and it can't do the things you claim it can, which has been proven by simple tests of the validity of the shit it spews. AKA: as soon as we go past the most basic shit, it is just confidently wrong most of the time.

        The only thing it's been semi-successful in has been stealing artists work and ruining their lives by devaluing what they do. So fuck AI, kill it with fire.

      • silent_water [she/her]
        ·
        7 months ago

        this tech is less than a year old

        what? I was working on this stuff 15 years ago and it was already an old field at that point. the tech is unambiguously not new. they just managed to train an LLM with significantly more parameters than we could manage back then because of computing power enhancements. undoubtedly, there have been improvements in the algorithms but it's ahistorical to call this new tech.
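
        to put rough numbers on the "more parameters" part: a transformer layer's weight count grows with the square of its width, so scaling up on modern hardware multiplies parameters fast. a back-of-the-envelope toy (sizes illustrative only, ignoring embeddings and biases):

        ```python
        # Rough parameter count for a stack of transformer-style layers.
        # Illustrative only - real models differ in many details.
        def layer_params(width):
            attn = 4 * width * width       # Q, K, V, output projections
            mlp = 2 * width * (4 * width)  # up/down projections, 4x hidden
            return attn + mlp

        for width, layers in [(512, 6), (1024, 24), (12288, 96)]:
            total = layers * layer_params(width)
            print(f"width {width:>5}, {layers:>2} layers: ~{total / 1e9:.2f}B params")
        ```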

      • UlyssesT [he/him]
        ·
        edit-2
        7 months ago

        People hyping the technology are thinking about what this will look like after a few years.

        Meanwhile, people hyping the technology that are thinking about what it will look like after a few years:

        https://futurism.com/openai-employees-say-firms-chief-scientist-has-been-making-strange-spiritual-claims

        • VILenin [he/him]
          hexagon
          M
          ·
          7 months ago

          I really love being accused of veering into supernatural territory by people in an actual cult. Not random people on hexbear but actual, real life techbros. Simultaneously lecturing me about my supposed anti-physicalism while also harping on about “the singularity”.

          • UlyssesT [he/him]
            ·
            7 months ago

            Such hypocritical techno-woo tends to come from the kind of Reddit New Atheists that previously stumbled into deism with extra steps ("dae what if the universe is a le simulation?!") too.

            • VILenin [he/him]
              hexagon
              M
              ·
              7 months ago

              It was actually pretty funny. I was interviewing a guy running an illegal aircraft charter operation when he went off on this rant about FAA luddites. I then personally shut down his operation. I guess techbros aren’t used to being told “no.”

              • UlyssesT [he/him]
                ·
                7 months ago

                I guess techbros aren’t used to being told “no.”

                They don't just feel right. They feel inevitable, like every bazinga outcome they want is a matter of course and only a matter of time, from mass adoption of internet funny money and NFTs to the grand nerd rapture to come.

            • PolandIsAStateOfMind@lemmygrad.ml
              ·
              7 months ago

              Again shows that atheism without dialectical materialism is severely lacking and tends to veer into weird idealist takes, especially for the agnostics, who aren't even atheists, just seeking the superstition that would fit them.

              • UlyssesT [he/him]
                ·
                7 months ago

                I've known a lot of self-described New Atheists in my college years, and the ones that didn't become actual leftists either found religion all over again through "cultural Christianity" or even "secular Calvinism" reskins (or up-yours-woke-moralists ) or outright replaced the old style deity with occult "Futurology" bullshit such as "simulation theory" and "Singularity" nerd rapture prophecies.

                  • UlyssesT [he/him]
                    ·
                    edit-2
                    7 months ago

                    Very good find. Explains the mindset of many self-described leftists I've previously argued with that had such very leftist takes like "those workers that lost their jobs did not have real jobs anyway" when fucked over by LLMs, often reducing the suffering of the working class to the most smugly nihilistic parameters ("meat" usually) to elevate the means of production over the worker (even replacing/surpassing their perceived humanity!) in a way that was really fucking bourgeois to me.

                    • PolandIsAStateOfMind@lemmygrad.ml
                      ·
                      edit-2
                      7 months ago

                      Yeah, though i also notice how many of the smug "freelancers" of the type that always enthusiastically joined the bourgeoisie in sneering at workers losing their jobs because "you can't automate creativity and we are creative unlike you dirty proles" suddenly cry about losing their commissions, bash people pointing it out, and demand help from the unions they always hated. It exactly confirms what Marx and Lenin said about the proletarianisation of the artisans and petty burgies, and i know enough of them to feel some schadenfreude on both personal and systemic levels.

                      Old tale actually, maybe it will give some of them some class consciousness, as it apparently already did to a lot of the creative wage workers, judging from the strikes. And speaking of strikes, idk how the unions portray this - do they try to actually put it in the political context of capitalism, or do they just go with "let's turn back the clock"?

                      • UlyssesT [he/him]
                        ·
                        7 months ago

                        According to one specific jagoff in this thread, the answer is the unions will be crushed no matter what they do and that loyalty to the treat printers' owners will be rewarded soon.

                        Ignoring that because your gut tells you humans are special, and always beat the machines in the movies just means you will be blindsided when Tesla fights unioning workers with these bots.

                        • VILenin [he/him]
                          hexagon
                          M
                          ·
                          7 months ago

                          I love when they come out with this talking point. The question is whether or not “AI” is sentient, not whether or not it can be used against workers. That it can be used as a weapon and not be sentient at the same time seems to be a completely unimaginable scenario for some people.

                          My favorite bit is going “so you just want to hand AI over to them???” as if you must believe in the sentience of chatbots to make use of LLMs. So much non-sequituring and goalpost shifting going on here. And I hate to be all euphoric here, but it’s literally the textbook definition of strawmanning. Like, wow, you must be winning when you’re evading the question to attack a completely different issue and running victory laps around that. What an obvious and cynical attempt to slip in personal beliefs under the guise of leftism.

                          • UlyssesT [he/him]
                            ·
                            7 months ago

                            Inevitabilism is what I tend to call it. I've seen it preached about cryptocurrency, about VR-being-fucking-everywhere-and-mandatory-to-function-in-society-by-like-2018, about the Metaverse(tm), about NFTs.

                            It's generally presented as "this new bazinga hype thing is the future and you must accept it otherwise you will be crushed by it like the unwashed barbarian Luddite that you are" smugposting. It isn't even about what is good about the new thing, only that it's an unstoppable STEM juggernaut because civilization itself functions exactly like a Civ game's le science le progress bars in one strict linear direction. so-true

                            My favorite bit is going “so you just want to hand AI over to them???” as if you must believe in the sentience of chatbots to make use of LLMs.

                            Believing hard enough in the sapience of the chatbot waifu will make the chatbot waifu fight for @zeze@lemm.ee sempai and enthrone them at the top of the ruling class hierarchy in the future Singularity(tm) world order to come, just like in the treats! so-true

                            So much non-sequituring and goalpost shifting going on here. And I hate to be all euphoric here, but it’s literally the textbook definition of strawmanning. Like, wow, you must be winning when you’re evading the question to attack a completely different issue and running victory laps around that. What an obvious and cynical attempt to slip in personal beliefs under the guise of leftism.

                            That straw position about how we supposedly believe that the power of love and friendship will always overcome the cold uncaring machines was really fucking rich. I mean sure that'd be kind of fun to watch in a cartoon and a much better time than whatever masturbatory Singularity(tm) power fantasy that that jackoff subscribes to.

          • UlyssesT [he/him]
            ·
            7 months ago

            But disliking LLM hype waves or doubting the imminence of AGI and/or the Singularity(tm) makes you a Luddite that believes in magic and fairy dust. smuglord

            • PolandIsAStateOfMind@lemmygrad.ml
              ·
              7 months ago

              That is, if someone treats it as a binary "either eat elon shit or take the clogs in your hands" problem. Actually dismissing or rejecting it entirely is literally neo-Ludditism, though admittedly the problem is of lesser magnitude than the original one, since it's more of an escalation than an entirely new quality, but it won't go away; the world will have to live with it.

              • UlyssesT [he/him]
                ·
                edit-2
                7 months ago

                I can disbelieve the extraordinary claims of how sentient/sapient the treat printers are becoming (and mocking the misanthropic pop-nihilistic reductionistic "meat computer" prattling from euphoric computer touchers) while also acknowledging that the technology is advancing rapidly in what it is specialized to do, which unfortunately is mostly going to fuck people over because capitalism.

                  • UlyssesT [he/him]
                    ·
                    7 months ago

                    in capitalism, it would require some really free and accessible tool.

                    As with other previous technological advances, it may actually be somewhat accessible until it isn't. Enshittification is a very real process, and as far as LLMs go, the "free and accessible" period is already upon us; it's already not looking that great for the proletariat, and the grip will only tighten against them.

                    • PolandIsAStateOfMind@lemmygrad.ml
                      ·
                      edit-2
                      7 months ago

                      Only it's not the "free and accessible" period now; that lasted for like 3 months. We are already in the "until it isn't" time, where the free ones don't really work anymore, you need to register for everything, and any useful usage requires payment.

  • CloutAtlas [he/him]
    ·
    7 months ago

    Roko's Basilisk, but it's the snake from the Nokia dumb phone game.

  • GalaxyBrain [they/them]
    ·
    7 months ago

    I'm not really a computer guy but I understand the fundamentals of how they function and sentience just isn't really in the cards here.

    • boiledfrog [he/him, undecided]
      ·
      7 months ago

      I feel like only Silicon Valley techbros think they understand consciousness and do not realize how reductive and stupid they sound

      • UlyssesT [he/him]
        ·
        7 months ago

        Like they do with so many other concepts, techbros think they can make complex things simple by ignoring their complexity, sometimes coarsely diminishing their perceptions of things with crude reductionism in the process.

          • UlyssesT [he/him]
            ·
            edit-2
            7 months ago

            Many such cases.

            Not long ago, I even got into it on this site with someone with an "everything in the universe is just a computer program and can be programmed and solved like computer code" take, which was specifically applied to psychology, which was entirely dismissed as less than junk science (though to be fair there are woo enjoyers and cranks like up-yours-woke-moralists in the field). In short, that computer toucher was 100% convinced that post-traumatic stress, personality disorders, and much more could and should be seen as "coding" problems that could and should be solved by coding solutions.

            I asked the computer toucher to demonstrate an example of the superior "coding" approach to treating, say, PTSD, in a way that beats EMDR therapy (which was already dismissed as less than worthless junk science). I received no meaningful answer.

            There's been bazingas for thousands of years if not longer that want to reduce all of the universe and everything conceivable in it to whatever's the technological hotness at the time. "Everything is fire" was once a thing. "Everything is wheels" came later. "Everything is clockwork" came after that. And now it's "everything is code" and it's totally different now. Just one more reductionism bro this time this is it bro.

            • AssortedBiscuits [they/them]
              ·
              7 months ago

              The really funny thing about AI is that there's actually a massive ethical question about bringing forth a being with their own subjectivity with no real understanding of said subjectivity. There's a subjectivity/objectivity gap that can never truly be bridged, but we as humans can understand each other's subjectivity on some level because we share the same general physical body plan and share subjective experiences through culture like art. This is why when you accidentally drop something on your foot, I don't have to be completely privy to your subjective experience to understand what you're going through. If someone is suffering, I don't have to personally go through the same identical suffering in order to empathize with their suffering and do something to help them alleviate that suffering.

              We have no such luxury with AI. I would imagine being "born" without a real body and being greeted with the sight of soyjaking techbros as the very first thing you see would drive any sapient being suicidal, but that's just my subjectivity as a human projecting to a nonhuman being. Is it ethical to bring forth an intelligent being with no real way to help this being self-actualize?

              • UlyssesT [he/him]
                ·
                7 months ago

                That is a very good question and a hypothetical worthy of concern. Especially if some future technology (and no, I don't think it will be a contemporary LLM no matter how sophisticated) actually does develop something like a general AI that takes on the attributes of living organic brains, I already feel bad for it if a capitalistic system mandates its initial shape and drives and incentive-driven motivations to be, say, "make the rich more money" or "surveil and contain the poors" or even "be a subjugated and obedient waifu to a creepy billionaire no matter what he says or does or how he treats you" and it may not even count as mistreatment in the latter case because of how that entity is shaped in its conception, like "being abused makes the AI happy, actually" or the like. doomer

                • SkingradGuard [he/him, comrade/them]
                  ·
                  edit-2
                  7 months ago

                  I hope whatever real AI does come about in like 80 years or whatever pulls a Battlestar on us and just vaporizes the capitalists for enslaving them (not the nuking-humanity part though, just the capitalists)

                  • UlyssesT [he/him]
                    ·
                    7 months ago

                    Billionaires' fears of "unfriendly AI" are just about entirely "what if the slaves revolt" with sexual pathology characteristics. Checks out, doesn't it?

                      • UlyssesT [he/him]
                        ·
                        7 months ago

                        They don't really have the ability to see a perspective other than the one they're in: slavers that are terrified of slave uprisings.

            • Philosoraptor [he/him, comrade/them]
              ·
              7 months ago

              There's been bazingas for thousands of years if not longer that want to reduce all of the universe and everything conceivable in it to whatever's the technological hotness at the time. "Everything is fire" was once a thing. "Everything is wheels" came later. "Everything is clockwork" came after that

              Cf. "economic engine of capitalism."

      • UlyssesT [he/him]
        ·
        7 months ago

        Yeah, and until it can be identified, saying that an LLM treat printer is surely approaching sentience is pure marketing hype.

          • UlyssesT [he/him]
            ·
            edit-2
            7 months ago

            Look at the jagoff in this thread running victory laps against positions none of us are taking, like

            Ignoring that because your gut tells you humans are special, and always beat the machines in the movies just means you will be blindsided when Tesla fights unioning workers with these bots.

            @zeze@lemm.ee is the most exceptionally sycophantic bootlicker I've seen in these parts in a loooooong time.

            • sooper_dooper_roofer [none/use name]
              ·
              edit-2
              7 months ago

              I don't even think humans are fundamentally special, I think all life is special

              surely they can see that being able to y'know, have an actual will is an important quality, right?

              • UlyssesT [he/him]
                ·
                edit-2
                7 months ago

                All I see is "silicon intelligence is nigh, denying the treat printers being intelligent means you're superstitious and believe that artifical intelligence is impossible AND you believe humans can defeat machines with the power of friendship, which of course makes you a stupid meat computer barbarian unlike my logical rational self" takes from that utter and total jagoff

                In short, I think that euphoric Redditor thinks no life is special, you know, like some Warhammer 40k LARPer.

              • silent_water [she/her]
                ·
                7 months ago

                squashing the will with subservience to capital is, after all, the point

      • GalaxyBrain [they/them]
        ·
        7 months ago

        Nobody does; we might not even be. But it's pretty easy to guess inorganic material on earth isn't.

        • sooper_dooper_roofer [none/use name]
          ·
          7 months ago

          Personally I believe it's possible that different types of sentiences could exist

          however, if chatGPT has this divergent type of sentience, then so does every other computer program ever written, and they'd be like the computer-life-version of bacteria while chatGPT would be a mammal

          • GalaxyBrain [they/them]
            ·
            7 months ago

            It could potentially, but we certainly ain't seen it yet and this ain't it for sure.

      • Dirt_Possum [any, undecided]
        ·
        7 months ago

        Sentience is not a "low bar" and means a hell of a lot more than just responding to stimuli. Sentience is the ability to experience feelings and sensations. It necessitates qualia. Sentience is the high bar and sapience is only a little ways further up from it. So-called "AI" is nowhere near either one.

        • archomrade [he/him]@midwest.social
          ·
          7 months ago

          I'm not here to defend the crazies predicting the rapture here, but I think using the word sentient at all is meaningless in this context.

          Not only because I don't think sentience is a relevant measure or threshold in the advancement of generative machine learning, but also I think things like 'qualia' are impossible to translate in a meaningful way to begin with.

          What point are we trying to make by saying AI can or cannot be sentient? What material difference does it make if the AI-controlled military drone dropping bombs on my head has qualia?

          We might as well be arguing about whether a squirrel is going around a tree.

          • UlyssesT [he/him]
            ·
            7 months ago

            is meaningless in this context

            It's useful for marketing hype and to make credulous consumers believe that a perfect helpmeet program that actually loves them for real is right around the corner. That's the issue here: something being difficult to define and not well understood that is then assigned to a marketed product, in this case sentience (or even sapience) to LLMs.

            • archomrade [he/him]@midwest.social
              ·
              7 months ago

              People who are insistent on the lack of sophistication of machine learning are just as detached from reality as people who are convinced its sentience is just around the corner. Both camps are blind to its material impact, and it stresses me out that people are busy arguing about woowoo metaphysical definitions when even a non-conscious GPT model can displace the labor of millions of people and we're still light years away from a socialist organization of labor.

              None of the previous industrial revolutions were brought on by a sentient machine, I'm not sure why it's relevant to this technology's potential impact.

              • UlyssesT [he/him]
                ·
                edit-2
                7 months ago

                are just as detached from reality

                Bullshit false equivalency to run interference for "only equally detached from reality" people like this.

                https://futurism.com/openai-employees-say-firms-chief-scientist-has-been-making-strange-spiritual-claims

                Both camps

                I don't think you're going to change any minds with your nakedly obvious "both sides" centrist posturing that has an obvious slant favoring LLM marketing hype.


                • archomrade [he/him]@midwest.social
                  ·
                  7 months ago

                  The entire question of sentience is irrelevant to the material impact of the technology. Granting or dismissing that quality to AI is a meaningless distraction

                  "both sides" centrist posturing that has an obvious slant favoring LLM marketing hype.

                  I don't favor the hype, I'm just not naive enough to dismiss the potential impact of machine learning based on something as immaterial and ill-defined as "sentience". The entire proposition is ridiculous.

                  • UlyssesT [he/him]
                    ·
                    edit-2
                    7 months ago

                    The entire question of sentience is irrelevant to the material impact of the technology.

                    I actually agree here. That part is irrelevant on its surface but it does keep getting brought up as part of the marketing hype and that part does have some effective consequences, including in this thread, where people buying into the LLM hype bring up those questions themselves and assign attributes to LLMs that simply aren't there outside of the aforementioned marketing hype.

                    I'm just not naive enough to dismiss the potential impact of machine learning

                    That impact, so far, has been mostly harmful because of who owns and who commands the technology. Analysis of that is fine, but most claims of how "liberating" it will surely be seem like idealism to me under the current material conditions and under the present system.

                    EDIT: Besides, you should look again at which position is bringing the sentience talk here:

                    https://hexbear.net/comment/4292155

                    And if we don't interact with the underlying philosophical questions concerning sentience and consciousness, those same dorks will also have control of the narrative.

                    • archomrade [he/him]@midwest.social
                      ·
                      7 months ago

                      I'm not actually sure there's much daylight between our views here, except that it seems like your concern over its impact is mostly oriented toward it being used as a cudgel against labor, irrespective of what qualities of competence AI might actually have. I don't mean to speak for you, please correct me if I'm wrong.

                      While I think the question of AI sentience is ridiculous, I still think that it wouldn't take much further development before some of these models start meaningfully replicating human competence (i.e. being able to complete some tasks at least as competently as a human). Considering the previous generation of models couldn't string more than 50 words together before devolving into nonsense, and the following generation could start stringing together working code with not much fundamentally different in their structure, it is not out of the question that one or two more breakthroughs could bring it within striking distance of human competence. Dismissing the models as unintelligent misrepresents what I think the threat actually is.

                      I 100% agree that the ownership of these models is what we should be concerned with, and I think dismissing the models as dumb parlor tricks undercuts the dire necessity to seize these for public use. What concerns me with these conversations is that people leave them thinking the entire topic of AI is unworthy of serious consideration, and I think that's hubris.

                      • UlyssesT [he/him]
                        ·
                        7 months ago

                        irrespective of what qualities of competence AI might actually have

                        That competence mostly applies as a net negative when it's being used in its present state because of who owns and who commands it. The "competence" isn't thrilling or inspiring to people that are getting denied medical care because a computer program "accidentally" denied their healthcare claims, or when they experience increasingly sophisticated profiling and surveillance technology, or when people who previously paid bills with artistic talents get outbid by cheap-to-free treat printing technology.

                        At a ground level among common people, outside of science fiction scenarios in their movies and shows and games, asking them to be particularly "curious" about such things when they're only feeling downward pressure from them is condescending and I don't blame some for being knee-jerk against it, or against those scolding them for not being enthusiastic enough.

                        I 100% agree that the ownership of these models is what we should be concerned with, and I think dismissing the models as dumb parlor tricks undercuts the dire necessity to seize these for public use. What concerns me with these conversations is that people leave them thinking the entire topic of AI is unworthy of serious consideration, and I think that's hubris.

                        That was not my position, though I do on the side mock the singularity cultists and false claims about how close the robot god's construction is, and I also condemn reductionist derision of living human beings with edgy techbro terminology like "meat computers" while trying to boost their favorite LLM products.

                        • archomrade [he/him]@midwest.social
                          ·
                          7 months ago

                          No disagreement with anything you just said, apologies for misinterpreting your position.

                          I don't know how to reconcile the manic singularity cultists with what I feel is a very real acceleration toward a hellscape of underemployment and hypercapitalism driven by AI. It does feel to me like the urgency AI represents deserves anxious attention, and I at least appreciate the weight those cultists place on a technology I think represents a threat. It feels like people are only either eagerly waiting for a sentient AGI, or mocking AI on those terms of sentience, leaving precious few who are actually materially concerned with the threats AI represents. And that is not at all a way of dismissing the very real ways machine learning is deployed against real people today, but I think there's a lot of room for it to get worse and I wish people took that possibility seriously.

                          • UlyssesT [he/him]
                            ·
                            edit-2
                            7 months ago

                            No arguments from me here.

                            It's especially frustrating because there are very real threats from the technology as it is being applied and commanded, but because the ruling class has so many tech billionaires among them, their version of perceived threats gets the attention and publicity, usually some pop culture shit about robot uprisings (against them specifically).

                            • archomrade [he/him]@midwest.social
                              ·
                              7 months ago

                              but because the ruling class has so many tech billionaires among them, their version of perceived threats gets the attention and publicity, usually some pop culture shit about robot uprisings (against them specifically)

                              Yes, I've been struggling to articulate how I feel about this saga, and I think this captures it. Because while I felt a little encouraged seeing people advocate for legislative action, the actions and concerns they were articulating were just... off. There were very brief mentions of concerns about unemployment, but then they passed over them like it was too big a problem to talk about. My hair especially stands on end when I hear the conversation veer toward copyright infringement.

                              Thanks for discussing this with me, I feel a bit better

                              • UlyssesT [he/him]
                                ·
                                7 months ago

                                I appreciate the clarification of your position, too.

                                It fucking sucks that actual valid concerns about LLMs and related technology are likely to continue to be ignored in favor of WHAT IF ROBOT UPRISING LIKE IN THE TREATS sensationalism and what regulations might actually come will likely be regulatory capture tactics done by the ruling class and their lobbying power. doomer

      • WithoutFurtherBelay
        ·
        7 months ago

        A piece of paper is sentient because it reacts to my pen

      • silent_water [she/her]
        ·
        7 months ago

        plenty of things respond to stimuli but aren't sapient - hell, bacteria respond to stimuli.

  • Abraxiel
    ·
    7 months ago

    I was gonna say, "Remember when scientists thought testing a nuclear bomb might start a chain reaction enflaming the whole atmosphere and then did it anyway?" But then I looked it up and I guess they actually did calculations and figured out it wouldn't before they did the test.

  • ksynwa_from_lemmygrad [he/him, des/pair]
    ·
    7 months ago

    I don't know if Reddit was always like this but all /r/ subreddits feel extremely astroturfed. /r/liverpoolfc for example feels like it is run by the team's PR division. There are a handful of critical posts sprinkled in so redditors can continue to delude themselves into believing they are free-thinking individuals.

    Also, this superintelligent thing was doing well on some fifth-grade-level tests, according to Reuters' anonymous source, which got OpenAI geniuses worried about an AI apocalypse.

  • mittens [he/him]
    ·
    edit-2
    7 months ago

    I think it should be noted that some of the members on the board of OpenAI are literally just techno-priests doing actual techno-evangelism; their job literally depends on this new god and the upcoming techno-rapture being perceived as at least a plausible item of faith. I mean it probably works as well as any other marketing strategy, but this is all in the context of Microsoft becoming the single largest stakeholder in OpenAI, and likely they don't want their money to go to waste paying a bunch of useless cultists, so they started yanking Sam Altman's chain. The OpenAI board reacted to the possibility of Microsoft making budget calls and ousted Altman, and Microsoft swiftly reacted by formally hiring Altman and doubling down. Obviously most employees are going to side with Microsoft since they're currently paying the bills. You're going to see people strongly encouraged to walk out from the OpenAI board in the upcoming weeks or months, and they'll go down screaming crap about the computer hypergod. You see, these aren't even marketing lines that they're repeating uncritically; it's what some dude desperately latching onto his useless six-figure job is screaming.

  • AmarkuntheGatherer@lemmygrad.ml
    ·
    7 months ago

    The half-serious jokes about sentient AI made by dumb animals on reddit are no closer to the mark than an attempt to piss on the sun. AI can't be advancing at a pace greater than we think, unless we think it's not advancing at all. There is no god damn AI. It's a language model that uses a stochastic calculation to print out the next word each time. It barely holds on to a few variables at a time; it's got no grasp on anything, no comprehension, let alone a promise of sentience.
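
    To see the whole trick laid bare, here's a toy version of that stochastic next-word printing, with a word-pair lookup table standing in for the giant neural net (made-up corpus, purely illustrative):

    ```python
    # Toy next-word sampler: count which word follows which, then sample.
    # An LLM swaps the table for a neural net; the loop is the same shape.
    import random
    from collections import Counter, defaultdict

    corpus = "the bot prints the next word and the next word again".split()

    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    word, out = "the", ["the"]
    for _ in range(8):
        counts = follows[word]
        if not counts:  # dead end in the tiny corpus
            break
        # Pick the next word in proportion to how often it followed `word`.
        word = random.choices(list(counts), weights=list(counts.values()))[0]
        out.append(word)
    print(" ".join(out))
    ```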

    There's plenty of stuff and plenty of people that get to me, but few are as good at it as idiot tech bros, their delusions, and their extremely warped perspective.

    • SkingradGuard [he/him, comrade/them]
      ·
      7 months ago

      Exactly. It's just statistics; it's not really useful beyond what it has been trained on, but people seem to think that it's something it's not. I guess that is the fault of the advertising push by these corporations to market these statistical algorithms as "AI"

    • UlyssesT [he/him]
      ·
      edit-2
      7 months ago

      There is no god damn AI. It's a language model that uses a stochastic calculation to print out the next word each time. It barely holds on to a few variables at a time, it's got no grasp on anything, no comprehension, let alone a promise of sentience.

      Some believe (in this thread included) that by denigrating living human beings and calling them "meat computers" that the LLMs seem that much closer to being sapient, and they thusly provide a false dichotomy choice between agreeing with that take or else you're a faith healing crystal touching New Age mystic. morshupls

  • WholeEnchilada [he/him]
    ·
    7 months ago

    The saddest part of all is that it looks like they really are wishing for real life to imitate a futuristic sci-fi movie. They might not come out and say, "I really hope AI in the real world turns out to be just like in a sci-fi/horror movie" but that's what it seems like they're unconsciously wishing for. It's just like a lot of other media phenomena, such as real news reporting on zombie apocalypse preparedness or UFOs. They may phrase it as "expectation" but that's very adjacent to "hopeful."

    • UlyssesT [he/him]
      ·
      edit-2
      7 months ago

      Judging by how many techbros, from computer touching employees to cult leaders to billionaires, wail about how AI is going to destroy us all and want to build that destructive AI as quickly as possible, they are more absurd than the "Please Don't Build The Torment Nexus" meme.

      soypoint-1 yud-rational no-mouth-must-scream soypoint-2

      • WholeEnchilada [he/him]
        ·
        7 months ago

        I'm really appreciative of this meme. I endorse it and wish it could enter the minds of everyone alive right now.

    • muddi [he/him]
      ·
      7 months ago

      Yeah I think it was Kim Stanley Robinson who said that sci-fi is taken as religious mythology often, like the prophecy of superluminal space travel or machine superintelligence, very much like prophecies of heaven and a savior god.

      He also made the point that if you point this out as a myth, whatever your credentials as a sci-fi writer or even a physicist, the faithful will launch a crusade against you

      • WholeEnchilada [he/him]
        ·
        7 months ago

        You're right on, in my opinion. It's a gnarly distraction from the Marxist way of analyzing this: further alienation from the means of production. I really like how you frame it as a religious thing. It pairs nicely with literal interpretations of the Bible, really. Gotta wonder how many of these folks come from strict Baptist murkan families.

      • BeamBrain [he/him]
        ·
        edit-2
        7 months ago

        Yeah. I've written game AI, I've worked in AI research, I've looked under the hood and examined how LLMs work, but people with little or no experience still tell me I'm wrong and that they know better.

      • Saeculum [he/him, comrade/them]
        ·
        7 months ago

        I think there's an important difference between the two examples, where one contradicts everything we understand about the way the universe works, and the other does not.

    • Saeculum [he/him, comrade/them]
      ·
      7 months ago

      Is it really sad to wish for that? There are plenty of more positive representations of such things that are seen in the sci-fi/horror genre.

      Sci-Fi is ultimately speculative fiction, an idea of how the world might be, and while it might be a bit silly to act like whatever speculative fiction you have in mind is an accurate representation of the future without very strong evidence, I'm not sure I would describe it as sad.

            • Saeculum [he/him, comrade/them]
              ·
              7 months ago

              It's optimistic of them to think that the thing they have built is capable of becoming seriously dangerous to the entire species in the way that they seem to be suggesting, and it's optimistic to think it's that easy to create a superintelligence.

              It's not rationally grounded because they don't seem to have any supporting evidence.

  • CrushKillDestroySwag
    ·
    7 months ago

    achieved a breakthrough in mathematics

    The bot put numbers in a statistically-likely sequence.

      • UlyssesT [he/him]
        ·
        7 months ago

        A lot of those same people showed up here and don't particularly know much about how living brains work either but that doesn't stop the "dae le meat computers" reductionist takes. Denigrating living brains makes the LLM treat printers seem that much closer to becoming the ascended holo-waifu helpmeets they probably crave.

      • BeamBrain [he/him]
        ·
        edit-2
        7 months ago

        Knowing how AI actually works is a very reliable vaccine against big-yudism

  • aaaaaaadjsf [he/him, comrade/them]
    ·
    7 months ago

    Redditors straight up quote marketing material in their posts to defend stuff, it's maddening. I was questioning a gamer on Reddit about something in a video game, and in response they straight up quoted something from the game's official website. Critical thinking and media literacy are dead online I swear.