Good post by David Golumbia on ChatGPT and how miserable it all is :rat-salute-2:

    • Frank [he/him, he/him]
      ·
      2 years ago

      You're not a stochastic parrot. And claiming or believing you are reveals a deep, fundamental ignorance of how language and cognition work. It also reveals a deep ideology: that human language, cognition, and the ability to work with abstract symbols and semantic meaning are all somehow reducible to some statistically weighted math problems. This despite AI researchers who aren't techbros trying to sell you on Madlibs II: Electric Boogaloo telling everyone for years that modern ML models are not intelligent, do not think, and are not doing what human minds do. This is STEM poisoning. Engineers, or really coders, who don't understand how anything works but believe they know everything because of pro-STEM propaganda, confidently spouting off about unknown unknowns.

      Very suddenly we've gone from "human like ai is decades off if it's even possible" to "this math problem that locates statistical correlations in a big .txt file is basically sentient, bro. Trust me, bro!"
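
      (For concreteness: a toy version of "a math problem that locates statistical correlations in a big .txt file" is a bigram Markov chain, sketched below in Python. `corpus.txt` is just a placeholder path, and real LLMs are huge neural networks rather than lookup tables, so treat this only as a rough illustration of the generate-by-sampling idea being argued about in this thread.)

      ```python
      # Toy "stochastic parrot": count which word follows which in a text file,
      # then babble by repeatedly sampling the next word from those counts.
      import random
      from collections import Counter, defaultdict

      with open("corpus.txt") as f:  # placeholder corpus, not a real dataset
          words = f.read().split()

      # The "statistical correlations": how often each word follows each other word.
      follows = defaultdict(Counter)
      for prev, nxt in zip(words, words[1:]):
          follows[prev][nxt] += 1

      def babble(start, length=20):
          out = [start]
          for _ in range(length):
              options = follows.get(out[-1])
              if not options:
                  break
              choices, weights = zip(*options.items())
              out.append(random.choices(choices, weights=weights)[0])
          return " ".join(out)

      print(babble(random.choice(words)))
      ```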

        • Frank [he/him, he/him]
          ·
          2 years ago

          Okay, so you're in the grip of unknown unknowns. You don't know you're wrong because you're not sufficiently familiar with the material. Private meditation is not sufficient for understanding or discussing language, perception, cognition, or really anything. You're not "making things up". There are a variety of models, but one that I favor suggests that your brain is made up of many non-conscious modules or agents that interact to produce higher-level speech, utterances, behaviors, whatever. Your conscious self doesn't know what's going on down there, but those modules are thinking and engaging in complex decision making. The same way that a person may have never heard of calculus but can perfectly map the trajectory of a thrown object in 3D space without being consciously aware of how they're doing it.

          They're handling the grammar, the vocabulary, cross referencing information in your memories, evaluating what is and isn't significant, and applying other processes that you don't need to be consciously aware of. You're probably aware from your meditative practice that things go a lot smoother when you're not acting consciously. You're confusing a lack of consciousness for a lack of complexity. The non-conscious parts of your brain, the parts that handle the majority of our cognitive functions, are very smart. They just don't report things to your conscious self unless high-level executive function is needed.

          Also, definitions; the unitary self is illusory. Sentience, the ability to feel and perceive, is not. It's a very important distinction.

          • fratsarerats [none/use name]
            ·
            edit-2
            2 years ago

            Also, definitions; the unitary self is illusory. Sentience, the ability to feel and perceive, is not. It’s a very important distinction.

            Sounds like this guy (mr meat bro) has been watching too many Sam Harris videos and thinks that he's some kind of mantra master observer or something 🤣

    • mittens [he/him]
      ·
      edit-2
      2 years ago

      Same for the stochastic parrot thing. I’m a stochastic parrot, so what

      The only count on which I disagree here is that calling us stochastic parrots in the same way that ChatGPT is a stochastic parrot is vastly overselling existing technology. It's literally a claim made by the CEO of the AI company, so it's probably worth being more than a little bit skeptical. In fact I'd go as far as claiming that artificial intelligences deriving actual meaning is the last frontier of AI, a problem that, to my knowledge at least, can't even be properly conceptualized yet.

      • Frank [he/him, he/him]
        ·
        2 years ago

        It reveals a fundamentally incurious ignorance about how language and cognition work. It's such a patently ridiculous statement that it could only possibly have come from a STEM-poisoned tech CEO who hasn't read anything but bazinga hype articles since high school.

    • BodyBySisyphus [he/him]
      ·
      2 years ago

      Mostly with you, but I think it's fair to say there's a qualitative aspect to cognition and consciousness that our tech overlords don't seem to get - the difference between existentialism and nihilism is that the former embraces the possibility that humans can create and enact meaning. Yeah, you can clearly get pretty far with statistical models, and maybe the universe is deterministic and our experience is just the product of particles following concrete physical laws, but I think concluding that you're a stochastic parrot on the basis of the existence of ChatGPT is an overreach.

      • Frank [he/him, he/him]
        ·
        2 years ago

        Insofar as I understand anything at all about quantum mechanics, my understanding is that it strongly suggests the universe is not deterministic.

          • space_comrade [he/him]
            ·
            2 years ago

            on a physical level the orthodoxy (fwiw) is that the brain is far too “warm, wet, and noisy” to harness quantum effects on a macroscopic scale

            Maybe, I don't think we can be 100% sure of that though, there are indications of this not being true: https://physicsworld.com/a/do-quantum-effects-play-a-role-in-consciousness/

              • space_comrade [he/him]
                ·
                2 years ago

                Just read the article, it's not that long. Basically it's not really clear cut and there are plenty of unknowns.

          • Frank [he/him, he/him]
            ·
            2 years ago

            on a physical level the orthodoxy (fwiw) is that the brain is far too “warm, wet, and noisy” to harness quantum effects on a macroscopic scale

            Really? I thought they were working on the idea that consciousness might rely on some spooky action at a distance stuff?

    • ssjmarx [he/him]
      ·
      2 years ago

      Your first paragraph is a semantic argument that has no bearing on the author's thesis. It doesn't matter if meaning is inherent to human life or decided upon by the humans themselves, the argument is that AI art models implicitly put forward the notion that creativity is just putting pixels on a screen or words on a page - but actual artistic expression requires more than that. Even if an AI generates a series of words that are indistinguishable from poetry written by a person, that AI has at no point engaged in the activity of "writing poetry".

    • Frank [he/him, he/him]
      ·
      2 years ago

      AI cannot "write better poetry than you" unless you reduce poetry to random arrangements of words that you think sound nice. Unless you think that the semantic content of poetry is totally irrelevant. Unless you think that language is still language when it doesn't convey meaning or have any semantic content at all.

      Only in the sense that an AI can produce a novel arrangement of words, and we reduce poetry to novel arrangements of words? But language isn't just reproducing noises. A lyrebird is not talking or communicating or capable of speech. It's just repeating things it's heard with no understanding of what those things are. We are not lyrebirds.

        • Frank [he/him, he/him]
          ·
          2 years ago

          Dadaism and its consequences have been a disaster for human civilization.

          Also, I disagree with your definition of poetry as, apparently, "any novel combination of words, including those without semantic meaning". At some point you need to draw a distinction between "poetry" and "any utterance" or the term becomes pointless.

          If a meaningless arrangement of words, chosen by statistical prevalence in a dataset, is poetry, then what isn't?

          • iie [they/them, he/him]
            ·
            edit-2
            2 years ago

            meaningless arrangements of words

            this is kinda verbal sleight of hand imo. i'm not here to argue or to defend ai, just gonna chime in real quick.

            when you say "meaningless" up there, you mean there is no intent behind the text. but calling a text "meaningless" would usually imply "text that does not make sense or contain information."

            if you read a poem and you feel something, and you can imagine the scene, then that poem meant something to you, no matter how it came to exist. the poem held information that you parsed and felt.

            imo we should be careful not to mix up statements about the act of writing and the output of writing. ambiguity like that leads to endless disagreement and frustration in discussions.

            also:

            chatgpt has never felt or lived, but it has processed lots of writing from humans who have felt and lived. we could argue those people are the real authors of whatever chatgpt writes. chatgpt is an algorithm for imitating and remixing what humans have written before. so even if your point is "text cannot have meaning unless a human wrote it" then chatgpt still kinda passes the test. kinda.

            this is just an aside though. my main point was earlier.

  • Spectre_of_Z_poster [they/them]
    ·
    2 years ago

    Marxists: Capitalist technological creation and fixed capital accumulation and automation will lead to mounting contradictions and be the eventual base of a fully automated socialist society

    Also Marxists: No, don't create automation don't accumulate fixed capital or advance technology. Let's remain stagnant in 20th century technology forever

    • drhead [he/him]
      ·
      2 years ago

      And yet if you call someone anti-materialist over this, it breaks their mind for weeks.

      • Frank [he/him, he/him]
        ·
        2 years ago

        Not wanting your quality of life to significantly degrade because tech bros are stealing the commons again is anti-materialist, got it.

        • drhead [he/him]
          ·
          2 years ago

          It is if you don't have any realistic plan whatsoever to actually eliminate the problem, and instead choose to endlessly complain about it.

          • ssjmarx [he/him]
            ·
            2 years ago

            The coming climate apocalypse is bad.

            You moron. You swine. You sniveling worm. Can't you see that pointing out that something is bad when you can't fix it is anti-materialist!?

            • drhead [he/him]
              ·
              2 years ago

              That analogy doesn't work because you can solve climate change by drastically reducing greenhouse gas emissions. You can't "solve" the existence of deep learning tech except by somehow deleting every trace of its existence and somehow preventing people from creating it again.

              • Frank [he/him, he/him]
                ·
                2 years ago

                In the abstract, using math problems to do creative work is tacky, banal, and evidence of a lack of imagination and seriousness. But that's not a problem. There have been crass philistines who shouldn't be invited to parties since the dawn of time. Humanity will survive.

                In this specific case math problems are being used to immiserate working class artists and reduce everyone's quality of life by replacing creative work with shitty madlibs copypasta. The solution, surprise surprise, is to abolish the economic form that incentivizes the private accumulation of capital! Ie, abolish capitalism.

                Because this is literally just another example of capital seizing the commons in order to profit from it.

                Oooh oh shit he's got a materialist analysis of the problem noooo mah reductive retreat into scientism to avoid discussing icky feels! What will I do now?!

              • ssjmarx [he/him]
                ·
                2 years ago

                I wasn't equating the two problems, I was pointing out the absolute ridiculousness of the assertion that pointing out problems with society is somehow anti-Marxist. Literally the main thing Marx did was point out problems in society!

                • Spectre_of_Z_poster [they/them]
                  ·
                  edit-2
                  2 years ago

                  Except Marx’s proposed solutions were to use the trends and directions of capitalist society against it, to allow its own contradictions to defeat it. Not to criticize and whine and try to freeze things in place somehow; those who try to regulate and freeze capitalism in place are reformists and are doomed to failure.

                  Artificial Luddite reaction against AI automation can only slow it temporarily, not stop it globally. It’s inevitable and the end point of capitalism. There’s nothing you or I can do to stop it, and in fact it’s necessary for socialism for these productive forces to advance to the point of post-scarcity. We simply need to seize them.

                  So it’s not useful to scaremonger about the technology itself, it’s only useful to promote the seizure of the technology for our own ends.

                  If you understood Marx and Historical Materialism, this would be more clear. Capitalism is not ontologically evil; it has no moral character, it is merely a necessary stage before socialism. This is where a lot of left moralists diverge from or misunderstand Marx, and why many are also too harsh on the PRC.

                  • ssjmarx [he/him]
                    ·
                    2 years ago

                    seizure of the technology for our own ends.

                    What ends could there possibly be to seizing ChatGPT? That we can generate meaningless blocks of text more quickly?

                    The capitalists will use this and other generation tech to eliminate jobs, naturally - socialists, whose society should not be enslaved to market dynamics, should be able to recognize that this tech is destructive and eliminate or severely restrict it accordingly.

                    • drhead [he/him]
                      ·
                      2 years ago

                      So it's a technology that can eliminate jobs, but it's also entirely useless in the hands of a worker-controlled economy? Only one of these two things can be true.

                      • ssjmarx [he/him]
                        ·
                        2 years ago

                        You misunderstand. AI generation doesn't meaningfully replace art, but it can substitute for art in contexts where volume trumps content, and if it is embraced on an industrial scale it has the potential to permanently damage art as an institution. Under capitalism, where raw output is a consideration of every artist who needs to be able to make a living, the faux art generated by algorithms will inevitably be mass adopted regardless of the damage it does to society.

                        • drhead [he/him]
                          ·
                          2 years ago

                          "the damage that it does to society"

                          What damage? If AI art existing and some people ascribing meaning and/or value to it is giving the writer of this article (or anyone else) an existential crisis, that is a personal problem, not societal damage. It isn't a "precursor to fascism" like this absolute shitpost of a Medium article suggests.

                      • Florist [none/use name]
                        ·
                        2 years ago

                        The way to square that circle is to say that the jobs it's eliminating are useless lol

                        • drhead [he/him]
                          ·
                          2 years ago

                          But if the jobs are useless, then you're still not really any better off by banning it under socialism. And which jobs, exactly, are being considered useless here? Aren't we worried about artists mainly?

                • Spectre_of_Z_poster [they/them]
                  ·
                  2 years ago

                  Love that the people with Marx in their name never understand Marx.

                  Moralism? Check. Luddism? Check. Failure to understand historical materialism and capitalism’s role in developing automation and production? Check.

                  • ssjmarx [he/him]
                    ·
                    2 years ago

                    You're inventing specters of points I never made. The things I've said are a) Marxists should critique society and b) AI generation tech has no place in a post capitalist world.

                      • ssjmarx [he/him]
                        ·
                        2 years ago

                        All technology is good all the time and it should always be adopted by society without restriction

                        Capital Vol IV

                        • Spectre_of_Z_poster [they/them]
                          ·
                          2 years ago

                          Marx was famously against capitalism developing productive forces. He wanted to destroy it and live in 1800s technology forever, he was Amish

                          • ssjmarx [he/him]
                            ·
                            2 years ago

                            Lemme try an analogy. Imagine that the capitalists invented a machine that makes sand, which they then sold as food. This machine would not be worth reproducing under socialism.

                            • Spectre_of_Z_poster [they/them]
                              ·
                              edit-2
                              2 years ago

                              If it’s as bad as you say, you have nothing to worry about. Human artists will keep their jobs because the sand gives no nutrients.

                              The very fact that you are so scared of this technology belies your point that it’s totally useless and doesn’t produce anything of value.

                              You are contradicting yourself. Is AI automation going to replace all the food production in the world with sand? That makes no sense and nobody will buy it or use it.

                              If the AI and automation is a viable threat to you, that is because it is creating socially necessary outputs and you fear being replaced.

                              • ssjmarx [he/him]
                                ·
                                2 years ago

                                You misunderstand. AI generation doesn’t meaningfully replace art, but it can substitute for art in contexts where volume trumps content, and if it is embraced on an industrial scale it has the potential to permanently damage art as an institution. Under capitalism, where raw output is a consideration of every artist who needs to be able to make a living, the faux art generated by algorithms will inevitably be mass adopted regardless of the damage it does to society.

                            • drhead [he/him]
                              ·
                              edit-2
                              2 years ago

                              AI chatbots can be useful for some things. They do have limitations though. Maybe if they could be connected to the internet without instantly turning extremely fascist, as opposed to the normal, statistically average level of fascism they get from a single dose of the internet (in addition to a number of other re-education measures), they'd be a lot more capable and could serve as a good alternative to search engines.

                              Deep learning technology as a whole is something that you would absolutely fucking want if you had any intention of running a planned economy, and it has a number of other applications. These require the same tools and resources to produce and run, and they operate off of similar principles.

              • ssjmarx [he/him]
                ·
                edit-2
                2 years ago

                pointing out bad things is anti-Marxist

                I would argue that pointing out bad things is an essential first step in Marxism.

                • Spectre_of_Z_poster [they/them]
                  ·
                  edit-2
                  2 years ago

                  You are basically just being a Bernstein complaining about monopolies and trying to pass anti-trust legislation to protect small businesses.

                  As Lenin pointed out, monopolization is inevitable under capitalism and its logical end point. It’s also necessary, as only monopolized industry can be easily seized. Capitalism creates the conditions of its own defeat and builds the base of the next system to come that will replace it.

                  Just like this, automation is inevitable under capitalism and its logical end point. It’s also necessary, as only a post-scarcity and automated base can enable socialism. Capitalists build the base and develop productive forces, and then once capitalism becomes mired in its own crisis it can be seized for the replacement.

                  This is the Marxist conception of capitalist development into socialism. You are simply a reactionary or an idealist if all you can do is whine about the inevitable direction of capitalism and attempt to hold it in stasis instead of using the contradictions of capitalism against it.

                  Trust busting is Liberal policy and anti-Marxist, as it preserves capitalism for longer. Policies meant to hinder or prevent automation, or to regress to an earlier period of technology, are Liberal policies and anti-Marxist, as they preserve capitalism for longer and prevent the necessary development.

                  • ssjmarx [he/him]
                    ·
                    2 years ago

                    Actually, Marxists should let capitalists do all the bad things and never oppose them.

                    V I Lenin

                    • Spectre_of_Z_poster [they/them]
                      ·
                      edit-2
                      2 years ago

                      You don’t need to aid the capitalists in automating things or monopolizing things, they will do it on their own no matter what you do.

                      The solution however is not for the dispossessed petty bourgeois to implement anti-trust and freeze capitalism in place, nor is the solution for the unemployed workers to destroy technology through Luddite reaction. These are both failed and doomed strategies. They are ultimately reactionary, in that they refuse to deal with the progression and development of society and seek to freeze it in place forever (not even possible, it will fail).

                      The solution is to organize the dispossessed and unemployed into seizing these monopolies and automated technologies for themselves, and to cut the reactionary idealist shit that completely ignores the trends of capitalism, historical materialism, and development.

                      • ssjmarx [he/him]
                        ·
                        2 years ago

                        Why would the dispossessed masses ever join with the communists to seize the means of production if the communists haven't taken the time to articulate their better world? Nobody will want to be our friends if we just reject all critique and say "artists losing their jobs is good actually"

                        • Spectre_of_Z_poster [they/them]
                          ·
                          edit-2
                          2 years ago

                          You don’t have to say workers or artists losing their jobs is good.

                          You say “artists losing their jobs is the inevitable consequence of our capitalist society. Let’s seize the automation and share it among ourselves, and use these productive forces for the good of all humankind instead of just a small clique of owners”

                          The difference is important, because your proposed solution is a dead end doomed to failure and people will stop listening to communists if your solution is Luddism that fails over and over. You just want to ban AI within the context of capitalism to ameliorate harm within capitalism temporarily. Someone else in another capitalist nation will just surpass you and do it anyway. You will become obsolete and stuck in old technology while the world moves on without you, and you will be frozen in a less advanced form of capitalism still, until it buckles because other capitalists are outcompeting you.

                          • ssjmarx [he/him]
                            ·
                            2 years ago

                            your proposed solution

                            I never proposed a solution, aside from a post-revolutionary hypothetical in another branch of this comment tree. I took issue with what I perceived to be the assertion that Marxists shouldn't analyse and complain about things.

        • Spectre_of_Z_poster [they/them]
          ·
          2 years ago

          Basing things on what you “want” and not what is actually possible is idealism yeah. AI is coming and there’s nothing we can do about it except seize the infrastructure when the time is right, and use its power for ourselves

          • Frank [he/him, he/him]
            ·
            2 years ago

            Y'all really hate the idea that there might be anything in life that can't be discussed in the language of heavy industry, don't you? Like someone starts talking about feels and it's like "No! You fool! You moron! The only things worth caring about are production throughput in tractor factories! How could you possibly care about anything else at all?!?!?!"

            • Spectre_of_Z_poster [they/them]
              ·
              2 years ago

              I’m just a pragmatist who deals with the actual trends of capitalist economy and society and not just what I feel like I want

            • KollontaiWasRight [she/her,they/them]
              ·
              edit-2
              2 years ago

              A) Why are you worried about these things? They live in the deepest and most unsettling parts of the uncanny valley. From an artistic perspective, they're a joke.

              B) Material conditions change. You can't pretend them away or waste your time trying to stuff cats back in bags. That way lies the failure of Luddism: it had no answer to the looms beyond their destruction, which meant it was doomed to failure. You have to find a way to use the thing in a just manner, not smash it and expect it to just go away. Luddism's problem isn't that it doesn't identify a real and significant harm, it is that the answer it presents to that problem is the political-economic equivalent of sticking your fingers in your ears and shouting "I can't hear you!"

              C) You can feel however you like about a thing. Feeling that way is ultimately unrelated to the conditions and technologies that led to its creation, however, and it isn't cause to pursue a quixotic task.

            • space_comrade [he/him]
              ·
              edit-2
              2 years ago

              I'll admit marxists sometimes do this but honestly in this case I don't really see a problem.

              This whole anti AI trend is reactionary to the core, it's based on nothing other than "the machines took our jobs" and "machines cannot feel therefore machine art BAD".

    • KollontaiWasRight [she/her,they/them]
      ·
      2 years ago

      in their minds perpetual reliance on creativity is bad engineering

      That would be because it is. Using new code to solve a known problem is little more than an investment in creating defects. Sometimes you have to do it for reasons outside of the context of it being new, and every software engineer should know how to implement the core functionality related to their area of expertise in order to understand why it might not work, but writing new production code for the sake of new code is irresponsible. If you want to write fun, creative code, just don't put it in something that is actually intended to work.

      That said, many developers do celebrate the creativity of putting the pieces together to do something substantively new. It's a rush. It leads to a bit of a god complex, and too many software engineers refuse to be responsible for managing their own brainworms, but it is a common motivator of SWEs.

  • jackal [he/him]
    ·
    edit-2
    2 years ago

    When I think about generative AI, I don't feel like it's an attack on human creativity. As with every technology before it, especially the rise of computers, artists adapted and harnessed the new technology to create new art. It's going to be the same with generative AI. It will be a tool, just like synths or 3D renders or any other digital processing.

    edit: Also, the point of art (in my opinion) has never been the artwork itself, or at least not only the piece of art in isolation, but how it relates to the artist or to lived experience.

    • mittens [he/him]
      ·
      edit-2
      2 years ago

      Indeed, I consider AI-generated art to be kinda like readymades, really. Art is not about the piece but about the piece being observed; that's where meaning occurs. In the same way, the babble AI generates acquires meaning when it's read by us: it didn't intend to have any meaning at generation, it was given meaning by the spectator. Same with art.

      • Spectre_of_Z_poster [they/them]
        ·
        edit-2
        2 years ago

        So if the AI-generated content is sufficiently advanced, and you are not aware it's AI-generated when you interpret it and give it meaning, does it then become art?

        You are saying something different than the commenter above. The original comment stated art gains meaning from the artist, and you are saying that art gains meaning from the audience. If it’s the latter, and the audience isn’t aware the creator is AI, then it becomes art just like any human made content.

        The difference between AI and human artists is only meaningful if you believe art gains meaning from the artist.

        • mittens [he/him]
          ·
          edit-2
          2 years ago

          I made a mistake: I intended to specify "readymade art" in the second sentence, it's not meant to cover all art, sorry 😅. I realize now that I did state something different from the post I was replying to, but I do stand by the statement that it gains meaning from the audience. In the same sense, readymade art pieces are not originally intended to convey artistic intent when produced, only when they're displayed. The mere act of being displayed and spectated as a display is where the piece (the infamous urinal, for example) acquires meaning. I know it's a controversial opinion though.

          This isn't even a novel problem in art. I'd argue it's pretty much the same as Xenakis' stochastic compositions and John Cage's experiments with composing by using the I Ching; they were trying to divorce the artist from the art. The difference is that the AI pieces just happen to be rather pleasing on an immediate level. Urinals, stochastic music, 4'33'' were, uh, not pleasing. If the implication is that AI art is essentially the same as human-made art, that context and curation are the entire difference, then yeah, it's something art has been contending with for decades.

          It should be mentioned that this would be a better discussion if the obvious threat of being automated out of a job wasn't looming over our heads. I think it injects too much consequence into what's otherwise a conversation a bit divorced from normal life.

  • KollontaiWasRight [she/her,they/them]
    ·
    2 years ago

    There are good arguments against the current direction of AI development, but only one of them makes a brief cameo in this piece (AI reifies social inequalities and bigotries and further refines them). Missing is what ought to be obvious: these models are hot garbage. The creative product of their "work" is bad. Look at that shitty racism rap - ignore the racist and sexist content and just look at it from the perspective of writing lyrics. It fucking sucks. It has an at-best-loose understanding of meter, rhyme seems to exist purely to make its phrasing maximally awkward, and it uses no real poetic technique. The only lyrics it could actually replace are the random interjections of European techno producers. Then go look at the "art" these things produce. It's complete shit. It's just a reproduction of an idiot's understanding of what an image is supposed to be. At its absolute best, it isn't good enough for a coffee-table book of mediocre art.

    As a programmer, these tools are vaguely useful for some boilerplate code when monitored, but most of the code it spits out either doesn't work or reflects the reality that the model does nothing but put together words it thinks are related, with no understanding of the underlying use of the code in question. It only performs well when you give it a purely abstract exercise. Start using it for anything real, and you'll be rewriting 70% of the code it gives you.

    • BynarsAreOk [none/use name]
      ·
      2 years ago

      You are correct but I think in these discussions there is an assumption that AI tooling will eventually become good enough to overcome those problems.

      • KollontaiWasRight [she/her,they/them]
        ·
        2 years ago

        I'm not particularly worried about that, tbh. These models don't understand why we put the things together that we put together, just that we do. They can duplicate the things we do, but that doesn't mean they can duplicate the subtextual conversation between the reader/viewer/listener and the art that makes art art in the first place.

  • UlyssesT [he/him]
    ·
    edit-2
    2 years ago

    I think I'll take a break from this one, after some parting thoughts.

    I don't believe in a "soul" or any particularly special cosmic importance assigned to human beings or any particular divine presence that assigns such importance to human beings. That said, crude reductionism plays into the hands of the ruling class and is far more often used to degrade and denigrate "the other" than it ever actually meaningfully challenges the arrogance and the abuses of the ruling class. Look at how many of them (and their enabling minions) see the rest of us as "NPCs" or even "husks" to borrow a term from the "Effective Altruist" eugenicists discussed here a few days ago.

    If we're all supposed to call our individual consciousness "an illusion" on some deterministic grounds and then stretch that stance further to state that we can (and by implication, should) be easily replaced with chat programs and the like, why fight the ruling class at all? Why not just let the superficially more efficient computers do their thing, and let the owners of such computers, simply "stochastic parrots" like the rest of us (even if they clearly get a lot more power and privilege with their own "parroting"), rake in those profits at all our expense? Why don't we just let them do as they please and instead lay down and die, because we have no more right to exist than the machines that the ruling class owns for purposes of replacing us, one purpose at a time? It doesn't matter how we suffer because our lived experiences are all just an illusion, right? :galaxy-brain:

    If and when society is improved somewhat and suffering is reduced, I'll be more open to discussions of how we're all "meat computers" in "meatspace" and what meaningful consequences (if any, really) such reductionism entails for how we should think about ourselves and each other. Until then, it's just another kind of alienation and insulation: a privileged position from which to watch others suffer while adhering to crude and nihilistic yet ironically lofty perspectives on the human condition, perspectives that usually further demoralize the proletariat instead of motivating them to fight back, and that, materially speaking, do fucking nothing to improve their conditions.

    That's why so many right-wing techbros like to talk about "meatspace" and "meat computers," I argue. The perspective does not threaten their power and in their interpretation actually strengthens their grip.

    Parting thoughts over.

    :manhattan:

      • UlyssesT [he/him]
        ·
        edit-2
        2 years ago

        is deeply equalizing

        You clearly and completely missed my point if you can look at the gross injustices of the world and the growing contradictions of capitalism, see the machines owned by the ruling class as at least people's equals in their right to exist, and declare "this is equalizing!" while the people are trampled over by their rulers, "equal" as meat and/or parrots or whatever reductionist descriptions you feel privileged in using.

        I hold no special metaphysical fondness for Buddhism. It's often another alienating system of maintaining the status quo and expanding the power of the ruling class, especially as applied by rulers that have adopted it historically.

        I said I wanted to take a break because it's almost always :wall-talk: dealing with arguments like yours. I'm more Continental in my philosophy about what is happening and what ought to be done and less about quaint and privileged abstracts about "what is, is."

        I predict this is going to go nowhere except, maybe, name calling. You did make a new account specifically to pitch your take in this thread, after all. I really don't want to get into this further right now.

          • UlyssesT [he/him]
            ·
            edit-2
            2 years ago

            We probably agree on what is to be done,

            Not necessarily.

            "Your poetry is just parroting, just like the machine replacing you" provides no comfort and for that matter no hope and no way forward for the poet being replaced, and on those grounds I argue it can even be counter-revolutionary.

            I could expand that philosophy further into telling a laid-off factory worker that the robot that replaced their post on the assembly line simply did the job better and therefore they should have no grievance, instead of asking that worker why the owner of the machine is the only one who significantly benefits from the machine's labor, and then suggesting what is to be done from there. If I told that worker that they are literally the equal of (or, by implication of efficiency, lesser than) the machine that replaced them in the factory, and that they should therefore feel some sort of solidarity with that machine and be happy that their existential equal took over the job, even while it's owned by someone who gets all the machine's benefits, that worker would be demoralized or pissed off, and I wouldn't blame them.

            The "path" you suggest is a potentially ruinous one on those grounds when it comes to any future possibility of defeating capitalism before it destroys us all, "meat" or "parrots" or whatever.

            I really don't want to get into this any further.

  • Simferopol [none/use name]
    ·
    edit-2
    2 years ago

    dont care, i love that thing, it gives great advice (sometimes). even though it's by default very neolib.

  • usa_suxxx [they/them]
    ·
    2 years ago

    I don't really see what problem this solves other than the ability to sell you more slop and create the advertisements for more slop. From what I have seen, there is no guarantee of correctness on technical matters. It doesn't feed or clothe people. So I kind of always feel odd when people say it's un-Marxist to be against AI art or ChatGPT.

    • Spectre_of_Z_poster [they/them]
      ·
      edit-2
      2 years ago

      Chat GPT can write code. It can debug code. It can design websites. It can translate language better than any automated language translation services. I fail to see how this doesn’t automate socially necessary work and solve problems

      • usa_suxxx [they/them]
        ·
        2 years ago

        The legacy code creator. Debugging from scratch every time. Everyone's favorite activity.

        • Spectre_of_Z_poster [they/them]
          ·
          2 years ago

          AI has improved exponentially in just the last year. You are completely blind if you do not see the potential this has to basically eliminate nearly all white collar work as it becomes even more sophisticated

          • mittens [he/him]
            ·
            2 years ago

            I mean, I'm a bit skeptical, because most coding work is actually eternally debugging shitty code. I barely write stuff at my current work; I'm lucky if I get to do more than 20 lines a week, because most of the day is spent browsing undocumented garbage finding the most appropriate place to add my fix in. Leaving that aside, we have always been at risk of white collar jobs being either lost, or precarized, or outsourced, or being flooded with too many workers, or at risk that the bubble will burst, fundamentally undermining the value of IT forever, or all of them happening at once. This AI shit might be a catalyst, but what has meaningfully changed in the working relationship? We were ALWAYS at risk.

      • drhead [he/him]
        ·
        2 years ago

        Let's be honest. ChatGPT is copying code snippets from StackOverflow with varying levels of correctness. I guess that is what people were doing anyways though.

        • Spectre_of_Z_poster [they/them]
          ·
          2 years ago

          No, it isn’t connected to the internet any longer, and it creates novel code for requests in plain English that are extremely specific and niche.

          • space_comrade [he/him]
            ·
            2 years ago

            The more specific you get the more wrong it tends to get though, it really starts messing up on the small details.

            • Spectre_of_Z_poster [they/them]
              ·
              2 years ago

              And this time last year it was much worse and couldn’t even do the more generic requests. It will continue to refine.

              Remember, automation doesn’t need to be better than humans at coding. It just needs to be good enough to function, and then it will take over because it’s basically free

          • mittens [he/him]
            ·
            edit-2
            2 years ago

            It's not connected to the internet, but the model was trained (and is constantly re-trained) on shit scraped from the internet, which makes the distinction meaningless.

    • drhead [he/him]
      ·
      2 years ago

      What does "being against it" do, though? What specific actions would you take in opposition to deep learning tech?

      I'm strict on calling it un-Marxist because carrying out an anti-AI programme would rely on either an unsustainable unending struggle against everyone trying to recreate it, or going full :a-guy: and bringing us so far back into the Stone Age that we can never reindustrialize again.

      Most of the problems that people describe with deep learning tech, including what you're describing, are problems with the system that it exists within, not problems with the tech itself. The abolition of capitalism is the only sustainable and permanent solution to the problem, and would be one that allows humanity to fully realize its benefits with few adverse consequences.

      As of right now, I do not think any opposition to AI will actually benefit workers in any way -- the most likely outcome would be that huge media companies end up being the only people able to effectively use the technology, which will result in most of the job eliminations we would hope to prevent happening anyways. It's a fight between media companies wanting stronger copyright (look up the Mickey Mouse curve -- we're due for another expansion of copyright) and tech companies wanting to sell ridiculously overpriced cloud services, and regular artists don't get a seat at this table under our current system.

    • edge [he/him]
      ·
      edit-2
      2 years ago

      From what I have seen, there is no guarantee for correctness on technical matters.

      But it comes close, which makes it a useful tool. A programmer can get it to generate some code, then go through and make sure it's good. I've used it for that: there was one problem it wasn't able to help me solve, but there was another where it probably saved me a good hour or two (probably more, if I'd lost focus because of untreated ADHD) of trying to find whether someone else had my specific problem, or else breaking it down into more generic problems.

      Similar goes for art. Artists can use an AI generated image as a base and work from there.

  • edge [he/him]
    ·
    2 years ago

    Counterpoint: it helps me with programming problems.

  • BynarsAreOk [none/use name]
    ·
    2 years ago

    My take remains the same: the current way in which capitalists will use AI is bad. But there doesn't seem to be a solution that doesn't just end up on a slippery slope, and it still doesn't address the elephant in the room, which is how you actually enforce this.

    Society would have to come around and see AI art as being as reprehensible as child porn, so that not only can you get a broad international legislative consensus against it, but you can also make sure capitalists enforce that legislation.

    So will we get the same consensus with AI tools? It's a rhetorical question, isn't it? We can't get people organized to do any meaningful climate change praxis, and that is an existential threat.

    We only have one recent example of something becoming socially taboo in a short amount of time, and it was NFTs. If you can convince the entire art community to organize and oppose AI art, then maybe AI art could end up just like NFTs. Actual professionals and people involved in the community are the ones who should be convinced to be against it. Wasting time arguing with the average person won't change anything if half the art community is split on the issue.

  • hahafuck [they/them]
    ·
    2 years ago

    I don't think there is ever going to be a good reason to use a nuclear bomb