• TreadOnMe [none/use name]
    ·
    edit-2
    1 year ago

    It is truly amazing that these idiots have deluded themselves into thinking that an LLM is intelligent, rather than just giving whatever responses generate the feedback it's trying to get. That means interacting with humans and human content while having zero idea what it is talking about, with no way to effectively check its frame of reference.

    It's no wonder they all talk like fucking engineers who have never actually read a fucking book outside of Atlas Shrugged in their life.

  • GnastyGnuts [he/him]
    ·
    1 year ago

    Even a few months ago this stuff was getting basic math problems wrong, and people were like "no, it's okay because if you already know math, you can recognize the incorrect answers and tweak the prompt until you get the answer that you know is correct if you already know math, and that means this can replace math teachers!"

    The closest this AI shit has come to sapience is just in making people into absolute fucking idiots.

    • AbbysMuscles [she/her]
      ·
      1 year ago

      The closest this AI shit has come to sapience is just in making people into absolute fucking idiots.

      AI will indeed rival human intelligence. Not because AI is getting smarter, but because techbros are just that dumb. The only people they'll automate away are themselves.

        • UlyssesT [he/him]
          ·
          1 year ago

          It's all Mechanical Turk trickery that exploits human labor?

          astronaut-2 astronaut-1

      • Farman [any]
        ·
        1 year ago

        With corona that's going to be all of us eventually.

    • 7bicycles [he/him]
      ·
      1 year ago

      I didn't really think I'd see, like, mass-market snake oil appeal within my lifetime, honestly. Yeah yeah sure, the world's full of gizmos and gadgets that fit the description, but usually political figures don't start talking about how ion converters that somehow make your car fuel more gooder will change the world.

    • randomquery [none/use name,any]
      ·
      edit-2
      1 year ago

      People noticed from the start that ChatGPT is horrible at long addition and would essentially produce random answers, and the reason it fails is surprisingly simple: the model cuts the text we input into chunks of symbols from left to right and then passes those chunks through the neural network. Addition breaks this, because to add numbers we have to work "from right to left", carrying digits as we go.
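
      Not how any real tokenizer is actually implemented (real ones use learned subword pieces, so treat this as a loose Python sketch with made-up chunking), but it shows the mismatch: left-to-right chunks don't line up with place value, while schoolbook addition has to start at the rightmost digit and carry.

      ```python
      # Toy illustration only: chunking digit strings left to right vs. the
      # right-to-left carry propagation that addition actually needs.

      def chunk_left_to_right(number, size=3):
          # split a digit string into fixed-size chunks starting from the LEFT,
          # loosely like a tokenizer slicing "1234567" into pieces
          return [number[i:i + size] for i in range(0, len(number), size)]

      def add_right_to_left(a, b):
          # schoolbook addition: walk the digits from the RIGHT, carrying as we go
          result, carry = [], 0
          i, j = len(a) - 1, len(b) - 1
          while i >= 0 or j >= 0 or carry:
              total = carry
              if i >= 0:
                  total += int(a[i])
                  i -= 1
              if j >= 0:
                  total += int(b[j])
                  j -= 1
              result.append(str(total % 10))
              carry = total // 10
          return "".join(reversed(result))

      print(chunk_left_to_right("1234567"))           # ['123', '456', '7'] -- chunks ignore place value
      print(add_right_to_left("1234567", "8765433"))  # '10000000' -- the carry ripples in from the rightmost digit
      ```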

    • UlyssesT [he/him]
      ·
      1 year ago

      The closest this AI shit has come to sapience is just in making people into absolute fucking idiots.

      That's the most success it's had with ascension: convincing lots of people, including self-described leftists, that the human is just a meat chatbot, as if denigrating living people would accelerate the timetable for the Nerd Rapture, I mean, the Singularity(tm).

  • buh [any]
    ·
    1 year ago

    Me a few months after the semester ends

  • Barabas [he/him]
    ·
    edit-2
    1 year ago

    This is what we are using enough computing power to fuel a medium-sized country for. The inevitable conclusion: flooding the internet with trash means the thing modeling itself off the internet will get worse.

    I guess it is about as useful as using crypto to teach libertarians why money is regulated.

    • Shinji_Ikari [he/him]
      ·
      1 year ago

      I'm pretty sure the models' training data hasn't changed much yet, as the data set is huge.

      I'm not sure if this is tuning producing worse outcomes or what.

    • LaGG_3 [he/him, comrade/them]
      ·
      1 year ago

      I guess it is about as useful as using crypto to teach libertarians why money is regulated.

      So that means no lessons will be learned and it will be a waste of time and resources! ancap-good

  • GrouchyGrouse [he/him]
    ·
    edit-2
    1 year ago

    The internet makes everything stupider, good job connecting your robo brain to the stupidest place, nerds

  • mittens [he/him]
    ·
    edit-2
    1 year ago

    it's because it's insanely expensive to run the entire billion-parameter model every single time lol.

    since the most damning results came from GPT-4, you know, the premium one you have to pay for, I can only assume that OpenAI was actually losing money per request and is now scrambling to curb its losses. means that perhaps maybe running an ultra-complex LLM is not really cheaper than hiring an actual person? perhaps maybe the costs of operation were subsidized by VCs in order to make LLMs seem more appealing than they actually are? perish the thought.

    • usernamesaredifficul [he/him]
      ·
      1 year ago

      means that perhaps maybe running an ultra-complex LLM is not really cheaper than hiring an actual person?

      especially since it is almost always wrong

      • Hive [none/use name]
        ·
        1 year ago

        I think you're also very correct: they will basically force the LLMs on everyone without it being that much more profitable, especially if the LLMs keep eating their own outputs.

        • usernamesaredifficul [he/him]
          ·
          edit-2
          1 year ago

          it's like CGI: once you've spent the upfront cost, it's basically free. The fact that it doesn't work is, like the fact that CGI looks lame, an annoying irrelevance

          after all, nothing's worked since the '80s anyway. These days are like a Thatcherite 1970s

    • Parzivus [any]
      ·
      1 year ago

      I think it probably is still cheaper; the expensive part is development. AI companies right now are basically in a race to make useful AI before the VC funding runs out. The big developments will probably come from slow and steady university research, as always

      • mittens [he/him]
        ·
        1 year ago

        I mean, the issue here is that it's regressing, so there are two things that may be happening here:

        1. (My guess) They're pruning the model to be leaner, which definitely does imply that it's expensive to run

        2. New data being added to the model is too biased and it's making the model perform worse, which implies that it's going to be very very very expensive to gather quality data to improve the model

        • underisk [none/use name]
          ·
          1 year ago

          They could be trying to prune the training corpus of copyrighted works to get ahead of any potential legal conflicts. It's also possible their training corpus has been tainted with stuff that was generated by AI.

          • mittens [he/him]
            ·
            1 year ago

            The first sounds plausible but I was definitely leaning towards the second

          • Hive [none/use name]
            ·
            1 year ago

            Fucking bingo, you get it. You get a medal of Assignment Understander gold-communist. They absolutely are.

        • Hive [none/use name]
          ·
          1 year ago

          There is still quite a bit of fat to trim on these LLMs, so it might be early to tell, but yeah, profitability is lower than expected. They pumped $1 trillion in over the last 7-8 months for a program that unemploys people, so how could it ever be good for the economy? Side note: it is a real productivity upgrade, the kind we really haven't gotten for 30 years, but it also seems to be a tech that is dead-ended

  • aaaaaaadjsf [he/him, comrade/them]
    ·
    edit-2
    1 year ago

    Yeah, they're going to lock all the features that make it more advanced than Google assistant or Siri behind a paywall. If you want chatgpt to do math or programming, you're going to have to pay up. Those features were only offered for free in the beginning to entice users to join.

    • goatmeal [none/use name]
      hexagon
      ·
      1 year ago

      It doesn't write better than good writing either. Sometimes human errors are preferable to read over the shit I've seen it spit out. I've written books and had to do articles very quickly or they couldn't run that day.

      I was probably the best writer in a smallish state school's writing-heavy major. I knew other good ones too, but the vast majority of kids could not write. My unfair advantages were big-city public schooling and having a former English teacher for a mom. I've known better writers than me, especially when it comes to fiction, and I tend to think ChatGPT would write it very badly, for the same reason I and other people are bad at it: it takes a great human imagination and original storytelling.

      I was writing between high school and college. I tuned up in community college, and by the time I was at the university I was able to one-draft everything. I would get question marks on all my pages, and I could tell some students were preparing for several drafts using personal shorthand only they knew. I genuinely asked the profs if they even cared to attempt to teach 100-level writing, as a way to push back on them trying to pawn it off on me.

      I mention a lot of this because bad college writing is all relatively similar. It's decipherable but nowhere near where it would need to be to present an argument. In social science it's all garbled data. So it caused all these controversies at Texas A&M and other universities when it came out, as profs saw garbled data, knew kids cheat, and then flunked entire classes for turning in "ChatGPT" that was actually original college writing that just wasn't that good. Which seems to be where ChatGPT is: it hits the median human quality at maximum efficiency.

      • alcoholicorn [comrade/them, doe/deer]
        ·
        1 year ago

        I can't talk about good vs bad writing, but ChatGPT's issue is the same as AI image generation.

        It's just a surface-level resemblance to the thing it's generating without any understanding behind it. Any time it's answering a unique question, there's no underlying cohesive thought, and when it's answering non-unique questions, it's just plagiarizing multiple people.

        • goatmeal [none/use name]
          hexagon
          ·
          11 months ago

          Sounds like bad writing, but I suppose profs who didn't do dick to bring writing up to the collegiate level saw that from their students and AI and connected bad dots.

          I just always hated that I was a bit older and they wanted it to be my job to help everyone. So I would go on tirades about how nobody gets their $$$ worth. Was my thesis too

      • UmbraVivi [he/him, she/her]
        ·
        1 year ago

        Art is an expression of personal feelings, opinions and experiences, and our appreciation of it comes from our ability to relate to the artist through their art. That is where the substance comes from.

        AI can master the "craft" behind it, being able to draw or write well by "objective" metrics, but it will always suffer from, well, not having personal thoughts or feelings it could express. It could probably generate decent Marvel slop, but it could never create something like lt-dbyf-dubois

        • goatmeal [none/use name]
          hexagon
          ·
          11 months ago

          Right, like how I read The Bell Jar then immediately went out to buy Plath's documentary. It would've been a shame if it was 1011100010100010101

  • mechwarrior2 [he/him]
    ·
    edit-2
    1 year ago

    When I bring up the "AI art" discourse with my friend in a creative field, he's like "whatever, don't care, I'm in my own lane"

    Then when I read people were talking about iterating models using outputs, I thought: oh OK, he's good, these things are just going to get worse from now on, either in accuracy/fidelity or just loss of generality. Shit in, shit out. Hubris. Pure ideology.

    • Dull_Juice [he/him]
      ·
      edit-2
      1 year ago

      I mean, there was that one study I saw shared somewhere on here recently that basically showed that if the training data ends up including other AI-generated content, the model degrades pretty much immediately. I know the training data is curated, but I do wonder if you can start sneaking garbage into it.
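
      The actual study was about big neural models, but here's a slowed-down toy version of the same feedback loop in Python (completely made-up setup, just a sketch of the idea): fit a simple Gaussian "model" to some data, then keep re-fitting each new generation only on samples from the previous one. Because each refit only ever sees a finite sample of the last model's outputs, the fitted spread wanders and, over enough generations, collapses, which is the flavor of degradation being described.

      ```python
      import random
      import statistics

      # Toy sketch, not the study itself: each "generation" is a Gaussian fit
      # trained only on samples drawn from the previous generation's fit.
      random.seed(0)

      data = [random.gauss(0.0, 1.0) for _ in range(50)]        # original "human" data
      mu, sigma = statistics.fmean(data), statistics.stdev(data)

      for generation in range(1, 201):
          synthetic = [random.gauss(mu, sigma) for _ in range(50)]   # model output becomes the next training set
          mu, sigma = statistics.fmean(synthetic), statistics.stdev(synthetic)
          if generation % 20 == 0:
              # any single run is noisy, but the long-run trend of the spread is downward
              print(f"gen {generation:3d}: mean={mu:+.3f} stdev={sigma:.3f}")
      ```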

    • PosadistInevitablity [he/him]
      ·
      1 year ago

      Technology usually advances in capability over time. I’ve never heard of an entire field getting worse. Certain services sure, but it seems impossible we’d backslide as a civilization to the point where we just can’t do what we could with older technology.

      Not without some huge war or disaster anyways.

            • PosadistInevitablity [he/him]
              ·
              1 year ago

              Allowing the facilities and technical expertise to fall apart is a “won’t” to me.

              Like, we wouldn’t be able to build a wooden galleon in short order, but that’s a matter of societal choice. We haven’t maintained the facilities or expertise.

              We could redevelop them. It’s not a “can’t” the way I see it.

              • TreadOnMe [none/use name]
                ·
                edit-2
                1 year ago

                We could actually build a wooden galleon in short order (there are teams of amateur and professional historians/archaeologists who do it as a hobby in San Diego; source: I used to work there during the summers in high school), but the quality they used to be able to achieve for a short-order galleon wouldn't be there at all. We could probably make one that would last for 5-10 years of heavy use, while older ones were generally in use for 10-25 years, with some old stories of ships being used for 40.

      • usernamesaredifficul [he/him]
        ·
        1 year ago

        Technology usually advances in capability over time. I’ve never heard of an entire field getting worse

        the technology being used here is actually from the '50s in terms of the algorithms. The bottleneck here is data collection, which is a social and organisational problem, not a technical one.

  • IceWallowCum [he/him]
    ·
    edit-2
    1 year ago

    No omnipresent omnipotent evil AI mastermind gf? sadness

    Seriously though:

    1- Maybe this is what will hammer into people's heads that AI is a specific tool for a limited, specific purpose, much like a hammer or a saw?

    2- I wonder how China's state AI is doing

    3- What would be the feasibility of using an AI for monitoring and analysing trends in money transactions, prices, etc. in a whole country? Mind you, I'm speaking from a country that has widespread digital payment. Poor people that need to beg are asking for help as transactions to their phones

    • UlyssesT [he/him]
      ·
      1 year ago

      Mind you, I'm speaking from a country that has widespread digital payment. Poor people that need to beg are asking for help as transactions to their phones

      Only a few years ago, cryptobros were claiming cryptocurrency was the killer app that would make poor people in impoverished countries, uh, something related to what you said.

    • UmbraVivi [he/him, she/her]
      ·
      1 year ago

      His next video is gonna be about memestocks like Gamestop and Bed, Bath & Beyond

      Still excited for it because I watched every one of his 3 hour vids like 5 times but it's gonna be a while til he makes a full on essay about AI sicko-wistful

      If ur a real FoldingHead tho, he talked about it a bit on a podcast with Adam Conover

      • stigsbandit34z [they/them]
        ·
        1 year ago

        Capitalism-critical video essays have replaced podcasts for me, so I guess you could say I'm a foldinghead in that aspect 😀

  • Tachanka [comrade/them]
    ·
    edit-2
    1 year ago

    they got rid of like half the people in my office because they think a machine learning bot is going to do data science better than trained engineers agony-mescaline

    • UlyssesT [he/him]
      ·
      1 year ago

      It's somewhere between "fuck the workers" and "new tech will solve everything, bazinga." debord-tired

  • PosadistInevitablity [he/him]
    ·
    1 year ago

    AI generates the weirdest fucking opinions in people.

    Weird-ass Luddite-type rage boils underneath the discussions involving AI. It's honestly perplexing to me why that is.

    It’s a computer tool. Not a literal intelligence.

    brainworms

    • Goblinmancer [any]
      ·
      1 year ago

      "AI" a really good marketing term for an algorithim, made people think the AI is a a literal sentient program.

        • usernamesaredifficul [he/him]
          ·
          1 year ago

          it's more a making-things-up machine that was built with plagiarism.

          It's not good at plagiarism because everything it makes is shitty

          • Smeagolicious [they/them]
            ·
            edit-2
            1 year ago

            No but you see it doesn’t plagiarize since it analyzes general trends (from human art and writing scanned and used without permission) and since it’s not directly piecing sentences or visual art together like scavenged puzzle pieces then you clearly cannot be offended very-intelligent

            • usernamesaredifficul [he/him]
              ·
              edit-2
              1 year ago

              I wouldn't worry about it tbh. This is what spending half a billion dollars on making an LLM gets you, and it's both shit and unprofitable

              I don't imagine 1 billion dollars gets you much more data than 0.5 does, and I would assume that the returns diminish drastically. And as training data is the real bottleneck in machine learning production, I think ChatGPT actually demonstrates that LLMs are an infeasible proposition

    • crispy_lol [he/him]
      ·
      1 year ago

      I think the rage is directed at the morons who actually think chatgpt is the answer to everything, they’re loud and annoying

      “Have you tried asking chatgpt?”

      • anaesidemus [he/him]
        ·
        1 year ago

        I was kinda hoping it could replace Google, but you'd need communism first I guess.

      • UlyssesT [he/him]
        ·
        1 year ago

        The "humans are just meat chatbots anyway" bazingas were worse than any so-called "Luddite" that didn't like losing their job to a slop printer.

    • Vingst [he/him]
      ·
      1 year ago

      People don't want to lose their livelihood to automation. The Luddites were perfectly rational.

      • PosadistInevitablity [he/him]
        ·
        edit-2
        1 year ago

        Fearing productive technological advancement is irrational because the fear is misdirected.

        The source of the pain is the economic/social system. No one would fear automation in a socialist system - it would be a benefit to all.

        This is like hating the Sun because your neighbor beats you when it rises every day. You really should be hating your neighbor.

        • UlyssesT [he/him]
          ·
          edit-2
          1 year ago

          A slave (while currently suffering on the plantation) should not irrationally resent the plantation because under better management (that isn't there and isn't coming anytime soon) the plantation would be a nice place. morshupls

          Same deal with the whip, really. morshupls

          • PosadistInevitablity [he/him]
            ·
            edit-2
            1 year ago

            Wouldn’t it make more sense to resent the Slaveowners?

            Focus your hate on the people actually oppressing you rather than some symbolic thing entirely meant to shield that slave owner.

            stalin

            • UlyssesT [he/him]
              ·
              1 year ago

              Wouldn’t it make more sense to resent the Slaveowners?

              You missed my point.

              You're standing over people who are suffering from bad conditions right now and condescending to them because a theoretical, better version of those conditions, one that isn't there and isn't coming anytime soon, would hurt them less. You even waved around the "rational" word.

              A whip is just a tool, but if you were someone in the late 1800s and you met an emancipated former slave and told them that actually the whip was just a tool and being haunted by those whip scars and the thought of whips is irrational, you would not get a good reception.

              • PosadistInevitablity [he/him]
                ·
                edit-2
                1 year ago

                We’re talking about Chat GPT and AI

                No one is being whipped here

                No one is suffering in the fields

                • UlyssesT [he/him]
                  ·
                  1 year ago

                  No one is being whipped here

                  No one is suffering

                  No one lost their jobs, no one was pushed further into precarity, poverty, or outright homelessness, your rational blanket statement claims? what-the-hell

                  You may have found your way here from out of reddit-logo but apparently the reddit-logo hasn't found its way out of you.

                    • UlyssesT [he/him]
                      ·
                      1 year ago

                      Yes, because they both involve people like you condescending to others for being "irrational" toward tools that can and have hurt them under conditions that do make them suffer.

                      Your "no one is suffering" blanket statement that ignores everyone that already got fucked over by chatbot-related technology shows how "rational" your argument really is. You're just stanning for the technology and looking down on people getting hurt by it.

                      Why are you even here? This is supposed to be a leftist site that supports workers, not condescends to them to stan for bazinga tech that could be nicer under a system that isn't here.

                        • UlyssesT [he/him]
                          ·
                          1 year ago

                          That’s not what I support so you’re just wrong.

                          No one is suffering

                          Stop lying directly to my face.

                            • UlyssesT [he/him]
                              ·
                              1 year ago

                              You wear that emoji well, you techbro-apologist clown.

                              • PosadistInevitablity [he/him]
                                ·
                                edit-2
                                1 year ago

                                Take a chill pill and stop extrapolating an out of context statement to people's entire world view

                                Like, you're just wrong about what I believe. I can't really say anything other than that, because it's a frivolous shouting match

                                • UlyssesT [he/him]
                                  ·
                                  edit-2
                                  1 year ago

                                  stop extrapolating an out of context statement

                                  No one is suffering

                                  You're either laughably privileged or a liar after making the claim that no one has been hurt by chatbot technology.

                                  How am I even supposed to respond to that?

                                  RETVRN to reddit-logo, where being "rational" and smug and condescending about how, actually, in a theoretically nicer society, technology that hurts people wouldn't hurt people would be better received, and bask in karma and gold and praise for your rationality there.

                                  • PosadistInevitablity [he/him]
                                    ·
                                    1 year ago

                                    I never claimed that and I am saying again that’s not the case. I realized I worded that wrongly and edited it…

                                    AI in its current form is CRAZY exploitative and fucks over working class people. The masters that hold those tools should be butchered for their exploitative ways

                                    Like, dude, there is nothing I can do but say I wrote the wrong thing.

                                    My world view is not represented by a typo. You’re trying to harangue me for something I honestly don’t believe.

                                        • UlyssesT [he/him]
                                          ·
                                          1 year ago

                                          You left out “in the fields”

                                          I wonder why

                                          Because that's bullshit goalpost moving that you edited in after the fact. Does suffering only come from the fields?

                                          Your worldview is trying so very hard to be "technically correct" and condescend to people that are supposed to be comrades because they weren't Rational(tm) enough for your liking after getting fucked over by tools used by the ruling class in the present system, right now.

                                          • PosadistInevitablity [he/him]
                                            ·
                                            edit-2
                                            1 year ago

                                            AI is exploitative in its current form. Full stop. It should be destroyed and the creators killed.

                                            After the Revolution, the tech would be nice to have.

                                            This is what my position is.

                                            • UlyssesT [he/him]
                                              ·
                                              edit-2
                                              1 year ago

                                              That's nice.

                                              Your "chill pill" and "my dude" and "I wonder why" smarmy talk certainly didn't make that clear, especially after your position was some "technically correct" bullshit that may thrive on reddit-logo but does the opposite of rallying people who feel rightfully hurt about being screwed by the present economic system. Telling them they are technically wrong about what they are angry about is just counterproductive pedantic bullshit.

                                              EDIT:

                                              and the creators killed.

                                              Why? The people who actually coded the things and got them started weren't billionaire assholes. They were workers, exploited, undervalued, underappreciated, and outright forgotten compared to the assholes that own the technology and command its use.

                                              • PosadistInevitablity [he/him]
                                                ·
                                                1 year ago

                                                I only used that tone with you because you've been an ass.

                                                Creators refers to the capitalist owners.

                                                • UlyssesT [he/him]
                                                  ·
                                                  1 year ago

                                                  I only used that tone with you because you've been an ass.

                                                  Mirror. Find one.

                                                  Creators refers to the capitalist owners.

                                                  All that pretense of rationality and you credit the owning class with creating the technology? what-the-hell

    • stigsbandit34z [they/them]
      ·
      1 year ago

      Luddite rage is extremely valid, especially considering the fact that the root of it is never shared. Reality invented once again big-cool