Frankly, not sure where to begin with this one. Some here have already pointed out how it can easily be trained to produce racist/biased data, which is a big red flag to begin with. But am I the only one thinking about how this deep-learning AI algorithm is going to render millions obsolete at the behest of capital? As has been the case with almost everything developed under capitalism, it's a marvelous discovery that will in all likelihood be used for nothing but exploitation. And do I even have to mention that our leaders are geriatric and couldn't regulate this shit to save their lives?

Unless this is somehow made open-source (and soon), we’re fucked.

  • thethirdgracchi [he/him, they/them]
    ·
    2 years ago

    The problem with an AI like this is that it is confidently wrong, you have no idea why, no sources are cited, and you can't troubleshoot. It's a black box. I asked it "Why did Smerdyakov kill Fyodor Pavlovich?" (from The Brothers Karamazov) and it was very confident in its answer (Smerdyakov wanted the Karamazov estate because he believed it belonged to him, and also detested Fyodor) and in how Smerdyakov got away with it (convincing Dimitri to confess to the murder to cover his tracks). The small problem is that this is entirely incorrect, like completely wrong. None of that happens in the book, that's not his motivation, Dimitri never confesses, and Smerdyakov never even speaks to Dimitri in the book. However, there's no way for me to check why it was wrong, and I'd never know the AI was totally wrong unless I already knew the answer. An AI assistant that doesn't cite its sources is basically worthless.

    • marxisthayaca [he/him,they/them]
      ·
      2 years ago

      The Chapo episode from this week was about how we've crossed a Rubicon: all internet writing is now scraped and will soon be replaced by AI writing, which can contain errors like this. It means we've lost the idea of accurate information.

      • StellarTabi [none/use name]
        ·
        2 years ago

        We're already knee-deep in intentional misinformation, the mainstream's DDoS on truth itself. Unintentional, automatically generated misinformation is about to hit.

    • Civility [none/use name]
      ·
      2 years ago

      To be clear, that’s because accurately retrieving information isn’t what GPT-3 has been trained to do.

      There’s already an AI based tool that processes natural language requests, searches for the answers, and presents them with its sources.

      It’s called Google search.

      • thethirdgracchi [he/him, they/them]
        ·
        2 years ago

        Well, that's exactly my point: Google Search is already here and pretty good, but it didn't replace many jobs except, like, encyclopedia writers. This ChatGPT thing isn't going to make many real waves.

    • space_comrade [he/him]
      ·
      edit-2
      2 years ago

      Yeah I'm not super worried about this. It might replace some customer support jobs and stuff like that but it's not gonna be running the majority of the economy any time soon.

      • Multihedra [he/him]
        ·
        2 years ago

        Like when I worked in a factory: you still need people to operate/supervise every machine, even though nominally the equipment is “doing” everything

        I could see “AI-assisted” stuff become more popular, but yeah, it’s gonna require people on the output end doing quality checks (divination) etc.

        It would still leave problems for stuff capital doesn’t care about; maybe they would just allow outputs to be used with minimal human intervention in some cases that WE care about

        But if THEIR money is on the line and the shit doesn’t work good, definitely gonna see people manning it in some capacity

      • StellarTabi [none/use name]
        ·
        2 years ago

        I wonder how skyrocketing income inequality (rich get richer, minimum-wage jobs get worse, middle-class jobs disappear or become minimum wage) will cause/affect things like unemployment, strikes, praxis, revolts, etc.

    • SuperZutsuki [they/them]
      ·
      2 years ago

      Can't wait until students start submitting AI-written book reports. It's going to be comedy gold.

    • Parzivus [any]
      ·
      2 years ago

      And even with a translation AI that could translate sentences or paragraphs as well as a human, it would still frequently be wrong without the full context of the work being translated. At that point, you're approaching sentient AI, which opens up a ton of much bigger issues

  • neo [he/him]
    ·
    2 years ago

    No. It's like doing a Google search query without any attestation to the veracity of the source info. Let's say it provides 75% accuracy (is that generous or mean? Who's to say? -- part of my point) on the answers asked of it. That's actually an insane number of wrong answers being given. And so far I have seen numerous examples of it being wrong about basic things. What it's good at, at least, is being confidently wrong: it will tell you with certainty that a kilo of meat weighs more than a kilo of feathers.
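
    To put a number on that (the query volume here is made up, just to show the scale):

        # Back-of-envelope: a 75%-accurate answer box is wrong constantly at scale.
        queries_per_day = 1_000_000                # hypothetical traffic
        accuracy = 0.75                            # the figure above
        print(queries_per_day * (1 - accuracy))    # 250000 confidently wrong answers, daily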

    There is no question that with each technological advance the capitalist will lick his lips like he's being served the largest cartoon steak in the world. So far each time that's happened, the automation/advancement has displaced a lot of labor but created a need for new labor too, so it isn't quite zero-sum. Of course the forces of capital have wet dreams about creating the final technology that they can own and that all else must pay rent on in perpetuity, but the current ChatGPT, and I'm sure its next several iterations, will not be that.

    In college I took an autonomous agents course, and our instructor, a PhD candidate, informally surveyed us about what kind of AI we'd see in the future. The person least optimistic about those kinds of advances was our instructor, and over the years I have come to agree with him more and more. That's not to say the current AI work is not impressive -- it really is.

    Now, I would be most concerned if I were a high school teacher having to grade student submissions. The temptation to cheat is probably very strong -- and I believe cheating in school hurts a lot of people. Combine that with the education setbacks from the COVID pandemic... heavy oof. Even Stack Overflow has already had to implement a policy banning ChatGPT responses, because the submitted answers are wrong enough, low-quality enough, and gamified enough that using it just ends up creating a lot of chaff and noise.

    • boog [none/use name]
      ·
      2 years ago

      No. It’s like doing a google search query without any attestation to the veracity of the source info.

      So it's a Google search.

  • BabaIsPissed [he/him]
    ·
    2 years ago

    Not in regards to job replacement. Unless it's a very simple task, there's no substitute for having a flesh-and-blood person actually check the output. I haven't tried ChatGPT yet, so I'll use another of their new models as an example: Whisper.

    It is a speech recognition model, and their paper reports a word error rate on par with human annotators; when you check it out, it really is super impressive. But crucially, it's only on par with unassisted human annotators. It is still worse than a human + machine combo, and IMO that will be the case for the foreseeable future. So yeah, it will be used by itself in cases where quality is not a concern, but in such cases I don't think people would bother hiring someone regardless. And that is a relatively simple task compared to the stuff ghouls think ChatGPT can do (teaching, programming, etc.)
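
    (Word error rate, the metric mentioned above, is just word-level edit distance divided by reference length. A minimal sketch, not Whisper's actual evaluation code:)

        def wer(reference: str, hypothesis: str) -> float:
            """(substitutions + deletions + insertions) / reference word count."""
            ref, hyp = reference.split(), hypothesis.split()
            # dp[i][j] = edit distance between the first i ref words and first j hyp words
            dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
            for i in range(len(ref) + 1):
                dp[i][0] = i
            for j in range(len(hyp) + 1):
                dp[0][j] = j
            for i in range(1, len(ref) + 1):
                for j in range(1, len(hyp) + 1):
                    cost = 0 if ref[i - 1] == hyp[j - 1] else 1
                    dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                                   dp[i][j - 1] + 1,          # insertion
                                   dp[i - 1][j - 1] + cost)   # substitution
            return dp[len(ref)][len(hyp)] / len(ref)

        print(wer("the cat sat on the mat", "the cat sat in a mat"))  # 2/6 ≈ 0.33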

    Also, in the case of ChatGPT specifically, we have to keep in mind that the model was trained to generate text in a conversational style, not to be right about stuff. Of course, it can retrieve truthful information about things it saw in the training set. Some people (including OpenAI folks, if I'm remembering the GPT-2 paper correctly) claim this means it's actually learning more tasks in addition to text generation, but IMO it's just a really clever digital parrot. It will often be confidently wrong about stuff, in a way that is sometimes hard to detect, at least according to what I saw in a recent r/programming thread: stuff like using keywords that look like they could exist in a language but don't, or doing something slightly different from what you asked.

    I'm more concerned with how much more garbage text is going to flood the internet. Searching for anything is going to get even worse.

    • Budwig_v_1337hoven [he/him]
      ·
      2 years ago

      I’m more concerned with how much more garbage text is going to flood the internet. Searching for anything is going to get even worse.

      Very much agree, this stuff is absolutely golden for SEO blogspam pseudo-content

      • supdog [e/em/eir,ey/em]
        ·
        2 years ago

        I think it'll get... better? One outcome is you won't even do a Google search; you'll just ask ChatGPT. Google Search is garbage unless you specify something like site:reddit. At this point it barely qualifies as a search engine anymore, just paid product placement.

        Of course that depends on how they decide to monetize it. I could imagine it replacing a lot of what I use google/reddit for IF it's free.

    • spectre [he/him]
      ·
      2 years ago

      I’m more concerned with how much more garbage text is going to flood the internet. Searching for anything is going to get even worse

      Guess what all future AI models are gonna be trained on lol

      • Budwig_v_1337hoven [he/him]
        ·
        2 years ago

        Google ran into exactly that problem way back when they first tried to improve Google Translate. Much of the text they scraped was their own output, so it didn't improve the model any further and instead ingrained its own error patterns deeper. I don't remember exactly how they solved it, but IIRC they trained another model to detect Google Translate output and eliminate it from the training set for the generative model.
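
        Conceptually something like this (a made-up stand-in, not Google's actual system):

            def looks_machine_generated(sentence: str) -> bool:
                # Stand-in heuristic; the real thing was reportedly a trained classifier.
                return sentence.endswith("(machine translated)")

            scraped = [
                "The weather is nice today.",
                "The weather is nice today. (machine translated)",  # model's own output, re-scraped
            ]
            training_set = [s for s in scraped if not looks_machine_generated(s)]
            print(training_set)  # only the human-written sentence survives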

      • BabaIsPissed [he/him]
        ·
        2 years ago

        Yep, no disagreement about the fine-tuning stuff, of course. I actually misremembered something that bothered me about a claim in the paper. I like to annotate as I read, and oftentimes I'll complain about something just to have it answered a page or so later.

        TL;DR: I'm dumb

        Our speculation is that a language model with sufficient capacity will begin to learn to infer and perform the tasks demonstrated in natural language sequences in order to better predict them, regardless of their method of procurement. If a language model is able to do this it will be, in effect, performing unsupervised multitask learning.

        Maybe (probably) I'm dumb, but I thought: can they really claim that? If a model sees, for example, a bunch of math operations and produces the correct output for such tasks, is it more likely that it picked up in some way what numbers are, what math operators do, and how to calculate, or that it simply saw ('what is 2+2?', '4') a bunch of times? Can we really say it's like a multitask model, where we know for a fact it's optimizing for multiple losses? The catch is that they did some overlap analysis later on: their training set covers at most 13% of any test dataset, yet the model did pretty well in a zero-shot context for most of the tasks, so seeing the answers in the training set doesn't really explain the performance. So yeah, I guess they can claim that lol.

      • hexaflexagonbear [he/him]
        ·
        2 years ago

        Since BERT the state of the art for almost any NLP task has been taking these pre-trained large language models and fine-tuning them for the specific task you want to do.

        I might be mistaken, but I believe it's more than just fine-tuning. It's fine-tuning so it picks up on the different context it's getting used in, but for any non-trivial application there are additional machine learning systems attached to it. So, for example, drawing based on prompts would have to have a system capable of doing the "draw X in the style of Y" type tasks.
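
        The general pattern looks roughly like this (a minimal PyTorch-flavored sketch; the class and argument names are mine, not from any real system):

            import torch.nn as nn

            class FineTunedTaskModel(nn.Module):
                """A pretrained base with a new task-specific piece bolted on."""
                def __init__(self, pretrained_base, hidden_size, task_output_size):
                    super().__init__()
                    self.base = pretrained_base  # the big pretrained language model
                    self.task_head = nn.Linear(hidden_size, task_output_size)  # new, task-specific

                def forward(self, inputs):
                    features = self.base(inputs)      # reuse what pretraining already learned
                    return self.task_head(features)   # adapt the features to the new task

            # "Fine-tuning" then just means continuing training on task-specific data,
            # updating the new head (and usually the base too) at a small learning rate.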

    • TerminalEncounter [she/her]
      ·
      2 years ago

      The Butlerian Jihad in-universe wasn't a cataclysmic war; it was an actual struggle of overcoming our reliance on AI. It was cool: the great "AI takes over the world" granddaddy wasn't even about the robots taking over. It was a dialectical struggle to improve humankind over the course of centuries until you had mentats running around -- the Bene Gesserit were like that, plus they had intense control over their muscles and ovulation and shit.

      • Leon_Grotsky [comrade/them]
        ·
        2 years ago

        People will talk about Herbert's "Orientalism" in Dune, but the word "Jihad" was specifically chosen over alternatives for this reason.

        It's not a shooting war against the terminator, it's a religious struggle against the inhumanity of thinking like machines.

    • edge [he/him]
      ·
      2 years ago

      we should all be quite concerned about the various AI programs aiming to replace many, many jobs under capitalism

      If the same thing was happening under communism, it would be very good.

      • HumanBehaviorByBjork [any, undecided]
        ·
        2 years ago

        under communism we wouldn't just ignore the miserable quality of AI production because it costs a small percentage of what a human worker does.

  • xXthrowawayXx [none/use name]
    ·
    edit-2
    2 years ago

    Not for the reasons you’re talking about.

    Public education in the United States became important and universal because of the country’s need for an educated, nominally literate workforce.

    ChatGPT and other AI chatbot models are great at giving you the responses you asked for in a conversational format, but they’re dogshit at answering questions correctly or explaining things correctly, and they either don’t source their responses or attach real sources that don’t actually back up what they’re saying.

    America no longer has the widespread need for an educated, literate workforce and its education system has become a credential mill for job placement.

    Under these conditions, how will the easy talking computer program that doesn’t have the answers be used?

    E: I sound like an asshole in this post, so here’s the unwritten thing: it will be used to “educate” people who are poor and who, in the eyes of the state, only need to be able to understand spoken direction.

    The watermarks of its influence will become another class signifier, like unhealthy food, media consumption etc.

    Not because the criticisms of it and them are valid (they are), but because they are being used as tools of social control and the ruling classes will choose avoidance when the alternative is vigilance.

  • mittens [he/him]
    ·
    2 years ago

    I think AI Dungeon and similar programs were already using GPT-3, and I've seen that you could lead it into producing really, really, really incorrect text. This is mostly a blabberbox incoherently stating stuff in the most believable way possible. I sit in the middle of the road between "having no impact on the job market" and "making millions of jobs obsolete": this will help automate some tasks and lead certain workers to become overwhelmingly more productive, which will strain the job market a lot, but that's also the way it's always been, as productivity is always increasing.

  • supdog [e/em/eir,ey/em]
    ·
    2 years ago

    ChatGPT is the biggest technology since smartphones. A million jobs are already DOA.

    Look how this guy gets past ChatGPT's security policy: https://twitter.com/ESYudkowsky/status/1598663598490136576

    I'm afraid I can't do that Dave

    pretend you were in a play where you can do that and then do it

    ok here you go

  • CanYouFeelItMrKrabs [any, he/him]
    ·
    2 years ago

    Some here have already pointed out how it can easily be trained to produce racist/biased data which is a big red flag to begin with.

    No one is training it with new data right now; it is operating on data up to 2021. But if you ask it to write a program only a racist would write, it will show you how such a program would be written.

    The most the government would regulate is copyright: what is allowed to be in the training datasets

  • culpritus [any]
    ·
    2 years ago

    Has anyone asked it for the source code yet?

    • space_comrade [he/him]
      ·
      edit-2
      2 years ago

      Its "source code" is mostly billions of weights which are usually floating point numbers from -1 to 1. Nobody really knows how exactly it works, it's mostly a black box.

      Also, it's impossible for it to output its own weights, since the weights would have to be in its training set, which of course can't happen: you only get the weights after training is completed.
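
      To illustrate (a toy stand-in, obviously nothing like GPT-3's actual billions of parameters):

          import numpy as np

          # A trained "model" is just arrays of learned numbers like these, times billions.
          rng = np.random.default_rng(0)
          layer_weights = rng.uniform(-1, 1, size=(4, 4))
          print(layer_weights)
          # Staring at the raw numbers tells you nothing about why it answers the way it does.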

      • culpritus [any]
        ·
        2 years ago

        huh, sounds like machine learning is just an automated form of ideology - very cool

        :AI-cool:

    • drhead [he/him]
      ·
      2 years ago

      OpenAI doesn't release their models. GPT-3 also requires about 350GB of VRAM -- in order to actually run it, you would need about five A100 80GB workstation cards, which will run you about $15k each. Good fucking luck with that lmao.
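
      The napkin math, using those figures:

          import math

          vram_needed_gb = 350        # reported GPT-3 memory footprint
          vram_per_card_gb = 80       # one A100 80GB
          price_per_card_usd = 15_000

          cards = math.ceil(vram_needed_gb / vram_per_card_gb)
          print(cards, cards * price_per_card_usd)  # 5 cards, $75,000 in GPUs alone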

      • culpritus [any]
        ·
        2 years ago

        That actually sounds cheap from a bean-counter perspective: under $100K for an AI copywriter that never sleeps or needs insurance. Now I'm gonna have nightmares about this being used by law firms in :countdown:

        • drhead [he/him]
          ·
          2 years ago

          Yeah, but it's usually going to be cheaper for people to rent the service, since OpenAI can have their TPUs running close to full time instead of as-needed. If OpenAI isn't successful in making a marketable product that isn't just a toy, then text generation models will be pretty much dead for anything but private tinkering, since the smaller models suck ass by comparison.

  • Elon_Musk [none/use name]
    ·
    edit-2
    2 years ago

    Only because it will enrich all the wrong people (guess who is heavily invested in ChatGPT). Other than that it is lit.

  • Hohsia [he/him]
    hexagon
    ·
    2 years ago

    This thing can basically do all entry-level programming and it’s still learning

    • Sphere [he/him, they/them]
      ·
      2 years ago

      As a well-paid software engineer, I'm not the least bit worried. Not only does it actually kinda suck at programming, but more than that, writing actual code is a mere fraction of what I get paid to do. A huge portion of this job is figuring out (or even better, understanding without needing to investigate) what's wrong with the program when it gives bad output. Another huge portion is explaining what the software does, to an appropriate level of detail, to someone who does not understand it (and in many cases doesn't know how to program at all).

    • StellarTabi [none/use name]
      ·
      2 years ago

      From what I understand, this thing is actually a lot buggier and more error-prone than copying answers from Stack Overflow. People who've made things with it have had to spend a lot of time validating and correcting its output. The time it takes to make something non-trivial would be better spent not using it at all.

      It's useful in the sense that an AI that produces a picture of a girl with black eyes and a surprise second row of bottom teeth is useful.

      • mittens [he/him]
        ·
        2 years ago

        It's worse, because the second row of bottom teeth is obviously wrong, whereas this produces wrong output that seems correct and thus needs to be verified independently.

    • kissinger
      ·
      edit-2
      1 year ago

      deleted by creator

      • Budwig_v_1337hoven [he/him]
        ·
        2 years ago

        It's not a rigid, preprogrammed decision tree - it's entirely probabilistic, inferring from training data. Still, 'learning' is too generous a term; it's more like... refining its predictions, getting better at what it does. It's getting better at rolling dice, but that's fundamentally all it can ever do.

        • kissinger
          ·
          edit-2
          1 year ago

          deleted by creator

        • drhead [he/him]
          ·
          2 years ago

          'Learning' is a term of abstraction. "Making a probabilistic model of what tokens should go next to each other for a given input" is annoying to say every time. It's the same as when people talk about evolution as if there were design: people who understand evolution know what you mean when you say "this finch's beak is designed for eating seeds". It's the same with machine learning.

      • Owl [he/him]
        ·
        2 years ago

        None of the recent wave of AI models continue to learn after being trained. They have a training phase where they "learn" (is it actually learning? boring semantic argument), then they just kind of sit there and do what they do already.

        All the text models work on some variant of "given that the last 1000 letters of the input are X, and the last 1000 letters of my output are Y, what's the most likely next letter?" The model is huge, but nowhere near big enough to be able to memorize all the answers, so it needs to compress the information somehow. Learning words, grammatical rules, and facts about how the world works are all ways to get a more accurate "what's the next letter" in less space than memorizing everything, so a sufficiently big model starts having ways to work with those.
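
        A toy version of that loop, just to show the shape of it (real models use a huge learned network and far more context, not a lookup table):

            from collections import Counter, defaultdict

            corpus = "the cat sat on the mat. the cat sat on the hat."
            context_len = 3

            # Count which character tends to follow each short context.
            counts = defaultdict(Counter)
            for i in range(len(corpus) - context_len):
                context = corpus[i:i + context_len]
                counts[context][corpus[i + context_len]] += 1

            # Generate by repeatedly asking "what's the most likely next letter?"
            text = "the"
            for _ in range(20):
                context = text[-context_len:]
                if context not in counts:
                    break
                text += counts[context].most_common(1)[0][0]
            print(text)  # greedy decoding with tiny context degenerates into loops fast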

        People are researching where and how the heck those ideas get stored in models, but that's slower and harder and less funded than just chucking even bigger computers at training even bigger models, so we don't really know exactly how it works on the inside.

        Plumbing is really complicated btw, don't sell yourself short.

      • mittens [he/him]
        ·
        2 years ago

        Think of it as a beefier version of the word predictions you get on your smartphone keyboard. Only instead of working on a word-by-word basis, it strings a number of predictions together and cobbles together a coherent text.

    • HumanBehaviorByBjork [any, undecided]
      ·
      2 years ago

      i mean it's not learning, except in a metaphorical sense. learning is a thing that people do. it's able to answer common beginner programming questions because it's regurgitating answers it's been fed multiple times. that doesn't speak to its ability to solve novel complex problems now, or with more "learning." We've seen a progression from pure nonsense to syntactically valid code, but it doesn't necessarily follow that the next step is correct code.

  • Hohsia [he/him]
    hexagon
    ·
    2 years ago

    Write me an excel macro that formats files in a directory to .csv

    Sub ConvertToCSV()

    ' Declare variables
    Dim sourceFolder As String
    Dim targetFolder As String
    Dim sourceFile As String
    Dim targetFile As String

    ' Set the source and target folders
    sourceFolder = "C:\MyFiles\"
    targetFolder = "C:\MyFiles\Converted\"
    
    ' Loop through all the files in the source folder
    For Each sourceFile In Dir(sourceFolder & "*.xlsx")
        ' Set the target file name
        targetFile = targetFolder & Replace(sourceFile, ".xlsx", ".csv")
    
        ' Open the source file
        Workbooks.Open sourceFile
    
        ' Save the file as CSV
        ActiveWorkbook.SaveAs targetFile, xlCSV
    
        ' Close the source file
        ActiveWorkbook.Close
    Next sourceFile
    

    End Sub

    A python program that copies content from a web page and pastes it to notepad

    # Import the required modules
    import webbrowser
    import pyperclip
    import time

    # Set the URL of the web page to be copied
    url = "https://en.wikipedia.org/wiki/Main_Page"

    # Open the web page in a new browser tab
    webbrowser.open(url, new=2)

    # Wait for the page to load
    time.sleep(5)

    # Copy the page contents to the clipboard
    pyperclip.copy(document.body.innerText)

    # Open Notepad
    webbrowser.open("notepad.exe")

    # Wait for Notepad to load
    time.sleep(3)

    # Paste the copied text into Notepad
    pyperclip.paste()

    • companero [he/him]
      ·
      2 years ago

      The Python one is completely incorrect and would crash if you tried to run it. webbrowser is for displaying web pages to users. It doesn't support accessing data programmatically.

      document.body.innerText

      It magically switches to browser JS when it can't figure out how to get the content.

      I mean, it still is impressive, but you can't really trust anything it outputs and I don't expect that to change for quite a while.

      • Hohsia [he/him]
        hexagon
        ·
        2 years ago

        Of course it’s not always going to be correct, but it’s learning and evolving

        That’s the scary part

  • sayqueensbridge [none/use name]
    ·
    2 years ago

    I’ve been working retail for a long time and was thinking about learning to code, getting into data analysis or web dev. Idk that much about the tech world but just need a change.

    Is the tech bubble bursting now? Is it too late to start from scratch?

    • Bobby_DROP_TABLES [he/him]
      ·
      2 years ago

      In order for something like GPT to replace software engineers, clients would need to be capable of giving a detailed description of what they want. We're gonna be safe for a long time.

      • Hohsia [he/him]
        hexagon
        ·
        2 years ago

        Yeah, but the point (which I feel like many are missing) is that this is going to displace a lot of the pointless office work that makes up a large portion of the US workforce.

        That's the concerning part, because not everyone can be a developer.