• mittens [he/him]
    ·
    edit-2
    1 year ago

    it's because it's insanely expensive to run the entire billion-parameter model every single time lol.

    since the most damning results came from GPT-4, you know, the premium one you have to pay for, I can only assume that OpenAI was actually losing money per request and is now scrambling to curb its losses. means that perhaps maybe running an ultra-complex LLM is not really cheaper than hiring an actual person? perhaps maybe the costs of operation were subsidized by VCs in order to make LLMs seem more appealing than they actually are? perish the thought.

    • usernamesaredifficul [he/him]
      ·
      1 year ago

      means that perhaps maybe running an ultra-complex LLM is not really cheaper than hiring an actual person?

      especially since it is almost always wrong

      • Hive [none/use name]
        ·
        1 year ago

        I think you're also very correct. They will basically force the LLMs on everyone without it being that much more profitable, especially if the LLMs keep eating their own outputs.

        • usernamesaredifficul [he/him]
          ·
          edit-2
          1 year ago

          it's like CGI: once you've spent the upfront cost, it's basically free. The fact that it doesn't work is, like the fact that CGI looks lame, an annoying irrelevance

          after all, nothing's worked since the '80s anyway. These days are like a Thatcherite 1970s

    • Parzivus [any]
      ·
      1 year ago

      I think it probably is still cheaper; the expensive part is development. AI companies right now are basically in a race to make a useful AI before the VC funding runs out. The big developments will probably come from slow and steady university research, as always

      • mittens [he/him]
        ·
        1 year ago

        I mean, the issue here is that it's regressing, so there are two things that may be happening here:

        1. (My guess) They're pruning the model to make it leaner, which definitely does imply that it's expensive to run

        2. New data being added to the model is too biased and it's making the model perform worse, which implies that it's going to be very, very expensive to gather quality data to improve the model
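
        For anyone curious what "pruning" in point 1 means concretely: the textbook version is magnitude pruning, which zeroes out the lowest-magnitude weights so the model is cheaper to store and serve. A toy numpy sketch of the idea (purely illustrative, not anything OpenAI has confirmed doing):

        ```python
        import numpy as np

        def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
            """Zero out the `sparsity` fraction of weights with the smallest magnitude."""
            flat = np.abs(weights).ravel()
            k = int(len(flat) * sparsity)
            if k == 0:
                return weights.copy()
            threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
            pruned = weights.copy()
            pruned[np.abs(pruned) <= threshold] = 0.0
            return pruned

        rng = np.random.default_rng(0)
        w = rng.normal(size=(64, 64))
        pruned = magnitude_prune(w, 0.5)  # roughly half the weights become zero
        ```

        A sparser weight matrix compresses better and, with the right kernels, runs faster, which is exactly the "make it leaner because it's expensive to run" motive.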

        • underisk [none/use name]
          ·
          1 year ago

          They could be trying to prune the training corpus of copyrighted works to get ahead of any potential legal conflicts. It's also possible their training corpus has been tainted with stuff that was generated by AI.
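
          The second concern is the "model collapse" problem, and the crude countermeasure people actually apply is embarrassingly simple: string-match telltale chatbot boilerplate and drop those documents. A hypothetical sketch (the phrase list is made up for illustration, not any lab's real pipeline):

          ```python
          # Telltale phrases are illustrative, not an actual filter list.
          AI_TELLS = (
              "as an ai language model",
              "i cannot fulfill that request",
          )

          def looks_ai_generated(doc: str) -> bool:
              """Crude heuristic: flag documents containing chatbot boilerplate."""
              lowered = doc.lower()
              return any(phrase in lowered for phrase in AI_TELLS)

          corpus = [
              "The mitochondria is the powerhouse of the cell.",
              "As an AI language model, I cannot provide that.",
          ]
          cleaned = [doc for doc in corpus if not looks_ai_generated(doc)]
          ```

          Obviously this only catches the lazy copy-pastes; AI-generated text without those tells sails right through, which is why a tainted corpus is so hard to clean.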

          • mittens [he/him]
            ·
            1 year ago

            The first sounds plausible but I was definitely leaning towards the second

          • Hive [none/use name]
            ·
            1 year ago

            Fucking bingo, you get it, you get a medal of Assignment Understander, gold-communist. They absolutely are.

        • Hive [none/use name]
          ·
          1 year ago

          There is still quite a bit of fat to trim on these LLMs, though it might be early to tell. But yeah, profitability is lower than expected: they poured $1 trillion in over the last 7-8 months. It's a program that unemploys people, so how could it ever be good for the economy? Side note: it is a real productivity upgrade, the kind we really haven't gotten in 30 years, but it also seems to be a tech that is dead-ended.