• SSJ3Marx [he/him]
    ·
    3 hours ago

    Can't wait to see what happens when a program calls this function every clock cycle lmao

  • roux [he/him, comrade/them]
    ·
    4 hours ago

    This right here is giving me flashbacks to working with the dumbest people in existence in college, because I thought I was too dumb for CS and defected to Comp Info Systems.

  • keepcarrot [she/her]
    ·
    5 hours ago

    One of the things I've noticed is that there are people who earnestly take up CS as something they're interested in, but every time tech booms there's a sudden influx of people who would otherwise be B- marketing/business majors coming into computer science. Some of them even do ok, but holy shit do they say the most "I am trying to sell something and will make stuff up" things.

  • bdonvr@thelemmy.club
    ·
    6 hours ago

    Can we make a simulation of a CPU by replacing each transistor with an LLM instance?

    Sure it'll take the entire world's energy output but it'll be bazinga af

    • blame [they/them]
      ·
      5 hours ago

      why do addition when you can simply do 400 billion multiply-accumulates

    • citrussy_capybara [ze/hir]
      ·
      7 hours ago

      I'd just like to interject for a moment. What you're referring to as molochPlusAI, is in fact, GNU/molochPlusAI, or as I've recently taken to calling it, GNUplusMolochPlusAI.

  • WhyEssEff [she/her]
    ·
    edit-2
    8 hours ago

    let's add full seconds of latency to malloc, with a non-deterministic result. this is a great, amazing, awesome idea. it's not like we measure the processing speeds of computers in gigahertz or anything
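
    For scale, a rough sketch in C (a single timed call is noisy and numbers vary by machine and allocator, but the order of magnitude is the point): a small malloc finishes in nanoseconds, so a multi-second model query per allocation is roughly eight orders of magnitude of added latency.

    ```c
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main(void) {
        struct timespec t0, t1;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        void *p = malloc(64);                /* a typical small allocation */
        clock_gettime(CLOCK_MONOTONIC, &t1);
        free(p);

        long ns = (t1.tv_sec - t0.tv_sec) * 1000000000L
                + (t1.tv_nsec - t0.tv_nsec);
        printf("malloc(64): ~%ld ns\n", ns); /* typically tens of ns */
        return 0;
    }
    ```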

    • WhyEssEff [she/her]
      ·
      8 hours ago

      sorry, every element of this application is going to have to query a third-party server that might literally just undershoot the allocation, and now we have an overflow issue. oops oops oops whoops oh no oh fuck

      • WhyEssEff [she/her]
        ·
        edit-2
        8 hours ago

        want to run an application? better have internet, fucko. the idea guys have to burn down the Amazon rainforest to puzzle out the answer to the question of life, the universe, and everything: how many bits does a 32-bit integer need to have?

        • WhyEssEff [she/her]
          ·
          edit-2
          7 hours ago

          new memory leak just dropped: the geepeetee says the persistent element 'close button' needs a terabyte of RAM to render. the linear algebra homunculus said so, so we're crashing your computer, you fucking nerd

          • WhyEssEff [she/her]
            ·
            edit-2
            7 hours ago

            the way I kinda know this is the product of the C-suite and not a low-level software engineer is that the syntax is mallocPlusAI and not aimalloc or gptmalloc or llmalloc.

            • WhyEssEff [she/her]
              ·
              edit-2
              7 hours ago

              and it's malloc, why are we doing this for things we're ultimately just putting on the heap? overshoot a little: if you don't know the size already, it's not going to be perfect no matter what. if you're going to be this annoying about memory (which is not a bad thing) learn rust dipshit. they made a whole language about it

              • Llituro [he/him, they/them]
                ·
                7 hours ago

                if you're going to be this annoying about memory (which is not a bad thing) learn rust dipshit. they made a whole language about it

                holy fuck that's so good data-laughing

              • WhyEssEff [she/her]
                ·
                edit-2
                7 hours ago

                if they're proposing it as a C stdlib-adjacent method (given they're saying it should be an alternative to malloc [memory allocate]), it absolutely should be lowercase. plus is redundant, because you denote the extra functionality by concatenating it onto the original name. mallocai [memory allocate ai] feels wrong, so ai should come first.

                if this method idea weren't an abomination in and of itself, that's probably how it would be named. it currently looks straight out of Java. and at that point, why are we abbreviating malloc? why not go the distance and say largeLanguageModelQueryingMemoryAllocator
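
                To make the convention concrete, a sketch of how the declarations would plausibly read in C. every identifier here is hypothetical (the original pitch's actual signature isn't shown in this thread); this illustrates the naming logic, not an endorsement:

                ```c
                #include <stddef.h>

                /* hypothetical declarations following C stdlib naming:
                 * lowercase, new functionality prepended to the base name */
                void *aimalloc(size_t size);  /* "ai should come first" */
                void *llmalloc(size_t size);  /* the engineer-flavored guess */

                /* the pitch as written, which reads straight out of Java */
                void *mallocPlusAI(size_t size);
                ```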

                  • WhyEssEff [she/her]
                    ·
                    edit-2
                    7 hours ago

                    snake_case is valid, I’m just operating on the assumption that these rubes want it to be stdlib-adjacent

                    • T34_69 [none/use name]
                      ·
                      3 hours ago

                      Well come to think of it, we did find out that certain snakes are standard lib-adjacent in disguise warren-snake-green

  • kleeon [he/him]
    ·
    edit-2
    8 hours ago

    modern CS is taking a perfectly functional algorithm and making it a million times slower for no reason

    • bumpusoot [any]
      ·
      edit-2
      5 hours ago

      Given a small allocation takes only a few hundred cycles and this would take at minimum a few seconds, it's probably tens of millions to hundreds of millions of times slower. rough numbers below.
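
      Back-of-the-envelope in C, with all three inputs assumed (~3 GHz clock, ~300 cycles per small allocation, ~3 s per model round trip):

      ```c
      #include <stdio.h>

      int main(void) {
          /* all figures are assumptions for a rough estimate */
          double clock_hz      = 3e9;  /* ~3 GHz CPU */
          double malloc_cycles = 300;  /* a few hundred cycles per small malloc */
          double llm_seconds   = 3.0;  /* one model round trip */

          double malloc_seconds = malloc_cycles / clock_hz;     /* ~100 ns */
          double slowdown       = llm_seconds / malloc_seconds; /* ~3e7 */

          printf("malloc ~%.0f ns vs LLM %.1f s: ~%.0ex slower\n",
                 malloc_seconds * 1e9, llm_seconds, slowdown);
          return 0;
      }
      ```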

  • Barx [none/use name]
    ·
    9 hours ago

    Uncritical support for these AI bros refusing to learn CS and thereby making the CS nerds that actually know stuff more employable.

  • regul [any]
    ·
    9 hours ago

    walter-yell "INTEGER SIZE DEPENDS ON ARCHITECTURE!"

    • blame [they/them]
      ·
      5 hours ago

      idk why boomers decided to call integers int/short/word/double word/long/long long

      you ever heard of numbers? you think maybe a number might be a little more descriptive than playing "guess how wide i am?"

      yet another thing boomers ruined
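
      For the record, C99's <stdint.h> conceded the point: the fixed-width types put the number right in the name.

      ```c
      #include <stdint.h>
      #include <stdio.h>

      int main(void) {
          int8_t   a = 0; /* exactly 8 bits */
          int32_t  b = 0; /* exactly 32 bits */
          uint64_t c = 0; /* exactly 64 bits, unsigned */

          /* the width is in the name, no "guess how wide i am" */
          printf("%zu %zu %zu\n", sizeof a, sizeof b, sizeof c); /* 1 4 8 */
          return 0;
      }
      ```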

  • FunkyStuff [he/him]
    ·
    9 hours ago

    This is simply revolutionary. I think once OpenAI adopts this in their own codebase and all queries to ChatGPT cause millions of recursive queries to ChatGPT, we will finally reach the singularity.

    • hexaflexagonbear [he/him]
      ·
      9 hours ago

      There was a paper about improving LLM arithmetic a while back (spoiler: its accuracy outside the training set is... less than 100%) and I was giggling at the thought of AI getting worse for the unexpected reason that it uses an LLM for matrix multiplication.

      • FunkyStuff [he/him]
        ·
        9 hours ago

        Yeah lol this is a weakness of LLMs that's been very apparent since their inception. I have to wonder how different they'd be if they had the capacity to stop using the LLM as the output for a second, switch to a deterministic algorithm to handle anything logical or arithmetical, then feed the result back to the LLM.
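
        A minimal sketch of that hand-off in C (try_arithmetic and the call_llm stub are hypothetical names; this is just the shape of the idea): parse trivial "a op b" arithmetic and answer it deterministically, falling back to the model for everything else.

        ```c
        #include <stdio.h>

        /* hypothetical stub: a real system would query the model here */
        static const char *call_llm(const char *prompt) {
            (void)prompt;
            return "<free-text model answer>";
        }

        /* handle "a <op> b" deterministically; returns 1 on success */
        static int try_arithmetic(const char *q, double *out) {
            double a, b;
            char op;
            if (sscanf(q, "%lf %c %lf", &a, &op, &b) != 3)
                return 0;
            switch (op) {
                case '+': *out = a + b; return 1;
                case '-': *out = a - b; return 1;
                case '*': *out = a * b; return 1;
                case '/': if (b == 0) return 0; *out = a / b; return 1;
                default:  return 0;
            }
        }

        int main(void) {
            const char *query = "12 * 34";
            double result;

            if (try_arithmetic(query, &result))
                printf("deterministic path: %g\n", result); /* 408 */
            else
                printf("model path: %s\n", call_llm(query));
            return 0;
        }
        ```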

        • nightshade [they/them]
          ·
          edit-2
          7 hours ago

          I'm pretty sure some of the newer ChatGPT-like products (the consumer-facing interface, not the raw LLM) do in fact do this. They try to detect certain types of inputs (e.g. math problems or requests for the current weather), convert them to an API request to some other service, and return that result instead of an LLM output. Frankly it comes across to me as an attempt to make the "AI" seem smarter than it really is by covering up its weaknesses.

          • FunkyStuff [he/him]
            ·
            7 hours ago

            Yeah, Siri has been capable of doing that for a long time, but my actual hope would be that, rather than just handing the user the API response, the LLM could keep operating on that response and do more with it, composing several API calls. But that's probably prohibitively expensive to train, since you'd have to do it billions of times to get the plagiarism machine to learn how to delegate work to an API properly.

    • WhatDoYouMeanPodcast [comrade/them]
      ·
      8 hours ago

      bit idea: the singularity but the singularity just crushes us with the colossal pressure past the event horizon of a black hole.

  • DefinitelyNotAPhone [he/him]
    ·
    8 hours ago

    My guy, if you don't want to learn malloc, just learn Rust instead of making every basic function of 99% of electronics take literal seconds.