• conditional_soup@lemm.ee
    ·
    1 year ago

    Noooooo, not like that! Automation is only for people who didn't go to Harvard!

    The funny thing is, the only barrier here is context size. Right now, LLMs have laughably bad context size (or attention spans, in human terms, it's basically how much information a Brian or model can keep active at any point in time) compared to humans, but that's going to change. It's not difficult to foresee a near future of LLMs with very, very, superhumanly large context sizes that could make human leadership seem ridiculously incompetent in comparison. Here's the thing, pyramid-like organizational structures are extremely common because we necessarily have layers of abstraction; the head of the organization can't do their job effectively if they're worried about whether Bob the Welder is going to make it in on time or if that invoice has got paid yet; likewise, Bob the Welder can't do his job if he's getting pulled off work to go sit in marketing meetings all day. There's only so much attention any one person can give in a day. The biggest problem is that information gets lost between these layers of abstraction, values don't necessarily remain consistent, and policies and practices aren't uniformly applicable, which can make it difficult for customers and even employees to navigate the normal processes of an organization, let alone the abnormal ones.
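    The context-budget idea above can be sketched in a few lines. This is a toy illustration, not any real model's API: it counts words instead of tokens, and `fit_context` and the tiny budget are made up for the example. The point is just that once the budget is exhausted, everything older is simply forgotten.

```python
# Minimal sketch of the "context size" bottleneck described above.
# Toy assumption: cost is measured in words (real models count tokens).

def fit_context(messages, budget_words=8):
    """Keep the most recent messages that fit the budget; drop the rest."""
    kept, used = [], 0
    for msg in reversed(messages):  # walk newest-first
        cost = len(msg.split())
        if used + cost > budget_words:
            break                   # everything older falls out of "attention"
        kept.append(msg)
        used += cost
    return list(reversed(kept))     # restore chronological order

history = [
    "Bob the Welder clocked in late",
    "invoice 42 was paid",
    "marketing meeting moved to 3pm",
]
print(fit_context(history, budget_words=8))
```

    With a budget of 8 words only the newest message survives; a "superhuman" context is just this budget grown large enough that nothing has to be dropped.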

    As LLM context sizes reach superhuman levels, it's conceivable that they could end up flattening organizational structures by being able to be both Bob's supervisor and the CEO (or at least the CEO's assistant), and being able to keep all of the organization's context, down to the individual employee and customer needs, in mind at all times when making decisions. A government or corporation run by a properly aligned super-context AI could possibly be the closest thing we're going to get to utopian leadership, and would likely be both more ethical and more effective than human leadership.

    • Nakoichi [they/them]M
      ·
      edit-2
      1 year ago

      how much information a Brian or model can keep active

      Yeah fuck Brian.

      would likely be both more ethical and more effective than human leadership.

      Here's where you are wrong and I have something you should listen to to understand why.

      https://soundcloud.com/thismachinekillspod/281-the-smoking-gun-of-techno-capitalism-ft-meredith-whittaker

      Here's the specific article on why this is a utopian pipedream and why the reality under capitalism is much different and much scarier.

      https://logicmag.io/supa-dupa-skies/origin-stories-plantations-computers-and-industrial-control/

      • conditional_soup@lemm.ee
        ·
        1 year ago

        Good response, and thanks for bringing receipts. I'd love to read this a little later. Imo, though, large language models and generative AI in particular represent the capacity to make the means of production free and open source. True, freely available models that you could run on a gaming computer don't hold a candle to ChatGPT yet, but I do suspect that this will change as the emphasis in AI research pivots towards making models more efficient. It's also true that if a general AI is developed, it probably won't be FOSS, though a FOSS general AI honestly isn't the worst idea.

        With respect to your article on Babbage, I'd like to point out that much of the leadership in AI right now has been leading with the idea that any AI must follow the 3 Hs: Honest, Harmless, and Helpful. I think it's more than just hype, because they're currently burning a lot of cash hiring teams whose whole job is to make sure that we get alignment (that is, constraining a potential super-intelligence with ethical values rather than allowing it to become a paperclip maximizer) correct. To be quite frank, there are a lot of MBAs out there who could stand to pick up those 3 Hs.

        • combat_brandonism [they/them]
          ·
          1 year ago

          Imo, though, large language models and generative AI in particular represent the capacity to make the means of production free and open source.

          I remember left-sympathetic cryptobros saying the same thing about cryptocurrencies for the last decade.

          • conditional_soup@lemm.ee
            ·
            1 year ago

            I really never saw the value proposition with crypto, besides it being digital cash.

            A key difference is that generative AI actually can and already does produce value as a means of production. Tons of folks use ChatGPT to save hours on their workflows; I'm a programmer and it's probably saved me days of work, and I'm far from an edge case. Imo, the most telling thing is that a lot of the major AI companies are begging Congress to pull the ladder up behind them so that you can only develop AI if your market cap is at least this high; I think some of them are worried that decentralized, FOSS AIs will seriously erode their value propositions, and I think their suspicions are correct.

    • SoyViking [he/him]
      ·
      1 year ago

      The problems facing the world today do not come from leaders having too short attention spans or inadequate access to information. The problems come from these rulers representing bourgeois rather than proletarian interests. No amount of bazinga is going to overcome class conflict and make the dictatorship of the bourgeoisie make decisions that benefit the masses.

      • conditional_soup@lemm.ee
        ·
        1 year ago

        It's possible that if giant-context models are freely available, flat-structured organizations run by AI could outcompete less agile pyramid-structured ones. We could see the bourgeoisie hoist with their own petard.