You are allowed to comment if you absolutely hate AI, or love it. If you think it is overrated or underrated, ok (although I think it's too early to say what the consensus even is to know whether it is overrated/underrated). But if you think it is just a scam, gimmick, grift, etc, I don't need to hear from you right now :soviet-heart:

Let the discussion begin:

So it's clear there's this big alignment debate going on rn. Regardless of where you stand, isn't it fucked that there's a cabal of the biggest freaks money has ever produced debating the future of humanity with zero input from normal society?

Even if it isn't actually humanity's future at stake, they think it is. There are probably like 100 people in the world paid to work on alignment. How can you not develop a megalomania complex?

What kind of chatter are you hearing about AI?

I very occasionally hear people irl obliquely mention AI. A cashier said something like, "oh, that AI stuff, that's pretty scary." That's about it.

Now the blogs I follow have been basically colonized by AI news. These aren't even strictly technology blogs. I started following David Brin for UFO takes, I started following Erik Hoel for neuroscience takes. Literally everyone I follow has published takes on AI and zero of them dismiss it out of hand.

Sorry this will get long.

I basically feel like we are in another version of the post nuclear age except only insiders know it. After the first A-bomb, everyone knew the world was different starting the very next day. Now only designated take-havers are aware of this new reality.

Or regular folks are aware of it but they're so disempowered from having a say that they only engage with the realization online like I'm doing now. Medicare for all is Bernie's thing. The border is Trump's. Even if nothing will ever be done about healthcare, the fact that Bernie talks about it justifies you thinking about it. AI isn't any politician's thing.

I'd put the odds of a nuclear war around 1% a year. I'd say there's a 1% chance AI can be as world-ending as that. That's such a low number that it doesn't feel like "AI doomerism". But 1% multiplied by however much we value civilization is still a godalmighty risk.

When I've heard this site talk about it, it's usually in the context of "holy shit, this AI art is garbage compared to real art. Where's the love, where's the soul?" If it were 1945 and we'd just nuked a city, would you be concerned with trying to figure out what postmodernism would look like?

Usually when I've gotten to the end of my post I delete it.

    • UlyssesT [he/him]
      ·
      1 year ago

So what we’re going to end up getting is a bunch of very powerful math solvers with none of the intelligence embedded in them. It cannot reason. It cannot solve problems with creativity and innovation. It cannot think outside the box. It doesn’t understand causality. It cannot replace real human professionals in making critical decisions, for example, performing medical diagnoses. And above all, it is prone to error and cannot use reasoning to self-correct.

      Maybe that's for the best with :porky-happy: in command of technology.

      Still, it's irritating how the "AI" marketing label has stuck so successfully. It makes a lot of people really, really "ITS HAPPENING" about the term.

    • blobjim [he/him]
      ·
      1 year ago

      Machine learning is clearly an achievable thing. Making some kind of "human intelligence" software is still an unknown, something even a supercomputer might not be able to support (think how much power and time went into just the big neural networks we hear about right now). Organizations focus on things that are actually achievable and realistic.

      Plus actual "intelligence" would just be used for slavery or war or some other awful thing anyways. What is even the point of making an artificial human brain?

      • UlyssesT [he/him]
        ·
        1 year ago

        What is even the point of making an artificial human brain?

        :porky-happy: wants exceptionally efficient and powerful slaves that are "friendly" and by that they mean unable to fight back.

    • meth_dragon [none/use name]
      ·
      edit-2
      1 year ago

      what will happen is that people will start buying into the AI hype after only engaging with it at a very superficial level, and then you'll have entire disciplines getting capital-R Rationalized (like cogsci, 'bayesian brains' jfc) and it'll take fucking decades to dig ourselves out of that hole once we hit rock bottom and figure out that brute forcing a square peg through a round hole doesn't actually qualitatively solve anything. but in the meantime it'll do a phenomenal job of taking up all the money and oxygen in the room.

      current incarnation of AI is to computer science what neoclassical is to economics