• NuraShiny [any]
      ·
      3 months ago

      Disagree. The technology will never yield AGI as all it does is remix a huge field of data without even knowing what that data functionally says.

      All it can do now and ever will do is destroy the environment by using oodles of energy, just so some fucker can generate a boring big titty goth pinup with weird hands and weirder feet. Feeding it exponentially more energy will do what? Reduce the number of fingers and the foot weirdness? Great. That is so worth squandering our dwindling resources on.

      • PM_ME_VINTAGE_30S [he/him]@lemmy.sdf.org
        ·
        3 months ago

        > Disagree. The technology will never yield AGI as all it does is remix a huge field of data without even knowing what that data functionally says.

        We definitely don't need AGI for AI technologies to be useful. AI, particularly reinforcement learning, is great for teaching robots to do complex tasks, for example. LLMs have a shocking ability, relative to other approaches (if limited compared to humans), to generalize to "nearby but different enough" tasks. And once they're trained (and possibly quantized), they (LLMs and reinforcement learning policies) don't require much more power to run than traditional algorithms. So IMO, the question should be "is it worthwhile to spend the energy to train X thing?" Unfortunately, the capitalists have been the ones answering that question, because they can do so at our expense.
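        To make the "quantized" point concrete, here's a toy sketch in Python. The symmetric int8 scheme and all the numbers are purely illustrative, not any particular library's method:

```python
# Toy post-training quantization sketch: map float32 weights to int8.
# Numbers are made up for illustration; real quantizers (per-channel
# scales, calibration data, etc.) are more involved.

def quantize(weights):
    """Symmetric 8-bit quantization: ints in [-127, 127] plus one scale."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [qi * scale for qi in q]

weights = [0.82, -1.31, 0.05, 2.47, -0.66]
q, scale = quantize(weights)
restored = dequantize(q, scale)

# The round trip loses at most half a quantization step per weight,
# while storage drops from four bytes per weight to one.
max_err = max(abs(w - r) for w, r in zip(weights, restored))
print(max_err <= scale / 2 + 1e-12)  # True
```

        Real quantizers are fancier, but the memory math is the same: one byte per weight instead of four, with a bounded approximation error.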

        For a person without access to big computing resources (me lol), there's also the fact that transfer learning is possible for both LLMs and reinforcement learning. Easiest way to explain transfer learning is this: imagine that I want to learn Engineering, Physics, Chemistry, and Computer Science. What should I learn first so that each subject is easy for me to pick up? My answer would be Math. So in AI speak, if we spend a ton of energy to train an AI to do math and then fine-tune agents to do Physics, Engineering, etc., we can avoid training all the agents from scratch. Fine-tuning can typically be done on "normal" computers with FOSS tools.
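        The same idea in a deliberately tiny Python sketch (the "backbone"/"head" split and all the numbers are made up for illustration): pretrain a shared parameter on one task, freeze it, then fit only a cheap head for a related task instead of retraining from scratch:

```python
# Toy transfer-learning sketch. The single weight w plays the role of
# the expensive shared backbone; h is the cheap task-specific head.

def pretrain_shared(xs, ys, lr=0.01, steps=2000):
    """Learn y ~ w * x by gradient descent (the "expensive" step)."""
    w = 0.0
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad
    return w

def finetune_head(w_frozen, xs, ys):
    """Cheap fine-tune: closed-form least squares for the head h in
    y ~ h * (w_frozen * x), with the backbone frozen."""
    feats = [w_frozen * x for x in xs]
    return sum(f * y for f, y in zip(feats, ys)) / sum(f * f for f in feats)

# Task A ("math"): y = 3x.  Task B ("physics"): y = 6x, related to A.
xs = [1.0, 2.0, 3.0, 4.0]
w = pretrain_shared(xs, [3 * x for x in xs])   # done once, up front
h = finetune_head(w, xs, [6 * x for x in xs])  # cheap per-task step

print(abs(w * h - 6.0) < 1e-3)  # True: combined model ~ y = 6x
```

        The gradient-descent step happens once; each new task only needs the cheap head fit, which is the whole appeal of transfer learning.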

        > all it does is remix a huge field of data without even knowing what that data functionally says.

        IMO that can be an incredibly useful approach for solving problems whose dynamics are too complex to reasonably model, with the understanding that the obtained solution is a crude approximation to the underlying dynamics.

        IMO I'm waiting for the bubble to burst so that AI can be just another tool in my engineering toolkit instead of the capitalists' newest plaything.

        Sorry about the essay. I really think that AI tools have a huge potential to make life better for us all, but obviously a much greater potential for capitalists to destroy us all, so long as we don't understand these tools and use them against the powerful.

        • NuraShiny [any]
          ·
          3 months ago

          Since I don't feel like arguing, I will grant you that you are correct in what you say AI can do. I'm not really convinced, but whatever, say it can:

          How will these reasonable AI tools emerge out of this under capitalism? And how is it not all still just theft with extra steps that is immoral to use?

          • PM_ME_VINTAGE_30S [he/him]@lemmy.sdf.org
            ·
            3 months ago

            > Since I don't feel like arguing

            I'll try to keep this short then.

            > How will these reasonable AI tools emerge out of this under capitalism?

            How does any technology ever see use outside of oppressive structures? By understanding it and putting it to work on liberatory goals.

            I think that crucial to working with AI is that, as it stands, the need for expensive hardware to train it makes it currently a centralizing technology. However, there are things we can do to combat that. For example, the AI Horde offers distributed computing for AI applications.

            > And how is it not all still just theft with extra steps that is immoral to use?

            We gotta find datasets that are ethically collected. As a practitioner, that means not using data for training unless you are certain it wasn't stolen. To be completely honest, I am quite skeptical of the ethics of the datasets that the popular AI products were trained on. Hence why I refuse to use those products.

            Personally, I'm a lot more interested in the applications to robotics and industrial automation than generating anime tiddies and building chat bots. Like I'm not looking to convince you that these tools are "intelligent", merely useful. In a similar vein, PID controllers are not "smart" at all, but they are the backbone of industrial automation. (Actually, a proven use for "AI" algorithms is to make an adaptive PID controller, so that it can respond to changes in the plant over time.)
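            To show how simple the "not smart" backbone really is, here's a bare-bones PID controller in Python driving a toy first-order plant to a setpoint. The gains and plant constants are made up for illustration:

```python
# Minimal PID controller plus a toy first-order plant.
# Gains and plant constants are illustrative, not tuned for any real system.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy plant: a temperature that decays toward 0 but responds to heater input u.
dt = 0.1
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=dt)
temp, setpoint = 0.0, 50.0
for _ in range(1000):
    u = pid.update(setpoint, temp)
    temp += (-0.1 * temp + u) * dt   # Euler step of first-order dynamics

print(abs(temp - setpoint) < 0.5)  # True: settles at the setpoint
```

            The integral term is what drives the steady-state error to zero; an "adaptive" version would re-tune kp/ki/kd online as the plant drifts.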

            • NuraShiny [any]
              ·
              3 months ago

              These datasets do not exist, you got that right.

              I highly doubt there is much AI deep learning needed to keep a robot arm's PIDs accurate. That seems like something a regular old algorithm can do.

              • PM_ME_VINTAGE_30S [he/him]@lemmy.sdf.org
                ·
                edit-2
                3 months ago

                A deep neural adaptive PID controller would be a bit overkill for a simple robot arm, but for say a flexible-link robot arm it could prove useful. They can also work as part of the controller for systems governed by partial differential equations, like in fluid dynamics. They're also great for system identification, the results of which might indicate that the ultimate controller should be some "boring" algorithm.
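                As a toy illustration of system identification, this Python sketch recovers the parameters of a first-order plant from input/output data. I'm using linear least squares so the example stays exact; for a nonlinear plant, a neural network would play this role:

```python
# Toy system identification: recover the unknown parameters of a
# first-order plant x[k+1] = a*x[k] + b*u[k] from input/output data
# via least squares. Parameter values are made up for illustration.

a_true, b_true = 0.9, 0.5          # unknown to the "identifier"
us = [1.0, -0.5, 0.8, 0.3, -1.0, 0.6, 0.2, -0.7]

# Simulate the plant to collect data.
xs = [0.0]
for u in us:
    xs.append(a_true * xs[-1] + b_true * u)

# Least squares for [a, b]: solve the 2x2 normal equations by hand.
Saa = sum(x * x for x in xs[:-1])
Sab = sum(x * u for x, u in zip(xs[:-1], us))
Sbb = sum(u * u for u in us)
Say = sum(x * y for x, y in zip(xs[:-1], xs[1:]))
Sby = sum(u * y for u, y in zip(us, xs[1:]))
det = Saa * Sbb - Sab * Sab
a_hat = (Say * Sbb - Sby * Sab) / det
b_hat = (Sby * Saa - Say * Sab) / det

# Noise-free data from a linear plant is recovered exactly.
print(abs(a_hat - a_true) < 1e-9, abs(b_hat - b_true) < 1e-9)
```

                Once you have a_hat and b_hat, the controller tuning can fall back to a "boring" textbook rule; the learning part is only in the identification.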

      • daniskarma@lemmy.dbzer0.com
        ·
        3 months ago

        Idk. I find it a great coding help. IMO AI tech has legitimate good uses.

        Image generation also has great uses without falling into porn. It enables people who don't know how to paint to make some art.

        • NuraShiny [any]
          ·
          3 months ago

          Wow, great, the AI is here to defend itself. Working about as well as you'd think.

          • daniskarma@lemmy.dbzer0.com
            ·
            edit-2
            3 months ago

            What?

            I really don't know what's going on with the anti-AI people. But it's getting pretty similar to any other denialism, anti-science, anti-progress... Completely irrational and radicalized.

            • NuraShiny [any]
              ·
              3 months ago

              Sorry to hurt your fefes, but I don't like theft, and that is what AI content ALL is. How does it "know" how to program? Code stolen from humans. How does it speak? Words stolen from humans. How does it draw? Art stolen from humans.

              Until this shit stops being built on a mountain of stolen data and stolen livelihoods, the argument is over. I don't care if you like stealing money from artists so that you can pretend you had any creative input into an AIs art output. You're stealing the work of normal people and think it's okay because it was already stolen once before by the billionaires who are now selling it to you.

                • NuraShiny [any]
                  ·
                  3 months ago

                  Oh right, we live under communism, where everyone's needs are cared for. My bad

                  Oh wait, we aren't and you are just a shithead who, once again, wants to tell me that stealing from other workers is good.

                  • daniskarma@lemmy.dbzer0.com
                    ·
                    edit-2
                    3 months ago

                    How can something be stolen if no one took anything from you?

                    Same as piracy is not stealing. Training AI models is not stealing. Sharing is caring.

                    If you don't get paid enough, go ask your boss why he makes much more money than you.

                    • NuraShiny [any]
                      ·
                      3 months ago

                      Yes, please apply the logic of stealing from large multi-national corporations to individual artists. Sterling logic.

                      I know why my boss makes more money than me. Because he is my enemy in a class war.

                      If any of these AI models draws art that is slightly too close to looking like Mickey Mouse the Disney corporation is sharpening the lawyer axe. I wonder why. But sharing is caring, right? Why would they do that?

                      Oh right, because they want to decide what their intellectual property is used for. A right that wasn't afforded to basically every single artist whose stuff was used to train these models. These artists often rely directly on selling their art for their daily survival. Maybe they would have liked some money to sell their art for this purpose? Maybe they didn't want to sell it at all? Doesn't matter, they weren't asked. If you don't have an army of lawyers, the corporations will do as they like. Which is why Disney is safe, while normal artists are fucked and weren't even asked in what hole they would like it before they were.

                      So shut the fuck up about "sharing is caring"; it's easy to say that when you are the one taking advantage. I don't know what field you work in, but I hope you lose your job to a robot that they trained on recordings of your work. You can tell me then how good it feels to share your skills.

    • kibiz0r@midwest.social
      ·
      3 months ago

      Considering most new technology these days is merely a distillation of the ethos of the big corporations, how do you distinguish?

      • daniskarma@lemmy.dbzer0.com
        ·
        3 months ago

        Not true though.

        Current generative AI has its basis in the work of Frank Rosenblatt and other scientists, mostly in universities.

        Big corporations made an implementation, but the science behind it already existed. It was not created by those corporations.

  • ☆ Yσɠƚԋσʂ ☆@lemmy.ml
    ·
    3 months ago

    The root problem is capitalism though, if it wasn't AI it would be some other idiotic scheme like cryptocurrency that would be wasting energy instead. The problem is with the system as opposed to technology.

    • kibiz0r@midwest.social
      ·
      3 months ago

      Right, but the technology has the system’s philosophy baked into it. All inventions encourage a certain way of seeing the world. It’s not a coincidence that agriculture yields land ownership, mass production yields wage labor, or in this case fuzzy plagiarism machines yield a transhuman death cult.

  • kibiz0r@midwest.social
    ·
    edit-2
    3 months ago

    It’s wild how we went from…

    Critics: “Crypto is an energy hog and its main use case is a convoluted pyramid scheme”

    Boosters: “Bro trust me bro, there are legit use cases and energy consumption has already been reduced in several prototype implementations”

    …to…

    Critics: “AI is an energy hog and its main use case is a convoluted labor exploitation scheme”

    Boosters: “Bro trust me bro, there are legit use cases and energy consumption has already been reduced in several prototype implementations”

      • interdimensionalmeme@lemmy.ml
        ·
        3 months ago

        The problem is the concentration of power, Sam "regulate me daddy" Altman's plan is to get the government to create a web of regulation that makes it so only the big tech giants have access to the uncensored models.

        • PolandIsAStateOfMind@lemmy.ml
          ·
          3 months ago

          Of course, as usual with capitalism and basically everything: we had hoped to receive a tool that makes expressing themselves easy for workers who lack the time and training to do art, and instead we will get super-expensive proprietary software and monopolies, quite possibly gatekept by law. Again, just as in software, some hope is in open source.

    • daniskarma@lemmy.dbzer0.com
      ·
      3 months ago

      Nowadays you can actually get a semi-decent chatbot running on an N100 that consumes next to nothing even at full load.

    • _NoName_@lemmy.ml
      ·
      3 months ago

      Miles is chill in my book. I appreciate what he is tackling, and hope he continues.

      It seems that there are much worse issues with AI systems happening right now. I think those issues should take precedence over the alignment problem.

      Some of the issues are bad enough right now that AI development and use should be banned for a limited time frame (at least 5 years) while we figure out more ethical ways of doing it. The fact that we aren't doing that is a massive failure of our already constantly-fucking-up governments.