Literally just mainlining marketing material straight into whatever’s left of their rotting brains.

  • MerryChristmas [any]
    1 year ago

    He may be a sucker but at least he is engaging with the topic. The sheer lack of curiosity toward so-called "artificial intelligence" here on hexbear is just as frustrating as any of the bazinga takes on reddit. No material analysis, no good faith discussion, no strategy to liberate these tools in service of the proletariat - just the occasional dunk post and an endless stream of the same snide remarks from the usuals.

    The hexbear party line toward LLMs and similar technologies is straight up reactionary. If we don't look for ways to utilize, subvert and counter these technologies while they're still in their infancy then these dorks are going to be the only ones who know how to use them. And if we don't interact with the underlying philosophical questions concerning sentience and consciousness, those same dorks will also have control of the narrative.

    Are we just content to hand over a new means of production and information warfare to the technophile neo-feudalists of Silicon Valley with zero resistance? Yes, apparently, and it is so much more disappointing than seeing the target demographic of a marketing stunt buy into that marketing stunt.

    • Dirt_Owl [comrade/them, they/them]
      1 year ago

      The sheer lack of curiosity toward so-called "artificial intelligence" here on hexbear is just as frustrating

      That's because it's not artificial intelligence. It's marketing.

    • VILenin [he/him]
      hexagon
      M
      1 year ago

      Oh my god it’s this post again.

      No, LLMs are not “AI”. No, mocking these people is not “reactionary”. No, cloaking your personal stance in leftist language doesn’t make it any more correct. No, they are not on the verge of developing superhuman AI.

      And if we don't interact with the underlying philosophical questions concerning sentience and consciousness, those same dorks will also have control of the narrative.

      Have you read like, anything at all in this thread? There is no way you can possibly say no one here is “interacting with the underlying philosophical questions” in good faith. There’s plenty of discussion, you just disagree with it.

      Are we just content to hand over a new means of production and information warfare to the technophile neo-feudalists of Silicon Valley with zero resistance? Yes, apparently, and it is so much more disappointing than seeing the target demographic of a marketing stunt buy into that marketing stunt.

      What the fuck are you talking about? We’re “handing it over to them” because we don’t take their word at face value? Like nobody here has been extremely opposed to the usage of “AI” to undermine working class power? This is bad faith bullshit and you know it.

      • UlyssesT
        2 months ago

        deleted by creator

    • Tachanka [comrade/them]
      1 year ago

      The hexbear party line toward LLMs

      this is a shitposting reddit clone, not a political party, but I generally agree that people on here sometimes veer into neo-ludditism and forget Marx's words with respect to stuff like this:

      The enormous destruction of machinery that occurred in the English manufacturing districts during the first 15 years of this century, chiefly caused by the employment of the power-loom, and known as the Luddite movement, gave the anti-Jacobin governments of a Sidmouth, a Castlereagh, and the like, a pretext for the most reactionary and forcible measures. It took both time and experience before the workpeople learnt to distinguish between machinery and its employment by capital, and to direct their attacks, not against the material instruments of production, but against the mode in which they are used.

      - Marx, Capital, Volume 1, Chapter 15

      However you have to take the context of these reactions into account. Silicon valley hucksters are constantly pushing LLMs etc. as miracle solutions for capitalists to get rid of workers, and the abuse of these technologies to violate people's privacy or fabricate audio/video evidence is only going to get worse. I don't think it's possible to put Pandora back in the box or to do bourgeois reformist legislation to fix this problem. I do think we need to seize the means of production instead of destroy them. But you need to agitate and organize in real life around this. Not come on here and tell people how misguided their dunk tank posts are lol.

      • VILenin [he/him]
        hexagon
        M
        1 year ago

        I think their position is heavily misguided at best. The question is whether AI is sentient or not. Obviously these tools are used against the working class, but that is a separate question from their purported sentience.

        Like, it’s totally possible to seize AI without believing in its sentience. You don’t have to believe the techbro woo to use their technology.

        We can both make use of LLMs ourselves while disbelieving in their sentience at the same time.

        Is that such a radical idea?

        We’re not saying that LLMs are useless and we shouldn’t try and make use of them, just that they’re not sentient. Nobody here is making that first point. Attacking the first point instead of the arguments that people are actually making is as textbook a case of strawmanning as I’ve ever seen.

    • UlyssesT
      2 months ago

      deleted by creator

    • Wheaties [she/her]
      1 year ago

      Are we just content to hand over a new means of production and information warfare to the technophile neo-feudalists of Silicon Valley with zero resistance? Yes, apparently, and it is so much more disappointing than seeing the target demographic of a marketing stunt buy into that marketing stunt.

      As it stands, the capitalists already have the old means of information warfare -- this tech represents an acceleration of existing trends, not the creation of something new. What do you want from this, exactly? Large language models that do predictive text -- but with filters installed by communists, rather than the PR arm of a company? That won't be nearly as convincing as just talking and organizing with people in real life.

      Besides, if it turns out there really is a transformational threat, that it represents some weird new means of production, it's still just a programme on a server. Computers are very, very fragile. I'm just not too worried about it.
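
      To make the "predictive text" point concrete, here is a toy sketch. To be clear, this is not how a real LLM works -- real models use neural networks trained on huge corpora, and the five-sentence corpus here is made up -- it just shows the shape of the idea: pick the statistically likely next word, nothing more.

```python
from collections import Counter, defaultdict

# Count which word follows which in a tiny made-up corpus.
corpus = "the cat sat on the mat the cat ate the fish".split()
follows = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    follows[cur][nxt] += 1

def predict(word):
    # Return the most frequent successor -- pure frequency lookup,
    # no "understanding" anywhere in sight.
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # "cat" -- it follows "the" twice, more than "mat" or "fish"
```

      Scale the counting up by many orders of magnitude and swap the table for a neural network, and you have the family of thing being marketed, which is the point being made here: it is a (very big) statistical machine, not a mind.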

    • GalaxyBrain [they/them]
      1 year ago

      It's not a new means of production, it's old as fuck. They just made a bigger one. The fuck is chat gpt or AI art going to do for communism? Automating creativity and killing the creative part is only interesting as a bad thing from a left perspective. It's dismissed because it deserves dismissal: there's no new technology here, it's a souped-up chatbot that's been marketed as something else.

      As far as machines being conscious, we are so far away from that as something to even consider. They aren't and can't spontaneously gain free will. It's inputs and outputs based on predetermined programming. Computers literally cannot do anything non-deterministic; there is no ghost in the machine, the machine is just really complex and you don't understand it entirely. If we get to the point where a robot could be seen as sentient we have fucking Star Trek TNG. They did the discussion and solved that shit.
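
      A minimal illustration of the determinism point: even a computer's "randomness" is a deterministic function of its seed. Same seed in, same outputs out, every single run.

```python
import random

def roll_dice(seed, n=5):
    # A pseudo-random number generator is just a fixed sequence of state
    # transitions -- seed it identically and it replays identically.
    rng = random.Random(seed)
    return [rng.randint(1, 6) for _ in range(n)]

# Two "independent" runs with the same seed are bit-for-bit identical.
print(roll_dice(42) == roll_dice(42))  # True
```

      The "randomness" is an illusion produced by complexity, which is the whole point: complexity is not consciousness.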

      • sooper_dooper_roofer [none/use name]
        1 year ago

        The fuck is chat gpt or AI art going to do for communism?

        I think AI art could be great but chatGPT as a concept of something that "knows everything" is very moronic

        AI art has the potential to let random schmucks make their own cartoons if they put in just a little bit of work. However, this will probably require a license fee or something, so you're probably right

        Personally I would love to see well-made cartoons about Indonesian mythology and stuff like that, which will never ever be made in the west (or Indonesia until it becomes as rich as China at least) so AI art is the best chance at that

        • GalaxyBrain [they/them]
          1 year ago

          Okay, but the only reason that AI art could help that is because Indonesian mythology doesn't have the marketability for a budget and real artists, because of capitalism. It doesn't subvert the commodification of art.

          • sooper_dooper_roofer [none/use name]
            1 year ago

            Yeah, and as long as we're living in that capitalistic hellworld, AI art existing allows those stories to be told instead of the same old euromedieval-hobbit-meadow thing that's the basis of every fantasy movie and game that came out for the last 60 years

            • GalaxyBrain [they/them]
              1 year ago

              Just cause a computer can make it doesn't mean anyone will see it. That's where the capitalism comes in.

              • sooper_dooper_roofer [none/use name]
                1 year ago

                Just cause a computer can make it doesn't mean anyone will see it.

                A lot of Indonesian people, and other people (like me) who are interested in other cultures would see it. It would at the very least begin the process of allowing cultural diversity to even reach the rest of the world

                As it stands now, poor people in poor countries don't even have the funds/leisure time to start their own animations (or other similar hobbies). AI art solves that

                The reason western art/videogames/cartoons are so popular is not because the culture is inherently more watchable, but because only westerners (and Japanese) ever had the capital to fund their own animation studios. People watch media because it's well-made, or because it's already popular and other people are talking about it. AI art can't fix the latter, but it can fix the former.

    • plinky [he/him]
      1 year ago

      Kinda, but like cool ML is alphafold/esm/mpnn/finite element optimizers for cad/qcd/quantum chemistry (coming soon(tm)). LLMs/diffusion models are ways of multiplying content, fucking up email jobs and static media creators/presumably dynamic ones as well in the future.

      I doubt people are aware that rn biologists are close to making designer proteins on like home pc and soon you can wage designer biological warfare for 500k and a small lab. Or conversely, making drugs for any protein-function related disease.

      • Sphere [he/him, they/them]
        1 year ago

        I doubt people are aware that rn biologists are close to making designer proteins on like home pc and soon you can wage designer biological warfare for 500k and a small lab. Or conversely, making drugs for any protein-function related disease.

        Please elaborate in as much detail as possible, ideally with numerous hyperlinks. (I'm less surprised by this than you might think, but would greatly appreciate being clued into what's going on in this arena right now, as I've been largely cut off from information about it for years now.)

        • plinky [he/him]
          1 year ago

          https://www.science.org/doi/10.1126/science.add2187

          https://www.nature.com/articles/s41586-023-06415-8

          https://www.sciencedirect.com/science/article/abs/pii/S1476927122000445

          Basically you can (right now) fix a protein part from one protein and hallucinate/design the protein backbone backwards from it, using something like a 4090, and that protein will with high probability fold as predicted. As an example, fig. 3 in the second paper shows you can design origami-like structures, which is not useful but very impressive, considering how long protein folding was dogshit despite the compute power thrown at it.

          Taking alphafold structures you can make proteins binding to other proteins, even without knowing anything else, with an appreciable expectation (>1%) that it will work. Which is how you could make designer viruses, if you were so inclined.

          Drugs for now are not solved via neural networks, but they are working towards it, and I don't see a reason why design of structures binding to known protein structures won't work; if anything, it seems easier.
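
          If it helps, the hallucination loop can be caricatured like this: a random-mutation hill climb against a structure predictor's confidence score. Everything in the sketch is a stand-in -- `fake_fold_confidence` is a made-up toy scorer, whereas a real pipeline would query an actual network (alphafold-style) for something like a pLDDT confidence -- it just shows the optimize-against-the-oracle shape.

```python
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def fake_fold_confidence(seq):
    # Stand-in for a structure predictor's confidence score.
    # Deliberately meaningless -- it only lets the loop run end to end.
    return (sum(ord(c) for c in seq) % 100) / 100

def hallucinate(length=20, steps=200, seed=0):
    # Monte-Carlo "hallucination": mutate one residue at a time and
    # keep the mutation whenever the (fake) predictor scores it no worse.
    rng = random.Random(seed)
    seq = [rng.choice(AMINO_ACIDS) for _ in range(length)]
    best = fake_fold_confidence("".join(seq))
    for _ in range(steps):
        i = rng.randrange(length)
        old = seq[i]
        seq[i] = rng.choice(AMINO_ACIDS)
        score = fake_fold_confidence("".join(seq))
        if score >= best:
            best = score          # accept the mutation
        else:
            seq[i] = old          # revert a worse mutation
    return "".join(seq), best

seq, score = hallucinate()
print(len(seq), round(score, 2))
```

          Real methods are of course fancier (diffusion over backbones, fixed motifs, gradient-based search), but "propose, score with a learned oracle, keep the good ones" is the gist.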

          • Sphere [he/him, they/them]
            1 year ago

            So, after taking some time to digest this information, I have a couple of follow-up questions, if you don't mind answering them.

            First of all, where do things stand with drugs? Is it just not something academics are working on, but presumably being done (or already finished) within proprietary institutions (e.g. Big Pharma)? Can you point me to some recent papers on the subject?

            Secondly, what about enzymes? Binding proteins are interesting, certainly, but it's enzymes that really excite me the most. Is anyone working on custom enzyme design, and if so, can you link some papers on that? In looking more closely at that Nature paper, I see that enzymes are something of a work-in-progress as yet. If you have anything else on the subject, I'd welcome that, but if there's nothing else of note there, that's fine.

            Thank you for mentioning this to begin with, by the way, I really appreciate the info you've already shared!

            • plinky [he/him]
              1 year ago

              With drugs, the third paper references it; I think in the next 6 months people expect neural-net checking of compounds' binding affinities (https://www.biorxiv.org/content/10.1101/2023.11.01.565201v1.abstract), but here quantum chemistry (neurally based) is lagging behind; they still can't do large molecules (>30 atoms) reliably. Basically rn the big bad boy is the David Baker lab, they do all this exciting stuff; you can periodically check Google Scholar for new developments like I do.

              With enzymes (as I understand it) the problem is to make them work: they can make them bind, but they can't make them move to do stuff.

              Alphafold can't do conformations for now, and it's a harder problem, so maybe in 2 years they can develop something reliable; as for now it's mainly shenanigans of biasing folding programs into new conformations.

    • GreenTeaRedFlag [any]
      1 year ago

      It's a glorified speak-n-spell, with not one benefit to the working class. A constant, unrelenting push for the democratization of education will do infinitely more for the working class than learning how best to have a machine write a story. Should this be worked on and researched? Absolutely. Should it be kept within the confines of people who thoroughly understand what it is and what it can and cannot do? Yes. We shouldn't be using this for the same reason you don't use a gag dictionary for a research project. Grow up

      • oregoncom [he/him]
        1 year ago

        It has potential for making propaganda. Automated astroturfing more sophisticated than what we currently see being done on Reddit.

        • GreenTeaRedFlag [any]
          1 year ago

          Astroturfing only works when your views tie into the mainstream narrative. Besides, there's no competing with the people who have access to the best computers, the most coders, and backdoors into every platform. The smarter move is to back up the workers who are having their jobs threatened over this.