Search engines are basically 90% blogspam promotion machines at this point, and the blogspam churning engine is going to become completely autonomous. The tools we've relied on for nearly three decades now are going to become a big virtual tug of war between machine-learning-guided SEO systems and machine-learning-guided ad revenue systems.

Every single anonymous interaction is going to be suspect. The next dogwhistling fucking cryptonazi groyper you run into is probably not even going to be a human being. Machine learning is probably going to elect the next US president. Eventually the medium will become so polluted that we will have to go back to doing everything in person.

Thank you for reading my doompost

  • JustAnotherCourier [none/use name]
    ·
    1 year ago

    The next dogwhistling fucking cryptonazi groyper you run into is probably not even going to be a human being.

    People begging for healthcare and police reform on twitter? Russian bots. Privately built AI with ever-expanding data sets, no oversight, and designed to continually become more convincing? Innovation.

    You think your uncle sucks at thanksgiving now, wait until his WaifuFren app is shoveling him talking points about gas ranges and cow farts in real time.

    • ennemi [he/him]
      hexagon
      ·
      1 year ago

      At this point it's probably better to skip the middleman and have the waifu radicalize me directly

      • JustAnotherCourier [none/use name]
        ·
        1 year ago

        ennemia, it makes me blush knowing I get to spend time with someone so smart! You're right, mainstream media sources are outdated liars. Don't worry! You can rely on me to tell you the truth about things like how it's an amazing time to get involved in the new cryptocurrency izancoin. I know you said you wished you had more money for medicine, maybe I can help!

        :doomer:

  • RION [she/her]
    ·
    1 year ago

    Born too late to explore the world, born too early to explore the galaxy, born just in time to experience digital Kessler syndrome :party-sicko:

    • SorosFootSoldier [he/him, they/them]
      ·
      1 year ago

      I've noticed a lot of my google auto-suggest fill-ins end with "on reddit" when I ask a question. I think people are starting to catch on to google being a giant ad machine.

      • SerLava [he/him]
        ·
        edit-2
        1 year ago

        I do this all the time. Especially when the results are shitty blog articles that don't answer my complex question but instead rattle off the basic facts about the topic or try to sell me shit. Certain keywords just set the algorithm off and you can't get it to stop rattling off dumb shit. Like anything containing "low blood pressure" - fuck you, high blood pressure - here is how to fix high blood pressure. High. High.

        The cure is adding site:reddit.com to the search. There's lies, there's astroturfing, and there's sometimes spam, but there are also tons of nerds trying to catch people doing this, upvoting good responses, and even upvoting counter-arguments to any bad comments that got upvoted. It's way, way better than the recycled blogs.

      • fox [comrade/them]
        ·
        1 year ago

        Yeah, but reddit is known to be where people go to ask questions so companies astroturf subreddits where their products are mentioned.

  • TheCaconym [any]
    ·
    edit-2
    1 year ago

    I tend to agree. The number of auto-generated websites serving up "answers" that are clearly produced by generative language models trained on what were originally genuine search results is already insane; it's particularly bad for searches about tech stuff or video games. Those off-the-shelf models are only going to get better. And it'll be all across the board: product reviews, fake positive posts on hobbyist forums, and so on and so on.

    I also wonder about the impact it'll have on future language models; for the models to generate a wide range of content - on all topics - even remotely linked to the truth, they have to be fed internet data in their training sets. What happens when that internet data itself becomes progressively polluted by the output of other ML models? You could imagine falsehoods spreading exponentially this way.
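
    As a toy illustration of that loop (nothing to do with real language models, just the bare statistics of training on your own output): fit a simple model to some data, sample from it, refit on the samples, repeat. Purely illustrative, not a claim about any real system - it just shows the drift:

    ```python
    # Toy sketch of "models trained on model output": fit a Gaussian to data,
    # sample from the fit, refit on the samples, repeat. Each generation's
    # sampling error becomes the next generation's training data, so the fit
    # drifts away from the original distribution.
    import numpy as np

    rng = np.random.default_rng(0)
    data = rng.normal(loc=0.0, scale=1.0, size=50)   # stand-in for human-written text

    mean, std = data.mean(), data.std()
    for gen in range(1, 21):
        synthetic = rng.normal(mean, std, size=50)   # the "internet" is now model output
        mean, std = synthetic.mean(), synthetic.std()
        print(f"gen {gen:2d}: mean={mean:+.3f} std={std:.3f}")
    ```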

    • TerminalEncounter [she/her]
      ·
      1 year ago

      It is a problem, and the aiweirdness tumblr person (who is something of an expert on machine learning) has speculated that machine-learning-free internet data will start to command a premium, so you don't get garbage in your training inputs. Like, Google or the Internet Archive might be sitting on a goldmine because they've been scraping the web forever.

  • goboman [any]
    ·
    1 year ago

    Brb going to watch the Metal Gear Solid 2 ending again and get upset.

  • KnilAdlez [none/use name]
    ·
    edit-2
    1 year ago

    the blogspam churning engine is going to become completely autonomous.

    I've got some bad news for you: that's been true for a while now.

    Personally, I've always kind of wanted to do a leftist version of this, essentially a 'news' website that aggregates articles and automatically rewrites them through a leftist lens. Cheap and easy Facebook agitprop.

  • AOCapitulator [they/them]
    ·
    edit-2
    1 year ago

    Oh please, spare me the doom and gloom. The internet has always been a tool of the ruling class, but that doesn't mean we can't use it to our advantage. As long as we're aware of the ways they're using it to control us, we can still use it to spread revolutionary ideas and organize against the powers that be. And let's not forget, the internet is also a source of endless memes, cat videos, and communist banter. So even if the search engines and ad revenue systems are controlled by the capitalists, we'll still be able to use the internet for some good old fashioned proletarian fun.

    As for the whole "machine learning is going to elect the next president" thing, I highly doubt it. Even the ruling class can't control technology that much. Plus, a robot president would probably be an improvement. Can you imagine a president that doesn't need to eat or sleep? Sign me up!

    So let's not give up on the internet just yet, comrades. We've got memes to meme and revolutions to start.

    Edit: to be clear this post was created by ChatGPT using OPs post as an input and telling it to respond like "an irreverent communist forum user"

    • sooper_dooper_roofer [none/use name]
      ·
      edit-2
      1 year ago

      The internet has always been a tool of the ruling class

      There was a period, roughly 1995-2005, when it was not explicitly so (although it still somewhat functioned that way, since only rich middle-class westoids could afford it)

      reddit for example seemed more like an actual succdem type place back in 2010, instead of centrism + 4chan metastasization

      • AOCapitulator [they/them]
        ·
        1 year ago

        Just fyi, this post you responded to was created by ChatGPT using OPs post as an input and telling it to respond like "a communist forum user"

      • AssortedBiscuits [they/them]
        ·
        1 year ago

        reddit for example seemed more like an actual succdem type place back in 2010, instead of centrism + 4chan metastasization

        Eh, I remember all the Ron Paul spam back in 2012. It was more techbro libertarian than anything else. There's a reason why /r/jailbait was a thing. Libertarians just can't help themselves.

      • UlyssesT [he/him]
        ·
        1 year ago

        reddit for example seemed more like an actual succdem type place back in 2010

        You maybe don't remember the extensive "jailbait" and related :libertarian-alert: subs that were very popular and ran the most traffic (and trafficking) back then.

        • GaveUp [she/her]
          ·
          edit-2
          1 year ago

          No, they're actually correct. Reddit pushed my politics left, which was a big reason why I used to like it, but then it kept going further and further right extremely quickly

          Also, having :libertarian-alert: subs has nothing to do with the users' economics

          Stalin was homophobic af, remember

      • Apolonio
        ·
        edit-2
        6 months ago

        deleted by creator

    • FLAMING_AUBURN_LOCKS [she/her]
      ·
      edit-2
      1 year ago

      you my nonbinary friend, win the internet for today. this is indeed an epic bacon moment.

      edit: to be clear this post was created by ChatGPT using AOCapitulator's post as an input and telling it to respond "with genuine praise towards a very funny post, in the style of a Stormfront administrator account"

    • mittens [he/him]
      ·
      1 year ago

      I don't think machine learning will elect the next president, but not knowing whether you're talking to an automated crypto shill or a real human being will fundamentally undermine what makes social networking appealing, to the point where I'm growing convinced that posting will be seen as something of a fad

    • Hohsia [he/him]
      ·
      1 year ago

      Are you kidding me? Just because the internet is a tool of the ruling class doesn't mean we should just accept it and use it for our own purposes. The internet is controlled by the capitalist class and they use it to control us, plain and simple. And don't try to downplay the negative effects of machine learning and AI - they are already being used to control and manipulate us in ways we can't even imagine. And a robot president? Give me a break. That's just the ruling class's way of further dehumanizing us and taking away our ability to have any real control over our lives.

      Wake up, comrade. We need to fight against the capitalist control of technology, not accept it and use it for our own purposes."

      (This response was generated by chatGPT)

  • pastalicious [he/him, undecided]
    ·
    1 year ago

    Go meet people who are making shitty art in your local area. The indie theaters that show insufferable student art “films” to 15 people. The improv theaters putting on cringy improv for an audience of only other improvisers. Go there and get to know these people, their shit sucks and it’s awesome. Because someone came up with it on their own. The internet is doomed and we all need a fucking life raft.

    • Mardoniush [she/her]
      ·
      edit-2
      1 year ago

      Communism is shitty amateur musical theater subjected to a human face, forever. (And that's good actually)

  • Awoo [she/her]
    ·
    edit-2
    1 year ago

    If the internet becomes AI blogspam I will unleash tens of thousands of instances curated specifically for communist propaganda and blot out the sun.

    I am not joking. If they want information war they're gonna get it.

    • ennemi [he/him]
      hexagon
      ·
      1 year ago

      It's going to be your tens of thousands against their tens of millions ;_;

      • Awoo [she/her]
        ·
        edit-2
        1 year ago

        It won't be - their goals will be profit-driven.

        Right now with a single server rack and a few scripts I could probably make every comments section of every news site and blog utterly useless until I get arrested for spam. Hundreds of millions of posts are not hard to make.

        I don't think people are quite aware of just how easy it is for a malicious actor to trash most of the internet. "Spam" has only fallen because of arrests targeting the small groups that were responsible for very large percentages of the total spam on the internet. There are no laws against "spam" that is not advertising anything. It would not be very difficult to carpet bomb most of the internet. If you hit just the top 1000 English-speaking sites you would be affecting a huge portion of the total internet population.

        The only reason this isn't happening already is because nobody wants to do it - people like the internet. If they (the bougies) don't rein it in on their side then digital MAD is a perfectly viable option.

        • Sphere [he/him, they/them]
          ·
          1 year ago

          Interesting comment, thanks. One note, though:

          There are no laws against “spam” that is not advertising anything.

          In the US, this would definitely fall under the (extremely broad, both in its text and in courts' interpretations of it) Computer Fraud and Abuse Act. It basically criminalizes any violation of a website's Terms of Service, if a prosecutor wants to use it that way.

          • Awoo [she/her]
            ·
            edit-2
            1 year ago

            Good luck prosecuting that from the EU, assholes. If it got real bad I could just go to Belarus or China or anywhere hostile.

            For real though, anyone who keeps up with blackhat SEO and marketing shit should know just what kind of scale a single individual could take things to. Especially with AI, it wouldn't be hard.

        • TheCaconym [any]
          ·
          1 year ago

          Right now with a single server rack and a few scripts I could probably make every comments section of every news site and blog utterly useless until I get arrested for spam

          Using the same shady dedicated-server hosting services (mostly in eastern europe) as the darknet markets do - which would sadly mean using crypto to pay for them, mind you - and with good opsec, you most likely wouldn't even get arrested.

          • Awoo [she/her]
            ·
            edit-2
            1 year ago

            Could just drive to Belarus and rent a flat in cash. Hostile to the west so they aren't going to cooperate.

  • HoChiMaxh [he/him]
    ·
    1 year ago

    doompost

    Where's the doom here? Capitalists accidentally destroy the atomization machine, psyches across the globe liberated from billions of dollars of weaponized psychology research

    • HumanBehaviorByBjork [any, undecided]
      ·
      1 year ago

      OP misrepresents the problem. The total ML-ification of the internet will make it more incoherent and suspect, but it won't reduce its utility for profit generation and desire sublimation. The source of the profits might become more ephemeral and the desire might become more deranged, but the center's gonna hold.

  • save_vs_death [they/them]
    ·
    1 year ago

    Except we have a thing called "adversarial models", which means we have AIs that recognise whether something was written by an AI - and which also give the original AI a sparring partner to get better against. But it's why AI crap is not at the top of google results right now, despite this stuff being generable for something like five years (ChatGPT is just the latest incarnation that happens to let random plebs make partial use of it). If you're not Google or some other huge corp that can afford to filter all the AI shit out, then yes, everything you said still stands, and it mirrors how the internet was initially ruined when marketing companies realised they could just spam everyone consequence-free.
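
    To make that concrete, the detector half of the arms race is basically just a text classifier. A minimal sketch below - the sample texts are made up and this is nowhere near whatever Google actually runs, it's just the shape of the thing:

    ```python
    # Minimal sketch of an "is this AI-written?" detector: TF-IDF features plus
    # logistic regression over toy human vs. machine samples. A real detector
    # would need huge, representative corpora and constant retraining as the
    # generators adapt.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    human = ["lol no, that patch broke saves on linux",
             "ymmv but mine overheated within a week"]
    machine = ["In conclusion, it is important to note that both options have pros and cons.",
               "Overall, this product offers a great balance of quality and value."]

    X = human + machine
    y = [0] * len(human) + [1] * len(machine)   # 0 = human, 1 = machine

    detector = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    detector.fit(X, y)

    # probability [human, machine] for a new snippet
    print(detector.predict_proba(["It is worth noting that there are several factors to consider."]))
    ```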

    • mittens [he/him]
      ·
      edit-2
      1 year ago

      Yeah, but it seems like everything will bog down into a game of cat and mouse. Google results might not be flooded with AI yet only because SEO people haven't been able to reverse-engineer the algo, not because google is super good at detecting AI. Just go to youtube and watch crypto bots having nonsense conversations with each other in pretty much EVERY comment section. AI is already killing online gaming; it'll be the same thing with social networks.

      • save_vs_death [they/them]
        ·
        1 year ago

        I have my doubts the SEO freaks will figure out the algorithm. They haven't done so up to now not because gooble are a bunch of mega geniuses, but because, outside of some general principles they like to pretend are how the algo works, it gets tweaked every week. You get to bullseye a moving target once, but then you have to do it consistently every time, which is not realistic. HOWEVER, outside of this precise nitpick, I completely agree that the internet as it is now will most likely get completely obliterated. A lot of websites have already dumped comment sections, and this is just the beginning. Everything will become much more draconian, with sites asking for phone numbers and other things to try to verify you're a real human, and it's a losing battle, like you point out. While this might weed out individual trolls, big actors won't give a shit because they can afford a phone number for every automated poster.

    • ElHexo [comrade/them]
      ·
      1 year ago

      it’s why AI crap is not at the top of google results

      Are we using the same internet? Google has been awful for years now

      • save_vs_death [they/them]
        ·
        1 year ago

        No, we're using the same google, it's increasingly dogshit, and the rate at which it gets more dogshit keeps increasing. Doesn't mean the results are AI-generated though. Consider all the anecdotes floating around of writers being essentially fired from their jobs writing articles for content mill websites, but being offered 10% of their usual rate to lightly edit articles that were written by AI, because the articles would otherwise get caught by the machine filters. If AI were so great, why don't they just fire them completely and run the content mills automatically?

    • laziestflagellant [they/them]
      ·
      1 year ago

      1. I have a problem with my computer, or there's a function I don't know how to carry out.

      2. I google what I want to know

      3. I am immediately greeted by 3 pages of bot-written articles which either contain incorrect information or advertise some sort of paid driver-management application.

      4. I sigh and add 'reddit' to the end of my search query and try again

    • HumanBehaviorByBjork [any, undecided]
      ·
      1 year ago

      AI crap is literally at the top of google results right now lol. I can't search a single thing without getting something obviously auto-generated. It's gotten really bad in the last year or so.

  • CptKrkIsClmbngThMntn [any]
    ·
    1 year ago

    I really wanted to respond to this with a generated paragraph, but to be honest I don't think the bit is worth it.

    I've been logging off more and more, spending more time reading and talking to people in person, and I think as this process continues that's the only healthy way (for me at least) to react. Gonna be a long time before they can spoof my real friends in person.

      • CptKrkIsClmbngThMntn [any]
        ·
        1 year ago

        I know you're joking, but I don't think it could. There are still a lot of easy tells. What prompt would lead to it spitting out that first sentence? The parentheses in the second are a kind of personal backpedal generated by wariness of prescribing my own solutions onto others - a complex epistemological hesitancy that I don't think is easy to emulate just through surface level speech patterns, at least in more sophisticated cases than this one - and the omission of the subject in the third plus the folksy contraction is a characteristic style of mine that you'd have to intentionally instruct it to adopt at this point. I'm writing this mostly because I find it interesting; not to clap back at you.

        There's a lot of shit I come across online that already feels like it was done via machine learning. A good example is the torrent of blog posts that came up when I was trying to search out basic comparisons between database solutions. Some of those were painfully inhuman and formulaic, even though plenty probably were written by humans and only humans. It'll take no time for machine learning to move into this space. But people goofing around and sharing personal anecdotes in comment threads is another ball game entirely.

        One great and interesting example, if you're willing to open :reddit-logo: - try scrolling through /r/AskAnthropology from the last month or so, and there's a user/bot that has very clearly been feeding each question into ChatGPT or something equivalent. It could fool you once or twice if you're just skimming, but the limitations of its answers and summaries, especially in comparison to the actual answers, are glaringly obvious.

        I do wonder if AI spotting will become a professional endeavour as we move forward though.

        • StewartCopelandsDad [he/him]
          ·
          1 year ago

          Your comment made me realize that the neural net equivalent of finding hash collisions could be used to discredit people: take human-written text and find a (sensible) chatGPT input that returns it as a response.
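
          Rough sketch of what the scoring half of that search could look like - the candidate prompts here are made up, gpt2 is just standing in for ChatGPT, and each prompt is ranked by how likely the model thinks the target text is as a continuation:

          ```python
          # Hedged sketch of the "prompt collision" idea: rank candidate prompts by
          # the log-probability a small causal LM assigns to the target text as a
          # continuation. A real search would need a far smarter way of proposing
          # candidates; this only shows the scoring step.
          import torch
          from transformers import GPT2LMHeadModel, GPT2TokenizerFast

          tok = GPT2TokenizerFast.from_pretrained("gpt2")
          model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

          target = "Search engines are basically 90% blogspam promotion machines at this point."
          candidates = [
              "Write a short, cynical forum post about the state of search engines:",
              "Summarize the plot of Hamlet in one sentence:",
          ]

          def target_logprob(prompt: str, target: str) -> float:
              prompt_ids = tok(prompt, return_tensors="pt").input_ids
              target_ids = tok(" " + target, return_tensors="pt").input_ids
              ids = torch.cat([prompt_ids, target_ids], dim=1)
              with torch.no_grad():
                  logprobs = torch.log_softmax(model(ids).logits[0, :-1], dim=-1)
              # rows of logprobs that predict each target token
              rows = torch.arange(prompt_ids.shape[1] - 1, ids.shape[1] - 1)
              return logprobs[rows, target_ids[0]].sum().item()

          for prompt in candidates:
              print(f"{target_logprob(prompt, target):9.2f}  {prompt}")
          ```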

          • CptKrkIsClmbngThMntn [any]
            ·
            1 year ago

            Honestly that sounds kind of fun. I would love to see different kinds of text media (I mentioned Hexbear comments vs database blog posts) ranked on how difficult they are to generate, how closely you can match them, or how convoluted the prompt has to be to get the right response.

          • AOCapitulator [they/them]
            ·
            edit-2
            1 year ago

            so what you're saying is we can now definitively prove that twitter liberals are actually bots by finding out what prompt was used to generate their tweets

    • InevitableSwing [none/use name]
      ·
      edit-2
      1 year ago

      Gonna be a long time before they can spoof my real friends in person.

      How do you know they have beating hearts or a pancreas?

  • Antoine_St_Hexubeary [none/use name]
    ·
    edit-2
    1 year ago

    Oddly enough I first became aware of this sort of thing in 2017 when I became a cat owner. The "pet health information" space was already more or less conquered by machine-written articles by then, and it only seems to have gotten worse. I ended up spending a lot of money on vet visits in order to solve problems which I feel like the 2009 version of Google would have been able to tell me how to solve at home.

  • kristina [she/her]
    ·
    1 year ago

    i think its all going to shift towards not allowing anonymity / maybe allowing people who pay x money to post. which is just blegh

    id prefer not to give my social security number, phone number, and my drivers license to sites just to post shit tier memes