Search engines are basically 90% blogspam promotion machines at this point, and the blogspam churning engine is going to become completely autonomous. The tools we've relied on for nearly three decades are going to become a big virtual tug of war between machine-learning-guided SEO systems and machine-learning-guided ad revenue systems.

Every single anonymous interaction is going to be suspect. The next dogwhistling fucking cryptonazi groyper you run into is probably not even going to be a human being. Machine learning is probably going to elect the next US president. Eventually the medium will become so polluted that we will have to go back to doing everything in person.

Thank you for reading my doompost

  • CptKrkIsClmbngThMntn [any] · 2 years ago

    I know you're joking, but I don't think it could. There are still a lot of easy tells. What prompt would lead to it spitting out that first sentence? The parentheses in the second are a kind of personal backpedal generated by wariness of prescribing my own solutions to others - a complex epistemological hesitancy that I don't think is easy to emulate just through surface-level speech patterns, at least in more sophisticated cases than this one - and the omission of the subject in the third plus the folksy contraction is a characteristic style of mine that you'd have to intentionally instruct it to adopt at this point. I'm writing this mostly because I find it interesting, not to clap back at you.

    There's a lot of shit I come across online that already feels like it was done via machine learning. A good example is the torrent of blog posts that came up when I was trying to search out basic comparisons between database solutions. Some of those were painfully inhuman and formulaic, even though plenty were probably written entirely by humans. It'll take no time for machine learning to move into this space. But people goofing around and sharing personal anecdotes in comment threads is another ball game entirely.

    Here's one great and interesting example if you're willing to open :reddit-logo:. Try scrolling through /r/AskAnthropology for the last month or so; there's a user/bot that has very clearly been feeding each question into ChatGPT or something equivalent. It could fool you once or twice if you're just skimming, but the limitations of its answers and summaries, especially in comparison to the actual human answers, are glaringly obvious.

    I do wonder if AI spotting will become a professional endeavour as we move forward, though.

    • StewartCopelandsDad [he/him] · 2 years ago

      Your comment made me realize that the neural net equivalent of finding hash collisions could be used to discredit people: take human-written text and find a (sensible) ChatGPT input that returns it as a response.
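
      Purely for illustration, a minimal sketch of what that search might look like: brute-force a handful of candidate prompts and score how closely a model's output matches the target text. Everything here is made up for the sake of the example; generate() is a stand-in stub rather than any real API, and an actual attempt would swap in a real language model and a better similarity measure (embeddings rather than string matching).

          from difflib import SequenceMatcher

          def generate(prompt: str) -> str:
              # Placeholder for a language-model call (hypothetical stub, not a real API).
              return "stubbed model output for: " + prompt

          def similarity(a: str, b: str) -> float:
              # Crude string similarity in [0, 1]; a real attempt would use semantic similarity.
              return SequenceMatcher(None, a, b).ratio()

          def invert(target: str, candidates: list[str]) -> tuple[str, float]:
              # Return the candidate prompt whose (stubbed) output best matches the target text.
              best = max(candidates, key=lambda p: similarity(generate(p), target))
              return best, similarity(generate(best), target)

          target_comment = "There are still a lot of easy tells."
          prompts = [
              "Write a skeptical forum reply about AI-generated comments.",
              "Summarize why AI text is easy to spot.",
          ]
          print(invert(target_comment, prompts))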

      • CptKrkIsClmbngThMntn [any] · 2 years ago

        Honestly that sounds kind of fun. I would love to see different kinds of text media (I mentioned Hexbear comments vs database blog posts) ranked on how difficult they are to generate, how closely you can match them, or how convoluted the prompt has to be to get the right response.

      • AOCapitulator [they/them, she/her] · 2 years ago

        so what you're saying is we can now definitively prove that twitter liberals are actually bots by finding out what prompt was used to generate their tweets