We know about teams of operatives using software to manage multiple identities and leverage things like canned responses. We also know about AI natural-language generators that officially weren't released because they were considered too dangerous. Now add massive processing and live monitoring power to the mix.

Are we at the stage where, without fairly deep interaction with an account, you've got to assume it's a bot with non-commercial intent?

Sure, this wouldn't apply on a Kpop forum, but on an influential forum discussing hot geopolitics it might.

  • Wmill [they/them, fae/faer] · 4 years ago

    I'm posting on a Saturday morning while slightly hungover. I really hope AI doesn't make decisions as bad as mine. This could spell the end for us all somehow.