• makeasnek@lemmy.ml
    ·
    10 months ago

    This instance of an unstable person consuming content online and then going on to hurt people in real life is scary, but I'd propose that maybe unstable people who hurt people based on what they read are kind of inevitably going to end up on that trajectory regardless of the freedom of our speech spaces. And that maybe it's not worth sacrificing the free speech of all people simply because a few people are going to do bad things, even if that speech in part motivated them.

    What you're describing where people end up in their own media bubble is exactly why we need more open access to speech and free, neutral platforms for people to have these kinds of discussions on. There is a major push in society for everybody to splinter off into spaces where everybody agrees with them and, idk, that's exactly what happens to people when they go down the QAnon/OAN/etc rabbit hole. We have lost too many "third spaces" in public where we might have these kinds of discussions and everybody keeps getting more polarized.

    The problem with hate speech over the last 10 years has much more to do with the "I get in 1 word, they get in 100" problem than anything else. When you look at where people get their news, social media is a big part of that puzzle, and Facebook et al. have been able to put their thumb on the scale in a major way to spread hateful, divisive content. Nazis had access to the internet in the 90s and 2000s, but they were in their own little bubbles and couldn't do a whole lot. It's social media that gave them a real platform by incentivizing their content and choosing, algorithmically, to promote posts which got "engagement". Likes are engagement, angry reacts are engagement, comments are engagement. Reddit had a decent system incentivizing upvotes, but incentivize all engagement? You get civil wars. Not only did this mechanism make divisive content more likely to show up in people's feeds, it created a financial incentive for posters to make divisive content and made Facebook's bottom line dependent on divisive content.
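
    To put that incentive mechanism in concrete terms, here's a toy sketch (the field names, weights, and numbers are made up for illustration, not any platform's actual ranking code) of the difference between promoting by net upvotes and promoting by raw "engagement":

    ```python
    # Toy comparison: upvote-based ranking vs. engagement-based ranking.
    # All field names and numbers are hypothetical.

    def upvote_score(post):
        # Reddit-style: only net approval counts toward promotion.
        return post["upvotes"] - post["downvotes"]

    def engagement_score(post):
        # Engagement-style: every interaction counts, including angry
        # reacts and argument threads, so outrage is rewarded like approval.
        return (post["likes"] + post["angry_reacts"]
                + post["comments"] + post["shares"])

    posts = [
        {"title": "Community garden opens downtown",
         "upvotes": 900, "downvotes": 50, "likes": 900,
         "angry_reacts": 5, "comments": 40, "shares": 30},
        {"title": "THEY are coming for YOU",
         "upvotes": 300, "downvotes": 600, "likes": 300,
         "angry_reacts": 700, "comments": 500, "shares": 400},
    ]

    print(max(posts, key=upvote_score)["title"])      # the garden story wins
    print(max(posts, key=engagement_score)["title"])  # the outrage bait wins
    ```

    Under the first rule the outrage post sinks; under the second it tops the feed, and whoever posted it has a financial reason to post more like it.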

    People are scared of the spread of hate speech and the rise of the right wing it's causing, and they are ready to throw "free speech" out as a concept because they are so afraid of it. We don't need to do that. What we need to do is take away the power social media companies have to influence the types and quantity of information we receive. If we do that, online hate speech will retreat back into its little bubble and it will be a thing 1% of people hear and get influenced by, not 30%. Luckily, I think this is already happening. Fedi is a good move in this direction.

    • GarbageShoot [he/him]
      ·
      10 months ago

      but I'd propose that maybe unstable people who hurt people based on what they read are kind of inevitably going to end up on that trajectory regardless of the freedom of our speech spaces

      You say that, but do you have any evidence for it? Are we just going to brush off the mentally unwell people that cults like QAnon prey upon as being a lost cause? As being people who would just be violent because the seeds of sin in their souls compel them to? You're just arguing for a secularized version of Calvinism that is even more reliant on faith because it lacks the element of theological reasoning.

      And that maybe it's not worth sacrificing the free speech of all people simply because a few people are going to do bad things

      Maybe, but this obfuscates relevant factors, like how money controls media; it's not just a matter of private citizens vs. other private citizens.

      What you're describing where people end up in their own media bubble is exactly why we need more open access to speech

      It takes more of an argument than you have so far put forward to prove this, though I agree with you in a way that I suspect you would reject. Specifically, the blackballing of journalists and other sources who provide more useful explanations than exist in mainstream American Discourse is definitely part of the reason people resort to cults.

      That said, if we are discounting questions like class consciousness, your thesis falls apart entirely. These bubbles are largely self-selecting, based on marketing algorithms for the consumer-lifestyle brands that you call American politics. There is nothing stopping a brain-rotted Twitter Q freak from going on some socdem hive on Reddit, but they don't want to, and they have been encouraged in this mindset by various forms of conditioning on the multi-billion-dollar Skinner boxes that are social media platforms. Of course, there are less polarized spaces and ones designed for "open debate" (and again Reddit provides an excellent example of these empty gestures), but overwhelmingly what we see there is more tribalism, just with a different set of etiquette.

      This shows one of the many significant failures of the idealist fetishization of the open society: people only have so much time and effort to put into research, especially on more nebulous ideological subjects. Ideology is first and foremost a survival strategy, and people will budget their finite resources based on what they are able to project as best serving them from the limited information they operate within, starting from environments that are overwhelmingly controlled by the rich in neoliberal societies. You already have your goddam Marketplace of Ideas and it has failed.

      free, neutral platforms for people to have these kinds of discussions on.

      Neutrality doesn't exist and the bodies that claim to be neutral are just question-begging their own ideology.

      Some people used to think that the internet would end war, but they were operating on a type of idealism similar to your own.

    • Zuzak [fae/faer, she/her]
      ·
      edit-2
      10 months ago

      Is the tendency for divisive content to be promoted a quirk of certain social media platforms, or is it something more inherent? I'd argue that people are more likely to click on something if it presents a message of, "You are under attack!!" as opposed to, say, "Firefighter rescues kitten from tree!" because the former invokes more, and more powerful, emotions. Brains are designed to seek out and pay attention to threats, and I think even something like a print newspaper is going to be subject to that incentive, at least to a degree.

      The other question I have is:

      What we need to do is take away the power social media companies have to influence the types and quantity of information we receive.

      Do you mean through state regulation, or just consumer choice?

      • makeasnek@lemmy.ml
        ·
        edit-2
        10 months ago

        Is the tendency for divisive content to be promoted a quirk of certain social media platforms, or is it something more inherent? I’d argue that people are more likely to click on something if it presents a message of, “You are under attack!!” as opposed to, say, “Firefighter rescues kitten from tree!” because the former invokes more, and more powerful, emotions. Brains are designed to seek out and pay attention to threats, and I think even something like a print newspaper is going to be subject to that incentive, at least to a degree.

        You're right. And it's both. But social media has greatly accelerated both our ability to exploit that quirk of the human brain and our vulnerability to it. Specifically, the promotion of content based on interaction alone is the problem. It's a policy choice by social media companies that has disastrous consequences for humanity.

        Do you mean through state regulation, or just consumer choice?

        Consumer choice and divestment. Brands are pulling out of Twitter; we need more of that from Meta, and we need them and/or consumers to recognize the dangers of using that platform. Consciousness-raising, etc. People are recognizing the dangers of social media and the centralization of the public square (though they may not use those terms), these platforms are hemorrhaging users, and things are moving in the right direction. Musk and Spez are the best promoters of the fediverse we could ask for. And we need platforms like Fedi to mature and capture that audience. I think there's a balancing act here to make sure we have safe online spaces for people to participate in (that are federated) while allowing the expression of a diversity of viewpoints, so we don't continue down the rabbit hole of polarization. One of the big problems with online public squares is the inability to tell the intentions of commenters, i.e. is that person genuinely "just asking questions" or is it a troll attempt? Reputation systems may help with this.
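
        For what it's worth, here's a rough sketch of the kind of reputation system I mean (the weights and field names are hypothetical, not any actual Fediverse proposal):

        ```python
        # Toy reputation score: weight a commenter's history so established,
        # well-received accounts are distinguishable from throwaway trolls.
        # All names and weights here are made up for illustration.

        def reputation(account):
            age_bonus = min(account["account_age_days"] / 365, 2.0)  # cap at 2 years
            received = account["upvotes_received"] - account["downvotes_received"]
            vouches = account["vouches_from_trusted_users"]
            return age_bonus + received / 100 + vouches * 0.5

        troll = {"account_age_days": 3, "upvotes_received": 2,
                 "downvotes_received": 40, "vouches_from_trusted_users": 0}
        regular = {"account_age_days": 800, "upvotes_received": 950,
                   "downvotes_received": 120, "vouches_from_trusted_users": 4}

        print(round(reputation(troll), 2))    # negative: probably not "just asking questions"
        print(round(reputation(regular), 2))  # positive: history of good-faith participation
        ```

        Something like that score could be shown next to a username so readers have context for whether a pointed question is coming from a long-standing participant or a brand-new account.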

        State regulation of this area is really tricky: all the proposals I've seen so far are pretty ripe for abuse by the government, and I don't want to give them that kind of power. They also make it harder for smaller sites and federations to exist, since regulatory burdens limit who can run social media sites to companies with enough money to pay lawyers.

        • Zuzak [fae/faer, she/her]
          ·
          10 months ago

          I suppose time will tell whether that trend will grow to the point of being really significant. I don't really trust the state as it stands to regulate speech in my interests. I do still believe in deplatforming hate speech when possible, and I don't really see the marketplace of ideas as being reliable due to certain ideas having stronger signals, either from monetary backing or grabbing attention. As things stand though, I don't really have a better answer than just personally using the fediverse over big social media sites.