The Chapo.Chat moderators have already stated that they're planning on removing the hard-coded slur filter from their instance of Lemmy; it's the reason you're not able to say the full text of ACAB. It's just a bad idea to have something like this hard-coded in. Leftist instances are built by fostering a good culture, not by babying the people who run these servers, and if we want Lemmy to grow into an actual federated alternative to Reddit, then it's worth bringing up how bad this policy from the original developers is.

  • LesbianLiberty [she/her] (OP) · 4 years ago

    As a greater point, what will we do to prevent slurs? Will we even have a slur filter on this instance, or just enough moderation and yelling at wreckers that we don't have to hear anything like that?

    • Helmic [he/him] · 4 years ago

      If the filter was less aggressive and matched whole words only, it'd probably be fine, though deleting the entire message and responding with a bot explaining why that word causes harm would be much more effective for what we're doing. We don't want to just bleep stuff out where it's super obvious what someone just said; if someone used a slur, that message needs to be unmade entirely and there needs to be immediate education or banning. Deliberately trying to evade the filter to use a slur as a slur should result in being ejected from the community, and that should happen infrequently enough outside of raids that it shouldn't really be a major concern that someone could put X's in front of and after a word to evade the filter. Normally people should only be posting slurs out of pure ignorance, and anyone posting them otherwise just makes their malice known.

      • LesbianLiberty [she/her] (OP) · 4 years ago

        Is that even effective though? We should just be able to trust that people using this platform aren't foaming at the mouth to say the n word or something, and that if someone says a word like tr*p we can just have a struggle session over it like posters do.

        • Helmic [he/him] · 4 years ago

          Is deleting posts and having a bot post explanations as to why the word is harmful effective? Or are you talking about the current filter?

          If the former, absolutely yes. Multiple subs on reddit use that approach and it works really well; people are generally receptive to it and wean themselves off of slurs because they're posting in good faith and just genuinely didn't know that word was a problem. It doesn't require strict filtering; exact matches work just fine, because anyone trying to evade the filter can safely be assumed to be posting in bad faith and immediately banned.

          If the latter, obviously not. It catches unrelated words like basterd and generally assumes we're not going to be punishing people for trying to say slurs without triggering the filter. The community, as you said, isn't foaming at the mouth to say these words; 99% of the time, when we're not being raided, it'll be said out of ignorance, and all that's needed to catch those cases is an exact or near-exact match.

          If you're implying it'd be better to have no filter at all, that then puts the onus on the community to intervene every single time someone says a word, struggle about it, deal with cliques that think it's OK, et cetera. It's not worth the stress on everyone to get people to stop using slurs; it wasn't fun on the old subreddit and it wouldn't be fun now. A filter and bot response can automate that entire struggle session for us, makes it clear what the official stance on those slurs is, and lets the user know that if they continue to use them they're liable to be banned. Like I'm really, really tired of having to explain to people why the r-slur is bad, and I don't want other ND people to have to defend themselves every time it comes up.

          • steely_its_a_dildo [any] · 4 years ago

            One of the features of my mental illness (when untreated) is psychosis; getting a message about how the word 'crazy' is harmful is insulting. The first and last time it happened was on a sub that I was unaware had this automod feature. I deleted the message and never returned. I have experienced violence from the state because I am crazy; the word itself doesn't hurt.

            I guess it just feels infantilizing.

            • Helmic [he/him] · 4 years ago

              The thing is, it bothers me when someone says they're OK with slurs, just uses them, and tries painting those who take issue with those slurs as somehow being bigoted themselves for not wanting those words used around them. I don't want to be called the r slur, and I don't like people normalizing that word, because it leads to others like me being called that word.

              If you find being asked not to use a word patronizing, then don't use the word. While you might be irritated, it's a lot worse for those targeted by those words who don't take as nonchalant an attitude towards them. Another autistic person who's OK with the r slur and gets upset about being asked not to use it doesn't invalidate my own discomfort with that word.

              Honestly, if that was your response to good faith criticism, it was probably for the best that you didn't remain. The alternative is relitigating why X word is bad every time someone wants to argue that a harmful word is OK because you're part of the group it targets.

              • steely_its_a_dildo [any] · edited · 4 years ago

                lol, I love being patronized. You read a ton into that that I didn't think, let alone write.

        • Sushi_Desires · edited · 4 years ago

          deleted by creator

          • Helmic [he/him] · 4 years ago

            I think that's as simple as not including words like trap and instead only filtering exact, whole-word matches for words that have no innocent uses. We don't have to give up being able to talk about Chapo Trap House in order to automate action against someone saying the N word, and it mostly removes the need for anyone to have to see it in the first place. There is an option other than a draconian filter or no filter at all.
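
A minimal sketch of the kind of filter described in the thread above: case-insensitive, whole-word matches against a short list of words with no innocent uses, with the offending comment removed and a bot reply explaining why. The word list, explanation text, and the delete_comment/post_reply helpers are placeholders for illustration, not Lemmy's actual API or word list.

```python
import re

# Placeholder word list and explanation text; the real list and wording would
# be decided by the instance's moderators.
FILTERED_WORDS = {
    "slurone": "That word is a slur and isn't allowed here, so your comment was removed.",
    "slurtwo": "That word is a slur and isn't allowed here, so your comment was removed.",
}

# One pattern per word, wrapped in \b word boundaries so only whole-word,
# case-insensitive uses match; substrings inside unrelated words are ignored.
PATTERNS = {
    word: re.compile(rf"\b{re.escape(word)}\b", re.IGNORECASE)
    for word in FILTERED_WORDS
}


def check_comment(text: str) -> str | None:
    """Return the explanation to post if the comment should be removed, else None."""
    for word, pattern in PATTERNS.items():
        if pattern.search(text):
            return FILTERED_WORDS[word]
    return None


# Sketch of the bot's moderation step, assuming hypothetical delete_comment /
# post_reply helpers from whatever API client the bot uses:
#
#   explanation = check_comment(comment.text)
#   if explanation is not None:
#       delete_comment(comment.id)
#       post_reply(parent_id=comment.id, body=explanation)

# Whole-word matching catches actual uses without the false positives a
# substring filter produces:
assert check_comment("nothing objectionable here") is None
assert check_comment("SLURONE used as a whole word") is not None
assert check_comment("sluronewithinanotherword") is None
```

An exact, whole-word match like this deliberately won't catch evasion attempts (X's around a word and so on); per the thread, evasion is treated as bad faith and handled with a ban rather than an ever-broader regex.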