Source: https://www.reddit.com/r/oddlyterrifying/comments/1ajd915/searching_for_halo_on_instagram/
One of the biggest problems with the internet today is that bad actors know how to manipulate or dodge content moderation to avoid punitive consequences. The big social platforms are moderated by the most naive people in the world. It's either that or willful negligence. Has to be. There's just no way these tech bros, who spent their lives deep in internet culture, are genuinely this clueless about how to moderate content.
bad actors know how to manipulate or dodge the content moderation to avoid punitive consequences.
People have been doing that since the dawn of the internet. People on my old forum in the 90s tried to circumvent profanity filters on phpBB.
Even now you can get round Lemmy.World filters against "removed-got" by adding a hyphen in it.
Nothing new under the sun.
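The hyphen trick above is easy to demonstrate. Here's a minimal sketch (the blocked term `"badword"` is a placeholder, not a real filtered word on any platform) of why a naive substring blocklist is trivially dodged, and how normalizing away separator characters closes that particular hole, at the cost of more false positives:

```python
# Hypothetical blocklist; "badword" stands in for any filtered term.
BLOCKLIST = {"badword"}

def naive_filter(text: str) -> bool:
    """Flags text only if a blocked term appears verbatim."""
    return any(term in text.lower() for term in BLOCKLIST)

def normalized_filter(text: str) -> bool:
    """Strips non-alphanumeric characters before matching,
    which catches the hyphen trick (and similar spacing tricks)."""
    collapsed = "".join(ch for ch in text.lower() if ch.isalnum())
    return any(term in collapsed for term in BLOCKLIST)

print(naive_filter("you badword"))        # True: exact match is caught
print(naive_filter("you bad-word"))       # False: a hyphen dodges it
print(normalized_filter("you bad-word"))  # True: normalization catches it
```

Users then move to misspellings, homoglyphs, or new slang, which is exactly the arms race described above.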
Auto-moderation is lazy and only going to get worse. Not saying there isn't some value in hard-banning things (like very specific spam, shit that just keeps responding to everything with the same message non-stop). But these mega outlets/sites want to use full automation to ban shit without any human interaction, at least unless you or another corp has connections on the inside to get a person to fix it. Just like how they make it so fucking hard to ever reach a person when calling (or even finding) a support line.
This automated shit just blacklists more and more and can completely fuck over people who use those sites for income. They can't even reach a person when their income is cut off for false reasons, and they don't get back-pay for the period of a strike/ban. Meanwhile the bad guys will always just move to a new word or phrase as the old ones get banned. So we as users are actually losing words and phrases while the actual shit moves on to the next one without issue.
The thing is that words can have a very broad range of meanings depending on who uses them and how (among many other factors), and you can't accurately code all of that into a form computers can understand. Even ignoring bad actors, it makes certain things very difficult, like searching for something that just happens to share words with something completely different and very popular.
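The flip side of the evasion problem is the false-positive problem: a filter that matches raw substrings can't tell context or intent apart. A small sketch (the blocked term `"ass"` is just a stand-in example) shows the tension, since the common fix, word-boundary matching, is in turn dodged by the spacing tricks discussed above:

```python
import re

def substring_flag(text: str) -> bool:
    """Flags any text containing the blocked term as a substring."""
    return "ass" in text.lower()

def boundary_flag(text: str) -> bool:
    """Flags only the blocked term as a standalone word."""
    return re.search(r"\bass\b", text.lower()) is not None

# Substring matching flags a completely innocent sentence...
print(substring_flag("classic bass solo"))  # True: false positive
# ...word-boundary matching doesn't, but it reopens the evasion hole,
# since "a s s" or "a-s-s" no longer match either.
print(boundary_flag("classic bass solo"))   # False
```

Neither check captures meaning; tightening one failure mode loosens the other, which is why purely automated moderation keeps eating ordinary words.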
How do we know they didn't type something more explicit to get the result and just change what's in the search bar? Has anyone verified this?
I actually don't know; I'm not sure it's even possible (I've never used Instagram, and the search might auto-submit for all I know), but intentionally flagging yourself as a potential child abuser, for clout, is a bit extreme...
We beat KOSA before, we can beat it again. Contacting your reps matters. Voting matters, especially in primaries and locals. So does being active politically in other ways.
https://www.fightforthefuture.org/
I'm fine with abducting children for a Super-Soldier program. But I draw the line at having photos of them on Instagram. Honestly, a deserved warning. Be better 👏