All fucking AI turns out to be just the collective work of destitute third world laborers
Content moderation is a sick job. Like slaughterhouse worker. A good society shouldn't require anybody to do that.
Content moderators only seem to be necessary in spaces without real consequences. If you post CSAM in the friend group chat you are ostracized, in the work email list you are fired. You can't just make another account and keep doing it. Preventing repeat offenders means the volume of offensive material is so low that everybody can share the load and it's no big deal. Of course you don't need slaughterhouse workers if you don't eat flesh.
The internet will forever be a space without 'real consequences', that's how it works.
Much as I hate and long for the destruction of Facebook, Reddit, Twitter etc. and a return to community forum days, your approach only works if there are no large spaces of any kind in which to post, because anywhere with like, >1000 users is gonna need dedicated moderators of some kind.
You are correct that the internet will forever be an anon hellscape. A lot of people have tried to improve it, e.g. stack exchange's reputation system, with mixed results. I don't think it should be this way though.
anywhere with like, >1000 users is gonna need dedicated moderators of some kind.
Yes, but not in the "removing bestiality posts" way. SRA national forums are this scale and mods just clean up flame wars. Probably because you need to pay $25 to get in.
A work slack where your posts can get you fired or arrested doesn't need this sort of dedicated abuse viewer position no matter how big it gets. HR may occasionally fire somebody who accidentally pastes a Pornhub link into #general but they're not getting PTSD from it.
But if it came down to it, I'd be comfortable giving up my Facebook car videos so that people didn't have to sit and watch CSAM all day. Not worth it.
Sure, so it's either very small or very tightly gatekept. Unless you clamp down on every single publicly accessible website, which isn't doable, this will be a thing.
I do want to note that such a clampdown is doable. Just not in the US under capitalism, probably. Big websites only enforce CSAM stuff now to avoid advertiser flight, and possibly legal consequences if it gets really bad. Enforcing ID-above-X-users, etc. would be about the same amount of coercion.
Anyway we've had slaughterhouses for a hundred years so mod PTSD is probably gonna be around for a long time.
It really isn't. Not if the internet is to exist in any meaningful and public form; the obstacles are virtually, if not literally, insurmountable. So long as anyone can get an IP address and access the internet, they can serve content and protocols/software can be made to browse that content as easily as we can on the internet today.
Much like governments of today wanting to break encryption, the only way to make this doable is to effectively defeat the whole point of computers.
If "ID-above-X-users" were made law, the biggest social sites would immediately require IDs, like Pornhub did in Louisiana recently. They might lobby against the law, but they're going to stay above ground because they are running a profitable business and they have shareholders and stuff. I think the majority of social media use is through companies like Facebook, Twitter, TikTok.
It's currently technically possible for anyone to make a CP website. They're rare because most countries will raid you if you do, going to great lengths to get you even if you're serving it over TOR. Same goes for drug markets. Everyone knows that if you build something like that, it's just a matter of time before you make one tiny opsec slip-up and go to jail. That's the level of coercion that has to be applied to get 99% compliance. And it can be done; it's being done right now with drug and CP websites. I like drug markets but they're super super niche. Most people don't even know they're real, TOR and crypto are technically intimidating, and they're constantly being shut down as LE plays whack-a-mole and operators exit scam. You can see how that does not translate well to making underground social media large enough to give mods PTSD. Posts on Dread get like 20 upvotes max.
And of course underground anonymous social media offers a degraded experience. Lots of normal people will be fine with aboveground sites that simply take their ID at signup, like Gmail asks for a phone number or NextDoor asks for your address. Anonymous sites will have a higher proportion of sickos posting PTSD content because that's the only place they can go, which drives away normal people, which makes the proportion of sickos higher, which drives away normal people, etc. It's what happened to "free speech" Voat, and why 4chan has 27 million monthly users compared to Instagram's 2 billion.
Sort of. CP sites are just as commonly black-holed by ISPs as actually shut down, so it's not that simple. And this is a level-of-demand thing. Tor isn't widely used, but if it were the only way to get some forms of media, it would be made much easier and much more popular.
And if the only workable solution is raiding and jailing anybody who runs a website without state-sanctioned ID verification, that seems a very heavy-handed approach with a million downsides, all for the sake of avoiding an excess of :freeze-peach: that somebody has to moderate.
ISPs black-hole because they don't want to get in trouble. Same thing. There's a bunch of layers where actors decide to comply with the law because it's easiest. Like AWS will probably kick you off if they find out you're hosting CP on their servers. Underlying this is the actual threat of state force: besides making advertisers happy, AWS doesn't want to get raided.
It's a heavy-handed solution, but that's kind of the nature of any policy change beyond market rate adjustments. You can tweak bond rates or whatever, but when you want to outlaw something you gotta have state force backing it up. To have, e.g. OSHA standards, you need to be willing to fine and even shut down dangerous workplaces. Most businesses will mostly comply because being punished is unprofitable. This is about worker safety. Jobs where you have to look at cartel executions 8 hours a day shouldn't exist. Rather than e.g. legislate it to two hours but leave an obvious profit incentive for companies to skirt the law, it would be better to remove the profit incentive. Make that an economically unproductive activity, because advertisers don't want to advertise next to "cleaned" but still illegal anon social media.
I agree that the jobs shouldn't exist. But this proposed solution throws the baby, the mother and the whole dang household out with the bathwater. 'State ID the whole internet' is the kind of massively bureaucratic and overly authoritarian approach that I suspect would invalidate a government in any fair society.
I'm starting to realise that fair communistic work organisation solves the problem anyway - nobody will actually be coerced into that kind of shitty work 8 hours a day. If the work needs doing, it can be organised much better, and if it's not worth doing, then workers won't do it and the workplaces will find suitable tailored solutions.
What if in order to sign up you had to use an ID, regardless of site or something. Obviously that would be hell to implement right now under capitalism. But could it work under socialism? Doesn't China have a similar thing?
anonymity provides more good than harm imo - even if there's a socialist government in charge you might be queer & not want people knowing about that, or have gotten your arse kicked by the cops for no reason since cops are still fuckin cops, or (since you specifically mentioned China) have issues with your capitalist boss that you wanna talk about with other workers without endangering your livelihood. imo this slaughterhouse model is really a product of how current social media is structured. it's funny that this is my example but furaffinity has ~40k concurrent users at any given time, is constantly besieged by fascist spam, and relies on volunteer mods who don't seem too traumatised by the work. when you're actually part of the community then the horrid part is a much more manageable % of your life than when you show up to do a 9-to-5 in the Omelas box
Good points. I would say, with the Furaffinity example, you are kind of just hoping the volunteer mods aren't also trash people. Because yeah, FA is fairly decent, but the other furry websites I've never heard of and don't frequent have really dipshit mods who are too interested in maintaining freeze peach to do anything about the Nazis or dillweeds posting pictures of their sona delivering a transphobic screed. Which I wouldn't know anything about because I'm not on those sites, but still.
Jokes aside I do agree, the way a site is setup tends to bring in a certain type of user or incentivize different behaviors so that's probably the best place to start.
It's technically impossible to enforce for all websites, this is just the reality of how innernet work.
I mean ideally re-educated so they don't do it anymore, this is all fantasy anyway
It is impossible to clamp down on all possible sites on the internet at this stage. No matter what you do, search engines will exist, and to be honest I think they serve a very good purpose separate from making money.
Do you propose linking real identity to everyone’s browsing and removing all ability to post anonymously? Seems like a major privacy and opsec concern and a major win for global capitalist surveillance
Reddit's system of deputizing moderators. Fuck, /r/anarchism's system of user written rules and elected moderators is probably the best model.
That's fine with me. But at some stage, you're gonna have a space big enough that this needs dedicated work, whatever you do.
I remember the stories about facebook moderators and their mental health struggles seeing such shit daily.
Yup. I think in a good society, you would go to jail (or be reeducated) upon posting CSAM and so there wouldn't be an endless torrent of it into moderated spaces.
I met a guy whose job was to review all suspicious content that was seized at Pearson International. Dude seemed totally dead inside.
CW: mention of SA, child abuse, etc.
In February, according to one billing document reviewed by TIME, Sama delivered OpenAI a sample batch of 1,400 images. Some of those images were categorized as “C4”—OpenAI’s internal label denoting child sexual abuse—according to the document. Also included in the batch were “C3” images (including bestiality, rape, and sexual slavery), and “V3” images depicting graphic detail of death, violence or serious physical injury, according to the billing document. OpenAI paid Sama a total of $787.50 for collecting the images, the document shows.
In a statement, OpenAI confirmed that it had received 1,400 images from Sama that “included, but were not limited to, C4, C3, C2, V3, V2, and V1 images.” In a followup statement, the company said: “We engaged Sama as part of our ongoing work to create safer AI systems and prevent harmful outputs. We never intended for any content in the C4 category to be collected. This content is not needed as an input to our pretraining filters and we instruct our employees to actively avoid it. As soon as Sama told us they had attempted to collect content in this category, we clarified that there had been a miscommunication and that we didn’t want that content. And after realizing that there had been a miscommunication, we did not open or view the content in question — so we cannot confirm if it contained images in the C4 category.”
:what-the-hell: i thought it was just text initially, but nope straight up images. this kind of job should include free therapy at the very very least.
The extra horrifying thing: they already have good CP image recognition and databases of hash values for known material. Which means what these workers are labeling is new stuff.
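For anyone unfamiliar with how the hash matching works, here's a rough sketch (my own illustration, nothing from the article). Platforms keep databases of hashes of previously identified abuse images and check uploads against them; real systems like Microsoft's PhotoDNA use perceptual hashes that survive resizing and re-encoding, but this toy version uses an exact cryptographic hash just to show the lookup logic:

```python
import hashlib

# Hypothetical database of hashes of already-catalogued flagged images.
KNOWN_FLAGGED = {
    hashlib.sha256(b"bytes-of-a-previously-flagged-image").hexdigest(),
}

def is_known(image_bytes: bytes) -> bool:
    """True if these exact bytes match an already-catalogued image."""
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_FLAGGED

# An exact re-upload of a known image is caught automatically...
assert is_known(b"bytes-of-a-previously-flagged-image")
# ...but anything new (or, for an exact hash, even slightly altered)
# misses the database entirely and lands in front of a human reviewer.
assert not is_known(b"bytes-of-a-brand-new-image")
```

The point of the sketch is exactly the grim implication above: hash matching can only recognize material that's already been catalogued, so everything the labelers were sent was, by definition, not in any database yet.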
It’s an endless stream of new horrors.
Go dunk on Steven Cohn's disgusting take in the replies https://twitter.com/spcohn/status/1615718665009102850
He points out that $320 a month is 53% above the living wage in Kenya, ignoring that 1) a living wage in Kenya buys you fucking poverty, and 2) their job is disgusting and traumatizing, and they are receiving literally zero mental health support to do it, even though the company can easily pay for that fucking support.
Go read Work Without the Worker by Phil Jones if you want to read about how this has been happening for decades and how they enforce compliance and make the work un-unionizable. Maybe the most depressing book I've ever read, and I've read The Jakarta Method.
Ghost Work by Mary L. Gray and Siddharth Suri is another good analysis of the topic, and although the writers are libs, they still call for the decommodification of labor.
This is a huge issue in the ghost work industry--the content aggregators and moderators who make the internet work--they get paid shit wages to scrub CP off Facebook and YouTube with no therapy to process what they're seeing.
These folks tend to be marginalized on multiple axes, often being disabled women in the global south.
They're also harder to organize because they don't share a workplace, the labor arbitrage is huge, and some of the states involved lack a legal process for union recognition. For example, the US, which is home to most of FB's content moderation, has no way for contract laborers to gain NLRB recognition.
Any time you do a Google search, buy something on Amazon, watch a YouTube video, or even browse FB, you're relying on the work of thousands of ghost laborers.
For all of :reddit-logo:'s problems, its system of volunteer moderators is by far the most humane of any major social media platform, and it's a good thing we brought it over here.
When OpenAI is valued at $29 billion for their electric parrot (a) that runs on slavery (apparently), liberal economic hegemony teaches you to think that, in terms of dollars, this means OpenAI has made a thing that objectively has that much value. What's really happening, though, is that billionaires with that much control ($29 billion worth) of actual world economic production, human labor productivity, capital ownership, etc. are making a bet that they can make more profit by investing in this thing. What the thing actually does doesn't matter. OpenAI is just another label for finance wonkery.
a. There has been mixed discussion about the importance of ChatGPT in terms of what it does. What I think it is, is a more interesting way to search whatever is archived on forums like reddit and 4chan. It can pull together source code from stackoverflow and github, which is pretty neat in practice, but it'll also tell you that the age of consent is 14 or whatever, because its training data for the string "age of consent" is all related to fucking libertarian incels on 8chan. Without better applications, I think there is little interest in the diminishing returns of the current batch of ML spectacles. I realize that it looks like they're getting exponentially better, and in some ways they are, but they are also dependent on finicky and exponentially amassed tech infrastructure that requires continued u.s. consumer stability and militarily enforced supply chain stability, and I think most of us are doubtful about the medium-term stability of american hegemony in the global south, where it matters most. If america can't get its cobalt and lithium, none of the rest really matters.
The real magic of all of this is this bit:
OpenAI’s outsourcing partner in Kenya was Sama, a San Francisco-based firm that employs workers in Kenya, Uganda and India to label data for Silicon Valley clients like Google, Meta and Microsoft. Sama markets itself as an “ethical AI” company and claims to have helped lift more than 50,000 people out of poverty.
What this means materially is that the people that are all being forced into this economic arrangement don't have to be working to produce food or other necessities to make capitalism keep turning. I think we underappreciate the extreme efficiency of current heavy industrial machinery. Perhaps this is the primary unknown known of a fundamentally Protestant capitalist empire at home: we enforce austerity while overproducing every material need, constantly. It also means that a little bit of what gets produced in excess is set aside to make impoverished people in the global south fuel all this tech stuff. At every level, the current technological revolution rests on an increasingly tenuous, purposefully impoverished class of skilled, information factory workers. These guys in Kenya literally are employed for poverty wages at an information factory. Materially, I see little difference between this and a customer service call center outsourced to India or Bangladesh.
Fuuuuuuuuck capitalism. This shit is bleak. How do our masters sleep at night?
Soundly, free from all material concerns, dreaming of life's pleasures.
“…that for all its glamor, AI often relies on hidden human labor in the Global South that can often be damaging and exploitative. These invisible workers remain on the margins even as their work contributes to billion-dollar industries.”
Wow, who would have guessed this cool great new app tech gizmo is really just a bunch of exploited poor people. Literally impossible to foresee.
Probably won't happen again
CW: So possession of child pornography etc. is a crime; why can companies possess it?
The shit exploited moderators have to experience was the final push that got me off of Facebook. No one's job should be to constantly view the worst humanity has to offer.
The hitherto history of AI development under capitalism is a system of fraud built on underpaid workers hiding behind the screen, whose sole grand purpose is to terrify less-underpaid-but-still-deprived workers into accepting their position under the threat of "automation".