The (Trust and) Safety Dance
When a corporation pays to manage a problem instead of fixing it, it’s reasonable to question if they actually think it’s a problem in the first place.
“You can act real rude, and totally removed, and I can act like an imbecile”
— Men Without Hats
“Social” media is largely more “media” than “social” these days. Facebook, Instagram, Reddit, Twitter, and TikTok are all just variations of the same thing, and by that I mean they are endless feeds of mostly bots and clout chasers reposting the same screenshots and videos on every platform. “Following” someone has little bearing on whether you’ll actually see their content; you’ll see what the company wants you to see. Remember: “for you” isn’t.
One reason for this shift away from “social” is that “social” content is only really useful for attracting new users. “Your friends are doing it” is quite possibly the single most effective way to get a human to do something they don’t need to be doing. But we’re now in an era where the pool of people without an addictive scrolley app is dwindling, and so the effort to attract new users with “friends” becomes less profitable than squeezing the existing user base.
The other reason for the shift away from “social” is that showing scrollers the same repackaged advertiser-approved memes over and over is much cheaper and easier (especially when not actually, y’know, creating any of the content) than having a platform where humans interact. “Media” is cheap, but “social” can get expensive. The more opportunities platforms give people to pause scrolling and start socializing, the more opportunities there are for toxicity. Toxicity increases engagement, but left unchecked it can cause users to quit, or worse, sue. Companies looking to maximize profit need to keep toxicity at an optimal level: one that squeezes the maximum amount of engagement out of users but stops just shy of overwhelming them.
Enter “Trust and Safety” departments. “Trust and Safety” departments are kinda like “Human Resources” departments. They exist to help the company avoid expensive lawsuits and expensive PR blunders. These departments, I assume, are made up of good-hearted people who care deeply about their work and the well-being of others. But they are fighting a battle that the companies do not actually want to end.
“Trust and safety” departments keep awful online places just not-awful enough so that most users don’t quit. But that’s as far as they go. If these departments were enabled to go further, and actually make social media into places where conversation was king, and anyone interrupting the free exchange of ideas was promptly booted, profits would go down, probably to the point where the whole system wouldn’t be sustainable.
Catherynne M. Valente put it best in her now-famous rant, “Stop Talking to Each Other and Start Buying Things”: the platforms want us to do exactly what the title says.

To put it bluntly, “Trust and Safety” is, to borrow a phrase from anthropologist David Graeber’s 2018 book “Bullshit Jobs”, a bullshit job. A “bullshit job” is one that exists to manage a problem that need not exist in the first place. Some examples Graeber gives are:
Store greeters, who are paid to pretend to be happy because the other underpaid and overworked employees can’t muster it.
Administrative assistants who are hired to make someone wealthy feel important.
The person at the airport who hands out customer concessions when the airline loses their luggage.
In the context of “Trust and Safety”, consider that there’s absolutely nothing about “humans talking online” that necessitates putting up with most of the garbage we have to put up with these days. All the trolls, rage baiters, hate-speechers, propagandists and surreptitious advertisers who forcefully inject themselves into every conversation don’t actually need to be allowed to participate, and yet they feel impossible to avoid.
Here’s an analogy: Imagine you’re part of a study group that just added a new member. For the sake of brevity, let’s call him “M. Zuckerberg”. No, that’s too obvious; let’s say “Mark Z”. As soon as Mark Z joins the group, he is immediately so disruptive that not much actual studying can happen. After asking him to stop multiple times with no effect, what should the group do so they can get on with their studying?
If you answered “hire an outsider to monitor his disruptions and ensure they are kept at or below some acceptable baseline”, now you’re thinking like a social media company!
The obvious and simplest solution here is to kick Mark Z the fuck out. A study group does not need to accept anyone and everyone. It does not need to grow indefinitely. In fact, the larger a study group gets, the worse it tends to be for actual studying. A group that is focused on facilitating studying would never need a Trust and Safety department. The fact that “social” media companies have them implies a goal that is not aligned with facilitating conversation, or even (as Meta Platforms, Inc. puts it in their laughably ironic mission statement): “Giving people the power to build community and bring the world closer together”1
Corporate social media and its distractions are awful for achieving any kind of collaborative goal, even simple, casual goals like “getting to know each other”. But the answer isn’t a greater financial investment in “trust and safety”; it’s using spaces that don’t need it in the first place.
Yes, Justin is talking about the Fediverse again
The reason I bring up Mastodon/Lemmy and the Fediverse so often is not because I think you need to be there, but because it serves as a proof of concept that global online social networks can totally exist without all the garbage and nonsense that makes what we usually think of as “social media” so awful.
When I say I’m a “fan”, I don’t mean it in the way that I think most people mean when they discover some new shiny distraction. The Fediverse is not “fun” or “addicting” by default. It certainly can be, but that’s not the point of it existing. Mastodon/Lemmy (et al.) are tools that anyone can use to create a space for talking about whatever, however, and with whoever the hell they please.
When spaces are not seeking profits, they are not incentivized to grow endlessly, which means they are not incentivized to push engagement beyond what users naturally want. Non-growth platforms just aren’t incentivized to distract or upset their own users.
Mastodon has about two million users spread out over thousands of “instances” that are all connected, but each run by individuals or small teams. Being a small part of a bigger network means there are far fewer users per human moderator than on commercial platforms. Here are some numbers:
Meta Platforms, Inc. has about 40,000 “trust and safety” employees. Assuming all of those employees are actively monitoring content all of the time (they’re not), that’s about one human set of eyeballs for every 60,000 Facebook users. These employees are not members of whatever subgroup they are monitoring, they are not personally invested in the vitality of the communities, and they often don’t even speak the language. Their job is to identify and remove the worst kinds of toxic material. Nothing more.
On the other side of the equation, Lemmy.world is the largest single Lemmy instance, with 130k registered users (the Lemmy network is kind of like decentralized Reddit). As of this writing, Lemmy.world has eight admins (people who can monitor everything) and a few dozen hosted communities, each with their own human moderators. Counting the admins alone (an undercount, since the community moderators do most of the curation work) means there is one admin for every 1,500 active users. That is more than an order of magnitude better than Meta, despite Lemmy.world not having a “trust and safety” budget at all. Furthermore, the .world admins and moderators put the effort in not because they are paid to, and not because their company could be sued if they don’t, but because they want to. Creating a space for conversation is meaningful to them, and they themselves are members of the community and therefore personally invested in its health.
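For the curious, here’s a rough back-of-the-envelope sketch of those ratios. The ~12,000 active-user figure for Lemmy.world is my own inference from the “one admin for every 1,500 active users” claim above; everything else comes straight from the numbers already cited.

```python
# Back-of-the-envelope comparison of moderation ratios, using the figures cited above.
# Lemmy.world's active-user count is inferred (8 admins * 1,500 users each), not reported.

meta_ts_staff = 40_000                 # Meta's "trust and safety" headcount
facebook_users_per_staffer = 60_000    # "one set of eyeballs for every 60,000 users"
implied_facebook_users = meta_ts_staff * facebook_users_per_staffer  # ~2.4 billion

lemmy_world_admins = 8
lemmy_world_active_users = 12_000      # inferred figure

lemmy_users_per_admin = lemmy_world_active_users / lemmy_world_admins
improvement = facebook_users_per_staffer / lemmy_users_per_admin

print(f"Facebook: 1 reviewer per {facebook_users_per_staffer:,} users")
print(f"Lemmy.world: 1 admin per {lemmy_users_per_admin:,.0f} active users")
print(f"Difference: ~{improvement:.0f}x, i.e. more than an order of magnitude")
```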
If you’re thinking “well, this isn’t a fair comparison because Lemmy.world is significantly smaller than Facebook”, then you’re starting to get the point. Small communities can stay safe more easily. They can stay trustworthy more easily. It’s much easier to identify and minimize toxicity and harassment when the people in charge of identifying it understand the context behind what’s being said. It’s much easier to maintain a culture of whatever-the-hell-you-want when a single bad actor who is “technically not breaking any rules” can’t easily disrupt everyone.
Mastodon and Lemmy (or whatever new software hasn’t been invented yet) might never influence global culture to the degree that Facebook and Twitter once did. But they prove that “people constructively and healthily socializing via the internet” is entirely possible without being forced to tolerate any more nonsense than one would normally expect when us “messy” humans get together. “Social media: The Business”, on the other hand, cannot exist without the garbage. Even if a company chose to ignore its bottom line and invest heavily in making the Trustest and Safetest platform possible, it would still be staffed by human moderators who need to be paid, versus those on the Fediverse who care about creating good spaces so much that they do it for free.
But let’s be honest: companies would never invest more than the bare minimum into Trust and Safety, because they make a lot more money when their users are perpetually distracted and upset. A persistent state of anxiety is not some natural byproduct of humans interacting; these products are specifically engineered to make you feel bad.
…But not too bad. It’s not the role of Trust and Safety departments to make users feel happy, productive, or enriched; they just don’t want you to sue. And they don’t want you to quit. When an ostensibly “social” company has a Trust and Safety department, try to see it for what it is: a sign that failure is an option.
“Bringing the world closer together” y’know, like enabling genocide.