Why Elon Musk Should Read Facebook’s Latest Transparency Report

Today, let’s talk about Facebook’s latest effort to make the platform more comprehensible to outsiders — and how the findings inform our current, seemingly endless debate about whether you can have a social network and free speech, too.

Start with an observation: Last week, Pew Research reported that a large majority of Americans — Elon Musk presumably among them — believe that social networks censor posts based on the political views they express. Here is Emily A. Vogels:

Ticking up slightly from previous years, about three-quarters of Americans (77%) now believe that social media sites intentionally censor political viewpoints they find objectionable, including 41% who say this is highly likely.

Majorities across political parties and ideologies believe that these sites engage in political censorship, but the view is especially widespread among Republicans. About nine in ten Republicans and GOP leaners (92%) say social media sites intentionally censor political viewpoints they find objectionable, with 68% saying it is highly likely. Among conservative Republicans, the view is nearly universal: 95% say these sites are likely to censor certain political views, and 76% say it is highly likely.

One of the reasons I find these numbers interesting is that, of course, social networks do delete posts based on the views they express. American social networks, for example, all agree that Nazis are bad and that you shouldn’t post Nazi content on their sites. That is a political opinion, and saying so should not be controversial.

Of course, that’s not the core complaint of most people who complain about censorship on social networks. Republicans constantly say that social networks are run by liberals, have liberal policies, and censor conservative views to further a larger political agenda. (Never mind the evidence that social networks have, on the whole, been a huge boon to the conservative movement.)

So if you ask people, as Pew did, whether social networks censor posts based on politics, they won’t answer the question you actually asked. Instead, they answer a different question: do the people who run these companies seem, for the most part, to share your politics? And that explains, I think, more or less 100 percent of the difference in how Republicans and Democrats responded.

But whether on Twitter or in the halls of Congress, this conversation almost always takes place only on the most abstract level. People will certainly complain about individual posts being removed, but rarely does anyone go into detail: what categories of posts are being removed, in what numbers, and what the companies themselves have to say about the mistakes they make.

Which brings us to a document that has a dull name but is a delight for those of us who are curious about, and enjoy reading about, the failures of artificial intelligence systems: Facebook’s quarterly Community Standards Enforcement Report, the latest edition of which the company released today as part of a larger “transparency report” for the second half of 2021.

An important thing to focus on, whether you’re an average user concerned about censorship or someone who recently bought a social network after promising to allow almost all legal speech, is what kind of speech Facebook is actually removing. Very little of it is “political,” at least in the sense of “commentary on current events.” Instead, it’s about drugs, guns, self-harm, sex and nudity, spam and fake accounts, and bullying and intimidation.

To be sure, some of these categories are deeply entangled in politics – terrorism and “dangerous organizations,” for example, or what qualifies as hate speech. But for the most part, this report describes things that Facebook removes because doing so is good for business. Time and again, social products find that usage shrinks when even a small percentage of the material they host contains spam, nudity, gore, or people harassing each other.

Usually, social companies talk about their rules in terms of “keeping the community safe.” But the more existential goal is to make sure the community returns to the site at all. That’s what makes Texas’s new social media law, which I wrote about yesterday, potentially so dangerous for platforms: it appears to require them to host material that will scare their users away.

At the same time, it is clear that deleting too many messages also drives people away. In 2020, I reported that Mark Zuckerberg told employees that censorship was the biggest complaint from Facebook’s user base.

A more sensible approach to regulating platforms would start with the assumption that private companies should be empowered to adopt and enforce community guidelines, if only because their businesses probably wouldn’t be viable without them. From there, we could require platforms to tell us how they moderate, on the assumption that sunlight is the best disinfectant. And the more we understand about the decisions platforms make, the smarter a conversation we can have about which mistakes we’re willing to tolerate.

As the content moderation scholar Evelyn Douek wrote: “Content moderation always involves errors, so the pertinent questions are what error rates are reasonable and what types of errors should be preferred.”

Today’s Facebook report highlights two main types of mistakes: mistakes made by humans and mistakes made by artificial intelligence systems.

Start with the humans. For reasons the report does not reveal, between the last quarter of 2021 and the first quarter of this year, human moderators suffered “a temporary decline in enforcement accuracy” on drug-related posts. As a result, the number of appeals rose from 80,000 to 104,000, and Facebook ultimately restored 149,000 posts that had been erroneously deleted.

Still, humans arguably had a better quarter than Facebook’s automated systems. Among the AI stumbles this time:

  • Facebook recovered 345,600 posts that were erroneously deleted for violating self-harm policies, up from 95,300 a quarter earlier, due to “an issue that caused our media-matching technology to take action against non-violating content.”
  • The company recovered 414,000 posts that had been erroneously removed for violating its terrorism policy, and 232,000 posts related to organized hate groups, apparently due to the same issue.
  • The number of posts it erroneously deleted for violating its policies on violent and graphic content more than doubled in the past quarter, to 12,800, as automated systems erroneously deleted photos and videos of the Russian invasion of Ukraine.

Of course, there was also evidence that the automated systems are improving. Most notably, Facebook took action against 21.7 million posts that violated its policies on violence and incitement, up from 12.4 million in the previous quarter, “as a result of improving and expanding our proactive detection technology.” That raises more than a few questions about what escaped detection in previous quarters.

Still, Facebook shares much more about its flaws than other platforms do; YouTube, for example, shares some information about videos that were removed in error, but not by category, and with no information about what mistakes were made.

And yet there is so much more that we would benefit from knowing – from Facebook, YouTube, and all the rest. How about seeing all of this data broken down by country, for example? What about seeing information on more explicitly “political” categories, such as posts removed for violating health misinformation policies? And how about seeing it all monthly instead of quarterly?

Frankly, I don’t know whether that would do much to change the current debate over free speech. Partisans simply have too much to gain politically by shouting “censorship” endlessly whenever a content moderation decision goes against them.

But I wish lawmakers would spend at least an afternoon digging into the details of a report like Facebook’s, which lays out both the business and technical challenges of hosting so many people’s opinions. It underlines the inevitability of mistakes, some of which are quite serious. And it raises questions that lawmakers could answer through regulations that might actually survive First Amendment scrutiny, such as what rights a person should have to appeal when their post or account is mistakenly deleted.

There is, I think, also an important lesson for Facebook in all that data. Every three months, by its own count, millions of users see their posts erroneously deleted. It’s no wonder that, over time, this has become the biggest complaint among users. And while mistakes are inevitable, it’s also easy to imagine Facebook treating these customers better: explaining the mistake in detail, apologizing, inviting users to give feedback on the appeal process, and then actually improving that process.

The status quo, in which those users may receive a short automated response that answers none of their questions, is a world where support for the social network — and for content moderation in general — continues to dwindle. If only to preserve their own businesses, it’s time for platforms to make that case.
