What you need to know about Twitter’s crisis disinformation policy

Twitter announced Thursday that it has put new safeguards in place to prevent disinformation and misinformation from spreading on its platform in times of crisis, which it defines as any “armed conflict, public health emergencies and large-scale natural disasters”.

Twitter’s head of trust and security, Yoel Roth, said in a blog post Thursday that the new policy “will guide our efforts to elevate credible, authoritative information, and help ensure that viral misinformation is not reinforced or recommended by us during crises.”

The policy change aims to cut through false, and often politically motivated, content that can flood the platform and make real-world situations much worse. (Twitter, for example, found itself inundated with misinformation and fraudulent accounts at the start of the war in Ukraine.)

But laying out a new content moderation policy in a blog post is one thing; actually enforcing it is another. Some key questions remain about how the company will go about it.

How will this work in practice?

To determine whether a tweet is false or misleading, Twitter says it will rely on evidence from “conflict monitoring groups, humanitarian organizations, open source researchers, journalists, and more.”

Interestingly, Twitter says it will not target tweets that contain “strong commentary, attempts to disprove or verify facts, and personal anecdotes or first person accounts.” The company seems to be signaling a narrow definition of potentially dangerous tweets, perhaps targeting only those that present outright false information as if it were the truth. But it is hard to predict what kind of tweets could go viral and wreak havoc in the real world. “Strong commentary” on Facebook contributed to widespread COVID denial and even to the mass killing of Rohingya in Myanmar, Reuters has reported.

The policy also focuses on Twitter accounts with high potential reach, such as accounts with large followings, including state-affiliated media accounts (Sputnik and RT, for example) and official government accounts. If Twitter finds tweets containing falsehoods about an emerging crisis, it plans to stop recommending or amplifying them across the platform, including in home feeds, search results and Explore. It will also place warning labels on tweets it has judged to be false.

There are many widely followed accounts, though. Will Twitter be constantly monitoring all of them? Major social media companies rely heavily on machine learning models to identify rule-breaking content, and a viral tweet can come from an account with two followers or 2 million. So the same old question arises: will Twitter’s AI be able to find and stop the spread of malicious content fast enough?

In a recent example, links to the Buffalo mass shooter’s livestream and manifesto were reposted to Facebook and Twitter, and the two companies scrambled to delete them in the aftermath of last weekend’s attack.

Why now?

The company says it has been working on the policy since last year. It is possible that the work was inspired by the January 6, 2021 attack on the Capitol. Former CEO Jack Dorsey told Congress his platform had played a role, and Twitter was a key sounding board for the baseless “Stop the Steal” conspiracy that fueled the uprising. Twitter could also see that with white supremacy and extreme partisan rancor still simmering in the US, future “crisis” situations are possible and even likely.

Will this policy survive if/when Elon Musk takes over the company?

Twitter itself is in crisis, as its board of directors recently approved a sale of the company to Elon Musk. Musk is a self-proclaimed provocateur and a loud crusader for more “freedom of speech” and less content moderation on Twitter. If he takes his purist views on online speech seriously, he could scrap Twitter’s new policy and allow people to post whatever nonsense they like, even in “times of crisis.”

Ultimately, Musk will have to answer the question of whether “freedom of speech” equates to “freedom of reach”. Global social networks are unlike any previous telecommunications network in the sheer reach they give viral content: they can deliver popular posts to billions of people around the world in milliseconds. Does limiting that reach violate Twitter users’ right to free speech? Musk seems to think so.

But Twitter’s current leadership clearly thinks not. With the new crisis moderation policy, Twitter will not prevent people from posting untrue content on the platform, but it does reserve the right to limit that content’s reach.

