New Twitch Guidelines Finally Take Aim at Stopping Misinformation

Vaccine conspiracies and election fraud are mentioned in the new guidelines.

Twitch has updated its policy on spam, scams, and malicious conduct to prevent “harmful misinformation actors” from using its service to disseminate damaging and widely disproven claims.

The streaming platform said its new rules would consider a person’s online presence—including their actions outside of Twitch—and whether they persistently spread misinformation. According to Twitch, this group makes up only a small share of its users, and it won’t take action against someone sharing “one-off statements containing misinformation.”

For an investigation to result in a ban, the Twitch support team will check that harmful claims are shared persistently, that they are widely disproven, and that they fall under “harmful misinformation topics, such as conspiracies that promote violence.” Twitch’s safety blog outlining the new policy says all three of these criteria must be met, as claims that meet them pose the biggest risk of real-world harm.

Speaking to the New York Times (via The Verge), Twitch vice president of trust and safety Angela Hession said the platform is “taking this precautionary step and updating our policies to ensure that these misinformation superspreaders won’t find a home on our service.”

While deplatforming disinformation mouthpieces seems like a logical step in ending violence, vaccine distrust, and US election fraud propaganda, Twitch has been just as slow as its social media peers in taking action against the internet’s worst. The report notes Twitch has maintained that such actors make up only a small fraction of its userbase compared to other platforms, but it takes just a few bad actors to popularize harmful lies and conspiracies that disproportionately threaten the most vulnerable.

In 2021, another NYT article highlighted how Twitch banned former president Donald Trump before many of its competitors bothered. However, it still allowed some of his loudest advocates to remain, including one QAnon user who encouraged viewers to overturn the results of the 2020 US presidential election. Several of those accounts have since been banned, and related content should be covered by the new policy, but Twitch has a shaky history of inaction or slow response when it comes to taking responsibility for the content it hosts.

Now, Twitch’s community guidelines have points that address “misinformation that targets protected groups” and the spread of conspiracies involving COVID-19, its vaccines, or dangerous health treatments. There are also new rules on undermining the election process or spreading false claims during a public emergency, like active shooter situations or natural disasters.

Other gaming-focused platforms like Discord have also recently taken steps to stop the spread of misinformation; Discord began purging alt-right and white nationalist servers back in 2018. It’s a relief to see specific policies take aim at the rampant harm caused by the worst among us, but there’s still a bitter taste given how slowly these policies arrived and how few protections were in place for the people misinformation harms most. I absolutely believe deplatforming works, but the longer we let something dangerous fester, the harder the fire becomes to put out. Twitch, Discord, and any others following suit have the right idea in implementing these policies; I just hope to see them enforced in ways that genuinely protect marginalized people going forward.