Content Moderation
Content moderation at scale is impossible to do well, as my impossibility theorem states. And basically every day we see more examples of this in action. The latest: the NY Times reports that YouTube took down a video the January 6th House Select Committee had posted to the site, detailing many of the lies Donald Trump told about the 2020 election.
The House select committee investigating the Jan. 6 riot has been trying to draw more eyes to its televised hearings by uploading clips of the proceedings online. But YouTube has removed one of those videos from its platform, saying the committee was advancing election misinformation.
The excerpt, which was uploaded June 14, included recorded testimony from former Attorney General William P. Barr. But the problem for YouTube was that the video also included a clip of former President Donald J. Trump sharing lies about the election on the Fox Business channel.
It’s not difficult to figure out how this played out. Back in December of 2020, YouTube changed its policies to ban videos that claimed there was election fraud. You can agree or disagree with this policy, but YouTube realized that disingenuous smear merchants were spreading nonsense conspiracy theories leading to real world harm, and decided, reasonably, that they didn’t want it on the site.
Of course, that also makes things difficult when you have the January 6th Committee trying to show the kind of nonsense that Trump was spewing about the election, and they want that posted to YouTube. But, as YouTube notes, on a first glance, this content violates its policies.
“Our election integrity policy prohibits content advancing false claims that widespread fraud, errors or glitches changed the outcome of the 2020 U.S. presidential election, if it does not provide sufficient context,” YouTube spokeswoman Ivy Choi said in a statement. “We enforce our policies equally for everyone, and have removed the video uploaded by the Jan. 6 committee channel.”
In other words, for all the claims of how “biased” these policies are, they also (once again) show how the same policies impact people across the political spectrum.
Of course, this kind of content moderation issue isn’t new. Indeed, one of the first big controversies around content moderation on YouTube came after Congress pressured the company to remove “terrorist” videos, with the end result being that those who were documenting war crimes found that their own accounts were shut down.
This is also why I wish companies were more thoughtful about how they handle these decisions. While it's understandable that a company would reasonably say it doesn't want garbage election propaganda on its site, there are two risks associated with that. The first is that it sweeps up people who are trying to document the speakers pushing that propaganda, as happened here. The second is that in scenarios where there actually are problems with an election (and, to be clear, there remains no evidence of any in 2020), it will be that much more difficult to discuss them.
In other words, there is no easy answer here. And, yes, I know that supporters of Trump will insist that this is why YouTube shouldn't take down election propaganda under any circumstances, but their unfailing willingness to keep pushing the completely baseless "big lie" is proof of why perhaps we shouldn't be making it easier for gullible people to fall down that rabbit hole.
But, still, it does seem like there ought to be better ways of distinguishing between "promoting" dangerous behavior and "documenting" it.