Is content moderation futile? And — if so — what should we do instead?

Are we waging a losing war on misinformation? Do we need to try a different approach?

Adam Tinworth

Leading tech thinker Benedict Evans has written an interesting post on the inherent problems with content moderation online. All our efforts to “clean up” misinformation seem to be having less impact than we might like. Evans argues that this is an inherent property of the internet:

The internet is people: all of society is online now, and so all of society’s problems are expressed, amplified and channeled in new ways by the internet.

This has resonance with the ideas danah boyd was exploring over a decade ago: that the internet is an amplifying mirror of humanity. As she wrote in 2009:

One thing to keep in mind about social media: the internet mirrors and magnifies pre-existing dynamics. And it makes many different realities much more visible than ever before.

The internet as a city

Evans likens the situation to a huge city: bringing all those people together has a positive impact in terms of innovation and creative mixing, but a downside in the opportunities for crime to build networks:

Anyone can find their tribe, and in the 90s this made AOL a way for gay people to meet, but the internet is also a way for Nazis or jihadis to find each other. One could also think of big European cities before modern policing — 18th century London or Paris were awash with sin and prone to mob violence, because they were cities.

However, the parallel he’s drawing with cities is not exact. More online policing may not be the way forward; reengineering our online social spaces to minimise the opportunities for bad actors might be:

Hence, I wonder how far the answers to our problems with social media are not more moderators, just as the answer to PC security was not virus scanners, but to change the model — to remove whole layers of mechanics that enable abuse. So, for example, Instagram doesn’t have links, and Clubhouse doesn’t have replies, quotes or screenshots. Email newsletters don’t seem to have virality.

The many vectors of misinformation

The problem, though, is that there are so many different vectors through which misinformation can spread. For example, I now have a date for my first vaccine jab. In a moment of whimsy, I thought I’d head to Etsy to see if I could find a Bill Gates badge to wear for the occasion. And guess what the first result was?

An anti-vaxxer mug on Etsy

Is it possible to close down enough vectors that these ideas find it harder to spread? When people are printing misinformation on mugs (whether for political or financial reasons), is the problem just too pervasive now? And who is analysing the role Etsy plays in spreading misinformation?

Or could this be something of a “misinformation vaccine”? If we can use it to shut down enough transmission vectors, might we reach a level of population-wide immunity to misinformation?
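To stretch the epidemiological analogy a little further (a back-of-envelope framing of my own, not anything Evans has modelled): if each person who believes a false claim passes it on to R₀ others on average, the standard herd immunity arithmetic gives a threshold for how many transmission vectors would have to close before spread declines:

```latex
% R_0: average number of people each believer passes a claim on to
% p:   fraction of transmission vectors closed
% Spread declines once the effective reproduction number drops below 1:
R_e = R_0 (1 - p) < 1
\quad\Longrightarrow\quad
p > 1 - \frac{1}{R_0}
% e.g. a claim with R_0 = 4 keeps spreading until more than
% 75\% of its vectors are shut down.
```

On that rough arithmetic, the more contagious the claim, the closer to total the shutdown has to be. Which may be exactly why closing one or two vectors rarely seems to make a dent.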

It’s an interesting thought.



And finally…

Facebook is busy making the sort of embarrassingly basic filtering errors that made sending emails about Scunthorpe needlessly challenging 20 years ago (there’s a sketch of the underlying “Scunthorpe problem” below):

Facebook takes down official page for the French town of Bitche
Bitche is a commune in the Moselle department of the Grand Est administrative region in northeastern France, near Germany.

It wasn’t long ago that Facebook was suspending people for talking about local Sussex beauty spot Devil’s Dyke.
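For anyone who hasn’t met the Scunthorpe problem before, the mechanics are simple to demonstrate. Here’s a minimal Python sketch (the blocklist is my own illustrative assumption, not Facebook’s actual rules) showing how naive substring matching flags “Bitche”, and why even smarter whole-word matching still trips over “Devil’s Dyke”:

```python
import re

# Hypothetical blocklist, for illustration only.
BLOCKLIST = ["bitch", "dyke"]

def naive_filter(text: str) -> bool:
    """Block any text containing a banned term as a raw substring."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKLIST)

def word_boundary_filter(text: str) -> bool:
    """Only block banned terms that appear as whole words."""
    lowered = text.lower()
    return any(re.search(rf"\b{re.escape(term)}\b", lowered) for term in BLOCKLIST)

for place in ["Bitche, Moselle", "Devil's Dyke, Sussex"]:
    print(place, naive_filter(place), word_boundary_filter(place))

# Bitche, Moselle True False        <- substring matching blocks "Bitche"
# Devil's Dyke, Sussex True True    <- whole-word matching still blocks it
```

Whole-word matching rescues Bitche, but Devil’s Dyke genuinely contains a blocked word, so no amount of string matching saves it. You need context, which is the whole problem in miniature.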


Adam Tinworth

Adam is a digital journalism lecturer, trainer and writer. He's been a blogger for over 20 years, a journalist for 30 and teaches audience strategy and engagement at City St George’s, London.
