What you need to know: A bug that marked legitimate stories as spam overnight has exposed changes at Facebook around content moderation. These changes mean that moderation is likely to be slower, or too aggressive, in the coming weeks. Facebook, and other algorithmic services, are likely to become an unreliable source of traffic.
Just after I woke up this morning, my phone pinged with a Facebook Messenger chat from an old game-writing acquaintance asking if I'd reported her post about Boris Johnson. Uh, no. But several of her posts had been reported as spam, and their reach limited as a result. They were largely links to reputable news organisations.
A quick look at Facebook, while my youngest daughter cheerfully bounced up and down on my recumbent form demanding that I "play the wizard game", showed that this was far from unique.
What happened? A content moderation tool based on machine learning had gone rogue, marking too many posts as spam.
So, it's just a bug and all fixed, right?
A crisis-led switch to AI moderation
The key issue is here:
Facebook told its workers Monday to work from home if possible. On Tuesday, the company told its contract moderators not to come into the office; however, they are barred from working from home due to privacy concerns.
If that's the case, then unless and until Facebook modifies the working arrangements for its moderators, we're likely to see two things:
- A rise in misinformation spreading unchecked
- A rise in false positives as Facebook tweaks its AI systems to compensate
That means we're actually going to be facing both a surge in misinformation AND likely problems reaching people through algorithmic services. Google has already warned of this around YouTube:
What you need to do: Work on ways of getting the most important Covid-19 information to your readers through direct (owned and operated) methods, rather than through algorithmic services which are likely to be unusually unreliable in the coming months.