After Christchurch: how do we curtail the spread of video from atrocities?

The Christchurch attack was a terrorist event for the social media age. We need to address this situation on multiple levels to minimise the harm done.

Adam Tinworth

The tragedy that was the terrorist attack on a Christchurch mosque is provoking deep debate about our relationship with social media for a very uncomfortable reason: this was an atrocity committed in full knowledge of the impact it would have on social media. The killer was streaming footage even as he murdered people.

There's plenty to think about in relation to the traditional media's response to this - and that's a subject I hope to return to. But one of the key issues that has arisen is the tech platforms' struggle to keep up with the sheer volume of footage being uploaded from the attack:

YouTube says at one point it had to remove one video per second of the Christchurch mosque attack, and says tens of thousands of copies have now been removed.

The first thing to acknowledge is that this isn't just a technical problem. There's a fundamental human problem here, with thousands of people thinking that re-uploading footage of such a horrific event was an acceptable thing to do. It's clearly not - and we need to make sure, as individuals and as a society, that we are not rewarding such individuals with our attention.

Equally, the technology platforms need to make sure they do everything they can to minimise the effects of this. Let's not be coy about this: they are making huge profits off facilitating other people's content sharing. They have the resources to do this, and they need to pay the price of their success. However, it's not an easy problem to solve.

Julia Alexander for The Verge explored the technical situation:

YouTube also has a system for immediately removing child pornography and terrorism-related content, by fingerprinting the footage using a hash system. But that system isn’t applied in cases like this, because of the potential for newsworthiness. YouTube considers the removal of newsworthy videos to be just as harmful. YouTube prohibits footage that’s meant to “shock or disgust viewers,” which can include the aftermath of an attack. If it’s used for news purposes, however, YouTube says the footage is allowed but may be age-restricted to protect younger viewers.

So, yes, part of it comes back to reporting. As long as news organisations use footage as part of their coverage, an automatic take-down may not be appropriate. That, in itself, raises a couple of questions. Is any sharing of this content appropriate - and, if it is, should there be an effective whitelist of reputable news organisations? That would allow automatic blocking of all related content bar that of a few of the most trusted news organisations. Yes, there are complexities there, of course, but it's an easier challenge than the technological one.
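To make that concrete, here's a minimal sketch, in Python, of how such a whitelist might gate an automatic take-down. Everything in it is hypothetical - the uploader IDs are invented, and the hash-match flag stands in for whatever fingerprinting system a platform actually runs:

```python
# Illustrative sketch only: a whitelist of verified news organisations decides
# whether a hash-matched video is blocked outright or allowed with restrictions.
# The uploader IDs and the matching flag are hypothetical placeholders.

TRUSTED_NEWS_ORGS = {"bbc-news", "reuters", "ap-news"}  # hypothetical account IDs

def moderation_decision(uploader_id: str, matches_known_footage: bool) -> str:
    """Decide what happens to an upload that may contain atrocity footage."""
    if not matches_known_footage:
        return "allow"                  # no match: normal publishing flow
    if uploader_id in TRUSTED_NEWS_ORGS:
        return "allow_age_restricted"   # newsworthy use by a verified outlet
    return "block"                      # everyone else: automatic take-down

print(moderation_decision("reuters", True))         # allow_age_restricted
print(moderation_decision("random-user-42", True))  # block
```

The hard part, of course, isn't the logic - it's deciding who gets onto that list, and who decides.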

Deploying AI to minimise the spread of horrific imagery

On the technology front, our key tools seem to be machine learning and AI. The latter is already being used to assist in verifying (and debunking) videos:

Johnson reveals that the AP is set to introduce to its newsroom a new cloud-based tool that uses artificial intelligence to instantly verify – or expose as fake – the thousands of UGC videos examined by AP journalists every week. The AP Verify programme combines visual recognition and machine learning technologies to make snap assessments of UGC. The tool has been in testing phase since 2017 when it was approved for research funding as part of the Google-supported Digital News Initiative.

Facebook is using AI to detect and proactively remove revenge porn:

By using machine learning and artificial intelligence, we can now proactively detect near nude images or videos that are shared without permission on Facebook and Instagram. This means we can find this content before anyone reports it, which is important for two reasons: often victims are afraid of retribution so they are reluctant to report the content themselves or are unaware the content has been shared.
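To illustrate what "proactively" means in practice, here's a rough sketch of that kind of workflow: every new upload is scored by a classifier before anyone has a chance to report it, and anything above a threshold is held back for review. The classifier, threshold and queue here are hypothetical stand-ins, not Facebook's actual pipeline:

```python
# Sketch of proactive detection: content is scored at upload time, before any
# user report arrives. The classifier is a hypothetical stand-in for a trained model.
from typing import Callable, List

def scan_upload(upload_id: str,
                frames: list,
                classifier: Callable[[list], float],
                review_queue: List[str],
                threshold: float = 0.9) -> bool:
    """Return True if the upload was proactively flagged for review or removal."""
    score = classifier(frames)          # estimated probability of a policy violation
    if score >= threshold:
        review_queue.append(upload_id)  # held back before anyone can see or report it
        return True
    return False

# Usage with a dummy classifier that always returns a high score:
queue: List[str] = []
scan_upload("upload-123", frames=[], classifier=lambda f: 0.97, review_queue=queue)
print(queue)  # ['upload-123']
```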

These AI-based efforts were having some impact, with Facebook able to claim a success in restraining the spread of the video:

Facebook said it removed 1.5 million videos depicting images from the shooting in the first 24 hours after it happened – with 1.2 million of those blocked by software at the moment of upload.

Of course, the prevalence of live-streaming technology is challenging us even more. People can easily and fluidly stream from their phones and other cameras. My latest GoPro can stream easily via my phone. The terrorist's original video was streamed. Facebook has provided some figures on the impact:

The video was viewed fewer than 200 times during the live broadcast. No users reported the video during the live broadcast. Including the views during the live broadcast, the video was viewed about 4000 times in total before being removed from Facebook.

They also go into a little detail about how they managed to halt the spread of the video - and the challenges inherent in that:

We removed the original Facebook Live video and hashed it so that other shares that are visually similar to that video are then detected and automatically removed from Facebook and Instagram.
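As a rough illustration of what "hashed it" can mean, here's a minimal perceptual-hashing sketch using the open-source Pillow and imagehash libraries. It's a toy, not Facebook's proprietary system, and the frame filenames are invented: frames from the known video are hashed once, and new frames whose hashes sit within a small Hamming distance are treated as copies.

```python
# Perceptual-hash matching sketch (pip install pillow imagehash). A toy
# illustration of the general technique; platform systems are proprietary
# and far more sophisticated. The frame filenames below are hypothetical.
from PIL import Image
import imagehash

# Hashes of frames extracted from the known, banned video (built once).
banned_hashes = [imagehash.phash(Image.open(path))
                 for path in ["banned_frame_001.png", "banned_frame_002.png"]]

def looks_like_banned_footage(frame_path: str, max_distance: int = 8) -> bool:
    """True if a frame is visually similar to any frame of the banned video."""
    candidate = imagehash.phash(Image.open(frame_path))
    # Subtracting two perceptual hashes gives their Hamming distance;
    # a small distance means the frames are visually similar.
    return any((candidate - banned) <= max_distance for banned in banned_hashes)
```

A straight re-upload or re-encode usually survives that kind of check, which is how so many copies could be blocked at the moment of upload.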

However, people find their way around such systems:

Some variants such as screen recordings were more difficult to detect, so we expanded to additional detection systems including the use of audio technology.
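Here's a toy version of what audio matching can look like, assuming numpy and an already-decoded mono audio signal: record the dominant frequency bin in each short window, and compare that sequence against the known video's sequence. Real audio fingerprinting (Chromaprint, for example) is far more robust, but the principle is similar - the soundtrack of a screen recording changes far less than its pixels do.

```python
# Toy audio-fingerprint sketch with numpy: the dominant frequency bin per
# window forms a crude fingerprint. Real systems are much more sophisticated.
import numpy as np

def fingerprint(samples: np.ndarray, window: int = 2048) -> np.ndarray:
    """Dominant FFT bin for each non-overlapping window of a mono signal."""
    n_windows = len(samples) // window
    frames = samples[: n_windows * window].reshape(n_windows, window)
    spectra = np.abs(np.fft.rfft(frames, axis=1))
    return spectra.argmax(axis=1)

def similarity(fp_a: np.ndarray, fp_b: np.ndarray) -> float:
    """Fraction of aligned windows whose dominant bin matches."""
    n = min(len(fp_a), len(fp_b))
    return float(np.mean(fp_a[:n] == fp_b[:n]))

# Usage: a pure tone matches itself perfectly; random noise does not match it.
rate = 44100
tone = np.sin(2 * np.pi * 440 * np.arange(rate * 3) / rate)
noise = np.random.default_rng(0).normal(size=rate * 3)
print(similarity(fingerprint(tone), fingerprint(tone)))   # 1.0
print(similarity(fingerprint(tone), fingerprint(noise)))  # much lower
```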

Responding to the challenge of live-streaming

Clearly live-streaming is a huge issue. Alexander again:

It’s why live-streaming is considered a high-risk area for YouTube. People who violate rules on live streams, who are sometimes caught using Content ID once the live stream is over, lose their streaming privileges because it’s an area that YouTube can’t police as thoroughly. The teams at YouTube are working on it, according to the company, but it’s one they acknowledge is very difficult.

In the aftermath of an event like this, there's a tendency to want to assign blame. However, we also need to acknowledge the complexity of the problem. We're still in the early stages of dealing culturally with the impact of everyone having access to the kinds of technology that used to be available only to the biggest broadcast outlets. The response to that will need to be multi-faceted:

  • Technological - which is already being worked on, as discussed above
  • Social - there need to be deeper social consequences for people choosing to share information of this sort.
  • Political - again, it's worth considering the legislative response to this. What civil and criminal consequences should attach to people who choose to share footage of this type? For example, in New Zealand the Films, Videos and Publications Classification Act makes people who have or share a copy of this video liable to face a fine of $10,000 or up to 14 years in jail. Conversations have already started in the UK - and more than likely elsewhere, too.

You cannot uninvent technology. The capacity for live-streaming and easy video sharing is not going to go away. So, instead, we need to concentrate on framing a societal response that protects the innocent, minimises harm, and punishes offenders.

Tags: livestreaming, Facebook, legislation, terrorism, YouTube, AI, moderation

Adam Tinworth

Adam is a lecturer, trainer and writer. He's been a blogger for over 20 years, and a journalist for more than 30. He lectures on audience strategy and engagement at City, University of London.
