From a NYT piece about how the platforms are preparing to deal with the predicted flood of "deepfakes" - machine-learning-facilitated fake videos:

“You can already see a material effect that deepfakes have had,” said Nick Dufour, one of the Google engineers overseeing the company’s deepfake research. “They have allowed people to claim that video evidence that would otherwise be very convincing is a fake.”

And here's an example they give:

In recent months, video evidence was at the center of prominent incidents in Brazil, Gabon in Central Africa and China. Each was colored by the same question: Is the video real? The Gabonese president, for example, was out of the country for medical care and his government released a so-called proof-of-life video. Opponents claimed it had been faked. Experts call that confusion “the liar’s dividend.”

Right now, the problem of people using the existence of deepfakes to dispute the veracity of genuine footage is a bigger issue than the deepfakes themselves.

This is a technology whose very existence undermines our faith in what we see online. Our own confirmation bias will tend to make us think that videos showing things that go against our beliefs are almost certainly faked.

And our sense of a shared factual reality erodes further…