A journalist and a robot working together.
AI is here to stay — so let's figure out a way to work with it. Image by Midjourney.

Learning to live with AI

Generative AI is here to stay. So, the sooner we set some guidelines, and communicate them to the reader, the sooner we can start experimenting.

Adam Tinworth

AI Principles for Publishers

The big difference between Generative AI and the other buzzword tech of the past few years is that people are finding practical, daily uses for it. I have never knowingly used Web3 or the Metaverse in my work – but I do use generative AI to illustrate my posts.

In the last couple of weeks, I’ve come across a journalist using a personal, paid ChatGPT account in their work, and another worrying that their particular skills are no longer needed. I’ve seen examples of students using LLMs to generate essays (and those are just the ones that were caught…).

I’m experimenting with a Slack bot that claims to assist in your SEO work. Generative AI is here, with clear use cases that make sense to ordinary people.

That’s what those other trendy technologies never managed.

Set the rules now

The best AI is assistive intelligence. Image by Midjourney.

The last time I remember something happening quite this quickly was social media in the late 2000s. Back then, too many publishers were too slow to adopt sensible guidelines for their staff, leading to some unfortunate errors — and consequent reputation damage. Indeed, I’d argue that our slowness then to set boundaries has led to some of the trust erosion we’re seeing now.

Watching journalists tear chunks off each other in a partisan way on Twitter is doing nothing for our reputation for impartiality and trustworthiness.

And so, I’m bloomin’ delighted to see some larger publishers getting ahead of this. Chris Moran tweeted out The Guardian’s principles earlier.

The Guardian’s approach to generative AI
Over the last three months, colleagues from our editorial, creative, engineering, product, legal, commercial and partnerships teams have set up a Guardian AI working group to consider how we respond to these risks and opportunities and to draft a set of Guardian-wide AI principles.

They worked with the former leader of the JournalismAI project to create them.

And, of course, the FT published their first pass at some a few weeks ago.

Letter from the editor on generative AI and the FT
Our journalism will continue to be reported, written and edited by humans who are the best in their fields

Where are you on getting yours sorted?

Good experiments need good boundaries

This is a serious question: if we're not ruthlessly transparent with our readers about if, how and where we're using AI, we're setting ourselves up for another serious decline in our already shaky trust levels. It's why one major point of similarity in the FT and Guardian approaches is transparency. Here's the relevant par from the FT, for example:

We will be transparent, within the FT and with our readers. All newsroom experimentation will be recorded in an internal register, including, to the extent possible, the use of third-party providers who may be using the tool. Training for our journalists on the use of generative AI for story discovery will be provided through a series of masterclasses.

If you have staff already using AI, you need to find out — and soon. Whatever you do, don't discourage experimentation. But make sure it's clearly tagged as such to the readers.


The Reuters Digital News Report controversy

I’d missed this completely, until I heard some of my colleagues at City discussing it earlier in the week:

Nobel peace laureate Maria Ressa has claimed Oxford University’s leading journalism institute is publishing flawed research that puts journalists and independent outlets at risk, particularly in the global south.

Yikes. More:

Its findings include rating Rappler – the outlet Ressa co-founded, and whose work was cited in her Nobel prize nomination – the least trusted media outlet on a list of those surveyed in the Philippines.

The reason it’s so untrusted, of course, is a long-running disinformation campaign to discredit it. And while the report acknowledges that, Ressa’s argument is that the finding can still be used against them by the government.

The Reuters Institute has published a response on its website:

Alan Rusbridger, Chair of the Reuters Institute’s Steering Committee, says: “We deplore abuse against Maria Ressa and misrepresentation of our research. The Institute has reviewed the methodology with our country partners and Advisory Board. We believe it is robust. We have taken steps to mitigate abuse, push back against it, and will continue to do so”.

I don’t have a lot to add here. There are some situations where I know it’s better for me to shut up and listen. I’m not a research academic. I’m not a journalist working under life-threatening conditions. There are people much better equipped to navigate these complexities than me.

I know, I know. Profound failure as a pundit…


Quickies


The Snopes logo on fire in a barren landscape.
Image generated by Midjourney, and edited in Pixelmator Pro

What went wrong for the internet’s original fact check site?

If you’ve been on the internet long enough, you’ll have used Snopes. In the late 90s and early 2000s, it was the go-to site for debunking viral email chain letters, and then their reincarnated form on Facebook. But, even as misinformation has surged on the old interwebs, Snopes has slowly faded from relevance.

This is the story of why:

This impulse to outwit and rile up folks felt weirdly at odds with the good faith and empathy you’d expect from someone in his business. In 2016, for example, Mikkelson kidded with Green on Slack that a man named Jeff Zarronandia was his best hire. “Model employee,” Green replied, according to screenshots. Mikkelson had made up Zarronandia a year before to troll the conservatives who insisted that Snopes was biased toward liberals. But as Snopes sought to level up, Green, then the director of business development, told Mikkelson that he thought they needed to “clean up our loose ends.” That didn’t happen, and Zarronandia was outed as an alt in a BuzzFeed exposé by the reporter Dean Sterling Jones.

Too many of the early internet successes were run by people with great ideas — but without anything like the skill needed to manage a huge success. And so things spiral into ever increasing chaos.

It’s a horribly compelling story.


Have a great weekend, folks.



Adam is a lecturer, trainer and writer. He's been a blogger for over 20 years, and a journalist for more than 30. He lectures on audience strategy and engagement at City, University of London.
