Walking the tightrope between AI and audiences
Yes, being left behind is a risk. But so is destroying audience trust while leaving ourselves in hock to toxic companies.
It’s somewhat blindsided me that one of my favourite media podcasts these days is Media Confidential, from Prospect magazine. It’s two voices from the fading establishment of journalism, discussing its changes. At first glance, not where you’d expect to find insights about emerging tech.
Its hosts are, well, on the more experienced side of the industry. Alan Rusbridger is a former editor of The Guardian and now editor of Prospect, and his counterpart Lionel Barber once edited the Financial Times. It’s perhaps my own ageism that led me to expect too little of the podcast, a dangerous mistake for a man the wrong side of 50 to make. But week after week, they’ve proved compelling and insightful on not only what’s happening in the mainstream of journalism, but also the external factors influencing it.
And the latest episode was a bit of a banger, as the young folks say:
I, for one, spurn our new AI overlords
On it, they chat with Karen Hao, a former journalist at the WSJ and The Atlantic, and author of a new book: Empire of AI. She spent a while “embedded” with OpenAI, the folks behind ChatGPT, which was clearly an eye-opening experience. The way they controlled her access, pushed her out of some things, and the cult-like way they behave makes me eager to dive into the book. It also puts me uncomfortably in mind of the revelations in Careless People, the Facebook exposé from earlier in the year.
These are quite alarming people to be allying ourselves with. If the senior staff at Facebook were careless, then the ones at OpenAI seem almost fanatical, based on her accounts. They are True Believers in AI, and in particular in the holy grail of artificial general intelligence, human-level machine intelligence. In other words, the tech tool that can do everything we can. And they believe this, even though the path from where we are now to there is far from clear. The red flags are all there. Why aren’t more people heeding them?
The answer is simple, sadly. The deep and abiding fear of the current generation of managers — that they’ll repeat the failure of their predecessors and not adapt to digital quickly enough — is driving them towards a deeper danger. A tool is a tool. The good or harm is in how you use it. A thoughtless, desperate use of a tool you don't really understand is dangerous. Yes, we're in the toddler-with-a-chainsaw stage of AI adoption. There are literally years of prior art on machine learning and its use in journalism. But how many of the people tasked with implementing it now are aware of it?
20 years ago, we could have used social media to build deeper relationships with our users. But, instead, too many went into broadcast mode, and built unsustainable traffic levels, rather than audiences. The deep reluctance of many in the industry to embrace social as a medium of communication rather than of marketing showed up in a spike of unsubscribes after my post earlier in the week. As long as we were building clicks, not relationships, it was traffic the platforms owned, not us. And then they took it away.
When you stare into AI, AI stares into you…
Unless we start approaching AI with a wee bit more wariness, we’ll make mistakes just as bad as we did with Facebook and its ilk – but they’ll hit us faster and harder this time. As I mentioned on LinkedIn this morning, to use AI in journalism now is to walk a very narrow tightrope. On one side there lies the risk of falling behind competitors who are adapting to it more quickly. That’s the risk that’s most clear in most people’s eyes.
But on the other side lies a deeper chasm: that of utterly destroying user trust in our journalism. The evidence is still that the majority of our audiences are deeply sceptical about AI-generated journalism. I sometimes use generative AI illustrations to head these pieces. I do it because I feel the need to be au fait with these tools. Likewise, I do it because generative AI gives me the tools I need to create the visuals I want, at a price I can afford. Despite that, I always worry that my very use of them is alienating potential members of my audience.
(Tell you what, if lots more of you subscribe, I’ll start hiring Matt Buck to do cartoons instead…)
Every time we put generative content in front of our audiences, we run the risk of undermining their trust in us. Every time we do it without thoroughly vetting and fact-checking what these tools produce, we increase that risk exponentially. Remember: LLMs are probability engines, not answer machines.
Swapping Audience for AI is a bad deal
We’re only just getting mainstream journalism to truly take audience work seriously. But that work is in danger of being thrown aside and lost in the race to integrate AI into our workflows. There’s a really uncomfortable parallel here with the climate crisis. A few years ago, tech companies were making marketing capital out of their net-zero plans. Today, the hungry maw of AI demands to be fed with so much energy that demand is up, and Meta is funding nuclear power. No wonder that communities are fighting back.
Throwing away our sustainability goals for the sake of AI is swapping something we know we need to do for our survival as a species for something we want to do because we fear being left behind. Equally, swapping audience trust for cheaper production costs is not a good bargain. It’s the sort of deal that only a company, or an industry, that knows it is in trouble makes.
Yes, AI is here to stay. But its existence is not, in itself, an argument for its use, or for our acceptance of it. As Alan Jacobs put it recently:
A useful mental exercise: when people say “AI isn’t going anywhere” or “AI is here to stay,” substitute for “AI” the word “cancer.” A great many things that are here to stay are really bad and should be resisted as energetically as possible. Maybe AI isn't as bad as many fear. But the not-going-anywhere assertion is a way to avoid asking the key questions.
Charting a way forwards on newsroom AI
Luckily, that podcast episode was very interested in asking those key questions. And Hao’s solution is twofold:
- Push AI use into the back end. Use it to make sense of large data sets, to find the stories within them, that we can then go to do actual journalism about.
- Spend more time reporting on the societal impacts of AI than we do on integrating it into our businesses.
She suggested in the podcast that the ratio of talks about using AI as opposed to reporting on it was about 3:1 at the International Journalism Festival in Perugia this year. That is, my friends, very much the wrong way around.
Now, sure, this is self-serving on Hao’s part. A journalist who does well-researched and appropriately skeptical reporting on AI wants us to do more of that. But, dammit, she’s right. One can be both self-serving and correct…
We need to do more reporting on the true impact of this technology on our cultures and societies. The form and nature of its use is not pre-defined for us. We can still choose, and need to start doing so. We don’t need the start-up founder fluff journalism of the early 2010s reborn in AI boosterism, we need genuine investigations. And we need to be reporting the harms much sooner than we did with social media.
Making smart choices about using AI
And, back here in our future of journalism circles, we have to spend at least as much time reading – and writing about – the case studies where AI went wrong and hurt the journalism business using it, as we do circulating the case studies of those who made it work. You can learn as much from failure as you can from success – and often more.
In using AI, we may not be making a deal with the devil, but we’re certainly treating with someone with a fondness for horns and pitchforks, and a certain predilection for all-red outfits…
I’ve spent a lot of time over the last couple of years experimenting with how AI can improve workflows and the accessibility of our sites, without its use being audience-facing. I’ve been thinking about how it can help us inform audience strategy, and be a useful brainstorming companion, testing our assumptions and opening up other ways of telling our stories. In fact, I even have a training course outline ready to go.
But I’m reluctant to send it through to Ophelia at journalism.co.uk. I’m not — yet — sure I’ve rigged up my tightrope in the right place. However, I suspect the answer is, as is so often the case with anything audience-related, that there is no one safe spot. That narrow cord we need to walk is going to sit in very different places for different audiences. But to work that out, we need to experiment, and experiment carefully. Audience trust is difficult to win and oh, so easy to lose. The benefits of AI do not – yet – seem so clear that we can afford to really take the big risks.
But plenty of small ones are a necessity.