
AI and publishers: don't panic — yet

Why are publishers making fools of themselves by publishing AI content too soon? They're terrified of repeating the mistakes of the past.

Adam Tinworth

Watching publishers make utter fools of themselves with AI is a curious experience. I’m of the generation who remembers when getting publishers to pay attention to any emerging technology was a real challenge. And yet now we see publishers throwing themselves at emerging technology before it has a chance to mature and prove itself in the marketplace.

I remember rolling my eyes a couple of years ago when a newsletter author asserted that the industry should be experimenting with NFTs. Nobody would be grateful for following that advice, would they?

So, what happened? When did publishers move from being cynical foot-draggers, unwilling to commit to digital unless it had proved itself, to gullible enthusiasts throwing themselves into new technology with no regard to technology maturity, business models or audience?

The over-correction effect

It turns out that the industry didn’t change — the people at the top of it did. Last September, in Hamburg for NEXT Conference, I finally had the chance to have a long chat, over several beers, with trend watcher David Mattin, and he provided the insight I’d long missed.

“Today’s managers aren’t the same managers you saw 20 years ago,” he pointed out. “They’re the ones who watched their bosses mess up by not embracing digital.”

Lightbulb moment. (Thanks, David!)

Today’s generation of senior managers are over-correcting for the errors of their predecessors. They saw publishers destroy businesses and titles by waiting too long to figure out what digital would mean for them, and they are determined not to make that mistake. But in avoiding it, many are swinging too far the other way, rushing into new technology before it’s ready.

That wasn’t the way it was 30 years ago.

Digital publishing in the 90s

I had my first meeting about the internet and how it might impact publishing in 1995. One publisher in the room, quite seriously, asserted that the future wasn’t the internet but the intranet — personal networks for companies, or maybe even industries. That was the first meeting I had about the internet that I left in despair. It would not be the last.

When I left that company two years later, they still had no websites, and they only had email because the art director and I had gone and bought a modem, and set up email addresses. We dialled up twice a day to collect the email…

Happily, I moved to a company that not only had websites in the late 90s, but paywalled websites at that. I wasn’t — yet — allowed anywhere near them, as the print and digital teams were entirely separate, so I busied myself producing great print magazines by day, and building websites in my spare time. I bought myself a modem and started joining online communities, seeking to understand this new medium.

The 2000s: cynicism and rejection

I first saw what had actually happened on 9/11 via a tiny, postage-stamp-sized video of a plane going into the towers, sourced via online contacts, before any footage hit the TV everyone was clustered around in the office. Within a few months, I was publishing on LiveJournal, and pondering why it was easier for me to publish on this blogging site than it was to get anything live through our big, commercial CMS. The one we were using only allowed us to push content live twice a day — so we had to stack stories up and release them in batches.

Ah, a CMS built on the print model.

By the mid-2000s it was clear that both blogging and online communities were going to be important, so, as the company’s most experienced blogger, with five whole years under my belt, I became part of the company’s editorial development team. To my surprise, I found the biggest blockers to getting the company to adopt what was then called Web 2.0 weren’t the print folks. They were the first wave of digital folks. They knew how to publish to the web (twice a day, only), and this newfangled blogging and emergent social media weren’t serious. They wouldn’t amount to anything. It was just people talking about what they had for lunch, right?

Cynicism was still the default, until they had no choice but to take technologies seriously.

The 2010s: the confused decade

By the 2010s, things were starting to change. For the first time, I was trying to calm people down, not enthuse them. In the late 2000s, I’d been encouraging people to have a Twitter strategy, and was mocked for my efforts. By the early 2010s, I was encouraging them to watch and wait before they threw precious time and energy at Google+. Would it survive? Would their audience go there?

Those were the questions they weren't asking, and I couldn't understand why not.

People who didn’t join Twitter until 2008 or 2009, several years into its life, were suddenly convinced that we had to be on a new platform or app now, or we’d be too late. I started hearing from friends in social media roles at the big newspapers that they would be told to get a strategy together for Brand New Platform X urgently. And then they’d be told to abandon that and have something else thrown at them by the same editor only weeks later.

The transition was underway. Managers who had watched their old bosses screw up were desperate not to make the same mistake. They’d rather waste time on platforms that never took off than leave it too late and miss the opportunity. But a team’s time is precious. And it's only grown more precious as staff numbers have dropped. Making smart choices about how to spend that time is important.

And some bits of the company were still dragging their feet. One of the last meetings I had in my final corporate role was about the company’s mobile strategy. I’d been pushing this for two years, and we were finally getting somewhere. And then marketing got involved, and demanded three screens of data capture before any of our readers could actually use the app. If I'd been staying, I would have cried. Instead, I was liberated to laugh out loud.

That meeting will always stick in my head because it made me so very, very glad I was leaving.

The 2020s: the gullible decade

The pace of change has only picked up. But our level of discernment has not risen to match it. The first four years of this decade have been a long succession of platforms and technologies that we need to be part of right now. Off the top of my head:

  • Clubhouse
  • NFTs
  • Blockchain
  • The Metaverse

And now, of course, AI. AI is the first of those likely to have an impact in the short term. The Metaverse is an idea searching for technology to make it work — and while the Apple Vision Pro might be a first step in that direction, at $3,500 it’s not a step most of us can take yet. Blockchain is a technology in search of an application, and NFTs look like a busted flush. And while audio chat rooms remain situationally useful, there’s no way they’re the next big platform.

Managers and publishers have rushed into each of these, and then rushed on to the next, desperate not to be too late, not to be the Luddite, not to be left behind. And they’ve wasted time and money.

The mature approach to emerging tech

We have time. We can wait, see which technologies establish themselves, and adopt them in slightly maturer forms. I do think that, of all the tech I’ve listed, AI is the one most likely to be here to stay. But I think we’re right in the classic tech adoption curve of, as Bill Gates put it, over-estimating what it will do in the first year, and under-estimating the changes it will bring over a decade.

There are some things that AI is already very good at: writing code, for example, or generating illustrative images that don’t need to be too specific. AI-driven transcription of podcasts is a boon to any podcast producer.

But it’s not yet ready for primetime. It’s not good enough to create articles whole cloth without human intervention or editing. And we haven’t really tested whether readers will actually value and appreciate AI-written content. Do we run the risk of cutting the costs of content creation but paying a price in reader relationships? If so, that might well be a disaster.

How to surf technology, not be drowned by it

We’ve abandoned the mistakes of the 1990s and 2000s. We’ve opened our minds to new technology. But let’s be careful we haven’t opened them so far that our brains — and our businesses — fall out.

We were too cynical. And then we were too gullible. And now it’s time to be measured. Only then can we surf the waves of technological change, rather than being drowned by them.

This is the first in an occasional series about ways of thinking about AI in journalism. Check out the whole series.


Adam Tinworth

Adam is a lecturer, trainer and writer. He's been a blogger for over 20 years, and a journalist for more than 30. He lectures on audience strategy and engagement at City, University of London.