Your Sceptical AI Reader
Somewhere between AI Boosterism and Doomerism, there's a realistic path to be navigated.
I'm teaching my first AI in journalism course starting this week. I've had one ready to go for a couple of years now, but my normal training partner wanted to work with someone else on AI. I contemplated bringing it to market directly myself, or via City St George's, but I found myself vacillating over the wisdom of moving into this space.
Why? Well, like more people than will admit it, I have serious reservations about the all-out enthusiasm for AI in some circles. Anil Dash summed up this reticence, even within the tech industry, beautifully:
And what [knowledgeable people in the tech industry] all share is an extraordinary degree of consistency in their feelings about AI, which can be pretty succinctly summed up:
Technologies like LLMs have utility, but the absurd way they've been over-hyped, the fact they're being forced on everyone, and the insistence on ignoring the many valid critiques about them make it very difficult to focus on legitimate uses where they might add value.
If I was to run a course, it needed to stare that reality hard in the face.
Unflinchingly.
The AI realist
I am not an AI doomer, but nor am I an AI booster. I believe it is a transformative technology that will have profound implications for society, but equally I believe that people are too willing to ignore the problems hallucinations bring, as well as the overwhelming environmental cost of using this tech. I am all but certain that we are in for an AI crash in the near term, both economically and in the public perception of AI, and some people will be quietly trying to “disappear” their more bullish statements, much as they had to with blockchain and the metaverse. But, unlike those two, I do think AI will come through the trough of disillusionment into something useful.
Thankfully, the British Guild of Agricultural Journalists were completely understanding of my desire to find a very careful intersection point between understanding AI, protecting yourself against AI, and using AI. And that's what the course will be doing.
All of this is by way of warning. AI is likely to be high in my mind for the next few weeks, and that's equally likely to be reflected here on the blog.
Like, for example, in today's selection of reading, which is informing my course prep.
Workslop is killing your business
This is a great example of the problems I discussed above. Harvard Business Review on why AI projects can kill productivity:
As AI tools become more accessible, workers are increasingly able to quickly produce polished output: well-formatted slides, long, structured reports, seemingly articulate summaries of academic papers by non-experts, and usable code. But while some employees are using this ability to polish good work, others use it to create content that is actually unhelpful, incomplete, or missing crucial context about the project at hand. The insidious effect of workslop is that it shifts the burden of the work downstream, requiring the receiver to interpret, correct, or redo the work. In other words, it transfers the effort from creator to receiver.
Emphasis mine. We absolutely want to focus on the former, not the latter. But the latter (getting AI to generate copy) feels more attractive, because of the cost saving involved. Saving cost, at the cheap, cheap price of destroying your reputation and your relationship with your audience…
The problem is that the output of AI looks good. AI is very good at creating things with the veneer of professionalism, but with little or no real insight or value. Fill your site with it, and you drift towards being a snake oil salesman. Sure, it looks like medicine…

AI is destroying what it needs to grow
Emanuel Maiberg for 404 Media:
Ironically, while generative AI and search engines are causing a decline in direct traffic to Wikipedia, its data is more valuable to them than ever. Wikipedia articles are some of the most common training data for AI models, and Google and other platforms have for years mined Wikipedia articles to power its Snippets and Knowledge Panels, which siphon traffic away from Wikipedia itself.
The giant, trumpeting elephant in the room of any AI discussion is how the industry is going to be able to keep training on freely available digital data when its use is destroying the economic model for the very stuff it trains on. AI is literally destroying its own food source.
There is a huge head-in-the-sand attitude to this, although the AI industry is desperately hoping it will find a way of training models on synthetic data without triggering model collapse. That's wishful thinking.
Worth noting that Wikipedia editors have had to adopt a new speedy deletion policy to stop the site being overwhelmed by AI slop, too. Left unchecked, that slop would corrupt the very thing AI is training on. 🤦🏼
What you need to use AI well: restraint
When we last had a major technological revolution (the advent of Web 2.0 and then social media, accelerated by the smartphone), publishers had to be dragged into it kicking and screaming. Twenty years later, they're rushing into AI like the lemmings of urban legend, straight off a cliff.
Maybe one day we'll learn a sensible balance, but today is not that day. And so, Pete Pachal's take on AI matters:
And that’s the reality check: your time. Generative AI’s tendency to hallucinate means anything that touches public-facing content requires human verification. Having an AI intern do all your research and data visualizations sounds magical until it imagines some data and your attempt to creatively illuminate a critical fact turns into fiction.
And, of course, that might cost more than having a human do it in the first place.

Claude vs ChatGPT for media analysis: Fight!
A fun but technical post, from Mr Thomas Baekdal, on a long, crafted prompt he uses to analyse pieces of journalism, and which AI model performed better for him. Happily, it was the one he considered more ethical…
Can you imagine if something like this was directly integrated into your newsroom CMS? Imagine if this form of automatic classification could be used to define, rank, and show your news? And also imagine if this could be used as a news story evaluation for editors to get a quick summary of each article submitted?
If you want to know what he's talking about, you know what to do…
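The plumbing for that kind of CMS integration needn't be elaborate. Here's a minimal, purely illustrative sketch of the idea: classify each submitted article, then rank by the resulting score for a front page or an editor's review queue. Everything here is hypothetical — `classify_article` is a toy stand-in for a real LLM call with a long, crafted prompt like Baekdal's, not anything from his post.

```python
# Hypothetical sketch: ranking newsroom articles by an LLM-derived
# classification score. All names are illustrative, not from Baekdal.

from dataclasses import dataclass


@dataclass
class Article:
    headline: str
    body: str


def classify_article(article: Article) -> dict:
    """Stand-in for a real LLM call. A real version would send the
    article text plus a detailed rubric prompt to a model and parse
    its structured response."""
    # Toy heuristic so the sketch runs without an API key.
    word_count = len(article.body.split())
    return {
        "category": "analysis" if word_count > 50 else "brief",
        "score": min(word_count / 100, 1.0),
    }


def rank_articles(articles: list[Article]) -> list[tuple[Article, dict]]:
    """Classify every article, then sort by score, highest first —
    the ordering a CMS could use for display or an editor's queue."""
    classified = [(a, classify_article(a)) for a in articles]
    return sorted(classified, key=lambda pair: pair[1]["score"], reverse=True)


if __name__ == "__main__":
    articles = [
        Article("Short note", "A quick update."),
        Article("Long read", "word " * 120),
    ]
    for article, result in rank_articles(articles):
        print(article.headline, result["category"], result["score"])
```

The real work, of course, is the prompt and the human verification of the model's judgements; the ranking layer is the easy part.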
Prompt Crafting
Full disclosure: the lead image of this post was created with Midjourney. This was the prompt I used:
An illustration of a group of journalists, of mixed age, gender and race, looking down at a laptop, with a smiling AI face on the screen. The journalists are looking quizzical and sceptical, although a few are excited. The AI face is looking over-eager, to the point of creepiness.
Lesson: the tool couldn't generate exactly what I was looking for. The closest versions had one of the journalists looking on with a robot face. I could have rewritten the prompt, but I was happy enough with the version I chose.

