Midjourney's idea of Shrimp Jesus at work: an AI-generated image of Shrimp Jesus surrounded by shrimp angels.

Shrimp Jesus and other unexpected results of the AI boom

AI is at once more ubiquitous and less useful than people think. Navigating the coming flood of generated content is going to be a challenge.

Adam Tinworth

I've been busy researching LLMs and their use in journalism while I've been quiet here — and I'm beginning to suspect that I'm a natural contrarian. Back in 2008, when nobody thought Twitter would be important, I was out there bigging it up. In the early 2010s, I was warning against Facebook while the publishing world thought it was the answer to our prayers. I've been banging the drum for on-site community building for over a decade — and finally the world is coming around to my view.

And, right now, I'm deep in AI, pulling together a practical training course for newsrooms and journalists who don't have access to bespoke tools. The more I look, the more convinced I am that we need to take a much more sceptical look at the claims for AI, scything through the breathless Silicon Valley hype to find where the value might actually be. And, of course, where the threats are.

For example, Shrimp Jesus proves that AI is already being used for mass engagement farming on social platforms. Who or what is Shrimp Jesus? I'm so glad you asked. Here He is:

Shrimp Jesus

And Shrimp Jesus has been garnering serious amounts of engagement on Facebook. Lots of “Amens” and reactions. Yes, for something that's so obviously AI. Really, sometimes I despair of humanity.

Victor Tangermann explains this madness for Futurism:

As the Stanford and Georgetown University researchers point out in their yet-to-be-peer-reviewed study, Facebook is more than likely rewarding AI spam accounts for flooding feeds with more than 50 AI-generated images a day, which are pulling in tens of millions of views.
Facebook's algorithms are ranking these images high on users' feeds, generating huge amounts of engagement. The company has long demoted links to external websites, and image posts appear to rank much higher.

My own “favourite” is this obviously AI-generated image, which is being used to farm engagement in various wildlife- and nature-related groups on Facebook:

An obviously AI-generated image of a bird mother.
Yeah, no bird mother does this.

There's a whole load of these going around, it appears.

Anyway, let's leave Facebook and focus on more practical uses of AI - like in data journalism. It's bound to be a help there, right?

Don't hire AI to do Data Journalism

Jon Keegan found it a lot less useful than he expected:

For example, when ChatGPT fulfilled my request to “generate a simple map centered on the location of the crash,” I immediately noticed that the pin on the map was far from any train tracks. When I asked where it got the coordinates for the crash location, it replied that these were “inferred based on general knowledge of the event’s location.” When I pressed it for a more specific citation, it could not provide one, and kept repeating that it was relying on “general knowledge.” I had to remind the tool that I had told it at the start of the chat that “it is crucial that you cite your sources, and always use the most authoritative sources.”

It sounds like a slightly dim assistant that needs constant reminders - and monitoring to make sure it isn't just making things up… But it did excel in one area:

Based on my interactions, by far the most useful capabilities of ChatGPT are its ability to generate and debug programming code. (At one point during the East Palestine exercise, it generated some simple Python code for creating a map of the derailment.)
I Used ChatGPT as a Reporting Assistant. It Didn’t Go Well – The Markup
The AI tool ignored basic instructions about sourcing and citations. But it’s a pretty good newsroom coding partner.
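To make that concrete, here's a minimal sketch of the sort of “simple map centered on the location of the crash” Keegan describes. The folium library and the coordinates are my assumptions - the piece doesn't say which library ChatGPT's code used, and the whole point of the anecdote is that the model's “inferred” location was wrong, so any real coordinates would need verifying against an authoritative source:

```python
# A minimal sketch of the kind of "simple map" described above.
# folium is an assumption — the article doesn't name the mapping library.
import folium

# Placeholder coordinates only: verify against a citable source before use.
crash_location = (40.83, -80.54)  # approximate East Palestine, OH

derailment_map = folium.Map(location=crash_location, zoom_start=14)
folium.Marker(crash_location, popup="Reported derailment site").add_to(derailment_map)
derailment_map.save("derailment_map.html")  # open in a browser to check the pin
```

The dull part - the ten lines of boilerplate - is exactly what the tool was good at. The part that matters journalistically, the verified coordinates, is what it got wrong.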

Charles Arthur's thoughts on it:

So.. I wonder if, when presented with a list of connected events in non-time-sequential order (“The policeman is in hospital” “A woman was robbed at knifepoint” “A policeman tackled the robber”), ChatGPT or others could assemble them into a news story? It’s a classic exercise for would-be journalists. If it can’t do that, what actually is it good for in journalism?

AI is less A than you thought…

Another thing to consider: how “artificial” is the “artificial intelligence”? There's growing evidence that there's plenty of human labour hiding behind the inviting-looking chat prompt:

The magical claim of machine learning is that if you give the computer data, the computer will work out the relations in the data all by itself. Amazing! In practice, everything in machine learning is incredibly hand-tweaked. Before AI can find patterns in data, all that data has to be tagged, and output that might embarrass the company needs to be filtered. Commercial AI runs on underpaid workers in English-speaking countries in Africa creating new training data and better responses to queries. It’s a painstaking and laborious process that doesn’t get talked about nearly enough. 

Why? Because it doesn't suit the narrative the would-be AI tech billionaires want to spin. Just as Facebook rests on legions of underpaid and overworked content moderators, so too is AI built on more human labour than you think.

Pivot to AI: Pay no attention to the man behind the curtain
The LLM is for spam

Bonus link: it turns out that Amazon's AI-powered “just walk out” stores were actually largely powered by people:

So, Amazon’s ‘AI-powered’ cashier-free shops use a lot of … humans. Here’s why that shouldn’t surprise you | James Bridle
This is how these bosses get rich: by hiding underpaid, unrecognised human work behind the trappings of technology, says the writer and artist James Bridle

The AI Bubble Burst is coming

John Naughton suggests in The Observer that AI is speedrunning the financial boom-and-bust curve:

Stage five – panic – lies ahead. At some stage a bubble gets punctured and a rapid downward curve begins as people frantically try to get out while they can. It’s not clear what will trigger this process in the AI case. It could be that governments eventually tire of having uncontrollable corporate behemoths running loose with investors’ money. Or that shareholders come to the same conclusion. Or that it finally dawns on us that AI technology is an environmental disaster in the making; the planet cannot be paved with datacentres.
From boom to burst, the AI bubble is only heading in one direction | John Naughton
No one should be surprised that artificial intelligence is following a well-worn and entirely predictable financial arc

Believe it or not, this is probably a good thing. The real work of the web was done after the dotcom crash at the turn of the millennium. (I once interviewed the core boo.com team for a long-defunct magazine called Occupier. I'm glad that one's never been on the web.) I expect the same will be true of AI: we'll figure it out once the initial hype deflates. And I do think there are a lot of useful tools in there - but perhaps not the ones being most actively promoted right now.

Look, I'm not an AI sceptic, despite the tone of what I've just written. I use various AI tools to generate images, to help with research, SEO brainstorming and more. But if the past 25 years of tech have taught us anything, it's that a little bit of realism in the face of tech hyperbole can save an awful lot of pain. Don't dance to the AI Pied Piper's tune.

Back to positivity next time. Promise.

AI, data journalism, engagement, engagement farming

Adam Tinworth

Adam is a lecturer, trainer and writer. He's been a blogger for over 20 years, and a journalist for more than 30. He lectures on audience strategy and engagement at City, University of London.
