The problem isn't AI, the problem is humans
We have a great new tool. We just have to learn to use it properly…

I blame Star Trek.
It's made it a cultural expectation that we can speak to a computer and get accurate answers out. The current wave of AIs looks like that sci-fi staple: we ask them questions, and they confidently give us authoritative answers back. But it's an illusion. Those answers are often wrong. Unless we start understanding the tool we've created, we're going to keep making mistakes.
And, soon, that's going to cost people their lives:
In a recent case, Calkin and his team were called to Unnecessary Mountain after two hikers used ChatGPT and Google Maps to select a trail, unaware they would encounter snow. Wearing flat-soled shoes, the men soon realized their mistake and turned around, but became unsteady on the descent.
Experienced, well-prepared people die when hiking and mountaineering. These are invigorating but risky experiences.
So why the hell would you put your life in the hands of a guessing machine, rather than people who actually know what they're talking about? As one rather angry mountain rescuer put it:
Calkin said Facebook groups and Reddit hiking forums can be one of the best places to get current information. It’s common to see people asking for updates on trail conditions or whether a trail is appropriate given their fitness level.
You really want a human for this:

Case Law or Case Lies?
Another example of a terrible misuse of AI for you:
A former solicitor presented dozens of false cases to the High Court to support his appeal against strike-off, it has emerged.
The Solicitors Regulation Authority, the respondent in the appeal, identified 27 cases presented by Venkateshwarlu Bandla which appeared not to exist. Mr Justice Fordham accepted that two were misspelt and could be linked to real cases, but said the remainder were false and amounted to an abuse of process.
With my academic hat on, fake citations are one of the giveaways we often find in AI-generated student submissions.

If you're a lawyer, people are literally paying for your expertise, not just to have you prompt ChatGPT in exactly the same way they could themselves. The more you use LLMs at the heart of your workflow, the more you make yourself inherently disposable.
And, in this case, open yourself up to a lawsuit from an angry client…
Brought to (non-existent) book
Right. Let's bring this back to journalism. As we cruise towards the summer break, lots of publications start producing summer reading lists. Last month, The Chicago Sun-Times published one that was extremely notable because the books on the list were, shall we say, challenging to obtain:
One problem: The authors are real, but the books they supposedly wrote are not. Turns out, the list was generated by artificial intelligence. Of the 15 books, only five are real. The rest? Made up by AI.
To give the team at the newspaper a break, it wasn't their doing. This was in one of the licensed sections:
Now, to be clear, the Sun-Times and Inquirer newsrooms were not responsible for the summer reading list. Victor Lim, marketing director for the Chicago Sun-Times’ parent company, Chicago Public Media, told NPR’s Elizabeth Blair that the list was licensed content provided by King Features, a unit of the publisher Hearst Newspapers.
Read more:

If you think journalism has a trust problem now (and it does), just think how much worse it's going to get when we're publishing ever more easily checkable falsehoods because our publishers force us to run AI slop instead of researched copy. It's going to be incredibly hard to claw back any credibility from that.
AI = Assistive Intelligence
I'll keep banging this drum until people grasp it fully: AI is assistive intelligence, not truly artificial intelligence. It has no inherent reasoning capability. It just makes guesses based on patterns derived from vast quantities of data.
If you don't understand that, if you mistake a guessing engine for an answer engine, you're going to end up as a case study like the ones above.
In short:
- Don't use it to plan a route in the mountains – use it to find guides written by people who have actually walked it
- Don't use it to prepare a court case – use it to help you find relevant case law that you wouldn't have discovered on your own
- Don't use it to write copy – use it to help you research that copy, but for God's sake, check what it outputs
Tony Elkins, a member of Poynter’s faculty who co-authored Poynter’s AI ethics handbook, was quoted in the piece about the imaginary reading list:
Generative AI has a lot of potential use cases, but we’re still in the experimentation phase. The technology simply hallucinates too much to grant it any amount of autonomy. What seemed to happen here, based on the reporting, is a human failure by a freelancer, and an organizational failure to ensure it knew how AI tools are being used.
Twenty years ago, both students and journalists had to learn how to use Wikipedia: don't just cite it, or rely on it for facts; follow its references back to the original sources. But it took students failing essays for improper citations, and journalists being caught out publishing incorrect “facts” lifted straight from Wikipedia, to hammer this home.
We're going through this again with AI – but the stakes are so much higher now.
Let's get it right.