People still misuse AI

But it’s not their fault. Well, not entirely.

How many people are going to die, or get fired, before we realise that AI does not do what we think it does? If this cock-up were the result of a competing travel publication making an error, we’d think it was in big trouble:

The couple used ChatGPT to plan a romantic hike to the top of Mount Misen on the Japanese island of Itsukushima earlier this year. After exploring the town of Miyajima with no issues, they set off at 15:00 to hike to the mountain's summit in time for sunset, exactly as ChatGPT had instructed them.
"That's when the problem showed up," said Yao, a creator who runs a blog about traveling in Japan, "[when] we were ready to descend [the mountain via] the ropeway station. ChatGPT said the last ropeway down was at 17:30, but in reality, the ropeway had already closed. So, we were stuck at the mountain top."

But because it’s the New Big Thing, we just shrug and move on. We are being persistently mis-sold AI as answer engines. They’re not. They’re guessing engines. Yet people are entrusting their lives to a guess, because OpenAI markets ChatGPT as a machine that delivers answers.

Failing to understand this is putting businesses’ reputations on the line:

Deloitte will partially refund payment for an Australian government report that contained multiple errors after admitting it was partly produced by artificial intelligence. The Big Four accountancy and consultancy firm will repay the final instalment of its government contract after conceding that some footnotes and references it contained were incorrect, Australia’s Department of Employment and Workplace Relations said on Monday.

I’m not arguing that AI isn’t a transformative technology.

I am arguing that it is being misused and mis-sold. And it’ll never achieve its potential until we start looking at the reality of these tools.