Is AI search a sideshow, or a potential economic apocalypse?
Will AI-infused search be the end of publishing economics as we know it? Or is the hype outrunning the utility?
We're in an odd, liminal time. For the past 20 years, we've been dealing with the consequences of existing technology — tech that's already been around, whose importance we only slowly come to realise. But now, we're waiting, worrying and working towards a technology that's not really here yet: generative search. This particular flavour of AI-infused search will change everything, or so we're told.
But it's not actually here yet. Or, at least, Google's version is only in testing. The US has been able to use it for a few months, and now we in the UK can, too.
On the other hand, there is some existing AI search tech out there. Bing has had a ChatGPT integration for over a year. It's interesting — but slow. And, more to the point, it hasn't dramatically moved the needle on Bing's market share, which has crept up from around 8% to around 10% of search over the last year. That isn't revolutionary, but it isn't negligible either. So, there's clearly potential, but it's not gobbling up queries from other search engines the way Google did in the early 2000s.
Perplexity AI
One example that's slightly more impressive is Perplexity AI. I have used it from time to time, and it's actually pretty impressive for some simple queries. It lacks the delay we see in many other AI products, but I only really find myself reaching for it when I want a quick, referenced answer, not when I want something in-depth.
It is, though, a great example of what some search could look like in the coming years.
Rotten Google
What we can say with some confidence is that there is a big market opportunity right now. Google, as most of us have experienced over the past few years, is most definitely not what it was. The quality of results is down, and too much over-SEOed nonsense is creeping into them. Google has been through bad patches like this before — but has usually managed to turn things around.
This downturn has gone on for too long though, and there might be a good reason why. Ed Zitron has been going through Google's emails that were released as part of a court case, and he's pieced together a fascinating story of the techies devoted to producing a great search engine slowly being edged out by the money guys.
This is what I mean when I talk about the Rot Economy — the illogical, product-destroying mindset that turns the products you love into torturous, frustrating quasi-tools that require you to fight the company’s intentions to get the service you want.
Ouch. And he doesn't hold back on the person he perceives as being responsible for the rot: Prabhakar Raghavan. He was previously — get this — head of search at Yahoo!, and presided over its collapse into irrelevance. And Zitron argues that he's doing the same again:
As I’ve argued previously, we — with good reason — continually complain about the state of Twitter under Elon Musk, but I’d argue Raghavan (and, by extension, Google CEO Sundar Pichai) deserve as much criticism, if not more, for the damage they’ve done to society. Because Google is the ultimate essential piece of online infrastructure, just like power lines and water mains are in the physical realm.
It's well worth kicking back this weekend and reading the whole thing.
Rotten technology
In 2022, Cory Doctorow coined the term “enshittification” for the process whereby tech products become palpably worse over time, not better. The term has entered the language because it perfectly filled the need for a word describing a process we are all familiar with.
So, obviously, he had something to say about Zitron's piece:
Enshittification is a macroeconomic phenomenon, determined by the regulatory environment for competition, privacy, labor, consumer protection and IP. But enshittification is also a microeconomic phenomenon, the result of innumerable boardroom and product-planning fights within companies in which would-be enshittifiers who want to do things that make the company's products and services shittier wrestle with rivals who want to keep things as they are, or make them better, whether out of principle or fear of the consequences.
Some people who were in the big publishers before internet disruption really hit might be squirming a little uncomfortably right now. We have seen this process before…
The AI Search opportunity
What this all points to, though, is that Google has actually created the perfect environment for its own disruption. Search has become palpably worse just as a new technology arises that provides an alternative way of getting at the same sorts of answers.
Google is having the same sort of "oh, shit" moment that Microsoft went through in the mid-90s, as it became apparent that this kooky "internet" thing might actually be important. And it's pivoting as hard as it can towards generative AI search — but with the knowledge that even if it manages the transition, its business model may well still be screwed.
The FT reported earlier this month that Google is considering charging for elements of SGE (its Search Generative Experience), a first for the company:
The proposed revamp to its cash cow search engine would mark the first time the company has put any of its core product behind a paywall, and shows it is still grappling with a technology that threatens its advertising business, almost a year and a half after the debut of ChatGPT.
Google is still an ad business, and if people aren't seeing ads in search results or on webpages because they never leave the AI interface, it's not just publishers in trouble. Google itself is. It could be that, if SGE succeeds, that very success will slowly starve Google of the massive cash flow it's become used to.
AI could gorge itself into starvation
And here's the big AI dilemma: if AI wins, it loses. If people just use AI instead of visiting search results or websites themselves, the amount of (non-AI-generated) content on the web will start to dry up, because the economic and social incentives to create it vanish.
And if that happens, what does the AI train on? The big AI companies are already starting to struggle to find new training data. Sure, there's mileage yet to be had in building better models with existing data, but this tech is voracious for two things: energy and raw data.
So, maybe they could train AIs on the output of other AIs? So far, that's not going well: it leads to model collapse, where models trained on their own output progressively degrade. If you think existing AI hallucinations are a problem, just wait until you see what model collapse does. And bear in mind that we don't actually know how much of today's web is AI-generated. But we do know that the percentage is creeping up every single day. AIs are going to end up poisoned by AI-generated data sooner or later.
So, we're not just looking at a potential disruption of existing search practices here. We're looking at a complete collapse of the current publishing, advertising and AI ecosystems. This is speed-running enshittification: rushing out a new technology to maximise profits now, while burning the planet to run it and destroying the economic basis of all the human activity that feeds the AI.
Fuc… Flip.
I don't know about you, but I've got that “watching a road crash that now can't be avoided” feeling.
The big “if”s…
Let's step back a second. I'm guilty of catastrophising, as my counsellor often tells me. I'm outlining an extreme case, with a lot of built-in assumptions. For example, will AI search actually replace all existing search? It might not. Sure, some categories are more vulnerable than others: if you've built a business on affiliate revenue from reviews and buyers' guides, you need to look at new revenue models, and soon.
Ian Betteridge has made this point in the past, but others are catching on. Bauer's global head of SEO, Stuart Forrest, talked about this at the PPA conference this week:
Forrest acknowledged the potential severity of the arrival of SGE, saying that although publishers don’t build business models on answering questions like “how tall is Tom Cruise?” for which Google already provides a quick answer, they do monetise the answers to more nuanced queries like “what is the best laptop for me to buy for my gaming child under £1,000?”
His point, though, is that LLMs' tendency to make stuff up — to “hallucinate” — means that they remain unreliable for many things:
“It’s absolutely staying clear of some very contentious issues like the gun debate in the US or the US election. I would not want SGE to be telling me whether my child’s rash might be meningitis, and actually I think Google probably wouldn’t…”
Another slice off the Google sausage
We've been here before: for 15 years now, social media has been stealing the “I just need something to read” traffic that Google used to get. Google adapted and grew, evolving into an “intents and answers” engine over that time. If generative AI search is going to steal some more use cases away from it, it will need another evolution: towards trusted, in-depth information from identifiable people with actual expertise in the subject.
That could work for Google. It could work for us in the journalism racket. And it could even work for the AI companies, since it would keep a steady supply of fresh training data flowing. But that depends on Google still having the technical acumen to build both a compelling search experience and a compelling generative answer machine at the same time — and to deliver a user interface that makes both work.
I'd have trusted the Google of 10 years ago to do that. Today's Google? I'm not so sure. And if we barrel down the current path, with AI destroying the business model for the very data it needs, it's hard to see anybody winning in the digital chaos that will follow.
Fasten your safety belts. There's serious digital turbulence ahead, folks.