Last week at its annual I/O conference for developers, Google announced plans to roll out AI-generated search summaries to general users by the end of this year. The system has already been in beta testing for several months with users who opted in via Google’s “Search Labs” program. Essentially, it moves traditional, algorithmically generated Google results down the page in favor of a generative AI chatbot, which attempts to answer or resolve certain kinds of queries all on its own.
From a user perspective, this raises some pressing questions. Most significantly, there’s the question of accuracy. AI chatbots are improving all the time, at least in some respects. OpenAI debuted the latest incarnation of ChatGPT – dubbed GPT-4o, or Omni – just last week; it can now accept prompts not just via text but also via audio and video. The model also got an upgrade in its ability to respond to prompts with natural, conversational language.
Despite these improvements, the bots still make frequent, sometimes significant errors. There are, of course, ongoing viral stories about AI chatbots inventing fictional scientific studies or legal cases in their outputs – fabrications that sometimes get professionals who shouldn’t be using AI chatbots at work into hot water. Twitter user @daltoneverett recently noted that Google’s beta AI search results recommend replacing your car’s “blinker fluid” should the turn signal stop making noise. One problem: blinker fluid does not exist.
Now, obviously, third-party search results aren’t 100% reliable either. You might wind up on the random blog of some guy who also thinks there’s a product called blinker fluid. In many cases, it might not even matter if Google’s AI results are 100% accurate, so long as they roughly get the job done. But in other situations – say, if someone is searching for coverage of a breaking news story, or has a legal- or health-related question – total accuracy might actually prove important.
[AUTHOR’S NOTE: I’ve been trying out Google AI search results for a while now. Anecdotally, they work pretty well. I’d say at least 30-40% of the time, I can find the information I’m looking for in the AI overview without clicking through to any outgoing links, and it turns out to be accurate.]
There are also, naturally, some red flags here concerning user privacy. AI tools like Google’s tend to work best with more information – not just about the data out there on the internet, but about the person who’s interacting with them. Additionally, Google already offers a whole suite of products and tools that potentially interact with nearly every aspect of a person’s online life, from their photo albums to their correspondence and beyond.
The company’s AI search bot might produce more effective results for you if it reads your Google Docs and your Gmail and checks out your recent purchases. “Help our system learn more about you so it can work more efficiently” is a pretty strong pitch, but are internet users really ready to turn over everything about themselves to Google’s software, for its own black-box analysis? Will they even realize how much they’re sharing and how Google plans to use it before it’s too late?
Accuracy and privacy are concerns for everyone. But for people who rely on incoming traffic from Google and its related products for their livelihoods – a group that includes a large chunk of independent internet creators – the move from algorithmic search results to AI overviews will likely have a major impact, if not pose an existential threat.
Though the move itself may seem relatively subtle, dropping algorithmically generated link lists in favor of AI overviews represents a sea change in how the web is organized and curated.
Google once served as a waystation. Not only did it provide media companies, publishers, and creators with visibility and access to a vast audience via search, it also allowed them to monetize that audience via the AdSense program. Google has chipped away at this concept over the years, adding more advertising, “Shopping” links to products, and other self-serving features that drive more traffic back to Google rather than sending it out to other companies and creators.
Nonetheless, moving entirely over to AI-scripted topic pages that complete the information exchange without a user ever having to leave Google changes the dynamic significantly. Google is no longer the waystation but the destination in and of itself. For creators, the scariest possible version of this future is that Google essentially turns off the traffic firehose: it vacuums up everyone’s knowledge, insight, and information, uses it to train its own AI, then keeps all those precious subscribers, views, and eyeballs for itself.
Over the past week, Google CEO Sundar Pichai has made the media rounds attempting to address these concerns and calm these quite rational fears. He suggests that sending traffic to third-party content creators and publishers remains a strong priority for Google, and notes that when external links are included within the AI overviews, they tend to get clicked more frequently than links contained within a traditional list of search results.
In a chat with The Verge, Pichai also noted that the history of the internet is filled with doomsday predictions just like this one – that some new change or shift in policy was about to decimate the free and open exchange of ideas – and it has never actually come to pass.
That’s all well and good, but it’s still pretty vague, and it doesn’t directly address the core issues at play. It’s also exactly what you’d expect to hear from the CEO of a company that still relies on creators and publishers to feed it fresh content, at least for now. Tech founders, investors, and CEOs are offering a lot of these optimistic but obscure platitudes these days, asking the public for a great deal of trust around artificial intelligence apps and how they’re used. But trust is earned, and rather than proving their good-faith intentions, sometimes it seems like these companies are purposefully trying to skirt the rules instead.
Scarlett Johansson is currently threatening legal action against OpenAI, which she claims adapted her voice for its new conversational AI system even after she declined to license it to them. And she’s a wealthy, famous movie star who can attract a lot of attention and sympathy to her cause. If they’re willing to steal Black Widow’s voice, how would OpenAI approach a decision that negatively impacted everyday creators without the resources to fight back?
Similarly, while Pichai suggests that creators and publishers remain Google’s central priority, the company’s actions don’t necessarily align with that vision. When it comes to ensuring that the outgoing traffic powering the entire internet remains viable, all that’s on offer are personal vows and requests for patience.
But Google has already started introducing ad tools for its AI Overviews, with opportunities for sponsors to pay for prime placement. It would almost seem like monetizing the AI results is Pichai’s actual focus, and that he’ll maybe figure out how to share the wealth and traffic down the road. Eventually. Perhaps.