Google puts limits on AI Overviews after it showed odd, incorrect answers to users – Moneycontrol


AI Overviews was made available to “hundreds of millions” of users in the United States, with plans to roll it out to over a billion people by the end of the year.

Google is introducing limits on its artificial intelligence search experience, AI Overviews, after it delivered several odd and inaccurate results to consumers, the company said on May 30.

Google had announced AI Overviews at its annual developer conference, Google I/O 2024, on May 14, after a nearly year-long experiment. The feature shows consumers a quick AI-generated summary of a topic at the top of the search results, along with links to go deeper.




This launch came as the tech giant seeks to reimagine its flagship search product in the generative AI era, amid renewed competition from rivals such as Microsoft and OpenAI and upstarts such as Perplexity.

However, over the past week or so, the search experience delivered several bizarre and erroneous results, such as telling users to eat rocks and to put glue on pizza to help cheese stick better; screenshots of these responses were widely shared online.

In a blogpost on May 30, Google’s search head Liz Reid said the company has now built detection mechanisms for “nonsensical queries” that shouldn’t show an AI Overview, and has limited the inclusion of satire and humor content.

The tech giant has also updated its systems to limit the use of user-generated content in responses that could offer misleading advice and added triggering restrictions for queries where AI Overviews were not proving to be as helpful, she said.

Google is also taking action on the “small number of AI Overviews that violate content policies”, Reid said. This includes overviews that contain information that’s potentially harmful, obscene, or otherwise violative.


Reid said they found a content policy violation on “less than one in every 7 million unique queries” on which AI Overviews appeared.

These developments also come at a time when Google is looking to monetise its AI offerings by testing search and shopping ads in AI Overviews.

Why did this happen?

In the blogpost, Reid attributed these inaccurate results to a “data void” or “information gap”, a situation where there is only a limited amount of high-quality content about a topic on the web.

“Prior to these screenshots going viral, practically no one asked Google that question. There isn’t much web content that seriously contemplates that question, either,” she said.

Regarding the specific rocks example, Reid said there was satirical content that “happened to be republished on a geological software provider’s website. So when someone put that question into Search, an AI Overview appeared that faithfully linked to one of the only websites that tackled the question”.

In other cases, such as the pizza example, Reid said AI Overviews misinterpreted language on webpages featuring sarcastic content from forums.

“Forums are often a great source of authentic, first-hand information, but in some cases can lead to less-than-helpful advice, like using glue to get cheese to stick to pizza,” Reid said.

‘AI Overviews don’t hallucinate’

Reid argued that AI Overviews generally don’t “hallucinate” or make things up. “When AI Overviews get it wrong, it’s usually for other reasons: misinterpreting queries, misinterpreting a nuance of language on the web, or not having a lot of great information available,” she said.

Reid also compared the feature with “featured snippets”, a longstanding search feature that uses AI to identify and show key information with links to web content. She claimed that the “accuracy rate for AI Overviews is on par with featured snippets”.

In the blogpost, the Google search head said the company tested the feature extensively prior to launch, including “robust red-teaming efforts, evaluations with samples of typical user queries and tests on a proportion of search traffic to see how it performed”.

“There’s nothing quite like having millions of people using the feature with many novel searches. We’ve also seen nonsensical new searches, seemingly aimed at producing erroneous results,” she said.

Reid also noted that a “large number of faked screenshots” have been shared widely on the Internet.

“Some of these fake results have been obvious and silly. Others have implied that we returned dangerous results for topics like leaving dogs in cars, smoking while pregnant, and depression. Those AI Overviews never appeared. So we’d encourage anyone encountering these screenshots to do a search themselves to check,” she said.

Reid also claimed that user feedback shows people have “higher satisfaction” with their search results since the launch of AI Overviews, and that they are asking longer, more complex questions.

“They use AI Overviews as a jumping off point to visit web content, and we see that the clicks to webpages are higher quality – people are more likely to stay on that page, because we’ve done a better job of finding the right info and helpful webpages for them,” she said.

Reid also said the search giant plans not to show AI Overviews for hard news topics, where freshness and factuality are important. For certain health topics, the firm has added additional triggering refinements to strengthen its quality protections.

