
Google Advises Caution With AI Generated Answers


Google’s Gary Illyes cautioned against uncritical use of Large Language Models (LLMs), affirming the importance of checking authoritative sources before accepting any answer from an LLM. His answer was given in response to a question, but curiously, he didn’t publish what that question was.

LLM Answer Engines

Based on what Gary Illyes said, it’s clear that the context of his recommendation is the use of AI for answering queries. The statement comes in the wake of OpenAI’s announcement of SearchGPT, a prototype AI search engine it is testing, although his statement may be unrelated to that announcement and the timing just a coincidence.

Gary first explained how LLMs craft answers to questions and noted that a technique called “grounding” can improve the accuracy of AI-generated answers, but that it isn’t 100% perfect and mistakes still slip through. Grounding is a way to connect a database of facts, knowledge, and web pages to an LLM. The goal is to anchor the AI-generated answers in authoritative facts.
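Gary didn’t describe how grounding is implemented, but a minimal, hypothetical Python sketch can illustrate the idea: retrieve facts from a trusted store and attach them to the prompt before the model generates an answer. Everything below (the fact store, the keyword matching, the prompt wording) is an illustrative assumption, not Google’s or any vendor’s actual grounding system.

# Hypothetical sketch of "grounding": look up trusted facts and
# attach them to the prompt before an LLM generates an answer.
# The fact store and keyword matching are toy assumptions.

FACT_STORE = {
    "searchgpt": "SearchGPT is a prototype AI search engine announced by OpenAI.",
    "grounding": "Grounding connects an LLM to an authoritative source of facts.",
}

def retrieve_facts(question: str) -> list[str]:
    """Return stored facts whose key term appears in the question (toy retrieval)."""
    words = question.lower().split()
    return [fact for term, fact in FACT_STORE.items() if term in words]

def build_grounded_prompt(question: str) -> str:
    """Prepend retrieved facts so the model can anchor its answer to them."""
    facts = retrieve_facts(question)
    context = "\n".join(f"- {fact}" for fact in facts) or "- (no matching facts)"
    return (
        "Answer using ONLY the facts below; say 'unknown' if they are insufficient.\n"
        f"Facts:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

print(build_grounded_prompt("What does grounding mean for an LLM?"))

Even with retrieval like this in place, the model can still misread or ignore the supplied facts, which is Gary’s point: grounding reduces errors but doesn’t eliminate the need to verify.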

This is what Gary posted:

“Based on their training data LLMs find the most suitable words, phrases, and sentences that align with a prompt’s context and meaning.

This allows them to generate relevant and coherent responses. But not necessarily factually correct ones. YOU, the user of these LLMs, still need to validate the answers based on what you know about the topic you asked the LLM about or based on additional reading on resources that are authoritative for your query.

Grounding can help create more factually correct responses, sure, but it’s not perfect; it doesn’t replace your brain. The internet is full of intended and unintended misinformation, and you wouldn’t believe everything you read online, so why would you [believe] LLM responses?

Alas. This post is also online and I might be an LLM. Eh, you do you.”

AI Generated Content And Answers

Gary’s LinkedIn post is a reminder that LLMs generate answers that are contextually relevant to the questions asked, but contextually relevant answers aren’t necessarily factually accurate.

Authoritativeness and trustworthiness are important qualities of the kind of content Google tries to rank. It is therefore in publishers’ best interest to consistently fact-check content, especially AI-generated content, to avoid inadvertently becoming less authoritative. The need to verify facts also holds true for those who use generative AI for answers.

Read Gary’s LinkedIn Post:

Answering something from my inbox here

Featured Image by Shutterstock/Roman Samborskyi



Source: Searchenginejournal.com
