
Google-Extended Is The New Google Crawler To Block Bard Or Google AI

Google has announced Google-Extended, a new Google crawler token you can use to control whether your content helps improve Bard, Vertex AI generative APIs, and future Google AI products. So if you want to disallow Bard from using your content, you specify that in your robots.txt with the user agent Google-Extended.

Google will not crawl with a separate Google-Extended bot; it will continue crawling with its normal Googlebot and other bots. But using Google-Extended in robots.txt communicates to Google not to use that content for Bard or other Google AI projects. A Google spokesperson told me, “Google-Extended will tell Google not to use the site’s content for Bard and Vertex AI generative APIs.” “For Search, website administrators should continue to use the Googlebot user agent through robots.txt and the NOINDEX meta tag to manage their content in search results, including experiments like Search Generative Experience,” Google added.

Essentially, this lets Google Search crawl, index, and rank your website while disallowing Bard or other Google AI projects from using your content.
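As a sketch, a robots.txt like the one below would let Google Search crawl everything while opting the whole site out of Bard and Vertex AI (the `Allow: /` rule for Googlebot is optional, since crawling is allowed by default):

```
# Allow normal Google Search crawling
User-agent: Googlebot
Allow: /

# Tell Google not to use this content for Bard and Vertex AI generative APIs
User-agent: Google-Extended
Disallow: /
```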

This comes a week after Bing offered controls to block Bing Chat AI from using your site.

“Today we’re announcing Google-Extended, a new control that web publishers can use to manage whether their sites help improve Bard and Vertex AI generative APIs, including future generations of models that power those products. By using Google-Extended to control access to content on a site, a website administrator can choose whether to help these AI models become more accurate and capable over time,” Google wrote.

Google-Extended is a “standalone product token that web publishers can use to manage whether their sites help improve Bard and Vertex AI generative APIs, including future generations of models that power those products,” Google explained.

The robots.txt user agent token is Google-Extended.

“Google-Extended doesn’t have a separate HTTP request user agent string. Crawling is done with existing Google user agent strings; the robots.txt user-agent token is used in a control capacity,” Google added.

I am not sure if this is the alternative approach for robots.txt for AI…

Note that the Google News bot works in a similar way: it does not crawl with a separate user agent, but the robots.txt directive controls whether that content is used in Google News:
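For comparison, blocking the Googlebot-News token in robots.txt follows the same pattern: crawling still happens with Google's existing user agents, but the token tells Google not to use the content in Google News:

```
# Opt content out of Google News without blocking Search crawling
User-agent: Googlebot-News
Disallow: /
```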

Forum discussion at X.





Source: Seroundtable.com
