If you are looking for the answer to the question “Why isn’t Google indexing my page?”, you first need to understand the causes of this situation. There might be plenty of them! This article examines three primary indexing issues and presents 14 potential causes that can lead to them.
How do you find out why your website is not on Google?
There are various reasons why your website may not show up in Google search results. Before you take any action, it’s crucial to understand the cause of your indexing troubles. You can do so by using the following three methods.
- Google Search Console (GSC) – a free tool provided by Google that contains various tools and reports. Some of these will allow you to check your website’s indexation.
- ZipTie.dev – a tool that allows you to check indexation using a sitemap crawl, a URL list, or a crawl of your entire website. It also allows you to schedule recrawls of your sample, so you can easily monitor indexation.
- “Site:” command – you can check if your page has been indexed by using the “site:” command in Google search. Type “site:yourdomain.com” into the search bar, replacing “yourdomain.com” with your website’s URL.
This will show you a list of pages that Google has indexed. Be careful though! Using search operators does not give you the full picture and this method might not show all pages.
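If you prefer to check indexation programmatically, Google Search Console also exposes a URL Inspection API. Below is a minimal Python sketch of how such a check could look – it assumes the google-api-python-client and google-auth packages are installed, a service account with access to your verified GSC property, and hypothetical URLs; double-check the exact request and response fields against Google’s official API documentation.

```python
# Minimal sketch: inspect a URL's index status via the Search Console URL Inspection API.
# Hypothetical key file and URLs - the service account must have access to the verified property.
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/webmasters.readonly"]
creds = service_account.Credentials.from_service_account_file(
    "service-account.json", scopes=SCOPES
)
service = build("searchconsole", "v1", credentials=creds)

response = service.urlInspection().index().inspect(
    body={
        "inspectionUrl": "https://yourdomain.com/some-page/",
        "siteUrl": "https://yourdomain.com/",  # the property exactly as verified in GSC
    }
).execute()

index_status = response["inspectionResult"]["indexStatusResult"]
print(index_status.get("coverageState"))  # e.g. "Submitted and indexed"
print(index_status.get("robotsTxtState"), index_status.get("indexingState"))
```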
14 reasons why your site is not indexed by Google
Let’s take a look at the most common reasons why pages are not indexed by Google. Maybe one of them applies to your situation.
Your page wasn’t discovered
This means that Google was unable to find the page on the website. When Google is not able to discover a page, it cannot be indexed and will not appear in the search results. There are three main reasons why Google might struggle to find your page.
Your page isn’t linked internally
Internal links play a crucial role in a website’s indexation by search engines like Google. When search engine bots crawl a website, they follow links to discover and index new pages. Internal links, which are links that connect pages within the same website, help robots like Googlebot navigate a website and understand its structure.
If a website lacks internal links, search engine bots may have difficulty discovering all of its pages, and this can result in some pages not being indexed.
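To get a rough idea of how well your pages are linked internally, you can run a small crawl of your own site and count how often each URL is linked. The sketch below uses hypothetical URLs, assumes the requests and beautifulsoup4 packages, ignores robots.txt and politeness delays, and caps the crawl – treat it as a starting point, not a full crawler.

```python
# Minimal sketch: crawl a small site from the homepage and count how often each
# internal URL is linked. Weakly linked pages are harder for Googlebot to discover.
from collections import Counter
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

START = "https://yourdomain.com/"        # hypothetical site
inlinks, seen, queue = Counter(), set(), [START]

while queue and len(seen) < 200:         # hard cap to keep the sketch small
    url = queue.pop(0)
    if url in seen:
        continue
    seen.add(url)
    try:
        html = requests.get(url, timeout=10).text
    except requests.RequestException:
        continue
    for a in BeautifulSoup(html, "html.parser").find_all("a", href=True):
        target = urljoin(url, a["href"]).split("#")[0]
        if urlparse(target).netloc == urlparse(START).netloc:
            inlinks[target] += 1
            queue.append(target)

# Show the ten least-linked pages found during the crawl
for page, count in sorted(inlinks.items(), key=lambda item: item[1])[:10]:
    print(f"{count:>3} internal links -> {page}")
```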
Want to know more? Check out our Ultimate Guide to Internal Linking in SEO!
Your page is not in the sitemap
A sitemap is a file that lists a website’s most important indexable pages (or all of them in some cases). Search engine robots can use this file to discover and index the website’s content.
When a page is not included in the sitemap, it does not mean that it won’t be indexed by search engines. However, not including a page in the sitemap can make it harder for search engine robots to discover and crawl it. If a page is not included in the sitemap, it may be perceived as less important or lower in the hierarchy. In some cases, this situation can result in some pages not being discovered, even with internal linking in place.
On the other hand, including a page in the sitemap can help search engines in two ways. It’s easier to discover the page, and its presence in the sitemap serves as a clue that this particular page is important and should be indexed.
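A quick way to verify whether a given page is listed in your sitemap is to parse the sitemap and look for the URL. Here is a minimal sketch with hypothetical URLs – it assumes a single urlset sitemap at /sitemap.xml (not a sitemap index) and the requests package.

```python
# Minimal sketch: check whether a specific URL is listed in the XML sitemap.
import xml.etree.ElementTree as ET
import requests

SITEMAP = "https://yourdomain.com/sitemap.xml"   # hypothetical sitemap location
PAGE = "https://yourdomain.com/some-page/"       # the page you are investigating

root = ET.fromstring(requests.get(SITEMAP, timeout=10).content)
ns = "{http://www.sitemaps.org/schemas/sitemap/0.9}"
listed = {loc.text.strip() for loc in root.iter(ns + "loc")}

print(f"{len(listed)} URLs found in the sitemap")
print("Page is listed" if PAGE in listed else "Page is missing from the sitemap")
```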
Find out more by reading our article: Ultimate Guide to XML Sitemaps for SEO!
Your website is too large and you have to wait
When Googlebot crawls a website to index its content, it has a limited amount of time to do so. When a website is both large and, to make matters worse, slow to load, crawling it can be a challenge for search engine bots. As a result, robots like Googlebot may be unable to index all pages within the given time limit. This can cause issues for your website because any pages that are not indexed do not appear in the search results and do not contribute to your website’s visibility.
Learn more about crawling through our article: The Beginner’s Guide to Crawling
Your page wasn’t crawled
When bots crawl a website, they discover new pages and content that can be added to Google’s index. This process is essential to ensure that pages are visible in the search results. However, if a page isn’t crawled, it won’t be added to the search engine’s index. There are several reasons why a page might not be crawled by a search engine; these include a low crawl budget, errors, or the fact that the page is disallowed in robots.txt.
Your page is disallowed in robots.txt
The robots.txt file is a text file used to tell search engine robots which pages or directories on a site they may or may not crawl. Website administrators can optimize robots.txt to show search engines which content should be accessible for crawling.
As a general rule, if a page is disallowed in the robots.txt file, search engine bots should not be able to crawl and index that page. However, there are exceptions. For example, if a page is linked from an external resource, it can get indexed even though it’s blocked in robots.txt. Another common mistake is treating robots.txt as a tool for blocking indexing. Disallowing a page in robots.txt prohibits Googlebot from crawling it, but if the page was indexed before, it will remain indexed.
However – most of the time, the page will not be accessible for crawling and indexing if you block it in robots.txt. And if you discover that your page wasn’t crawled at all, it might be because you accidentally blocked it with a robots.txt file.
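You can check this yourself with Python’s standard library, which ships with a robots.txt parser. A minimal sketch with hypothetical URLs:

```python
# Minimal sketch: check whether a specific URL is blocked for Googlebot by robots.txt.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser("https://yourdomain.com/robots.txt")
rp.read()

url = "https://yourdomain.com/some-page/"
if rp.can_fetch("Googlebot", url):
    print("robots.txt allows Googlebot to crawl", url)
else:
    print("robots.txt blocks Googlebot from crawling", url)
    print("Note: the URL can still stay indexed if it was indexed before, "
          "or get indexed if it is linked externally.")
```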
If you are not sure what to do in this situation, feel free to reach out to an SEO specialist who will be able to help.
Your crawl budget is too low
The crawl budget refers to the number of pages or URLs that Google’s bots will crawl and index within a given timeframe. When the crawl budget allocated to a website is too low, it means that the search engine’s crawler won’t be able to crawl and index all the pages right away. This means that some of the website’s pages may not show up in the search results.
This is a simplified definition, but if you’d like to learn more, check out our guide to crawl budget.
The crawl budget is typically determined by the search engine based on several factors, but remember that you can have an impact on it. There are many problems that may negatively affect your crawl budget, the most common being:
- too many low-quality pages
- an abundance of URLs with non-200 status codes or non-canonical URLs
- slow server and page speed
If you believe your website has issues with the crawl budget, you should try to find the cause. An experienced SEO specialist can definitely help you with that.
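One practical way to see how Googlebot spends its crawl budget is to look at your server access logs. Below is a rough sketch that assumes a combined-format log at a hypothetical path; field positions differ between servers, and a serious analysis should also verify Googlebot by IP range, not just by user-agent string.

```python
# Rough sketch: count Googlebot requests per status code and per URL from an access log.
from collections import Counter
import re

status_counter, url_counter = Counter(), Counter()
line_re = re.compile(r'"(?:GET|POST) (?P<url>\S+) HTTP/[^"]*" (?P<status>\d{3})')

with open("access.log", encoding="utf-8", errors="ignore") as log:   # hypothetical path
    for line in log:
        if "Googlebot" not in line:
            continue
        match = line_re.search(line)
        if match:
            status_counter[match.group("status")] += 1
            url_counter[match.group("url")] += 1

print("Googlebot hits by status code:", status_counter.most_common())
print("Most-crawled URLs:", url_counter.most_common(10))
```

If a large share of Googlebot’s hits go to redirects, error pages, or unimportant URLs, that budget is not being spent on the pages you actually want indexed.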
Server error prevents Googlebot from crawling
When Googlebot tries to crawl a web page, it sends a request to the server hosting the website to retrieve the page’s content. If the server encounters an issue, it will respond with a server error code, indicating that it could not provide the requested content. Googlebot interprets this as a temporary unavailability or as an issue with the website; this might slow down crawling.
As a result, some of your pages may not be indexed by the search engine. Furthermore, if this happens repeatedly and the website keeps returning consistent server errors, it might lead to pages getting dropped from the index.
If your website has significant server problems, you can review them in Google Search Console – for example, in the Crawl Stats report or the Page indexing report.
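You can also spot-check a problematic URL yourself. Server errors are often intermittent, so it’s worth requesting the page a few times. A minimal sketch using the requests package and a hypothetical URL:

```python
# Minimal sketch: fetch the same URL a few times and flag any 5xx server errors.
import time
import requests

URL = "https://yourdomain.com/some-page/"   # hypothetical URL
for attempt in range(3):
    try:
        code = requests.get(URL, timeout=15).status_code
    except requests.RequestException as exc:
        print(f"Attempt {attempt + 1}: request failed ({exc})")
    else:
        flag = "server error!" if code >= 500 else "ok"
        print(f"Attempt {attempt + 1}: HTTP {code} ({flag})")
    time.sleep(2)
```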
If you want to check how particular status codes (including server errors) affect Googlebot’s behavior, you can learn about it in Google’s official documentation: How HTTP status codes, and network and DNS errors affect Google Search.
Google didn’t index your page or deindexed it
If Google doesn’t index a page or deindexes a previously indexed one, the page won’t appear in the search results. It can be caused by technical problems, low-quality content, guideline violations, or even manual actions.
Your page has a noindex meta tag
If a page on a website has a noindex meta tag, it instructs Google not to index the page. This means that the page will not appear in the search results.
In some instances, meta tags may inadvertently be set to “noindex, nofollow” due to a development error. Consequently, the page may get removed from the index. If this is later combined with a robots.txt blockade, the page might not get crawled and indexed again. In some cases this setup is intentional and can be a solution to some kind of index bloat issue. However, we recommend being extremely careful with any actions that may disturb crawling and indexing.
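Keep in mind that a noindex directive can live in the HTML meta robots tag or in the X-Robots-Tag HTTP header, so it’s worth checking both. A minimal sketch with a hypothetical URL, assuming the requests and beautifulsoup4 packages:

```python
# Minimal sketch: check a URL for noindex in the meta robots tag and the X-Robots-Tag header.
import requests
from bs4 import BeautifulSoup

URL = "https://yourdomain.com/some-page/"
response = requests.get(URL, timeout=15)

header = response.headers.get("X-Robots-Tag", "")
meta = BeautifulSoup(response.text, "html.parser").find(
    "meta", attrs={"name": lambda v: v and v.lower() in ("robots", "googlebot")}
)
meta_content = meta["content"] if meta and meta.has_attr("content") else ""

if "noindex" in header.lower() or "noindex" in meta_content.lower():
    print("noindex found - Google is instructed not to index this page")
else:
    print("no noindex directive detected in the HTML or the headers")
```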
Your page has a canonical tag pointing to a different page
A canonical tag on a website’s page instructs search engines to treat the canonical URL as the preferred URL for that page’s content. This tag is used when the page’s content is a duplicate or variation of another page on the site. If the canonical tag is not implemented correctly, it can cause indexation issues.
For the purpose of this article, remember that all original pages should have a self-referencing canonical tag. A page might end up not getting indexed if its canonical tag points to a different URL.
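A quick way to verify this for a single page is to fetch it and compare the canonical tag with the page’s own URL. A minimal sketch with a hypothetical URL, assuming requests and beautifulsoup4 (real checks usually need URL normalization for protocols, parameters, and trailing slashes):

```python
# Minimal sketch: check whether a page's canonical tag points to itself or elsewhere.
import requests
from bs4 import BeautifulSoup

URL = "https://yourdomain.com/some-page/"
html = requests.get(URL, timeout=15).text
link = BeautifulSoup(html, "html.parser").find("link", rel="canonical")

if link is None or not link.get("href"):
    print("no canonical tag found")
elif link["href"].rstrip("/") == URL.rstrip("/"):
    print("self-referencing canonical - looks fine")
else:
    print("canonical points elsewhere:", link["href"])
    print("Google may index the canonical target instead of this URL.")
```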
Your page is a duplicate or near duplicate of a different page
When a page on a website is a duplicate or near duplicate of another page, it can cause indexation and ranking issues. If a page is a duplicate of another one, Googlebot may not index it. And even if such a page is indexed, search engines usually will not allow duplicate content to rank well.
Duplicate content can also affect a website’s crawl budget. Googlebot needs to crawl each URL to determine whether they contain the same content, which consumes more time and resources. As a result, Googlebot has less capacity for crawling other, more valuable pages.
While there is no specific “duplicate content penalty” from Google, there are penalties related to having the same content as another site. Actions such as scraping content from other sites or republishing content without adding additional value are not welcome in the world of SEO, and may even hurt your rankings.
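If you want a rough signal of whether two pages are near-duplicates, you can compare their visible text. The sketch below uses a simple similarity ratio from Python’s standard difflib module, with hypothetical URLs and the requests and beautifulsoup4 packages assumed; dedicated tools use more robust techniques, such as shingling.

```python
# Minimal sketch: compare the visible text of two pages and print a similarity ratio.
import difflib

import requests
from bs4 import BeautifulSoup

def visible_text(url: str) -> str:
    soup = BeautifulSoup(requests.get(url, timeout=15).text, "html.parser")
    for tag in soup(["script", "style", "noscript"]):
        tag.decompose()
    return " ".join(soup.get_text(separator=" ").split())

a = visible_text("https://yourdomain.com/page-a/")
b = visible_text("https://yourdomain.com/page-b/")
ratio = difflib.SequenceMatcher(None, a, b).ratio()
print(f"Text similarity: {ratio:.0%}")  # values close to 100% suggest near-duplicates
```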
The quality of your page is too low
Google aims to provide the best possible user experience by ranking pages with high-quality content higher in the search results. If the content on a page is of poor quality, Google may not consider it valuable to users and may not index it. Additionally, poor-quality content can lead to a high bounce rate, which is when users quickly leave the page without interacting with it. This can signal to Google that the page is irrelevant or not valuable to users, which can result in the page not being indexed.
Your page has an HTTP status other than 200 (OK)
The HTTP status code is part of the response that a server sends to a client after receiving a request to access a webpage. The status code 200 OK indicates that the server has successfully responded to the request and the page is accessible.
If a page returns an HTTP status code other than 200 OK, it won’t get indexed. As for why, it depends on the particular status code. For example, a 404 error status code indicates that the requested page is not found, and a 500 error status code indicates that there was an internal server error. If Googlebot encounters these errors while crawling a page, it may assume that said page is not available or not functional, and it will not index it. And if a non-200 HTTP status code persists for a long time, a page may be removed from the index.
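Checking the status codes of your key URLs is straightforward. A minimal sketch with hypothetical URLs and the requests package:

```python
# Minimal sketch: report HTTP status codes for a list of URLs.
import requests

urls = [
    "https://yourdomain.com/",
    "https://yourdomain.com/old-page/",
    "https://yourdomain.com/broken-page/",
]

for url in urls:
    try:
        response = requests.get(url, timeout=15, allow_redirects=False)
    except requests.RequestException as exc:
        print(f"{url} -> request failed: {exc}")
        continue
    code = response.status_code
    if code == 200:
        note = "OK - indexable from the status-code perspective"
    elif 300 <= code < 400:
        note = f"redirect to {response.headers.get('Location', '?')}"
    else:
        note = "will not be indexed while it returns this status"
    print(f"{url} -> HTTP {code} ({note})")
```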
Your page is in the indexing queue
When a page is in the indexing queue, it means that Google has not yet indexed it. This process can take some time, especially for new or low-traffic websites, and it can be delayed further if the website has technical issues, a low crawl budget, or robots.txt blockades and other restrictions.
Additionally, if the website has a lot of pages, Google may not be able to index all of them at once. As a result, some pages may remain in the indexing queue longer. This is a common problem which may get resolved with time, but if it doesn’t – it might be necessary to analyze it further and take action.
Google couldn’t render your page
When Googlebot crawls a page, it not only retrieves the HTML content but also renders the page like a browser does. If Googlebot encounters issues while rendering the page, it may not be able to properly understand the content of the page. If Google can’t render the page, it may not be able to identify certain elements, such as JavaScript-generated content or structured data, that are important for indexing and ranking.
As Google admits in their article Understand the JavaScript SEO basics:
“If the content isn’t visible in the rendered HTML, Google won’t be able to index it.”
In some cases, this can affect the indexing of the URL. If a significant part of your page isn’t rendered, it won’t be visible to Google. A page like this will likely be considered a duplicate or low quality, and may end up not getting indexed.
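One way to estimate how much of a page depends on rendering is to compare the raw HTML with the DOM produced by a headless browser. A minimal sketch using the requests and Playwright packages with a hypothetical URL (Playwright also needs its browser binaries installed via “playwright install chromium”):

```python
# Minimal sketch: compare raw HTML with the rendered DOM to gauge JavaScript dependence.
import requests
from playwright.sync_api import sync_playwright

URL = "https://yourdomain.com/some-page/"
raw_html = requests.get(URL, timeout=15).text

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto(URL, wait_until="networkidle")
    rendered_html = page.content()
    browser.close()

print(f"Raw HTML size:     {len(raw_html):>8} characters")
print(f"Rendered DOM size: {len(rendered_html):>8} characters")
# A large gap suggests important content only appears after rendering -
# exactly the kind of content Google can miss if rendering fails.
```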
Your page takes too long to load
Sometimes, when clients ask us “why isn’t Google indexing my page?”, the answer is that the page simply takes too long to load. That might also be your case!
If Googlebot is crawling a website that loads slowly, it may not be able to crawl and index all of the pages on the site within the allocated crawl budget.
Moreover, website loading speed is an important factor that can impact user experience and search rankings – so it’s definitely a critical part of website optimization.
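As a first, very rough check you can measure how quickly your server responds with the HTML. The sketch below uses hypothetical URLs and the requests package; it measures the time to the first response only, so it says nothing about rendering time or Core Web Vitals, which need dedicated tools such as PageSpeed Insights.

```python
# Minimal sketch: measure server response time (time to response headers) for a few URLs.
import requests

for url in ["https://yourdomain.com/", "https://yourdomain.com/some-page/"]:
    response = requests.get(url, timeout=30)
    seconds = response.elapsed.total_seconds()   # time until headers arrived, not full load
    print(f"{url} responded in {seconds:.2f}s (HTTP {response.status_code})")
```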
How to get indexed by Google
If your website is completely new, it may take some time before it’s fully indexed. We recommend waiting a few weeks and monitoring the situation with tools like Google Search Console or ZipTie.dev.
If that’s not the case and your website has ongoing problems with indexing, you can follow these steps:
- Start by identifying the root cause of the problems using our list of possible factors.
- Once the cause is identified, make the necessary fixes.
- After all changes are implemented, submit the page again in Google Search Console.
If your actions do not bring the intended results, consider seeking the assistance of a professional technical SEO agency.
Wrapping up
If you’re experiencing indexing issues and your pages aren’t showing up on Google, you should investigate the root causes. If you want to find the answer to the question “why isn’t Google indexing my page?”, such an analysis should be a critical first step.
Attempting to fix the issue without determining the causes of your indexing problems is unlikely to be successful, and may even do more harm than good.
However, some indexing issues can be quite complex and difficult to handle if you don’t have practical experience in this area. If the guidance provided in this article is not enough, we recommend seeking help from a professional technical SEO agency to ensure that the issue is resolved effectively.