Crawled page count in Search Console
-
Hi Guys,
I'm working on a project (premium-hookahs.nl) where I've stumbled upon a situation I can't address. Attached is a screenshot of the crawled pages in Search Console.
History:
Due to technical difficulties, this webshop didn't always noindex filter pages, resulting in thousands of duplicate pages. In reality this webshop has fewer than 1,000 individual pages. At this point we took the following steps to resolve this (a rough sketch of the implementation follows the list):
- Noindex the filter pages.
- Exclude those filter pages in Search Console and robots.txt.
- Canonicalize the filter pages to the relevant category pages.
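For reference, this is roughly what those three measures look like; the paths and category URL below are illustrative, not the exact ones on the site:

```
# robots.txt: block crawling of filter URLs (illustrative paths)
User-agent: *
Disallow: /*?color=
Disallow: /*?size=

<!-- on each filter page: the noindex tag -->
<meta name="robots" content="noindex">

<!-- and the canonical pointing at the relevant category page -->
<link rel="canonical" href="https://premium-hookahs.nl/category/example">
```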
This, however, didn't result in Google crawling fewer pages. Although the implementation wasn't always sound (technical problems during updates), I'm sure this setup has been the same for the last two weeks. Personally I expected a drop in crawled pages, but they are still sky high. I can't imagine Google visits this site 40 times a day.
To complicate the situation:
We're running an experiment to gain positions on around 250 long-tail searches. A few filters will be indexed (size, color, number of hoses, and flavors), and three of them can be combined. This results in around 250 extra pages. Meta titles, descriptions, H1s, and texts are unique as well.
Questions:
- Excluding pages in robots.txt should result in Google not crawling those pages, right?
- Is this number of crawled pages normal for a website with around 1,000 unique pages?
- What am I missing?
-
Ben,
I doubt that crawlers are going to access the robots.txt file for each request, but they still have to validate any URL they find against the list of blocked ones.
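To make that concrete, here's a minimal Python sketch of the kind of per-URL check a crawler performs, using the standard library's robotparser (the filter URL is hypothetical):

```python
from urllib import robotparser

# Fetch and parse the site's robots.txt once per crawl session
rp = robotparser.RobotFileParser()
rp.set_url("https://premium-hookahs.nl/robots.txt")
rp.read()

# Every URL the crawler discovers is still validated against the parsed
# rules, even though robots.txt itself isn't re-fetched per request.
url = "https://premium-hookahs.nl/waterpipes?color=red"  # hypothetical filter URL
if rp.can_fetch("*", url):
    print("allowed to crawl:", url)
else:
    print("blocked by robots.txt:", url)
```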
Glad to help,
Don
-
Hi Don,
Thanks for the clear explanation. I always thought disallow in robots.txt would give Google a sort of map (at the start of a site crawl) of the pages on the site that shouldn't be crawled, so it wouldn't have to "check the locked cars".
If I understand you correctly, Google checks every single URL it encounters against the robots.txt rules?
That could definitely explain the high number of crawled pages per day.
Thanks a lot!
-
Hi Bob,
About nofollow vs. blocked: in the end I suppose you get the same result, but in practice it works a little differently. When you nofollow a link, it tells the crawler, as soon as it encounters the link, not to request or follow that link path. When you block it via robots.txt, the crawler still attempts to access the URL, only to find it not accessible.
Imagine if I said: go to the parking lot and collect all the loose change in all the unlocked cars. Now imagine how much easier that task would be if all the locked cars had a sign in the window that said "Locked". You could easily ignore the locked cars and go directly to the unlocked ones. Without the sign you would have to physically check each car to see if it will open.
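As a toy illustration of that distinction (not a claim about how Googlebot literally works), a crawl loop might handle the two cases like this; fetch_and_parse is a hypothetical helper:

```python
from urllib.robotparser import RobotFileParser

def crawl(queue: list, robots: RobotFileParser):
    """Toy crawl loop contrasting nofollow links with robots.txt blocking."""
    while queue:
        url, rel = queue.pop()
        if rel == "nofollow":
            # The "sign in the window": the link itself says don't bother,
            # so the URL is skipped without any further check or request.
            continue
        if not robots.can_fetch("*", url):
            # The "locked car": the URL was still discovered and had to be
            # checked against the robots.txt rules before being abandoned.
            continue
        fetch_and_parse(url)  # hypothetical helper that requests and parses the page
```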
About link juice: if you have a link, juice will be passed regardless of the type of link (you used to be able to use nofollow to preserve link juice, but no longer). This is a bit unfortunate for sites that use search filters, because filters are such a valuable tool for users.
Don
-
Hi Don,
You're right about the sitemap; I've noted it on the to-do list!
Your point about nofollow is interesting. Isn't excluding in robots.txt giving the same result?
Before we went ahead with the robots.txt, we didn't implement nofollow because we didn't want any link juice to drain away. Since we added the robots.txt exclusions, I assume this doesn't matter anymore, since Google won't crawl those pages anyway.
Best regards,
Bob
-
Hi Bob,
You can "suggest" a crawl rate to Google by logging into Google Webmaster Tools and adjusting it there.
As for indexing pages: I looked at your robots.txt and your site. It really looks like you need to employ nofollow on some of your internal linking, specifically on the product page filters (see the sketch below); that alone could reduce the total number of URLs the crawlers even attempt to look at.
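For illustration, a nofollowed filter link might look like this; the path and label are hypothetical:

```
<!-- hypothetical filter link on a category page -->
<a href="/waterpipes?color=red" rel="nofollow">Red</a>
```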
Additionally, your sitemap http://premium-hookahs.nl/sitemap.xml shows a change frequency of daily, and it should probably be broken out between pages and images so you end up using two sitemaps: one for images and one for pages. You may also want to review what is in there. Using Screaming Frog (free), the sitemap I made (link) only shows about 100 URLs.
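A minimal sketch of that split, using a sitemap index file that points at the two child sitemaps (the file names are hypothetical):

```
<?xml version="1.0" encoding="UTF-8"?>
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <sitemap>
    <loc>http://premium-hookahs.nl/sitemap-pages.xml</loc>
  </sitemap>
  <sitemap>
    <loc>http://premium-hookahs.nl/sitemap-images.xml</loc>
  </sitemap>
</sitemapindex>
```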
Hope it helps,
Don
-
Hi Don,
Just wanted to add a quick note: your input made me go through the indexation state of the website again, which was worse than I thought it was. I will take some steps to get this resolved, thanks!
Would love to hear your input about the number of crawled pages.
Best regards,
Bob
-
Hello Don,
Thanks for your advice. What would your advice be if the main goal were the reduction of crawled pages per day? I think we have the right pages in the index, and the old duplicates are mostly deindexed. At this point I'm mostly worried about Google spending its crawl budget on the right pages. Somehow it still crawls 40,000 pages per day, while we only have around 1,000 pages that should be crawled. Looking at the current setup (with almost everything excluded through robots.txt), I can't think of which pages it could be crawling to reach 40k. And 40 times a day sounds like way too many crawls for a normal webshop.
Hope to hear from you!
-
Hello Bob,
Here is some food for thought. If you disallow a page in robots.txt, Google, for example, will not crawl that page. That does not, however, mean they will remove it from the index if it has previously been crawled. Google simply treats the page as inaccessible and moves on. It will take some time, months even, before Google finally says "we have no fresh crawls of page X, it's time to remove it from the index."
On the other hand, if you specifically allow Google to crawl those pages and show a noindex tag on them, Google now has a new directive it can act upon immediately.
So my evaluation of the situation would be to do one of two things (tag sketches below):
1. Remove the disallow from robots.txt and allow Google to crawl the pages again, but this time use noindex, nofollow tags.
2. Remove the disallow from robots.txt and allow Google to crawl the pages again, but use canonical tags pointing to the main "filter" page to prevent further indexing of the specific filter pages.
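For reference, roughly what each option would place on a filter page (the category URL is hypothetical):

```
<!-- Option 1: let Google crawl, but tell it not to index or follow -->
<meta name="robots" content="noindex, nofollow">

<!-- Option 2: let Google crawl, but consolidate signals to the main page -->
<link rel="canonical" href="https://premium-hookahs.nl/waterpipes">
```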
Which option is best depends on the number of URLs being indexed: for a few thousand, canonical would be my choice; for a few hundred thousand, noindex would make more sense.
Whichever option you choose, you will have to ensure Google re-crawls the pages and then allow them time to re-index appropriately. Not a quick fix, but a fix nonetheless.
My thoughts and I hope it makes sense,
Don