Large site with faceted navigation using rel=canonical, but Google still has issues
-
First off, I just wanted to mention I did post this on one other forum so I hope that is not completely against the rules here or anything. Just trying to get an idea from some of the pros at both sources. Hope this is received well. Now for the question.....
"Googlebot found an extremely high number of URLs on your site:"
Gotta love these messages in GWT. Anyway, I wanted to get some other opinions here so if anyone has experienced something similar or has any recommendations I would love to hear them.
First off, the site is very large and uses faceted navigation to help visitors sift through results. For many months now I have implemented rel=canonical so that each page URL created by the faceted nav filters points back to the main category page. However, I still get these damn messages from Google every month or so saying they found too many pages on the site. My main concern, obviously, is wasting crawl time on all these pages when I am already doing what they ask in these instances: telling them to ignore the duplicates and find the content on the main category page.
So at this point I am thinking about possibly using the robots.txt file to handle these, but wanted to see what others around here thought before I dive into this arduous task. Plus I am a little ticked off that Google is not following a standard they helped bring to the table.
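For reference, a robots.txt rule for faceted URLs might look something like this. The Disallow patterns here are purely hypothetical; they would need to match the site's actual filter URL structure:

```
User-agent: *
# Hypothetical facet-filter patterns - adjust to the real URL structure
Disallow: /*?color=
Disallow: /*?size=
Disallow: /*&sort=
```

Keep in mind that robots.txt only blocks crawling; it does not, by itself, remove URLs that are already indexed.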
Thanks for those who take the time to respond in advance.
-
Yes, that's a different situation. You're now talking about pagination, where, quite rightly, canonicalizing to the parent page is not to be used.
For faceted/filtered navigation it seems like canonical usage is indeed the right way to go about it, given Peter's experience just mentioned above, and the article you linked to that says, "...(in part because Google only indexes the content on the canonical page, so any content from the rest of the pages in the series would be ignored)."
-
As for my situation it worked out quite nicely, I just wasn't patient enough. After about 2 months the issue corrected itself for the most part and I was able to reduce about a million "waste" pages out of the index. This is a very large site so losing a million pages in a handful of categories helped me gain in a whole lot of other areas and spread the crawler around to more places that were important for us.
I also spent some time doing some restructuring of internal linking from some of our more authoritative pages that I believe also assisted with this, but in my case rel="canonical" worked out pretty nicely. Just took some time and patience.
-
I should actually add that Google doesn't condone using rel-canonical back to the main search page or page 1. They allow canonical to a "View All" or a complex mix of rel-canonical and rel=prev/next. If you use rel-canonical on too many non-identical pages, they could ignore it (although I don't often find that to be true).
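For anyone unfamiliar with the rel=prev/next approach mentioned above, the markup looks like this (the URLs are illustrative):

```html
<!-- In the <head> of page 2 of a paginated series; URLs are illustrative -->
<link rel="prev" href="http://www.example.com/category?page=1">
<link rel="next" href="http://www.example.com/category?page=3">
```

Each page in the series points to its neighbors, which is part of why the setup is so easy to get wrong on a large site.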
Vanessa Fox just did a write-up on Google's approach:
http://searchengineland.com/implementing-pagination-attributes-correctly-for-google-114970
I have to be honest, though - I'm not a fan of Google's approach. It's incredibly complicated, easy to screw up, doesn't seem to work in all cases, and doesn't work on Bing. This is a very complex issue and really depends on the site in question. Adam Audette did a good write-up:
http://searchengineland.com/five-step-strategy-for-solving-seo-pagination-problems-95494
-
Thanks Dr Pete,
Yes I've used meta no-index on pages that are simply not useful in any way shape or form for Google to find.
I would be hesitant to noindex the filters in question, but it sounds promising that you are backing the canonical approach and that there is latency in the reporting. Our PA and DA are extremely high and we get crawled daily, so I'm curious to try your measurement tip (inurl), which is a good one!
Many thanks.
Simon
-
I'm working on a couple of cases now, and it is extremely tricky. Google often doesn't re-crawl/re-cache deeper pages for weeks or months, so getting the canonical to work can be a long process. Still, it is generally a very effective tag, and once the pages are re-crawled it can take effect quickly.
I agree with others that Robots.txt isn't a good bet. It also tends to work badly with pages that are already indexed. It's good for keeping things out of the index (especially whole folders, for example), but once 1000s of pages are indexed, Robots.txt often won't clean them up.
Another option is META NOINDEX, but it depends on the nature of the facets.
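If META NOINDEX does turn out to fit the facets, the usual form is a robots meta tag in the `<head>` of each facet page. Using "noindex, follow" keeps the page out of the index while still letting crawlers follow (and pass equity through) its links:

```html
<meta name="robots" content="noindex, follow">
```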
A couple of things to check:
(1) Using site: with inurl:, monitor the faceted navigation pages in the Google index. Are the numbers gradually dropping? That's what you want to see - the GWT error may not update very often. Keep in mind that these numbers can be unreliable, so monitor them daily over a few weeks.
(2) Are there other URLs you're missing? On a large e-commerce site, it's entirely possible this wasn't the only problem.
(3) Did you cut the crawl paths? A common problem is that people add a canonical, 301-redirect, or NOINDEX, but then nofollow or otherwise cut the links to those duplicates. It sounds like a good idea, except that the canonical tag has to be crawled to work. I see this a lot, actually.
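For the monitoring in point (1), the combined query would look something like this (the path is illustrative; substitute the actual facet URL pattern):

```
site:www.example.com inurl:/categoryname/
```

Run it periodically and record the reported result count; a gradual decline is the signal that the canonicals are being processed, even while the GWT warning lags behind.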
-
Did you find a solution for this? I have exactly the same issue and have implemented the rel canonical in exactly the same way.
The issue you are trying to address is improving crawl bandwidth/equity by not letting Google crawl these faceted pages.
I am thinking of Ajax loading in these pages to the parent category page and/or adding nofollow to the links. But the pages have already been indexed, so I wonder if nofollow will have any effect.
Have you had any progress? Any further ideas?
-
Because rel=canonical does nothing more than give credit to the chosen page and avoid duplicate content. It does not tell the search engine to stop indexing or redirect, and it has no effect on whether the crawler finds the links.
-
thx
-
OK, sorry, I was thinking of too many pages, not links.
Using noindex will not stop PR flowing; the search engine will still follow the links.
-
Yeah, that is why I am not really excited about using robots.txt or even a noindex in this instance. They are not session IDs, but more like:
www.example.com/categoryname/a,
www.example.com/categoryname/b
www.example.com/categoryname/c
etc.
which would show all products that start with those letters. There are a lot of other filters too, such as color, size, etc., but the bottom line is that I point all of those back to just www.example.com/categoryname using rel=canonical, and I am not understanding why it isn't working properly.
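To spell out that setup, each filtered page carries a canonical tag pointing at the unfiltered category page, matching the URLs above:

```html
<!-- In the <head> of http://www.example.com/categoryname/a -->
<link rel="canonical" href="http://www.example.com/categoryname">
```

The same tag, with the same href, would appear on /categoryname/b, /categoryname/c, and the color/size filter variants.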
-
There are a large number of URLs like this because of the way the faceted navigation works. I have considered noindex, but I'm somewhat concerned because we do get links to some of these URLs and would like to maintain some of that link juice. The warning shows up in Google Webmaster Tools when Googlebot finds a large number of URLs. The rest of the message reads like this:
"Googlebot encountered extremely large numbers of links on your site. This may indicate a problem with your site's URL structure. Googlebot may unnecessarily be crawling a large number of distinct URLs that point to identical or similar content, or crawling parts of your site that are not intended to be crawled by Googlebot. As a result Googlebot may consume much more bandwidth than necessary, or may be unable to completely index all of the content on your site."
rel=canonical should fix this, but apparently it is not.
-
Check how these pages are being generated.
Robots.txt is not an ideal solution; if Google finds links to these pages elsewhere, the URLs can still end up in the index.
Print pages normally won't have link value, so you can noindex them.
If there are pages with session IDs or campaign codes, use canonical if they have link value; otherwise noindex will be good.
-
rel=canonical will stop you getting duplicate content flags, but there is still a large number of pages; it's not going to hide them.
I have never seen this warning. How many pages are we talking about? Either the number is very, very high, or something is confusing the crawler. You may need to noindex them.