Large site with faceted navigation using rel=canonical, but Google still has issues
-
First off, I just wanted to mention I did post this on one other forum so I hope that is not completely against the rules here or anything. Just trying to get an idea from some of the pros at both sources. Hope this is received well. Now for the question.....
"Googlebot found an extremely high number of URLs on your site:"
Gotta love these messages in GWT. Anyway, I wanted to get some other opinions here so if anyone has experienced something similar or has any recommendations I would love to hear them.
First off, the site is very large and uses faceted navigation to help visitors sift through results. For many months now I have had rel=canonical in place so that every URL generated by the faceted nav filters points back to the main category page. However, I still get these damn messages from Google every month or so saying they found too many pages on the site. My main concern, obviously, is wasted crawler time on all these pages, since I am already doing exactly what they ask in these cases: telling them to ignore the filtered URLs and find the content on the main category page instead.
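To make that concrete, every filtered URL carries a canonical tag in its head pointing back to the category page. The URLs below are placeholders, not my real ones, but the setup looks something like this:
On www.example.com/categoryname/?color=blue&size=m the head contains:
<link rel="canonical" href="http://www.example.com/categoryname" />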
So at this point I am thinking about possibly using a robots.txt file to handle these, but wanted to see what others around here thought before I dive into this arduous task. Plus I am a little ticked off that Google is not following a standard they helped bring to the table.
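If I did go the robots.txt route, I assume it would be a set of wildcard rules along these lines (the filter parameters here are just placeholders for whatever our faceted nav actually appends):
User-agent: *
Disallow: /*?color=
Disallow: /*?size=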
Thanks for those who take the time to respond in advance.
-
Yes, that's a different situation. You're now talking about pagination, where, quite rightly, a canonical back to the parent page should not be used.
For faceted/filtered navigation it seems like canonical usage is indeed the right way to go about it, given Peter's experience just mentioned above, and the article you linked to that says, "...(in part because Google only indexes the content on the canonical page, so any content from the rest of the pages in the series would be ignored)."
-
As for my situation, it worked out quite nicely; I just wasn't patient enough. After about two months the issue corrected itself for the most part and I was able to remove about a million "waste" pages from the index. This is a very large site, so losing a million pages in a handful of categories helped me gain in a whole lot of other areas and spread the crawler around to more places that were important for us.
I also spent some time doing some restructuring of internal linking from some of our more authoritative pages that I believe also assisted with this, but in my case rel="canonical" worked out pretty nicely. Just took some time and patience.
-
I should actually add that Google doesn't condone using rel-canonical back to the main search page or page 1. They allow canonical to a "View All" or a complex mix of rel-canonical and rel=prev/next. If you use rel-canonical on too many non-identical pages, they could ignore it (although I don't often find that to be true).
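For reference, on something like page 2 of a paginated series the two patterns Google does support look roughly like this (the URLs are placeholders):
Canonical to a View All page:
<link rel="canonical" href="http://www.example.com/category/view-all" />
Or rel=prev/next pointing at the neighboring pages in the series:
<link rel="prev" href="http://www.example.com/category?page=1" />
<link rel="next" href="http://www.example.com/category?page=3" />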
Vanessa Fox just did a write-up on Google's approach:
http://searchengineland.com/implementing-pagination-attributes-correctly-for-google-114970
I have to be honest, though - I'm not a fan of Google's approach. It's incredibly complicated, easy to screw up, doesn't seem to work in all cases, and doesn't work on Bing. This is a very complex issue and really depends on the site in question. Adam Audette did a good write-up:
http://searchengineland.com/five-step-strategy-for-solving-seo-pagination-problems-95494
-
Thanks Dr Pete,
Yes, I've used meta noindex on pages that are simply not useful in any way, shape, or form for Google to find.
I would be hesitant to noindex the filters in question, but it sounds promising that you are backing the canonical approach and that there is a lag in the reporting. Our PA and DA are extremely high and we get crawled daily, so I'm curious to try your measurement tip (inurl), which is a good one!
Many thanks.
Simon
-
I'm working on a couple of cases now, and it is extremely tricky. Google often doesn't re-crawl/re-cache deeper pages for weeks or months, so getting the canonical to work can be a long process. Still, it is generally a very effective tag, and in some cases it does take hold quickly.
I agree with others that Robots.txt isn't a good bet. It also tends to work badly with pages that are already indexed. It's good for keeping things out of the index (especially whole folders, for example), but once 1000s of pages are indexed, Robots.txt often won't clean them up.
Another option is META NOINDEX, but it depends on the nature of the facets.
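For clarity, that's a tag in the head of each facet page, and you would normally leave the links followable so equity still passes. A sketch of what it would look like:
<meta name="robots" content="noindex, follow" />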
A couple of things to check:
(1) Using site: with inurl:, monitor the faceted navigation pages in the Google index (see the example query after this list). Are the numbers gradually dropping? That's what you want to see - the GWT error may not update very often. Keep in mind that these numbers can be unreliable, so monitor them daily over a few weeks.
(2) Are there other URLs you're missing? On a large e-commerce site, it's entirely possible this wasn't the only problem.
(3) Did you cut the crawl paths? A common problem is that people canonical, 301-redirect, or NOINDEX, but then nofollow or otherwise cut links to those duplicates. Sounds like a good idea, except that the canonical tag has to be crawled to work. I see this a lot, actually.
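As a rough example of the query I mean in (1), assuming the facet pages share a recognizable URL pattern (the pattern below is hypothetical):
site:www.example.com inurl:categoryname
Note the reported result count each time; as mentioned, it bounces around, so watch the trend over weeks rather than any single day.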
-
Did you find a solution for this? I have exactly the same issue and have implemented the rel canonical in exactly the same way.
The issue you are trying to address is improving crawl bandwidth/equity by not letting Google crawl these faceted pages.
I am thinking of Ajax-loading these filtered results into the parent category page and/or adding nofollow to the links. But the pages have already been indexed, so I wonder if nofollow will have any effect.
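To be specific, the nofollow I have in mind is the link-level attribute on the facet links themselves, e.g. (hypothetical URL):
<a href="http://www.example.com/categoryname/a" rel="nofollow">A</a>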
Have you had any progress? Any further ideas?
-
Because rel=canonical does nothing more than pass credit to the chosen page and avoid duplicate content. It does not tell the search engine to stop indexing or act as a redirect, and it has no effect on whether the links are found and crawled.
-
thx
-
OK, sorry, I was thinking of too many pages, not links.
Using noindex will not stop PR flowing; the search engine will still follow the links.
-
Yeah, that is why I am not really excited about using robots.txt or even noindex in this instance. They are not session IDs, but more like:
www.example.com/categoryname/a
www.example.com/categoryname/b
www.example.com/categoryname/c
etc.
which would show all products that start with those letters. There are a lot of other filters too, such as color, size, etc., but the bottom line is that I point all of those back to just www.example.com/categoryname using rel=canonical, and I am not understanding why it isn't working properly.
-
There is a large number of URLs like this because of the way the faceted navigation works, and I have considered noindex, but I'm somewhat concerned because we do get links to some of these URLs and would like to maintain some of that link juice. The warning shows up in Google Webmaster Tools when Googlebot finds a large number of URLs. The rest of the message reads like this:
"Googlebot encountered extremely large numbers of links on your site. This may indicate a problem with your site's URL structure. Googlebot may unnecessarily be crawling a large number of distinct URLs that point to identical or similar content, or crawling parts of your site that are not intended to be crawled by Googlebot. As a result Googlebot may consume much more bandwidth than necessary, or may be unable to completely index all of the content on your site."
rel=canonical should fix this, but apparently it is not.
-
Check how these pages are being generated and linked.
Robots.txt is not an ideal solution: if Google finds links to these pages elsewhere, the URLs can still end up in the index even though they are blocked from crawling.
Print-style pages normally won't have link value, so you can simply noindex them.
If there are pages with session IDs or campaign codes, use canonical on the ones that have link value; otherwise noindex is a good choice.
-
The rel=canonical will stop you getting duplicate content flags, but there is still a large number of pages; it's not going to hide them.
I have never seen this warning. How many pages are we talking about? Either the number is very, very high, or they are confusing the crawler. You may need to noindex them.