Non-Canonical Pages still Indexed. Is this normal?
-
I have a website that contains some products, and the old URL structure was definitely not optimal for SEO purposes. So I created new SEO-friendly URLs on the site and decided to use canonical tags to transfer all the weight of the old URLs to the new URLs and to ensure that the old ones would not show up in the SERPs. The problem is that this has not quite worked. I implemented the canonical tags about a month ago, but I am still seeing the old URLs indexed in Google, and I notice that the cache date on these pages is only about a week old.
This leads me to believe that the spiders have visited the pages and seen the new canonical tags but are not following them. Is this normal behavior, and if so, can somebody explain why?
I know I could have just 301-redirected the old URLs to the new ones, but the process I would need to go through to get that done is much more of a battle than simply adding the canonical tags, and I felt the canonical tags would do the job. Needless to say, the client is not too happy right now and insists that I should have just used the 301s. In this case the client appears to be correct, but I do not quite understand why my canonical tags did not work.
Examples below:
Old Pages:
www.awebsite.com/something/something/productid.3254235
New Pages:
www.awebsite.com/something/something/keyword-rich-product-name
Canonical tag on both pages:
rel="canonical" href="http://www.awebsite.com/something/something/keyword-rich-product-name"/> Thanks guys for the help on this.
-
It can take a while. I disagree very slightly with Alan and EGOL on one point: while 301s are traditionally more appropriate here, I often find that canonicals are pretty strong (and more than a hint). Both suffer from the same problem, though: the signal has to be crawled and processed, and that doesn't always happen right away. I haven't seen any reports of it taking two or three crawls, but I have definitely seen a page get re-cached without the indexation signals being honored.
Are these true duplicates, or did something change in the interim? If the duplicates don't look like true duplicates, or you put thousands of them out there all at once, Google could choose to ignore the canonicals.
If these really seem stuck, though, switching to 301s is harmless, and for a permanent URL change it is probably the better way to go. I wouldn't expect that to kick in instantly either, though.
-
Yes... I agree with Alan. Canonical is a hint.
We put rel=canonical on about 250 pages in early February. As of today, about half of those pages are still in the SERPs. The numbers are falling, but this is really, really slow to take effect.
If you have done everything correctly, it will probably work, but it requires patience.
-
Alan, I appreciate the help. I will go with this, see what happens, and try to find those videos. Thanks.
-
Matt Cutts has said it a few times in videos; I could not tell you which ones without doing a fair bit of searching.
-
Yes, they should, but 301s and canonicals leak link juice, so you want your links to go directly to the correct page where you can.
See halfway down this page and you will see just how easy it is to do all of this with a few clicks:
http://www.seomoz.org/ugc/microsoft-technologies-and-seo-web-development
For you it may not be quite as easy, since you are converting from an ID to a product name, but if you look into the URL Rewrite module a bit further you will see it is possible to do this once for all pages.
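To make that concrete, below is a minimal web.config sketch of the kind of rewrite-map rule the URL Rewrite module supports. This is an illustration, not your actual config: the rule and map names are invented, and the single map entry just reuses the example URLs from the question. A real setup would hold all 800+ old-to-new mappings, or generate them from the product database.

  <system.webServer>
    <rewrite>
      <rewriteMaps>
        <!-- Hypothetical map: one entry per old product URL -->
        <rewriteMap name="ProductRedirects">
          <add key="/something/something/productid.3254235"
               value="/something/something/keyword-rich-product-name" />
        </rewriteMap>
      </rewriteMaps>
      <rules>
        <!-- 301 any request whose path appears in the map above -->
        <rule name="RedirectOldProductUrls" stopProcessing="true">
          <match url=".*" />
          <conditions>
            <add input="{ProductRedirects:{REQUEST_URI}}" pattern="(.+)" />
          </conditions>
          <action type="Redirect" url="{C:1}" redirectType="Permanent" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>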
-
Also, do you know of any documentation stating that it takes a few passes for a canonical tag to be honored, and for 301s as well? That would really help me explain my initial reasoning for using the canonical tag.
-
I get the part about the 301s, and I believe we have IIS7, but between departments it is just not as simple a change, especially given the number of products I would have to do this for: 800+.
Regarding the links to the old URLs, it was my belief that with the canonical tag that weight should transfer over to the new URL as well, or was I mistaken about that?
-
You seem to have done everything OK, but from my understanding Google does not honor 301s or canonicals on the first crawl; they wait a few crawls to make sure it's not a mistake.
What sort of server are you using? If you are on Windows with IIS7, it is very easy to implement the URL rewrites and corresponding 301s.
I would 301: a canonical is a hint, whereas a 301 is a directive. Also, if people still go to your old pages, they may link to the old page rather than the new URL.
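For anyone weighing the two signals, this is roughly what each looks like on the wire (the URL is just the example from the question). The canonical hint sits in the <head> of the old page:

  <link rel="canonical" href="http://www.awebsite.com/something/something/keyword-rich-product-name" />

The 301 directive is the HTTP response the server sends for a request to the old URL:

  HTTP/1.1 301 Moved Permanently
  Location: http://www.awebsite.com/something/something/keyword-rich-product-name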
Related Questions
-
Japanese URL-structured sitemap (pages) not being indexed by Bing Webmaster Tools
Hello everyone, I am facing an issue with the sitemap submission feature in Bing Webmaster Tools for a Japanese-language subdirectory domain project. Just to outline the key points: The website is based on a subdirectory URL (example.com/ja/). The Japanese URLs (when pages are published in WordPress) are not being encoded; they are entered in pure Kanji. Google Webmaster Tools, for instance, has no issues reading and indexing the pages' URLs in its sitemap submission area (all pages are being indexed). When it comes to Bing Webmaster Tools it's a different story, though. Basically, after the sitemap has been submitted (example.com/ja/sitemap.xml), it reports an error that it failed to download this part of the sitemap: "page-sitemap.xml" (the sitemap featuring all the site's pages). That means no URLs have been submitted to Bing either. My apprehension is that Bing Webmaster Tools does not understand the Japanese URLs (or the Kanji, for that matter). Therefore, I generally wonder what the correct way is to go about this. When viewing the sitemap (example.com/ja/page-sitemap.xml) in a web browser, though, the Japanese URLs' characters are already displayed as encoded. I am not sure if submitting the Kanji-style URLs separately is a solution; in Bing Webmaster Tools this can only be done at the root domain level (example.com). Surely there must be a way to make Bing's sitemap submission understand Japanese-style sitemaps? Many thanks everyone for any advice!
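(For reference, the sitemaps.org protocol expects URLs in a sitemap to be percent-encoded. A minimal sketch, using a hypothetical Kanji path rather than the poster's real URLs, with the path percent-encoded as UTF-8:)

  <?xml version="1.0" encoding="UTF-8"?>
  <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
    <url>
      <!-- Hypothetical page example.com/ja/製品 ("product"), percent-encoded -->
      <loc>https://example.com/ja/%E8%A3%BD%E5%93%81</loc>
    </url>
  </urlset>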
Technical SEO | Hermski
-
Cached pages still showing on Google
We noticed our QA site showing up on Google, so we blocked it in our robots.txt file. We still had an issue with Google crawling it, so we blocked the site from the public. Now Google is still showing a cached version from the first week of March. Do we just have to wait until they try to re-crawl the site to clear this out, or is there a better way to get these pages removed from the results?
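(For context, the robots.txt block being described would look like the generic sketch below. Note that a robots.txt disallow only stops crawling; it does not by itself remove pages that are already indexed or cached:)

  User-agent: *
  Disallow: /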
Technical SEO | aspenchicago
-
Duplicate pages in Google index despite canonical tag and URL Parameter in GWMT
Good morning Moz... This is a weird one. It seems to be a "bug" with Google, honest... We migrated our site www.three-clearance.co.uk to a Drupal platform over the new year. The old site used URL-based tracking for heat-map purposes, so for instance www.three-clearance.co.uk/apple-phones.html could be reached via www.three-clearance.co.uk/apple-phones.html?ref=menu or www.three-clearance.co.uk/apple-phones.html?ref=sidebar and so on. GWMT was told about the ref parameter, and the canonical meta tag was used to indicate our preference. As expected, we encountered no duplicate-content issues and everything was good.
This is the chain of events:
1. The site migrated to the new platform following best practice, as far as I can attest to. The only known issue was that the verification for both Google Analytics (meta tag) and GWMT (HTML file) didn't transfer as expected, so between the relaunch on 22nd Dec and the fix on 2nd Jan we have no GA data, and presumably there was a period where GWMT became unverified.
2. The URL structure and URIs were maintained 100% (which may be a problem, now).
3. Yesterday I discovered 200-ish 'duplicate meta titles' and 'duplicate meta descriptions' in GWMT. Uh oh, thought I. Expanding the report out, the duplicates are in fact ?ref= versions of the same root URL. Double uh oh, thought I.
4. Run, not walk, to Google and do some fu: http://is.gd/yJ3U24 (9 versions of the same page in the index, the only variation being the ?ref= URI). I checked Bing and it has indexed each root URL once, as it should.
Situation now: the site no longer uses the ?ref= parameter, although of course some external backlinks that use it still exist. This was intentional and happened when we migrated. I 'reset' the URL parameter in GWMT yesterday, given that there's no "delete" option. The "URLs monitored" count went from 900 to 0, but today it is at over 1,000 (another wtf moment). I also resubmitted the XML sitemap and fetched 5 'hub' pages as Google, including the homepage and the HTML site-map page.
The ?ref= URLs in the index have the disadvantage of actually working, given that we transferred the URL structure and the webserver just ignores the nonsense arguments and serves the page. So I assume Google assumes the pages still exist and won't drop them from the index, but will instead apply a dupe-content penalty. Or maybe call us a spam farm. Who knows.
Options that occurred to me (other than maybe making our canonical tags bold or locating a Google bug submission form 😄) include:
A) robots.txt-ing ?ref= URLs, but to me this says "you can't see these pages", not "these pages don't exist", so it isn't correct.
B) Hand-removing the URLs from the index through a page-removal request per indexed URL.
C) Applying a 301 to each indexed URL (hello, Bing dirty-sitemap penalty).
D) Posting on SEOMoz because I genuinely can't understand this.
Even if the gap in verification caused GWMT to forget that we had set ?ref= as a URL parameter, the parameter was no longer in use, because the verification only went missing when we relaunched the site without this tracking. Google is seemingly 100% ignoring our canonical tags as well as the GWMT URL setting. I have no idea why and can't think of the best way to correct the situation. Do you? 🙂
Edited to add: As of this morning the "edit/reset" buttons have disappeared from the GWMT URL Parameters page, along with the option to add a new one. There are no messages explaining why, and of course the Google help page doesn't mention disappearing buttons (it doesn't even explain what 'reset' does, or why there's no 'remove' option).
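(An aside on option C: rather than 301ing each indexed URL one by one, a single pattern rule can strip the parameter. A minimal sketch, assuming Apache with mod_rewrite sits in front of the site; note it drops the entire query string, which only works here because ?ref= was the sole parameter in use:)

  RewriteEngine On
  # Match any request whose query string contains a ref= parameter
  RewriteCond %{QUERY_STRING} (^|&)ref= [NC]
  # 301 to the same path; the trailing "?" clears the query string
  RewriteRule ^(.*)$ /$1? [R=301,L]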
Technical SEO | Tinhat
-
Translating Page Titles & Page Descriptions
I am working on a site that will be published in the original English, with localized versions in French, Spanish, Japanese and Chinese. All the versions will use the English information architecture. As part of the process, we will be translating the page titles and page descriptions. Translation quality will be outstanding: the client is a translation company, and each version will get at least four pairs of eyes, including expert translators, editors, QA experts and proofreaders. My question is what special SEO instructions should be issued to the translators regarding the page titles and page descriptions. (We have to presume the translators know nothing about SEO.) I was thinking of:
stick to the character counts for titles and descriptions
make sure the title and description work together
avoid over-repetition of keywords in page titles (over-optimization peril)
think of the descriptions as marketing copy
try to repeat some title phrases in the description (to get the bolding and promote click-through)
That's the micro stuff. The macro stuff: we haven't done extensive keyword research for the other languages. Most of the clients are in the US; the other language versions are more a demo of translation ability than a play for clients elsewhere. Are we missing something big here?
Technical SEO | DanielFreedman
-
How do I 301 redirect a number of pages to one page
I want to redirect all pages in /folder_A and /folder_B to /folder_A/index.php. Can I just write one or two lines of code in .htaccess to do that?
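(For reference, a minimal .htaccess sketch of one way to do this with mod_rewrite, assuming Apache; the extra condition excludes the destination page itself so the redirect doesn't loop:)

  RewriteEngine On
  # Don't redirect the destination page itself (avoids a redirect loop)
  RewriteCond %{REQUEST_URI} !^/folder_A/index\.php$
  # 301 everything under /folder_A and /folder_B to the index page
  RewriteRule ^folder_(A|B)(/.*)?$ /folder_A/index.php [R=301,L]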
Technical SEO | Heydarian
-
How do I redirect non-www pages to www on a Windows server?
As the .htaccess file cannot be worked on, I added this PHP code, which 301-redirects if the URL does not contain www, to all the pages (small website, 10 pages): header( "HTTP/1.1 301 Moved Permanently" ); header( "Location: $location" ); I want to know if this is OK for SEO. Has anyone done this on a Windows server? Or if you have any better methods, it would be great if you could share them. Please help. Thanks.
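(For context, a self-contained sketch of the pattern being described; the host check and the $location construction are assumptions filled in around the two header() calls quoted above, and the snippet must run before any output is sent:)

  <?php
  // 301-redirect non-www requests to the www host
  $host = $_SERVER['HTTP_HOST'];
  if (strpos($host, 'www.') !== 0) {
      $location = 'http://www.' . $host . $_SERVER['REQUEST_URI'];
      header('HTTP/1.1 301 Moved Permanently');
      header('Location: ' . $location);
      exit; // stop rendering the page once redirected
  }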
Technical SEO | ArjunRajkumar
-
Why is Googlebot indexing one page, not the other?
Why does Googlebot index one page and not another under the same conditions, in an HTML sitemap, for example? We have 6 new pages with unique content. Googlebot immediately indexed only 2 pages, and then after some time the remaining 4. On what parameters does the crawler decide whether or not to scan a given page?
Technical SEO | ATCnik
-
Importance of an optimized home page (index)
I'm helping a client redesign their website, and they want a home page that's primarily graphics and/or Flash (or jQuery). If they are able to optimize all of their key sub-pages, what is the harm in terms of SEO?
Technical SEO | EricVallee34