Will rel=canonical work here?
-
Dear SEOMOZ groupies,
I manage SEO for several real estate sites which we have just taken over. After running a crawl on each, I am finding thousands of errors relating to just a few issues, and I wanted to ask either for suggested fixes or whether rel=canonical will resolve them in bulk. Here are the problems. Every property has the following, so the more adverts, the more errors:
-
Each page has a contact-agent URL. All of these create duplicate titles and content.
-
Each advert has the same issue with a printer-friendly version.
-
Each advert has the same issue with a favourites page.
There are several others, but I think you get the idea.
Help! Suggestions are very welcome.
Steve
-
-
Hi, thanks also; very much appreciated. Number 3 is just another URL that lets a user add the page to a shortlist. Thanks again to you both.
-
I agree with Tom that rel=canonical should work here, but it does depend a bit on the scope and structure of the site. I might actually META NOINDEX the printable versions, as these are usually dead-ends and have no value to search visitors. I'm not entirely sure I understand what (3) is or why it's a separate page.
In general, though, you should get these under control, as it sounds like every advert basically has 4 different URLs. This could dilute your ranking ability and even cause Panda problems.
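For illustration, a minimal sketch of the META NOINDEX approach for the printer-friendly pages (the URL in the comment is hypothetical): the tag sits in the head of the printable version, telling Google not to index that page while still following its links.

```html
<!-- In the <head> of a printer-friendly page,
     e.g. http://example.com/property/123/print (hypothetical URL) -->
<meta name="robots" content="noindex, follow">
```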
-
Hey Tom,
Thanks. One question: would it be best to pick one of the URLs for each problem point as the target, or use the canonical on all of them?
Thanks
Steve
-
Hi Steve
I think implementing a canonical tag here is your best course of action, as you have rightly pointed out.
If you have different URL versions of the same page, like you might with the printer version of the pages, implementing the tag will stop Google from indexing any variants of the URL. Only one URL will be indexed and the dynamic URLs will eventually be deindexed. That should solve that particular duplicate content problem.
If you have two different pages that both need to exist but have similar content, add the same canonical tag to both, with the URL in the tag pointing to the original. In other words, the second, duplicate page's canonical should point to the original page. Again, that should prevent the duplicate from being indexed.
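To illustrate the above (all URLs here are hypothetical), both the original advert and its printer-friendly duplicate would carry the same canonical tag, pointing at the original:

```html
<!-- In the <head> of the original page, http://example.com/property/123 -->
<link rel="canonical" href="http://example.com/property/123">

<!-- In the <head> of the duplicate, e.g. http://example.com/property/123?print=1 -->
<link rel="canonical" href="http://example.com/property/123">
```

Google then consolidates the duplicates' signals into the one canonical URL.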
More information on canonicalisation can be found in the SEOmoz canonical guide.
Hope this helps!
Related Questions
-
Canonical Tags Before HTTPS Migration
Hi guys, I previously asked a question that was helpfully answered on this forum, but I have just one last question to ask. I'm migrating a site tomorrow from http to https. My one question is that it was mentioned that I may need to "add canonical tags to the http pages, pointing to their https equivalent prior to putting the server level redirect in place. This is to ensure that you won't be causing yourself issues if the redirect fails for any reason." This is an e-commerce site with a number of links; is there a quick way of doing this? Many Thanks
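One common approach (a sketch only; the domain and URL are hypothetical) is to generate the canonical in the site-wide page template, so every http page automatically points at its https equivalent:

```html
<!-- In the shared <head> template, rendered on
     http://www.example-shop.com/product/widget (hypothetical URL) -->
<link rel="canonical" href="https://www.example-shop.com/product/widget">
```

Because e-commerce pages usually share one head template, changing the protocol in that single template line covers the whole catalogue at once.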
Technical SEO | | ruislip180 -
Does using data-href="" work more effectively than href="" rel="nofollow"?
I've been looking at some bigger enterprise sites and noticed some of them use HTML like this: <a data-href="http://www.otherdomain.com/" class="nofollow" rel="nofollow" target="_blank"></a> instead of a regular href="". Does using data-href and some JavaScript help with shaping internal links, rather than just using a strict nofollow?
Technical SEO | | JDatSB0 -
Duplication, pagination and the canonical
Hi all, and thank you in advance for your assistance. We have an issue of paginated pages being seen as duplicates by pro.moz crawlers. The paginated pages do have duplicated content, but are not duplicates of each other. Rather, they pull through a summary of the product descriptions from other landing pages on the site. I was planning to use rel=canonical to deal with them; however, I am concerned, as the paginated pages are not identical to each other, but do feature their own set of duplicate content! We have a similar issue with pages that are not paginated but feature tabs that alter the URL parameters, like so: ?st=BlueWidgets ?st=RedSocks ?st=Offers These are being seen as duplicates of the main URL, and again all feature duplicate content pulled from elsewhere in the site, but are not duplicates of each other. Would a canonical tag be suitable here? Many Thanks
Technical SEO | | .egg0 -
Canonical Tag - Magento - Help
Hello, I was hoping to get some help or tips on how best to control the canonical tag on a Magento-based website. When you go into the Magento admin and enable the option to use the canonical tag on pages, all it does is add a canonical tag pointing to the exact same page, just with http:// in the URL. My goal is to use the canonical tag on specific pages and point it to other pages, not just the same page with http:// For example, right now for page: example.com/question/baseball the canonical tag is pointing to http://example.com/question/baseball What I want is to be able to take: example.com/question/baseball and have the canonical tag point to example.com/question/baseballbats Is this possible? Does what I'm saying make sense? Please let me know what you all think. Thanks!
Technical SEO | | Prime850 -
Rel=canonical for similar (not exact) content?
Hi all, We have a software product and SEOMOZ tools are currently reporting duplicate content issues in the support section of the website. This is because we keep several versions of our documentation, covering the current version and the previous 3-4 versions as well. There is a fair amount of overlap in the documentation. When a new version comes out, we simply copy the documentation over, edit it as necessary to address changes, and create new pages for the new functionality. This means there is probably an 80% or so overlap from one version to the next. We were previously blocking Google (using robots.txt) from accessing previous versions of the software documentation, but this is obviously not ideal from an SEO perspective. We're in the process of linking up all the old versions of the documentation to the newest version so we can use rel=canonical to point to the current version. However, the content isn't all exact duplicates. Will we be penalized by Google because we're using rel=canonical on pages that aren't actually exact duplicates? Thanks, Darren.
Technical SEO | | dgibbons0 -
De-indexing millions of pages - would this work?
Hi all, We run an e-commerce site with a catalogue of around 5 million products. Unfortunately, we have let Googlebot crawl and index tens of millions of search URLs, the majority of which are very thin on content or duplicates of other URLs. In short: we are in deep. Our bloated Google index is hampering our real content's ability to rank; Googlebot does not bother crawling our real content (product pages specifically) and hammers the life out of our servers. Since having Googlebot crawl and de-index tens of millions of old URLs would probably take years (?), my plan is this: 301 redirect all old SERP URLs to a new SERP URL. If the new URL should not be indexed, add a meta robots noindex tag on the new URL. When it is evident that Google has indexed most "high quality" new URLs, disallow crawling of old SERP URLs in robots.txt. Then directory-style remove all old SERP URLs in the GWT URL Removal Tool. This would be an example of an old URL: www.site.com/cgi-bin/weirdapplicationname.cgi?word=bmw&what=1.2&how=2 This would be an example of a new URL: www.site.com/search?q=bmw&category=cars&color=blue I have two specific questions: Would Google both de-index the old URL and not index the new URL after 301 redirecting the old URL to the new URL (which is noindexed), as described in point 2 above? What risks are associated with removing tens of millions of URLs directory-style in the GWT URL Removal Tool? I have done this before, but then I removed "only" some 50,000 useless "add to cart" URLs. Google says themselves that you should not remove duplicate/thin content this way and that using the tool this way "may cause problems for your site". And yes, these tens of millions of SERP URLs are the result of a faceted navigation/search function let loose all too long. And no, we cannot wait for Googlebot to crawl all these millions of URLs in order to discover the 301s. By then we would be out of business. Best regards,
Technical SEO | | TalkInThePark0 -
Rel="canonical" and rewrite
Hi, I'm going to describe a scenario on one of my sites; I was wondering if someone could tell me what the correct use of rel="canonical" is here. Suppose I have a rewrite rule like this: RewriteRule ^Online-Games /main/index.php So, in the index file, do I set the rel="canonical" to Online-Games or to /main/index.php? Thanks.
Technical SEO | | webtarget0 -
Syndication: Link back vs. Rel Canonical
For content syndication, let's say I have the choice of (1) a link back or (2) a cross-domain rel canonical to the original page. Which one would you choose, and why? (I'm trying to pick the best option to save dev time!) I'm also curious to know what the difference in SERPs would be between the link-back and canonical solutions, both for the original publisher and for syndication partners. (I would prefer the syndication partners not disappear entirely from SERPs; I just want to make sure I'm first!) A side question: what's the difference in real life between the Google source attribution tag and the cross-domain rel canonical tag? Thanks! PS: Don't know if it helps, but note that we can syndicate 1 article to multiple syndication partners (it wouldn't be impossible to see 1 article syndicated to 50 partners).
Technical SEO | | raywatson0