Duplicate Titles caused by multiple variations of same URL
-
Hi. Can you please advise how I can overcome this issue? The Moz.com crawler is indicating I have hundreds of duplicate title tag errors. However, this is caused by many URLs having been indexed multiple times in Google. For example:
www.abc.com
www.abc.com/?b=123
What can I do to stop this issue being reported as duplicate titles, as well as duplicate content?
I was thinking maybe I could use robots.txt to block various query string parameters. I'm open to ideas and examples.
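For what it's worth, blocking parameterized URLs in robots.txt might look something like the sketch below. This assumes the parameter is named b as in the example above, and note that blocked URLs can still appear in Google's index if they're linked from elsewhere, which is why many people prefer rel="canonical" for this:

```text
# robots.txt — a sketch, assuming the offending parameter is "b"
User-agent: *
Disallow: /*?b=
```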
-
Depending on how you implement the canonicals, you should see a decrease in your duplicate errors, which will be replaced by canonical notices. Ideally, there won't be anything to ignore.
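To illustrate what implementing a canonical looks like: it's a link element placed in the head of each URL variant, pointing at the preferred URL. A minimal sketch using the example domain from the question:

```html
<!-- Placed in the <head> of both www.abc.com and www.abc.com/?b=123 -->
<link rel="canonical" href="http://www.abc.com/" />
```

With this in place, the crawler should report the parameterized variants as canonical notices rather than duplicate errors.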
-
Thank you for your response.
Does this mean that if I put a rel="canonical" on each main page I have, i.e.
abc.com/page1 (put a rel="canonical" on this page)
abc.com/page2 (put a rel="canonical" on this page)
abc.com/index.html (put a rel="canonical" on this page)
etc.,
I can then ignore the duplicate content messages reported for those URLs?
-
*Edit: Miki beat me to it, but here's a little more explanation.
The first thing to note here is that Google's indexing doesn't actually have any effect on your Moz crawl report. All of the data you see there comes from our very own rogerbot, which crawls similarly to googlebot.
Though Google's crawler has a wide variety of ways to locate and index content, rogerbot can only crawl links on your site. If your crawl report is picking up each of these URLs, then there must be links pointing to those URLs somewhere on your site. The danger here is that Google and the other search engines will pick up those variants and not be able to determine which of them is the "real" one. That could lead to a) Google listing a URL you'd rather it didn't, or b) Google not understanding how to list your site at all.
A few of these have pretty simple fixes: index.html should be 301 redirected to your root domain, for example. rel="canonical" is very applicable here, too. Here are a couple of resources you may want to check out:
http://moz.com/learn/seo/canonicalization - Best practices article on canonicalization
http://moz.com/learn/seo/redirection - Best practices article on redirects
I hope that helps!
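To illustrate the index.html redirect mentioned above, here's a minimal sketch for an Apache server using an .htaccess file. This assumes mod_rewrite is enabled and uses the example domain from the question; the exact rules will depend on your server setup:

```apache
# .htaccess — 301 redirect requests for /index.html to the root URL
RewriteEngine On
# Match only direct requests for index.html, to avoid a redirect loop
RewriteCond %{THE_REQUEST} ^[A-Z]+\ /index\.html
RewriteRule ^index\.html$ http://www.abc.com/ [R=301,L]
```

On nginx or IIS the equivalent would be a server-level rewrite rule rather than an .htaccess file.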
Matt Roney
Moz Customer Mentor
-
I would redirect all variations to www.abc.com, as well as rel="canonical" back to www.abc.com. This should solve your issues.