How to avoid duplicate content on internal search results pages?
-
Hi,
According to Webmaster Tools and Siteliner, our website has an above-average amount of duplicate content.
Most of the affected pages are search results pages where the search finds only one result. The only differences in that case are the TDK (title, description, keywords), the H1, and the breadcrumbs; the rest of the layout is fairly static and similar.
Here is an example of two pages flagged as "duplicate content":
https://soundbetter.com/search/Globo
https://soundbetter.com/search/Volvo
Edit: These are legitimate searches that happen to return the same single result. We want users to be able to find audio engineers by 'credits' (musicians they've worked with), i.e. tags, and we want to rank for people searching for 'engineers who worked with' a given artist. Searching for two different artists (credit tags) returns this one service provider under different URLs (the tag being the search parameter), hence the duplicate content.
I guess every e-commerce/directory website faces this kind of issue.
What is the best practice to avoid duplicate content on search results page?
-
It really depends on your developers and your budget. I do both development and SEO, so this is how I would handle it. For searches that return just one result, I would put a check in place to see how many results come back: if only one result is returned, I would set the canonical URL in the head of the search page to the URL of the actual page being returned as the result.
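A minimal sketch of that single-result check (Python, purely illustrative; the function name and URLs are made up for the example):

```python
def canonical_link_tag(search_url: str, result_urls: list[str]) -> str:
    """Return the <link rel="canonical"> tag to emit in the <head>
    of a search results page.

    If the search returns exactly one result, point the canonical at
    that result's own page, so all single-result search URLs
    consolidate onto it; otherwise the search page canonicalizes to
    itself.
    """
    if len(result_urls) == 1:
        target = result_urls[0]  # the single matching page
    else:
        target = search_url
    return '<link rel="canonical" href="%s">' % target
```

The template would then render the returned tag into the page head before the rest of the results markup.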
If more than one result is returned, you can handle that in many different ways. One way would be to create a pseudo category out of the results page. I would use this sparingly and only for popular search terms, but you could have an extension written for your site that gives you on-page control of the text, the URL, the meta areas, and things like that. I wrote a module for a platform I use a couple of years ago that does something like it: http://blog.dh42.com/search-pages-landing-pages/ You can get the gist of the idea by reading about it there; it is one good way to handle a limited number of these pages and get them to rank better. I would not do it with every search result, though, as you might get a penalty.
-
Sorry, I misread it. I think either the robots.txt approach or the on-page approach is applicable. I think the on-page approach would make the pages fall out of the index faster, though.
-
"I wouldn't do a no follow however"
I agree. My solution was to use NOINDEX, FOLLOW.
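For reference, the NOINDEX, FOLLOW combination is written as a standard meta robots tag in the head of the search results template:

```html
<!-- Keep the search page itself out of the index,
     but let crawlers follow the links on it -->
<meta name="robots" content="noindex, follow">
```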
-
Thanks, Prestashop, for your answer.
Is there another solution besides no-indexing all our search results?
Like many sites (Yelp, TripAdvisor, and others), our search results help drive traffic: they aggregate the answers to questions that people ask in searches, such as 'recording studios in london'.
https://soundbetter.com/search/Recording Studio - Engineer/London, UK
-
I would add it to the robots.txt file. Depending on how your CMS is set up, you can also grab the search string from the current URL and use its presence to fire a noindex. I wouldn't do a nofollow, however; there is nothing bad about following the links, it is just the indexing of the search pages you want to prevent.
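The robots.txt rule for this would look like the following (assuming the search URLs live under /search/, as in the examples earlier in the thread):

```
User-agent: *
Disallow: /search/
```

One caveat: robots.txt only blocks crawling, so URLs that are already indexed can linger in the index, while a meta noindex requires the page to remain crawlable so the directive can actually be seen.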
-
Hey Prestashop
To add a little more clarity - would you:
a.) add /search/ to robots.txt, like so:
Disallow: /search/
or
b.) add noindex/nofollow at page level, like so:
<meta name="robots" content="noindex, nofollow">
in the search results page template.
I would opt for option b, but I would be interested to hear your thoughts too, and why.
Thanks,
-
No-index your search results. Most platforms do this by default, which eliminates that error.
Related Questions
-
Safety Data Sheet PDFs are Showing Higher in Search Results than Product Pages
I have a client who just launched an updated website that has WooCommerce added to it. The website also has a page of Safety Data Sheets that are PDFs that contain information about some of the products. When we do a Google search for many of the products the Safety Data Sheets show up first in the search results instead of the product pages. Has anyone had this happen and know how to solve the issue?
-
Should search pages be indexed?
Hey guys, I've always believed that search pages should be no-indexed but now I'm wondering if there is an argument to index them? Appreciate any thoughts!
-
Do multiple empty search result pages count as duplicate content?
I am writing an online application that, among other things, allows users to search our database. Pretty simple stuff. My question is this: when the site is starting out, there will probably be a lot of searches that bring back empty pages, since we will still be building up the database. Each page dynamically generates its title tag, description tag, and H1, H2, and H3 tags, so that part will be unique, but otherwise they will be almost identical empty results pages until then. Would Google count all these empty result pages as duplicate content? Anybody have any experience with this? Thanks in advance.
-
Duplicate content in product listing
We have a "duplicate content" warning in our Moz report, mostly revolving around our product listing pages (eCommerce site), where various filters return 0 results (and hence show the same content on the page). Do you think those need to be addressed, and if so, how would you prevent product listing filters from appearing as duplicate content pages? Should we use rel=canonical, or actually change the content on the page?
-
Https Duplicate Content
My previous host was using shared SSL, so my site was also reachable over https, which I hadn't noticed previously. Now I have moved to a new server where I don't have any SSL, and my websites do not work on the https version. The problem is that I found Google has indexed one of my blogs, http://www.codefear.com, with the https version too. My blog traffic is continuously dropping, I think due to this duplicate content: there are two results, one for the http version and another for the https version. I searched the internet and found 3 possible solutions:
1. No-index the https version
2. Use rel=canonical
3. Redirect the https versions with a 301 redirect
Now I don't know which solution is best for me, as the https version is no longer working, and I also don't know how to implement any of them. My blog runs on WordPress. Please help me overcome this problem. After solving this duplicate issue, do I need to send a reconsideration request to Google? Thank you
-
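A sketch of the 301-redirect option from the question above, as Apache rewrite rules (this assumes mod_rewrite is enabled and that the server can still answer https requests; WordPress installs typically already have an .htaccess file this could go in):

```apache
RewriteEngine On
# If the request came in over HTTPS, 301-redirect to the http:// version
RewriteCond %{HTTPS} on
RewriteRule ^(.*)$ http://%{HTTP_HOST}/$1 [R=301,L]
```

Note that if the certificate is gone entirely, the server can never serve this redirect over https, so rel=canonical on the http pages is the safer fallback in that case.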
"noindex" internal search result urls
Hi, Would applying "noindex" to any page (say, internal search pages), or blocking them via robots.txt, skew the internal site search stats in Google Analytics? Thanks,
-
Duplicate Content Issue
Hello, We have many pages in our crawler report that are showing duplicate content. However, the content is not duplicated on the pages; it is somewhat close, but different. I am not sure how to fix the problem so it leaves our report. Here is an example; it is showing these as duplicate content to each other. www.soccerstop.com/c-119-womens.aspx www.soccerstop.com/c-120-youth.aspx www.soccerstop.com/c-124-adult.aspx Any help you could provide would be most appreciated. I am going through our crawler report and resolving issues, and this seems to be a big one for us, with lots of them in the report, but I'm not sure what to do about it. Thanks,
James
-
Htm vs. aspx page extensions & duplicate content
We have a client whose site is fairly new, so there isn't much in the way of SEO results so far. In their content management system they implemented friendly URLs and changed the extensions from aspx to htm. Now the htm pages are all indexed in Google, but when I run a campaign report in SEOmoz it shows that every page is duplicated, with both an htm and an aspx version of each page. Should we do 301 redirects from the aspx pages to the htm pages? Or would we be safe removing the htm pages and letting Google reindex the site with the aspx extensions? Does Google have any preference about page extensions as long as the URLs include keywords?