Should I nofollow search results pages?
-
I have a customer site where you can search for the products they sell.
The URL format is:
domainname/search/keywords/
where "keywords" is whatever the user has searched for.
This means the number of pages can be effectively limitless, as the client has over 7,500 products.
Or should I simply rel=canonical the search page, or just nofollow it?
-
Cheers
-
Hi there,
You've got the right idea, but let me suggest another tactic.
It's true that search functions can generate thousands of URLs that all tend to look like one another. Google suggests that you keep search results pages out of its index, as these pages offer very little value and create tons of duplicate content.
http://www.seomoz.org/learn-seo/duplicate-content
Here's one way to handle your situation:
1. Put a meta "noindex,follow" tag in the head section of your search pages, like this:
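<meta name="robots" content="noindex, follow">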
This tells search engines not to index the page, but allows them to follow the links on the page and flow link juice.
2. Hopefully you have good site architecture and other ways for search engines to discover your content. Once step one has taken effect and the search pages have dropped out of the index, you can put a directive in your robots.txt file to block that directory from being crawled.
Something like:
User-agent: *
Disallow: /search/
This blocks anything in the /search/ directory.
3. Find out if search engines have already indexed a lot of your search pages by performing a site: search in Google, like so:
site:yourdomain.com/search
If you find pages in Google's index that shouldn't be there, you can use the URL removal tool in Google Webmaster Tools to take them out of the index. You can remove the entire search directory with a single request.
http://support.google.com/webmasters/bin/answer.py?hl=en&answer=1663427
This is a powerful and sometimes dangerous tool, so be careful!
4. Finally, if you'd like to add "nofollow" to your search results pages, this should be fine, but only after you've completed the steps above.
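If you do go that route, the robots meta tag from step one would simply gain the nofollow value, something like:
<meta name="robots" content="noindex, nofollow">
Just remember that once those links are nofollowed, link juice stops flowing through your search pages.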
Keep in mind, this is only one possible solution. If you have significant link juice flowing through your search results, this strategy may not be the best. But in general, you want to keep search results out of Google's index, so I'm comfortable recommending this strategy for 90% of all cases.
Hope this helps! Best of luck with your SEO.
-
Yes, I would leave them.
-
So even though the search could generate lots of extra pages, you think I should leave the pages as they are?
-
Don't use the "nofollow" attribute. The only time I'd recommend using "nofollow" is on pages where you have external links: blog comment pages, resource pages, etc.
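For example, a nofollowed external link in a blog comment might look something like this (the URL is just a placeholder):
<a href="http://example.com/" rel="nofollow">commenter's site</a>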