How should I handle URLs created by an internal search engine?
-
Hi,
I'm aware that internal search result URLs (www.example.co.uk/catalogsearch/result/?q=searchterm) should ideally be blocked using the robots.txt file. Unfortunately the damage has already been done: a large number of internal search result URLs have been created and indexed by Google. I have double-checked, and these pages account for only approximately 1.5% of traffic per month.
Is there a way I can remove the internal search URLs that have already been indexed and then stop this from happening in the future? I presume the last step would be to disallow /catalogsearch/ in the robots.txt file.
Thanks
-
Basic cleanup
From a procedural standpoint, you want to add the noindex meta tag to the search results first. Google has to see that tag before it can act on it and remove the URLs. You can also submit some of the URLs through the removal tool in Webmaster Tools.
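For reference, the tag in question is just a one-liner in the search results template. Where that template lives depends on your platform; the snippet below is a generic sketch:

```html
<!-- Place in the <head> of the search results template only -->
<!-- "noindex, follow" drops the page from the index but still lets
     Google follow the product links on it -->
<meta name="robots" content="noindex, follow">
```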
Next you would want to add /catalogsearch/ to robots.txt once you see all the pages drop out of the index. (If you block crawling first, Google can never recrawl the pages to see the noindex tag.)
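The eventual robots.txt addition is minimal; something like:

```
# robots.txt — add this only AFTER the URLs have dropped out of the index
User-agent: *
Disallow: /catalogsearch/
```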
Advanced cleanup
If any of these search result URLs are ranking and acting as landing pages in Google, you may want to consider 301 redirecting them to the most closely related category pages.
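Because the query string is part of what identifies each search URL, a plain Redirect won't match it on Apache; you need mod_rewrite. A .htaccess sketch, with purely hypothetical search term and category paths:

```apache
# .htaccess sketch (Apache + mod_rewrite): 301 a ranking internal
# search URL to its closest category page.
# "q=blue-widgets" and "/widgets/blue/" are hypothetical examples.
RewriteEngine On
RewriteCond %{QUERY_STRING} ^q=blue-widgets$
# Trailing "?" drops the old query string from the target URL
RewriteRule ^catalogsearch/result/$ /widgets/blue/? [R=301,L]
```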
My 2 cents: I only use the GWT parameter handler on parameters that I have to show to the search engines. Otherwise I try to hide all of those URLs from Google to help with crawl efficiency.
Note that it is really important to find out which pages/URLs Google has indexed, to make sure you don't remove a page that is actually generating traffic for you. A landing page report from GA would help with this.
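One way to run that check: export the GA landing page report as CSV and filter out any /catalogsearch/ URLs that still earn sessions, so you know which ones deserve a 301 rather than plain removal. A minimal sketch (the column names are assumptions — match them to your actual export):

```python
import csv

def search_urls_with_traffic(report_path, min_sessions=1):
    """Return /catalogsearch/ landing pages from a GA CSV export
    that still receive at least `min_sessions` sessions."""
    keep = []
    with open(report_path, newline="") as f:
        for row in csv.DictReader(f):
            page = row["Landing Page"]       # assumed column name
            sessions = int(row["Sessions"])  # assumed column name
            if "/catalogsearch/" in page and sessions >= min_sessions:
                keep.append((page, sessions))
    # Highest-traffic pages first: these are your 301 candidates
    return sorted(keep, key=lambda p: p[1], reverse=True)
```

Anything this returns should be redirected rather than simply left to drop out of the index.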
Cheers!
-
On top of Lesley's recommendations, both Google and Bing have URL parameter exclusion options in their webmaster tools.
-
I am guessing that you are using a system that templates pages and appends a query string after the search, something like search.php?caws+cars. I would set noindex, nofollow in the header of all pages that use the search template. Then I would also add the search pages to robots.txt so they are disregarded. They will start dropping out of the results pages in about a week or so.
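If editing the template's head is awkward, the same directive can be sent as an HTTP response header from the server instead — a sketch for Apache (requires mod_headers, and assumes the search script really is search.php as guessed above):

```apache
# Apache sketch: send the robots directive as a header for the
# search script only, so every templated search result is covered
<Files "search.php">
    Header set X-Robots-Tag "noindex, nofollow"
</Files>
```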