Best way to handle indexed pages you don't want indexed
-
We've had a lot of pages indexed by Google that we didn't want indexed. They relate to an AJAX category filter module that works fine for front-end customers, but under the bonnet Google has been following all of the links.
I've put a rule in the robots.txt file to stop Google from following any dynamic pages (with a ?) and also any AJAX pages, but the pages are still indexed on Google.
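For reference, the rules look something like this (the exact patterns depend on our URL structure, but this is the shape of them):
User-agent: *
Disallow: /*?
Disallow: /*ajax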
At the moment there are over 5,000 indexed pages that I don't want on there, and I'm worried it's causing issues with my rankings.
Would a redirect rule work, or could someone offer any advice?
-
Gavin, since you have added the noindex to the pages, the best way is to let Google crawl those pages, see the noindex, and remove them. The other option is to keep everything as is and request removal of these parameter pages via your Google Webmaster Console. Option 1: you never know how long it will take. Option 2: this should happen relatively fast. I would therefore suggest keeping everything as is and doing a removal request.
-
Right... We think we've been able to get the noindex code into the dodgy pages. The only way we could think of doing it without breaking the user interface was to put this rule into the PHP:
if (!empty($_SERVER['HTTP_X_REQUESTED_WITH']) && strtolower($_SERVER['HTTP_X_REQUESTED_WITH']) == 'xmlhttprequest') {
    // genuine AJAX request from a front-end visitor: serve the normal response
    // normal code
} else {
    // direct request (e.g. Googlebot hitting the URL): serve a minimal page
    // carrying the noindex tag (roughly this markup; the meta robots tag is the important part)
    echo '<html>';
    echo '<head>';
    echo '<meta name="robots" content="noindex">';
    echo '</head>';
    echo '<body>';
    echo '404';
    echo '</body>';
    echo '</html>';
}
It's rendering OK for us on the front end, if anyone would like to test... I'm just hopeful it will work for Google?
http://www.outdoormegastore.co.uk/cycling/cycling-clothing/protective-clothing.html?ajax=1
One thing I am not sure about is how Google is going to revisit the said pages. I have put various rules in the robots.txt file, as well as URL parameter handling in Webmaster Tools, to prevent any future pages from being followed... Would these rules need to be removed?
-
The AJAX URLs are used by the site, though, right (for visitors)? If you 404 them, you may be breaking the functionality and not just impacting Google.
Another problem is that, if these pages are no longer crawlable and you add a page-level directive (whether it's a 404, 301, canonical, or NOINDEX), Google won't process those new instructions, so the pages could get stuck in the index. If that's the case, it may actually be more effective to block the "ajax=" parameter with parameter handling in Google Webmaster Tools (there's a similar option in Bing).
If you know the crawl path to these pages has been cut and this isn't a recurring problem, that could be the fastest short-term solution. You do need to monitor, though, as they can re-enter the index later.
-
Gavin, that's a more generic response. In this scenario, unless you can make a 404 happen, it won't work and therefore is not applicable. Noindex and/or the canonical tag are the choices, and I would try to get those going if possible.
-
Thanks for all of the replies... My best option seems to be the meta noindex rule, but the pages that are getting indexed are just one long AJAX string with no access to the header area. I hope I have already 'prevented' Google from following the links in the future by adding the rules to robots.txt, but I'm now desperate to clean up (cure) the existing ones.
My next thought would be to put a rule in .htaccess and redirect anything with 'ajax' in the URL to a 404 page?
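Something along these lines, maybe (untested, and the pattern would need checking against our real URLs; on older Apache versions the G flag, which returns a 410 Gone, might be needed instead of R=404):
RewriteEngine On
# return a 404 for any request whose query string contains an ajax parameter
RewriteCond %{QUERY_STRING} (^|&)ajax= [NC]
RewriteRule ^ - [R=404,L]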
I'm worried that this may have even worse side effects on rankings, but it's based on this article that Google publishes: https://support.google.com/webmasters/bin/answer.py?hl=en&answer=59819
"To remove a page or image, you must do one of the following:
- Make sure the content is no longer live on the web. Requests for the page must return an HTTP 404 (not found) or 410 status code."
What would your thoughts be on this?
-
Definitely review George's comment, as you need to figure out why they're being crawled. As Andrea said, any solution takes time, I'm sorry to say. Robots.txt is not a good solution for getting pages removed that are already indexed, especially in bulk; it's better at prevention than cure.
META NOINDEX can be effective, or you could rel=canonical these pages to the appropriate non-AJAX URL (I'm not sure exactly how the structure is set up). Those are probably the two fastest and most powerful approaches. Google parameter handling (in Webmaster Tools) is another option, but it's a bit unpredictable whether they honor it and how quickly.
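For example, the filtered URL you posted could point back to its plain equivalent from the <head> (illustrative, based on your URL structure):
<link rel="canonical" href="http://www.outdoormegastore.co.uk/cycling/cycling-clothing/protective-clothing.html" />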
As far as I recall, you can only do mass removal if all of the pages are structurally under one root URL (i.e. in a folder); there's no way to bulk-remove an arbitrary set of pages.
-
I'm not sure if you're aware or not, but I think I know why Google is indexing these pages.
Right now, you are outputting URLs into your source code of your page in the form of a JavaScript function call similar to the following:
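It looks something like this (illustrative; I'm reconstructing the shape of the call from your URLs, so the exact arguments will differ):
initSlider('http://www.outdoormegastore.co.uk/cycling/cycling-clothing/protective-clothing.html?ajax=1', price, low, high);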
I believe this is because your page (and this function call) is programmatically created. Instead of outputting the whole URL to the page, you could output only what needs to be there.
For example:
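Again, purely illustrative:
initSlider(price, low, high, 'cycling', 'cycling-clothing', 'protective-clothing', store, 'ajax=1');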
Then change the signature of the JavaScript function so that it accepts this new input and builds the URL from your inputs:
function initSlider(price, low, high, category, subcategory, product, store, ajax /*, ... */) {
    // build the URL from its component parts
    var URL = 'http://www.outdoormegastore.co.uk/' + category + '/' + subcategory + '/' + product + '.html?_' + store + '&' + ajax;
    // continue...
}
Right now, because that URL is being outputted to the page, I think Google sees it as a URL it should follow and index. If you build this URL with the function in an external JavaScript file, I don't think it will be indexed.
Your developer(s) should know what I'm talking about.
Hope this helps!
-
If they are already indexed, it's going to take time for Google to recrawl, read the tag and get them to fall out, so patience will be key. It's not a quick thing to undo.
If the pages are all in one location, you can add a robots.txt disallow for that entire folder and then request removal of the folder in Webmaster Tools, but again, it's already done, so you are going to have to wait for all those pages to fall out.
-
Thanks for the quick reply! I'm desperate to get these removed as soon as possible now. I've got Webmaster Tools access, but requesting over 5,000 pages to be removed one by one would take too long. You can't do page removal in bulk, can you?
I'm going to work on the noindex option.
-
OMG, that does not look good. I completely understand. The best way, in my opinion, would be to add a noindex meta tag on these pages and let Google crawl them. Once they re-crawl them and see the noindex, that should take care of the problem. However, be careful: you want to make sure that noindex tag does not appear on your real pages, just the AJAX ones.
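For reference, the tag goes in the <head> of each AJAX page:
<meta name="robots" content="noindex">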
Another option might be to consider the canonical tag, but technically these pages are not duplicate pages; they just should not exist. Are you verified and using the Google Webmaster Console? If yes, see if you can get some of these pages excluded via the URL removal tool. The best way, in my opinion, is to add the noindex tag.