How long until meta robots noindex takes effect?
-
I have a WordPress site with about 3,000 posts and over 1,000 tags. All of the tag archives are currently indexed in Google, and I don't want them to be.
I just set the meta robots tag to noindex on all the tag archives and was wondering how long it will take until they're out of the search engines?
Since there are close to 1,500 of these pages and they're duplicate content, it would be nice to have them gone ASAP.
I noticed Webmaster Tools allows me to resubmit my site for indexing if my site has changed significantly... should I try that?
Any other advice would be greatly appreciated!
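For context, here's roughly what the noindex directive outputs in the `<head>` of each tag archive (the exact markup depends on your theme or SEO plugin; "follow" keeps Google crawling the links on the page even though the page itself is dropped):

```html
<!-- Emitted on tag archive pages only, not on posts -->
<meta name="robots" content="noindex, follow" />
```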
-
Thanks for the responses! The pages just got noindexed today... it took almost exactly a week.
-
Personally, I would update your robots.txt and then submit a sitemap with the URLs in question via GWMT; this gets the pages crawled very quickly (usually within 48 hours) and then dropped out of the index.
I would also look at either using 301s if you see link juice pointing at those URLs, or 404s/410s otherwise.
I've personally found that resubmitting the site takes longer than pushing an updated sitemap.
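If you want to go the sitemap route, here's a minimal sketch of building a sitemap for just the tag URLs so Google recrawls them quickly. The URLs below are hypothetical placeholders; swap in your real tag archives (on a WordPress site you'd normally export these from the database or a plugin rather than hard-code them):

```python
# Sketch: build a minimal XML sitemap containing only the tag-archive URLs,
# so they can be submitted via Webmaster Tools for a fast recrawl.
from xml.sax.saxutils import escape


def build_sitemap(urls):
    """Return a sitemap.xml string (sitemaps.org protocol) for the given URLs."""
    entries = "\n".join(
        "  <url><loc>{}</loc></url>".format(escape(u)) for u in urls
    )
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
        + entries + "\n"
        "</urlset>\n"
    )


# Hypothetical tag URLs -- replace with your own.
tag_urls = [
    "https://example.com/tag/widgets/",
    "https://example.com/tag/gadgets/",
]
print(build_sitemap(tag_urls))
```

Save the output as something like `tag-sitemap.xml`, upload it, and submit it in Webmaster Tools alongside your main sitemap.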
-
There's no set time. Typically it takes a week or two, though I've seen it take a month or longer to completely deindex all the desired results on bigger sites.
There's no harm in re-submitting the site either; try submitting the URL and all linked pages via Webmaster Tools, as it may help get things crawled a little faster.
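While you wait on Google, you can at least spot-check that the noindex tag is actually in place on the archive pages. A small sketch using only the standard library (parsing only; fetch the HTML however you like, e.g. with urllib, and the sample string below is just a stand-in for a real tag archive):

```python
# Sketch: detect whether a page's HTML carries a robots meta tag with "noindex".
from html.parser import HTMLParser


class RobotsMetaParser(HTMLParser):
    """Scans start tags for <meta name="robots" content="...noindex...">."""

    def __init__(self):
        super().__init__()
        self.noindex = False

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and (a.get("name") or "").lower() == "robots":
            if "noindex" in (a.get("content") or "").lower():
                self.noindex = True


def has_noindex(html):
    parser = RobotsMetaParser()
    parser.feed(html)
    return parser.noindex


# Stand-in for the HTML of a tag archive page.
sample = '<html><head><meta name="robots" content="noindex, follow"></head></html>'
print(has_noindex(sample))  # True
```

Running this against a handful of your 1,500 tag URLs confirms the directive went live everywhere before you start counting days.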