How long after disallowing Googlebot from crawling a domain until those pages drop out of their index?
-
We recently had Google crawl a version of the site that we thought we had already disallowed. We have since corrected the issue of them crawling the site, but pages from that version are still appearing in the search results (the version we don't want them to index or serve is our .us domain, which should have been blocked to them).
My question is this: How long should I expect that domain (the .us we don't want to appear) to stay in their index after disallowing their bot? Is this a matter of days, weeks, or months?
-
If no URL on the .us domain should exist at all (i.e. there are no new URLs you want kept), then you can remove them fairly quickly in Webmaster Tools >> Google Index >> Remove URLs: select the root URL and choose to remove all directories under it.
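Before (or alongside) filing removal requests, it is worth confirming that the crawl block itself is actually in place. A quick sanity check with Python's standard-library robots.txt parser might look something like the sketch below; the .us domain shown is a placeholder for the site in question. Keep in mind that a Disallow rule only stops crawling - already-indexed URLs can linger in the results until they are removed or re-processed, which is why the removal tool above is usually the faster route.

# Rough sketch: confirm the .us domain's robots.txt really blocks Googlebot.
# "www.example.us" is a placeholder for the domain discussed above.
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://www.example.us/robots.txt")
rp.read()  # fetch and parse the live robots.txt

for path in ["/", "/some-page.html"]:
    url = "https://www.example.us" + path
    allowed = rp.can_fetch("Googlebot", url)
    print(url, "->", "crawlable by Googlebot" if allowed else "blocked")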
-
Hi there,
Crawling and indexing are processes which can take some time and which rely on many factors. In general, we cannot make predictions or guarantees about when or if your URLs will be crawled or indexed.
Hope this helps.
-
Related Questions
-
Pages being flagged in Search Console as having a "noindex" tag do not have a meta robots tag?
Hi, I am running a technical audit on a site which is causing me a few issues. The site is small and awkwardly built, using lots of JS, animations and dynamic URL extensions (bit of a nightmare). I can see that it has only 5 pages indexed in Google despite having over 25 pages submitted to Google via the sitemap in Search Console. The beta Search Console is telling me that there are 23 URLs marked with a 'noindex' tag, however when I view the page source and check the code of these pages, there are no meta robots tags at all - I have also checked the robots.txt file. Also, both the Screaming Frog and Deep Crawl tools are failing to pick up these URLs, so I am at a bit of a loss about how to find out what's going on. I suspect the creative agency who built the site had no idea about general website best practice, and that the dynamic URL extensions may have something to do with the no-indexing. Any advice on this would be really appreciated. Are there any other ways of no-indexing pages which the dev / creative team might have implemented by accident? What am I missing here? Thanks,
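One thing worth ruling out here: a noindex directive can also be delivered in the X-Robots-Tag HTTP header, which will never show up when viewing the page source. A rough check along the lines below (the URL is a placeholder) prints both the header and any meta robots tags found in the raw HTML; if the tags are instead injected by JavaScript after load, a plain fetch like this won't see them either, and rendering the page with Search Console's inspection/fetch tool is the next step.

# Rough sketch: look for noindex in both the HTTP headers and the raw HTML.
# "https://example.com/some-page" is a placeholder URL.
import re
import urllib.request

req = urllib.request.Request(
    "https://example.com/some-page",
    headers={"User-Agent": "Mozilla/5.0 (noindex-audit-script)"},
)
with urllib.request.urlopen(req) as resp:
    header_value = resp.headers.get("X-Robots-Tag")
    html = resp.read().decode("utf-8", errors="replace")

print("X-Robots-Tag header:", header_value)
meta_tags = re.findall(r'<meta[^>]+name=["\']robots["\'][^>]*>', html, flags=re.I)
print("Meta robots tags in source:", meta_tags or "none")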
Technical SEO | | NickG-1230 -
Why does my site have so many crawl errors relating to the WordPress login / captcha page?
Going through the crawl of my site, there were around 100 medium-priority issues, such as title element too short and duplicate page title, and around 80 high-priority issues relating to duplicate page content. However, every page listed with these issues was the site's WordPress login / captcha page. Does anyone know how to resolve this?
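A common fix is simply to keep crawlers away from the login URL (for example, a robots.txt Disallow rule for /wp-login.php, or a noindex on it). A rough check like the sketch below, with the domain as a placeholder, shows whether the login page is already blocked for Moz's crawler or already serves a noindex header, before adding any rules.

# Rough sketch: is wp-login.php already excluded from crawling/indexing?
# "https://example.com" is a placeholder for the site in question.
from urllib import robotparser
import urllib.request

site = "https://example.com"
login_url = site + "/wp-login.php"

rp = robotparser.RobotFileParser()
rp.set_url(site + "/robots.txt")
rp.read()
print("Blocked by robots.txt for rogerbot:", not rp.can_fetch("rogerbot", login_url))

resp = urllib.request.urlopen(login_url)
print("X-Robots-Tag header on login page:", resp.headers.get("X-Robots-Tag"))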
Technical SEO | | ZenyaS0 -
Pages appear fine in browser but 404 error when crawled?
I am working on an eCommerce website built in WordPress, with the shop pages in E commerce Plus PHP v6.2.7. All the shop product pages appear to work fine in a browser, but 404 errors are returned when the pages are crawled. WMT also returns a 404 error when ‘Fetch as Google’ is used. Here is a typical page: http://www.flyingjacket.com/proddetail.php?prod=Hepburn-Jacket. Why is this page returning a 404 error when crawled? Please help!
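A page that renders in a browser but 404s for crawlers often means the server's response depends on the User-Agent, cookies, or session handling. A rough comparison like the sketch below, using the product URL from the question, requests the page with a browser-like and a Googlebot-like User-Agent and prints the status code each one gets back.

# Rough sketch: compare the HTTP status returned to different User-Agents.
import urllib.error
import urllib.request

URL = "http://www.flyingjacket.com/proddetail.php?prod=Hepburn-Jacket"
USER_AGENTS = {
    "browser-like": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "Googlebot-like": "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)",
}

for label, ua in USER_AGENTS.items():
    req = urllib.request.Request(URL, headers={"User-Agent": ua})
    try:
        with urllib.request.urlopen(req) as resp:
            print(label, "->", resp.status)
    except urllib.error.HTTPError as err:
        print(label, "->", err.code)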
Technical SEO | | Web-Incite0 -
Google Sitemap - How Long Does it Take Google To Index?
We changed our sitemap about 1 month ago and Google has yet to index it. We have run a site: search and we still have many pages indexed, but we are wondering how long it takes for Google to index our sitemap. The last sitemap we put up had thousands of pages indexed within a fortnight, but for some reason this version is taking way longer. We are also confident that there are no errors in this version. Help!
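While waiting on Google, it's worth double-checking that the new sitemap is reachable and well-formed. A rough check along these lines - the sitemap URL is a placeholder - fetches it, confirms it parses as XML, and counts the URLs it declares, which would surface any silent errors in this version.

# Rough sketch: fetch a sitemap, confirm it parses, and count its URLs.
# The sitemap URL is a placeholder.
import urllib.request
import xml.etree.ElementTree as ET

SITEMAP_URL = "https://www.example.com/sitemap.xml"
NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

with urllib.request.urlopen(SITEMAP_URL) as resp:
    print("HTTP status:", resp.status)
    root = ET.fromstring(resp.read())

locs = [el.text for el in root.iter(NS + "loc")]
print("URLs listed:", len(locs))
print("First few:", locs[:3])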
Technical SEO | | JamesDFA0 -
Help! Pages not being indexed
Hi Mozzers, I need your help.
Technical SEO | | bshanahan
Our website (www.barnettcapitaladvisors.com) stopped being indexed in search engines following a round of major changes to URLs and content. There were a number of dead links for a few days before 301 redirects were properly put in place. And now only 3 pages show up in Bing when I do the search "site:barnettcapitaladvisors.com". A bunch of pages show up in Google for that search, but they're not any of the pages we want to show up. Our home page and most important services pages are nowhere in the search results. What's going on here?
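One thing to verify right away is that the old URLs really do answer with a single 301 pointing at the intended new pages, since redirect chains, 302s, or lingering 404s would explain a drop like this. A rough check along the lines below - the old/new pairs are placeholders for the site's real paths - prints the status and Location header without following the redirect.

# Rough sketch: confirm old URLs return a 301 to the intended new URL.
# The URL pairs are placeholders for the site's real old/new paths.
import http.client
from urllib.parse import urlparse

OLD_TO_NEW = {
    "http://www.barnettcapitaladvisors.com/old-page": "http://www.barnettcapitaladvisors.com/new-page",
}

for old, expected in OLD_TO_NEW.items():
    parts = urlparse(old)
    conn = http.client.HTTPConnection(parts.netloc)  # http.client never follows redirects
    conn.request("GET", parts.path or "/")
    resp = conn.getresponse()
    print(old, "->", resp.status, "| Location:", resp.getheader("Location"), "| expected:", expected)
    conn.close()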
Our sitemap is at http://www.barnettcapitaladvisors.com/sites/default/files/users/AndrewCarrillo/sitemap/sitemap.xml
Robots.txt is at: http://www.barnettcapitaladvisors.com/robots.txt Thanks!0 -
.co.uk/index.html or just .co.uk - my on-page reports are different for both - why?
It looks like the same thing, yet there is a different on-page report for each version - why is this? Please share your ideas with me on this. The original URL is http://bath.waspkilluk.co.uk/index.html. Many thanks - Simon.
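The root URL and /index.html are technically two different URLs, so crawlers and on-page tools will score them separately unless one redirects to the other or both declare the same canonical. A rough check like the sketch below, using the URL from the question, shows what each address returns, where it ends up after any redirects, and whether a rel=canonical tag is present.

# Rough sketch: compare how the root URL and /index.html respond.
import re
import urllib.request

for url in ["http://bath.waspkilluk.co.uk/", "http://bath.waspkilluk.co.uk/index.html"]:
    with urllib.request.urlopen(url) as resp:
        html = resp.read().decode("utf-8", errors="replace")
        canonical = re.search(r'<link[^>]+rel=["\']canonical["\'][^>]*>', html, flags=re.I)
        print(url)
        print("  status:", resp.status, "| final URL:", resp.geturl())
        print("  canonical tag:", canonical.group(0) if canonical else "none")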
Technical SEO | | simonberenyi0 -
My homepage + key pages have dropped 40+ positions after implementing redirects and canonical changes. HELP!
Hi SEOMozers, I work for a web-based nonprofit at www.tisbest.org. A professional contact recommended that we work on the redirects to our homepage because we were losing valuable rank benefit. That, combined with getting sick of seeing our weekly SEOMoz crawl reports show 304 duplicate page and title errors for months, pushed us to act. No one could seem to figure out what was happening (we think it had to do with session stuff; we were seeing several versions of each page showing the following: www.tisbest.org/default.aspx/(random character string)).

My developer and I read a bunch of articles and started making changes 10 days ago:

1. He set up 301 redirects from http://tisbest.org to http://www.tisbest.org (set the canonical domain).
2. We did a redirect from http://www.tisbest.org/default.aspx to the root with "/".
3. I set the canonical setting to www.tisbest.org in our Webmaster Tools.
4. In our web config (we're running ASP.NET), we changed our session detection from auto-detect, then saw some session funkiness, so we changed it back. We do think the character strings we were seeing were session GUIDs.
5. He forced lowercase URLs to reduce duplicate page content/titles.

I got my weekly crawl report 9 days ago, and we had dropped from 340 duplicate page title and page content errors to one. We went nuts and felt like the kings of SEO. Then, yesterday (9/28), the SEO grim reaper came knocking when I received my weekly SEOMoz ranking report. It said we had dropped 40+ spots for all 9 of our keywords. Sure enough, I searched our keywords and our website was gone. Then I searched our company name, tisbest, and only a few of our pages show, but not the homepage. I searched for our URL www.tisbest.org, and I originally got the expanded view (with 8 links to various webpages - can't remember what this view is called), but now, today (Saturday), the expanded view is gone from this search result.

Also, when I run the On-Page Report Card for our homepage, I get the following error message with no results: "We were unable to grade that page. The page did not load. Curl::Err::TooManyRedirectsError: Number of redirects hit maximum amount."

When I run the Open Site Explorer report, I get this message at the top: "Oh Hey! It looks like that URL redirects to www.tisbest.org/?AspxAutoDetectCookieSupport=1. Would you like to see data for that URL instead?" If I go to the report for that URL, it says "No information is available for that URL."

Just tonight (night of 9/29), our developer added <link rel="canonical" href="http://www.tisbest.org" /> to our homepage to see if that would help. We did not do that originally.

In our Google Webmaster Tools, I am seeing that the number of URL Errors - Not Followed has skyrocketed. I have attached a screen capture to this thread. There are also a large number of URL Errors - Not Found. I did some research tonight and downloaded and ran the Screaming Frog SEO crawler. I have attached a screen capture below with this report and a couple of questions I sent our developer that may be helpful to you.

Also, not sure if this is relevant, but we use a master page that all of our pages inherit from, so all of our pages get the same meta-data:

<meta name="keywords" content="charitable gift card, charitable gift certificate, non profit gift card, charity donation, giftcard, charity gift card, donation gift card, donation gift, charity gift, animal gift card, animal gift, environmental gift card, environmental gift, humanitarian gift card, humanitarian gift, christian gift card, christian gift, catholic gift card, catholic gift, religious gift card, religious gift" />
<meta id="ctl00_metaDescription" name="description" content="Award winning Charity Gift Card, for over 250 premier charities. A customized donation gift that makes the world better. TisBest is BBB Accredited." />
<meta name="google-site-verification" content="EfJIhN3h2SVSXdSpUbfceBVw2q6zrGX8rRQhdNZ1xY8" />

Can anyone help me/us identify the issue that obliterated our rankings? I am happy to give any information needed. Thank you! Chad Edwards

Screen captures: http://i.imgur.com/Bqcu1.png | http://i.imgur.com/ZXQ8d.png
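The "Number of redirects hit maximum amount" error and the ?AspxAutoDetectCookieSupport=1 URL both point at the same suspect: cookieless clients (which is what crawlers are) getting bounced around by the ASP.NET session/cookie detection until they give up, so bots may never reach the homepage at all. A rough trace like the sketch below follows redirects one hop at a time without ever sending cookies, which should show exactly where the loop occurs.

# Rough sketch: follow redirects hop by hop, with no cookies, to expose a loop.
import http.client
from urllib.parse import urlparse, urljoin

url = "http://www.tisbest.org/"
for hop in range(10):  # give up after 10 hops
    parts = urlparse(url)
    conn_cls = http.client.HTTPSConnection if parts.scheme == "https" else http.client.HTTPConnection
    conn = conn_cls(parts.netloc)
    path = (parts.path or "/") + ("?" + parts.query if parts.query else "")
    conn.request("GET", path)  # no cookies are ever sent, much like a crawler
    resp = conn.getresponse()
    print(hop, resp.status, url)
    location = resp.getheader("Location")
    conn.close()
    if resp.status not in (301, 302, 303, 307, 308) or not location:
        break
    url = urljoin(url, location)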
Technical SEO | | TisBest0 -
Trying to reduce pages crawled to within 10K limit via robots.txt
Our site has far too many pages for our 10K-page PRO account, and most of them are not SEO-worthy. In fact, only about 2,000 pages qualify for SEO value. Limitations of the store software only permit me to use robots.txt to sculpt the rogerbot site crawl; however, I am having trouble getting this to work. Our biggest problem is the 35K individual product pages and the related shopping cart links (at least another 35K); these aren't needed, as they duplicate the SEO-worthy content in the product category pages. The signature of a product page is that it is contained within a folder ending in -p. So I made the following addition to robots.txt:
User-agent: rogerbot
Disallow: /-p/
However, the latest crawl results show the 10K limit is still being exceeded. I went to Crawl Diagnostics and clicked on Export Latest Crawl to CSV. To my dismay, I saw the report was overflowing with product page links, e.g. www.aspenfasteners.com/3-Star-tm-Bulbing-Type-Blind-Rivets-Anodized-p/rv006-316x039354-coan.htm. The value in the column "Search Engine blocked by robots.txt" is FALSE; does this mean blocked for all search engines? Then it's correct. If it means blocked for rogerbot, then the page shouldn't even be in the report, as the report seems to only contain 10K pages. Any thoughts or hints on trying to attain my goal would REALLY be appreciated; I've been trying for weeks now. Honestly - virtual beers for everyone! Carlo
Technical SEO | | AspenFasteners
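One detail that stands out: robots.txt Disallow values are matched as path prefixes, so "Disallow: /-p/" only blocks URLs whose path literally begins with /-p/; a folder that merely ends in -p, like the product URL above, is never matched. The short sketch below mirrors that prefix matching to show the current rule never applies to the product page. A wildcard pattern such as "Disallow: /*-p/" is the usual way to express "any folder ending in -p", assuming rogerbot honours wildcards, which is worth confirming with Moz support.

# Rough sketch: test whether the current Disallow rule actually matches a product URL,
# using the same path-prefix matching that robots.txt rules are based on.
from urllib.parse import urlparse

disallow_prefix = "/-p/"  # the rule currently in robots.txt
product_url = ("http://www.aspenfasteners.com/"
               "3-Star-tm-Bulbing-Type-Blind-Rivets-Anodized-p/rv006-316x039354-coan.htm")

path = urlparse(product_url).path
print("Rule matches this product page:", path.startswith(disallow_prefix))  # False: the rule never applies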