Robots.txt blocking Addon Domains
-
I have this site as my primary domain: http://www.libertyresourcedirectory.com/
I don't want to give spiders access to the site at all, so I tried a simple Disallow: / in the robots.txt. As a test, I crawled it with Screaming Frog afterwards, and the crawl came back empty. (Excellent.)
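For reference, that blanket block is just the standard two lines:

    User-agent: *
    Disallow: /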
However, there's a problem. In GWT, I got an alert that Google couldn't crawl ANY of my sites because of robots.txt issues. Changing the robots.txt on my primary domain changed it for ALL my addon domains (e.g., http://ethanglover.biz/ ). From a directory point of view, this makes sense; from a spider point of view, it doesn't.
As a solution, I changed the robots.txt file back and added a robots meta tag (noindex, nofollow) to the primary domain. But this doesn't seem to be having any effect. As I understand it, the robots.txt takes priority.
How can I separate all this out so that each domain can have different rules? I've tried uploading a separate robots.txt to the addon domain folders, but it's completely ignored. Even going to ethanglover.biz/robots.txt gave me the primary domain's version of the file. (Seriously! I've tested this 100 times, in many ways.)
Has anyone experienced this? Am I in the twilight zone? Any known fixes? Thanks.
Proof that I'm not crazy is in the attached video.
-
Sort of resolved, though this is probably the wrong place to ask any further. The fix I posted is working around what seems like a legit bug; I'll update if the WordPress forums say anything.
-
No, I don't like to waste memory and bandwidth. If you can do it yourself, you should probably do it yourself. I'm moving this question to the WordPress forums.
-
Hi Ethan
One thing I have heard of people trying is a plugin that serves dynamic robots.txt files. I don't use addon domains myself, so you will probably have to test the behavior. Here is an example of one of these plugins:
https://wordpress.org/plugins/wp-robots-txt/
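If you'd rather not install anything, the same idea is only a few lines using WordPress's robots_txt filter. This is just an untested sketch (the host string is illustrative), and note that WordPress only serves this virtual file when no physical robots.txt exists in the site root:

    // Untested sketch: vary the virtual robots.txt by requested host.
    // WordPress only serves this when no physical robots.txt file exists.
    add_filter( 'robots_txt', function ( $output, $public ) {
        $host = isset( $_SERVER['HTTP_HOST'] ) ? strtolower( $_SERVER['HTTP_HOST'] ) : '';

        // Illustrative check: blanket-block the primary domain only.
        if ( false !== strpos( $host, 'libertyresourcedirectory.com' ) ) {
            return "User-agent: *\nDisallow: /\n";
        }

        return $output; // Addon domains keep their default rules.
    }, 10, 2 );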
Hope this helps,
Anthony
-
Ethan
It sounds like the issue has been resolved. I'm not too familiar with addon domains, but if you have any more trouble, let us know and I'll make sure another Moz Associate takes a look.
-Dan (Moz Associate)
-
Hi Ethan
Sorry, I wasn't clear. I was thinking you could drop the robots.txt altogether and just use the meta tag approach, since the robots.txt is having a global impact on your sites. Search engines will still crawl the pages, but they should be excluded from the index.
Hope this helps,
Anthony
-
Anthony, based on your response, it's obvious you haven't read the question or the follow-up.
-
Hi Ethan
One approach may be to try the robots meta tag. You can use noindex to tell Google not to index a page. This won't prevent crawling, but Google should respect the request not to index your site. I've included a good guide below to get you started:
https://developers.google.com/webmasters/control-crawl-index/docs/robots_meta_tag
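The tag itself is a single line in the <head> of each page you want kept out of the index, for example:

    <meta name="robots" content="noindex, nofollow">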
Hope this helps,
Anthony B
Biondo Creative
biondocreative.com
-
I've found a quick fix for now: http://ethanglover.biz/using-robots-txt-with-addon-domains/
This is still an issue, and it may be exclusive to WordPress.
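If that link ever goes down: one way to get per-domain rules at the server level (a rough sketch, not necessarily the exact fix from my post, and the robots-primary.txt file name is just illustrative) is a rewrite in the primary domain's .htaccess:

    # Serve a separate physical file for the primary domain's robots.txt
    # so the addon domains can keep their own crawlable rules.
    RewriteEngine On
    RewriteCond %{HTTP_HOST} ^(www\.)?libertyresourcedirectory\.com$ [NC]
    RewriteRule ^robots\.txt$ /robots-primary.txt [L]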