Exclude root URL in robots.txt?
-
Hi,
I have the following setup:
www.example.com/nl
www.example.com/de
www.example.com/uk
etc
www.example.com is 301'ed to www.example.com/nl. But now www.example.com is ranking instead of www.example.com/nl.
Should I block www.example.com in robots.txt so only the subfolders get ranked?
Or will I lose my ranking by doing this? -
Yes, when clicking the link in Google you get redirected.
I will wait some time. Thank you.
-
The site just launched? It sounds like I'm right; you just need to give Google some time to drop the page from the index.
When you find the homepage in the index and you click the link, do you get redirected? If so, Google will eventually drop it.
-
Thanks for answering, Philip.
Yes, I really used a 301.
I used it in .htaccess.
And I set this up before launching the site, so I should be good from the beginning. The site was launched last Friday.
When searching for the brand name, it shows up as example.com.
When searching for my main keywords, it shows example.com/ned/landing-page.
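For reference, a root-only 301 to a language subfolder in .htaccess typically looks something like the sketch below. This assumes Apache with mod_rewrite and the /nl/ target from this thread; the exact rule used on the site isn't shown here.

```
# .htaccess at the document root (sketch; assumes mod_rewrite is enabled)
RewriteEngine On

# Permanently (301) redirect only the bare root URL to the Dutch subfolder,
# leaving /nl/, /de/, /uk/ and all deeper pages untouched
RewriteRule ^$ /nl/ [R=301,L]
```
-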
If you put Disallow: / in your robots.txt file, you will tell bots not to crawl the homepage plus ALL interior pages. You'd be shooting yourself in the foot (or head, really).
Are you sure the redirect is set up properly? Is it definitely a 301 redirect, or maybe a 302 (temporary)? How long ago did you implement the redirect? If the 301 redirect is set up properly and you're still seeing the homepage in the index, you might just need to wait for it to drop out.
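To illustrate the robots.txt point above: the directive in question would read as below, and it applies to every URL on the host, not just the homepage.

```
# robots.txt -- this blocks crawling of the ENTIRE site, not just the root URL
User-agent: *
Disallow: /
```

A quick way to confirm whether the redirect really returns a 301 rather than a 302 is to request the homepage with curl -I http://www.example.com/ and check the status line of the response.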
Related Questions
-
No index tag robots.txt
Hi Mozzers, A client's website has a lot of internal directories defined as /node/*. I already added the rule 'Disallow: /node/*' to the robots.txt file to prevent bots from crawling these pages. However, the pages are already indexed and appear in the search results. In an article by Deepcrawl, they say you can simply add the rule 'Noindex: /node/*' to the robots.txt file, but other sources claim the only way is to add a noindex directive in the meta robots tag of every page. Can someone tell me which is the best way to prevent these pages from getting indexed? Small note: there are more than 100 pages. Thanks!
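One way to handle this at scale, rather than editing 100+ templates or relying on the unofficial Noindex: rule in robots.txt, is to send the noindex directive as an HTTP header. The sketch below is only an assumption-based illustration (it presumes Apache with mod_setenvif and mod_headers, and the /node/ path from the question); note that the Disallow: /node/* rule would have to be lifted first, because a bot that is blocked from crawling a page never sees its noindex.

```
# .htaccess sketch: send a "noindex" X-Robots-Tag header for all /node/ URLs

# Flag any request whose path starts with /node/
SetEnvIf Request_URI "^/node/" NODE_PAGE

# Tell crawlers not to index flagged pages (links can still be followed)
Header set X-Robots-Tag "noindex, follow" env=NODE_PAGE
```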
Technical SEO | WeAreDigital_BE | Jens -
URL is invalid: Why?
Hello everyone, I am currently listing my company on business directories. For some websites, however, when I add my website URL, it comes up as "URL is invalid". What could be the reason for this? I have tried different variations like www., http://, and https://. Kind regards,
Technical SEO | SMCCoachHire | Aqib -
URL Change, Old URLs Still In Index
Recently changed URLs on a website to remove dynamic parameters. We 301'd the old dynamic links (canonical version) to the cleaner parameter-free URLs. We then updated the canonical tags to reflect these changes. All pages dropped at least a few ranking positions and now Moz shows both the new page ranking slightly lower in results pages and the old page still in the index. I feel like I'm splitting value between the two page versions until the old one disappears... is there a way to consolidate this quickly?
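If the old dynamic URLs are still being recrawled, it's worth confirming the redirect really fires for the parameterised versions. Below is a hypothetical .htaccess sketch only; the URL pattern, parameter name, and target are made up for illustration, since the real ones aren't given in the question.

```
# Sketch: 301 an old dynamic URL (/product.php?id=123) to its clean replacement,
# dropping the query string (QSD needs Apache 2.4+; requires mod_rewrite)
RewriteEngine On
RewriteCond %{QUERY_STRING} (^|&)id=123(&|$)
RewriteRule ^product\.php$ /products/widget/ [R=301,L,QSD]
```

The old URL should then answer with a single 301 hop straight to the clean URL, and the canonical tag on the clean page should point to itself.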
Technical SEO | ShawnW -
How to delete a specific URL?
I just ran crawl diagnostics and am trying to delete pages such as the "oops, that page can't be found" or 404 (Not Found) error response pages. Can anyone help?
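If the goal is simply to get dead URLs dropped from crawl reports and the index faster, one option (a sketch only, assuming Apache and a made-up path) is to answer them with 410 Gone instead of 404, which Google generally treats as a slightly stronger removal signal.

```
# Sketch: mark a removed page as permanently gone (410) via mod_alias
Redirect gone /old-page/
```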
Technical SEO | sawedding -
Blocked URLs by robots.txt
In Google Webmaster Tools it shows me 10,936 URLs blocked by robots.txt, which is very strange: in the "Index Status" section it shows that robots.txt has been blocking many URLs since April 2012. You can see it more precisely in the image attached (chart from WMT). I cannot explain why I have blocked URLs, because I have nothing in my robots.txt. My robots.txt is just this: User-agent: *
I thought I was penalized by Penguin in April 2012 because I am constantly losing visitors, now down over 40%. Could it be a different penalty? Any help is welcome because I'm already so saturated. Mera (attachment: robotstxt.jpg)
Technical SEO | meralucian37
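For comparison with the robots.txt quoted above: the conventional "allow everything" file uses an explicit empty Disallow line, while a single extra slash blocks every URL on the site, so it's worth checking exactly what Googlebot fetches at /robots.txt. The two variants below are alternative files, not one file.

```
# Variant A -- blocks nothing (the usual allow-all robots.txt):
User-agent: *
Disallow:

# Variant B -- a single extra slash blocks every URL on the site:
User-agent: *
Disallow: /
```
-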
Updating content on URL or new URL
Hi Mozzers, We are an event organisation. Every year we produce around 350 events. All the events are on our website, and a lot of these events are held every year, so I have a URL like www.domainname.nl/eventname. So what would you do? This URL has some inbound links, some social mentions, and so on. If the event will be held again in 2013, would it be better to update the content on this URL or create a new one? I would keep this URL and update it, because of the link value and because it is already indexed and ranking for the desired keyword for that event. Cheers, Ruud
Technical SEO | RuudHeijnen -
Can you 404 any form of URL?
Hi seomozzers,
http://ex.com/user/login?destination=comment%2Freply%2F256%23comment-form
http://ex.com/user/login?destination=comment%2Freply%2F258%23comment-form
http://ex.com/user/login?destination=comment%2Freply%2F242%23comment-form
http://ex.com/user/login?destination=comment%2Freply%2F257%23comment-form
http://ex.com/user/login?destination=comment%2Freply%2F260%23comment-form
http://ex.com/user/login?destination=comment%2Freply%2F225%23comment-form
http://ex.com/user/login?destination=comment%2Freply%2F251%23comment-form
http://ex.com/user/login?destination=comment%2Freply%2F176%23comment-form
These are duplicate content, and the canonical version is http://www.ex.com/user (the login and password page of the website). Since there were multiple other duplicates, which have mostly been resolved with 301s, I figured that all of the "LOGIN" URLs above should be 404'd, since they don't carry any authority, and 301ing them wouldn't be the best solution because too many 301s can slow down the website. But a member of the dev team said: "Looks like all the URLs requested to '404 redirect' are actually the same page, http://ex.com/user/login. The only part of the URL that changes is the variables after the '?'. I don't think you can (or it is highly not recommended to) make 404 pages display for variables in a URL." So my question is: I am not sure what he means by that, and is it really better not to 404 these? Thanks
Technical SEO | Ideas-Money-Art
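For the login-URL question above: rather than trying to 404 each query-string variation, one common alternative (a sketch only, using the /user/login path from the examples) is to stop the parameterised URLs from being crawled at all, since a Disallow rule matches every URL whose path begins with that prefix, whatever the query string.

```
# robots.txt sketch: stop crawling of the login page and every
# /user/login?destination=... variation with a single rule
User-agent: *
Disallow: /user/login
```

If crawling should continue instead, a rel=canonical on the login page pointing to http://www.ex.com/user (the canonical the question already names) is the other usual option.
-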
URL Structure Question
Hey folks, I have a weird problem and currently no idea how to fix it. We have a lot of pages showing up as duplicates although they are the same page; the only difference is the URL structure. They seem to show up like this: http://www.example.com/page/ and http://www.example.com/page. What would I need to do to force the URLs into one format or the other, to avoid having that one page counted as two? The same issue pops up with upper and lower case: http://www.example.com/Page and http://www.example.com/page. Is there any solution to this, or would I need to forward them with 301s or similar? Thanks, Mike
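Both kinds of duplicates are usually consolidated with site-wide 301 rules rather than page-by-page redirects. The sketch below assumes Apache; the lowercase part needs a RewriteMap, which can only be declared in the main server or virtual-host config, not in .htaccess.

```
# Server/virtual-host config sketch (assumes mod_rewrite)

# Internal lowercasing map (must live in server or vhost config)
RewriteMap lowercase int:tolower

RewriteEngine On

# 301-redirect any URL containing uppercase letters to its lowercase form
RewriteCond %{REQUEST_URI} [A-Z]
RewriteRule ^(.*)$ ${lowercase:$1} [R=301,L]

# 301-redirect /page/ to /page (skip real directories so they keep their slash)
RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} !-d
RewriteRule ^/(.+)/$ /$1 [R=301,L]
```

An uppercase URL with a trailing slash will pass through two redirect hops here; that's acceptable for a sketch, but the two rules can be combined if single-hop redirects matter.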
Technical SEO | Malarowski