Robots.txt Disallow Backslash - Is it the right command?
-
A bit skeptical here: due to dynamic URLs and some other linkage issues, Google has crawled URLs containing backslash and double-quote characters, e.g.:
www.xyz.com/\/index.php?option=com_product
www.xyz.com/\"/index.php?option=com_product
Now %5C is the encoded version of \ (backslash) and %22 is the encoded version of " (double quote).
I need to know about this command:
User-agent: *
Disallow: \
As I am disallowing all backslash URLs through this, will it remove only the backslash URLs, which are duplicates, or block the entire site?
-
Thanks, you seem to be lucky for me. After almost 2 months I have got the code to make all these encoded URLs redirect correctly. Finally, now if one types
http://www.mycarhelpline.com/\"/index.php?option=com_latestnews&view=list&Itemid=10
they are redirected via a 301 to the correct URL
http://www.mycarhelpline.com/index.php?option=com_latestnews&view=list&Itemid=10
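The code itself wasn't shared in the thread, but for anyone hitting the same problem, a minimal Apache .htaccess sketch of this kind of redirect could look like the following. It assumes mod_rewrite is available and relies on THE_REQUEST, which still contains the raw %5C / %22 sequences before the server decodes them:

RewriteEngine On
# Catch request paths whose first segment consists of encoded backslashes
# (%5C) and/or double quotes (%22), capture everything after it, and
# send a 301 to the clean URL.
RewriteCond %{THE_REQUEST} \s/(?:%5C|%22)+/(\S*)\s [NC]
RewriteRule ^ /%1 [R=301,L]

Because the %1 capture already includes the original query string, parameters like option, view and Itemid survive the redirect.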
-
Hello Gagan,
I think the best way to handle this would be using the rel canonical tag or rewriting the URLs to get rid of the parameters and replace them with something more user-friendly.
The rel canonical tag would be the easiest of those two. I notice the version without the encoding (e.g. http://www.mycarhelpline.com/index.php?option=com_latestnews&view=list&Itemid=10) has a rel canonical tag that correctly references itself as the canonical version. However, the encoded URLs (e.g. the %5C%22 version, which the browser displays as http://www.mycarhelpline.com/\"/index.php?option=com_latestnews&view=list&Itemid=10) do NOT have a rel canonical tag.
If the version with the backslash had a rel canonical tag stating that the following URL is canonical it would solve your issue, I think.
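Concretely, the encoded version of the page would need a tag like this in its <head>, pointing at the clean URL (a sketch; note that ampersands are written as &amp; inside HTML attributes):

<link rel="canonical" href="http://www.mycarhelpline.com/index.php?option=com_latestnews&amp;view=list&amp;Itemid=10" />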
Canonical URL:
http://www.mycarhelpline.com/index.php?option=com_latestnews&view=list&Itemid=10
-
Sure. If I show you some URLs, they were crawled as follows.
Sample incorrect URLs, crawled and reported as duplicates in Google Webmaster Tools and Moz too:
http://www.mycarhelpline.com/\"/index.php?option=com_latestnews&view=list&Itemid=10
http://www.mycarhelpline.com/\"/index.php?option=com_newcar&view=category&Itemid=2
Correct URLs:
http://www.mycarhelpline.com/index.php?option=com_latestnews&view=list&Itemid=10
http://www.mycarhelpline.com/index.php?option=com_newcar&view=search&Itemid=2
What we found online:
Since URLs often contain characters outside the ASCII set, the URL has to be converted into a valid ASCII format. URL encoding replaces unsafe ASCII characters with a "%" followed by two hexadecimal digits. URLs cannot contain spaces.
%22 is the encoded form of " (double quote) and %5C of \ (backslash).
We intend to remove these duplicates that have %22 and %5C in them.
Many thanks
-
I am not entirely sure I understood your question as intended, but I will do my best to answer.
I would not put this in my robots.txt file because it could possibly be misunderstood as a forward slash, in which case your entire domain would be blocked:
Disallow: \
We can possibly provide you with some alternative suggestions on how to keep Google from crawling those pages if you could share some real examples.
It may be best to rewrite/redirect those URLs instead, since they don't seem to be the canonical version you intend to present to the user.
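If robots.txt is still wanted as a stopgap, a rule anchored on the encoded prefix would be far safer than a bare backslash. This is only a sketch - how crawlers match encoded vs. decoded characters varies, so verify it in Google's robots.txt tester before relying on it:

User-agent: *
# Block only paths whose first segment starts with the encoded backslash,
# leaving the rest of the site crawlable.
Disallow: /%5C

Bear in mind that robots.txt only stops crawling; it does not drop URLs that are already indexed, so the 301 redirects or canonical tags discussed above are still needed for cleanup.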
Related Questions
-
What happens to crawled URLs subsequently blocked by robots.txt?
We have a very large store with 278,146 individual product pages. Since these are all various sizes and packaging quantities of fewer than 200 product categories, my feeling is that Google would be better off making sure our category pages are indexed. I would like to block all product pages via robots.txt until we are sure all category pages are indexed, then unblock them. Our product pages rarely change and have no ratings or product reviews, so there is little reason for a search engine to revisit a product page. The sales team is afraid blocking a previously indexed product page will result in it being removed from the Google index, and would prefer to submit the categories by hand, 10 per day, via requested crawling. Which is the better practice?
Intermediate & Advanced SEO | AspenFasteners
-
Setting up the right geo-targeting/language-targeting settings without breaking the SEO
Hello the great Moz Community! Gev here from BetConstruct, a leading gaming and betting software provider in the world. Our company website is performing great on the SERPs. We have 20+ dedicated pages for our 20+ softwares, an event section, and different landing pages for different purposes. We also run a blog section, a press section, and more. Our website's default language is EN. 4 months ago we opened the /ru and /es versions of the website. I have set the correct hreflang tags, redirects, etc., and generated correct sitemaps, so the translated versions started to rank normally.
Now our marketing team is requesting different stuff to be done on the website and I would love to discuss this with you before implementing. For example: they have created a landing page under the URL betconstruct.com/usa-home and want me to set that page as the default website page (i.e. homepage) if the user visits our website from a US-based IP. This can be done in 2 different ways:
1. I can set the /usa-home page as default in my CMS in case the visitor is from the US, and the address will be just betconstruct.com (without /usa-home). In this case the same URL (betconstruct.com) will serve different content, but only for the homepage.
2. I can check the visitor's IP and, if he is from the US, redirect him to betconstruct.com/usa-home. In this case the user can click on the logo, go to the homepage betconstruct.com, and see the original homepage.
Both cases seem dangerous: in the 1st case I am not sure what Google will think when it sees a different homepage from different IPs, and in the 2nd case I am not sure what that redirection should be (301, 302, 303, etc.), because Google will think I don't have a homepage and that my homepage redirects to a secondary page like /usa-home.
After digging a lot I realised that my team is requesting a strange case: they want both language targeting (/es, /ru) and country targeting (which should ideally be like /us), but instead of creating /us, they want it in place of /en, only for the USA. Please let me know the best way to implement this. Should we create a separate version of our website for the USA under /us/* URLs? In this case, is it OK to have /en as a language version and /us as country targeting? What hreflangs should we use? I know this is a rare case and it will be difficult for you to understand, but any help will be much appreciated! Thank you!
Best,
Gev
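For what it's worth, the usual pattern for combining language defaults with one country-targeted variant is a set of hreflang annotations; a sketch under the assumption that a /us/ section were created (these URLs are illustrative, not BetConstruct's live setup):

<link rel="alternate" hreflang="en" href="https://betconstruct.com/" />
<link rel="alternate" hreflang="en-us" href="https://betconstruct.com/us/" />
<link rel="alternate" hreflang="ru" href="https://betconstruct.com/ru/" />
<link rel="alternate" hreflang="es" href="https://betconstruct.com/es/" />
<link rel="alternate" hreflang="x-default" href="https://betconstruct.com/" />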
Intermediate & Advanced SEO | betconstruct
-
Search Results Pages Blocked in Robots.txt?
Hi, I am reviewing our robots.txt file and wondered if search results pages should be blocked from crawling. We currently have this in the file: /searchterm* Is it a good thing for SEO?
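For context, robots.txt rules are prefix matches, so the trailing * in that rule is redundant; assuming the search URLs really do start with /searchterm, the minimal form would be:

User-agent: *
# Prefix match: covers /searchterm, /searchterm?q=..., /searchterm/anything
Disallow: /searchterm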
Intermediate & Advanced SEO | BeckyKey
-
Question about Syntax in Robots.txt
So if I want to block any URL from being indexed that contains a particular parameter, what is the best way to put this in the robots.txt file? Currently I have
Disallow: /attachment_id
where "attachment_id" is the parameter. The problem is I still see these URLs indexed, and this has been in the robots.txt now for over a month. I am wondering if I should just do Disallow: attachment_id or Disallow: attachment_id= but figured I would ask you guys first. Thanks!
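For reference, major crawlers such as Googlebot and Bingbot support * as a wildcard in Disallow rules, so a parameter can be matched wherever it appears in the URL; a sketch to verify in Google's robots.txt tester:

User-agent: *
# * matches any run of characters, so this catches attachment_id= anywhere
# in the URL, not just at the start of the path.
Disallow: /*attachment_id=

Intermediate & Advanced SEO | DRSearchEngOpt
-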
The "webmaster" disallowed all ROBOTS to fight spam! Help!!
One of the companies I do work for has a Magento site. I am simply the SEO guy, and they work the website through some developers who hold access to their systems VERY tightly. Using Google Webmaster Tools I saw that the robots.txt file was blocking ALL robots. I immediately e-mailed out and received a long reply about foreign robots and scrapers slowing down the website. They told me I would have to provide a list of only the good robots to allow in robots.txt. Please correct me if I'm wrong, but isn't robots.txt optional? Won't a bad scraper or bot still bog down the site? Shouldn't that be handled in .htaccess or something different? I'm not new to SEO, but I'm sure some of you who have been around longer have run into something like this and could provide some suggestions or resources I could use to plead my case! If I'm wrong, please help me understand how we can meet both needs: allowing bots to visit the site while preventing the 'bad' ones. Their claim is the site is bombarded by tons and tons of bots that have slowed down performance. Thanks in advance for your help!
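The question's premise is right: robots.txt is purely advisory, and misbehaving bots simply ignore it, so bad-bot filtering belongs at the server level. A sketch of the standard Apache 2.4 approach (the user-agent strings are placeholders, not a vetted list):

# Flag requests whose User-Agent matches known-bad patterns...
SetEnvIfNoCase User-Agent "BadBot|EvilScraper" bad_bot
# ...then deny only those requests; everything else stays allowed.
<RequireAll>
    Require all granted
    Require not env bad_bot
</RequireAll>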
Intermediate & Advanced SEO | JoshuaLindley
-
Huge increase in server errors and robots.txt
Hi Moz community! Wondering if someone can help? One of my clients (an online fashion retailer) has been receiving a huge increase in server errors (500s and 503s) over the last 6 weeks, and it has got to the point where people cannot access the site because of the errors. The client has recently changed hosting companies to deal with this, and they have just told us they removed the DNS records once the name servers were changed; they have now fixed this and are waiting for the name servers to propagate again. These errors also correlate with a huge decrease in pages blocked by the robots.txt file, which makes me think someone has perhaps changed this and not told anyone... Anyone have any ideas here? It would be greatly appreciated! 🙂 I've been chasing this up with the dev agency and the hosting company for weeks, to no avail. Massive thanks in advance 🙂
Intermediate & Advanced SEO | labelPR
-
Will blocking URLs in robots.txt void out any backlink benefits? I'll explain...
OK... so I add tracking parameters to some of my social media campaigns but block those parameters via robots.txt. This helps avoid duplicate content issues (yes, I do also have correct canonical tags added), but my question is: does this cause me to miss out on any backlink magic coming my way from these articles, posts or links? Example URL: www.mysite.com/subject/?tracking-info-goes-here-1234 Canonical tag is: www.mysite.com/subject/ I'm blocking anything with "?tracking-info-goes-here" via robots.txt. The URL with the tracking info of course IS NOT indexed in Google, but IT IS indexed without the tracking parameters. What are your thoughts? Should I nix the robots.txt stuff since I already have the canonical tag in place? Do you think I'm getting the backlink "juice" from all the links with the tracking parameter? What would you do? Why? Are you sure? 🙂
Intermediate & Advanced SEO | AubieJon
-
Right SEO strategy for WordPress
Hello all, I am working on my SEO strategy for a WordPress site. I am trying to cover all my keywords in:
1.a) the page title, trying to keep the length under 70 characters
1.b) the page URL, trying to keep the length under 115 characters
My question is: should I try to focus all my keywords in both the page title and the URL path, or only in the page title as the SEOmoz guide suggests? I would go for a mixed strategy with my keywords in both the page title and the URL path, but I do not know if the search engines PAY MORE ATTENTION TO THE PAGE TITLE, so mixing 1.a) and 1.b) would mean I am losing keywords.
I am using the WordPress All in One SEO plugin. Do you recommend this or any other plugin? This plugin has 3 input fields:
a) Title tag
b) Description tag
c) Keywords
My questions here are:
a) whether these tags replace the standard settings of WP as described in point 1.a)
b) whether the description and title tags are META TAGS that are not taken into account in terms of SEO, but rather describe the content of the page to the customer
c) where the listed keywords are inserted in the page - in H1, H2, H3 and H4 tags?
My feeling after reading the SEOmoz guide is that this plugin is not providing any added value for SEO any more?
Thank you very much, best regards,
Antonio Alcocer
Intermediate & Advanced SEO | aalcocer2003