Block Baidu crawler?
-
Hello!
One of our websites receives a large amount of traffic from the Baidu crawler. We do not have any Chinese content or do any business with China, since our market is the UK.
Is it a good idea to block the Baidu crawler in robots.txt, or could it have any adverse effect on our site's SEO?
What do you suggest?
-
I'm also trying to get this done, not sure if it's doable on Volusion (don't use them).
Yandex actually crawls more than Baidu for me, and neither benefits me at all (which stings when you pay for the bandwidth).
-
Thanks for that, I have just looked it up; I didn't realise this was such a common problem.
-
Hi
Further to Ally's answer, in my experience Baidu tends to ignore robots.txt, so just do it on the server side (something like the sketch below).
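This is only a rough sketch of what a server-side block could look like, assuming Apache with mod_rewrite enabled and the rules placed in the site's .htaccess; the user-agent strings are just the commonly published bot names, so check your own access logs before relying on them:

# Return 403 Forbidden to requests identifying as Baidu or Yandex crawlers
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} (Baiduspider|YandexBot) [NC]
RewriteRule .* - [F,L]

Nginx and other servers have equivalent ways to deny by user agent, so adapt as needed.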
S
-
Thanks for your answer, Ally, I will now block Baidu.
-
Hi Stefan,
You can block the Baidu crawler in robots.txt.
There should be no adverse effect on your site: China is not a market you are targeting, so the traffic has no long-term benefit to your business, and blocking the crawler means your server has less unnecessary load to deal with.
You can block the spiders in the following ways:
- Robots.txt (below is code for Baidu)
User-agent: Baiduspider
User-agent: Baiduspider-video
User-agent: Baiduspider-image
Disallow: /
- Blocking spiders via the Apache configuration file (httpd.conf)
See the article below for more details on this method:
http://searchenginewatch.com/article/2067357/Bye-bye-Crawler-Blocking-the-Parasites
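To give a rough idea of what that httpd.conf block could look like (a sketch only, assuming an Apache 2.2-style setup with mod_setenvif available, placed inside the relevant virtual host or Directory section):

# Flag requests whose User-Agent header identifies a Baidu crawler
SetEnvIfNoCase User-Agent "Baiduspider" block_bot
# Refuse flagged requests and allow everyone else through
Order Allow,Deny
Allow from all
Deny from env=block_bot

On Apache 2.4 the Order/Allow/Deny lines are replaced by Require directives (for example "Require not env block_bot" inside a RequireAll block), so check which version you are running first.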
You may also want to check out:
I hope this helps,
Ally