Robots.txt Help
-
I need help creating a robots.txt file.
Please let me know what to add to the file. Any real or working examples?
-
Michael, from what I can tell, your website is built using WordPress. We typically recommend installing the Yoast SEO plugin and using that, which will help with your robots.txt file. If you need more information, take a look here: https://yoast.com/wordpress-robots-txt-example/
Generally, most of your site won't need to be disallowed in the robots.txt file, unless you're using tags and categories on your site. Yoast typically disallows the proper directories for you.
One thing you need to be aware of is that you don't want to disallow the .css or .js files on your site. Many themes nowadays put those files in your wp-admin folder, which by default typically gets disallowed.
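For a WordPress site, a common pattern (and roughly what Yoast suggests) is to block the admin area while keeping admin-ajax.php crawlable, since themes and plugins load assets through it. The paths below are WordPress defaults and the domain is a placeholder; adjust to your install:

```text
User-agent: *
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php

Sitemap: http://www.yourwebsite.com/sitemap.xml
```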
-
This is the site I used to really get a good understanding of how to create a robots.txt file: http://www.robotstxt.org/
-
A very basic robots.txt file would look something like this:
User-agent: *
Disallow: /url-you-dont-want-indexed
Disallow: /another-url-you-dont-want-indexed
Sitemap: http://www.yourwebsite.com/sitemap.xml
Note that Disallow takes a path relative to the site root, not a full URL.
Hope that helps!
-
Include your sitemaps, and disallow the pages you don't want indexed: search pages, login pages, core admin files.
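If you want to sanity-check a robots.txt before deploying it, Python's standard urllib.robotparser can simulate what a compliant crawler would do. The rules and paths below are made-up examples following the advice above:

```python
from urllib import robotparser

# A sample robots.txt: disallow search and login pages,
# and advertise the sitemap. All paths are examples.
SAMPLE = """\
User-agent: *
Disallow: /search/
Disallow: /wp-login.php
Sitemap: http://www.example.com/sitemap.xml
"""

rp = robotparser.RobotFileParser()
rp.parse(SAMPLE.splitlines())

# Regular content stays crawlable; search and login pages do not.
print(rp.can_fetch("*", "http://www.example.com/blog/post"))       # True
print(rp.can_fetch("*", "http://www.example.com/search/widgets"))  # False
```

This only checks the standard prefix-matching rules; individual search engines may support extra syntax (wildcards, Allow precedence) beyond what robotparser models.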
Related Questions
-
What happens to crawled URLs subsequently blocked by robots.txt?
We have a very large store with 278,146 individual product pages. Since these are all various sizes and packaging quantities of fewer than 200 product categories, my feeling is that Google would be better off making sure our category pages are indexed. I would like to block all product pages via robots.txt until we are sure all category pages are indexed, then unblock them. Our product pages rarely change and have no ratings or product reviews, so there is little reason for a search engine to revisit a product page. The sales team is afraid that blocking a previously indexed product page will result in it being removed from the Google index, and would prefer to submit the categories by hand, 10 per day, via requested crawling. Which is the better practice?
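If the product URLs share a common prefix, a temporary block could be as small as the sketch below (the /product/ prefix and domain are hypothetical). Note that robots.txt blocks crawling, not indexing, so already-indexed product pages may linger in the index until recrawled:

```text
User-agent: *
Disallow: /product/

Sitemap: http://www.yourstore.com/sitemap.xml
```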
Intermediate & Advanced SEO | AspenFasteners1 -
Will these 301 redirects help me?
Hello! Recently, I found out about all the SEO advantages of 301 redirects. I had 3 websites that are now expired; their topic was Counter Strike 1.6 servers. All of these websites were registered 9 years ago and have a few good backlinks (from websites with 1%-3% spam score and DA 30+). Now I have one website that covers not only Counter Strike 1.6 but also many other Steam shooter games. If I revive these 3 old domains and 301 redirect them to my new one, will it help my SEO and increase my ranking on Google?
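On an Apache server, a site-wide 301 from an expired domain can be a one-liner in the old domain's .htaccess. This is a sketch assuming Apache with mod_alias enabled; the destination domain is a placeholder:

```apache
# Permanently redirect every path on the old domain to the new one,
# preserving the path suffix (mod_alias Redirect behavior).
Redirect 301 / https://www.your-new-site.com/
```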
Intermediate & Advanced SEO | Bonito19930 -
If I block a URL via the robots.txt - how long will it take for Google to stop indexing that URL?
Intermediate & Advanced SEO | Gabriele_Layoutweb0 -
Magento Help - Server Reset
Good morning! After rebooting a server, a Magento-based website reset itself, going back to December 2013. All changes to the site and orders dating up until yesterday (6/19/14) have disappeared. There are several folders on the root of the server that have files with yesterday's date, but we don't know how to bring everything back and restore. Any Magento or server experts out there ever face this issue or have any ideas or potential solutions? Thanks
Intermediate & Advanced SEO | Prime850 -
Massive URL blockage by robots.txt
Hello people, in May there was a dramatic increase in blocked URLs by robots.txt, even though we don't have that many URLs or crawl errors. You can view the attachment to see how it went up. The thing is, the company hasn't touched the text file since 2012. What might be causing the problem? Can this result in any penalties? Can indexation be lowered because of this?
Intermediate & Advanced SEO | moneywise_test0 -
Time sensitive: HELP! We are having a problem doing a 301 redirect.....what can we do instead?
Our website has dynamic URLs and we are moving to another server/platform. 301 redirects are looking like a highly unlikely solution. A 3rd-party company is handling the back end of the website, which they say works more like a "search engine" than a traditional website. Maybe that explains why they're having a hard time with the 301 redirects. Worst case scenario: we can't use 301 redirects. What else can we do? We are considering "Indicate your canonical (preferred) URLs by including them in a Sitemap," as Google describes here: http://support.google.com/webmasters/bin/answer.py?hl=en&answer=139066#2. I'm wondering if this method only applies to duplicate content, and what would happen once the old website results in a 404 page. HELP! We need to cross over to the new platform as soon as possible.
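If 301s truly aren't possible, a cross-domain rel=canonical on each old page is the usual fallback. The URL below is a placeholder, and keep in mind a canonical is a hint that Google may honor, not a directive like a 301:

```html
<!-- In the <head> of each old page, pointing at the new platform's URL -->
<link rel="canonical" href="https://www.new-platform-site.com/equivalent-page" />
```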
Intermediate & Advanced SEO | PatriotOutfitters810 -
Best practices for robots.txt -- allow one page but not the others?
So, we have a page, like domain.com/searchhere, but results are being crawled (and shouldn't be); results look like domain.com/searchhere?query1. If I block /searchhere?, will it block crawlers from the single page /searchhere (because I still want that page to be indexed)? What is the recommended best practice for this?
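Googlebot matches Disallow rules as path prefixes, so a rule ending in "?" blocks only the parameterized results pages, not the bare page. A sketch (not every crawler honors the "?" as precisely as Google does):

```text
User-agent: *
# Blocks /searchhere?query1, /searchhere?query2, ...
# but NOT /searchhere itself, since that URL has no '?'.
Disallow: /searchhere?
```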
Intermediate & Advanced SEO | nicole.healthline0 -
Not using a robot command meta tag
Hi SEOmoz peeps. I was doing some research on robots commands and found a couple of major sites that are not using them. If you check out the code for these: http://www.amazon.com http://www.zappos.com http://www.zappos.com/product/7787787/color/92100 http://www.altrec.com/ you will not find a meta robots command line. Of course you need the line for any noindex, nofollow, or noarchive pages. However, for pages you want crawled and indexed, is there any benefit to not having the line at all? Thanks!
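Crawlers treat a missing robots meta tag as "index, follow" by default, which is why large sites can safely omit it on indexable pages. The explicit equivalent, shown only for illustration, would be:

```html
<!-- Equivalent to omitting the tag entirely: the crawler default -->
<meta name="robots" content="index, follow">
```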
Intermediate & Advanced SEO | STPseo0