Blocked by meta-robots, but there is no robots.txt file
-
OK, I'm a little frustrated here. I changed the privacy setting in a WordPress website so Google can index it, then waited a week for the next weekly crawl, but I still get the same problem: blocked by meta-robots, noindex, nofollow. Yet I don't see a robots.txt file anywhere, and the privacy setting in this WordPress site is set to allow search engines to index the site. The website is www.marketalert.ca
What am I missing here? Why can't I get the rest of the website indexed, and is there a faster way to test this than waiting another week just to find out it didn't work again?
-
The .htaccess file is in place, redirecting www to non-www, so I don't see what else I could do with that. I forgot to mention the website was recently overhauled by someone else, and they're having me help with the SEO; I'm not sure if that has anything to do with it. It looks like the .htaccess rule should be reversed so that non-www points to www, which has more value.
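For anyone following along, reversing that redirect is usually a small mod_rewrite change. This is only a sketch, assuming Apache with mod_rewrite enabled and no conflicting rules elsewhere in the file:

```apache
# Send non-www requests to the www hostname with a permanent (301) redirect
RewriteEngine On
RewriteCond %{HTTP_HOST} ^marketalert\.ca$ [NC]
RewriteRule ^(.*)$ http://www.marketalert.ca/$1 [R=301,L]
```

The existing www-to-non-www rule would need to be removed first, or the two rules will chase each other in a redirect loop.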
-
The issue might be the forwarding from www.yourdomain.ca to yourdomain.ca
Look at http://www.opensiteexplorer.org/pages?site=marketalert.ca%2F
and here: http://www.opensiteexplorer.org/pages?site=www.marketalert.ca%2F
Some pages are indexed with www and others without www; this is your main issue.
Recommendations:
- Revisit the .htaccess file, or wherever the redirect has been set (DNS, etc.).
- Choose one version, with www or without, and stick to it.
- Revisit your external links and update them to point to the version you chose.
- Create a new sitemap and resubmit it to the search engines.
-
I ran the SEO web crawler and it has already finished; it successfully crawled all pages. I still have to wait another week for the main campaign to update and see results there, but I believe it may work now too.
I guess I solved my own problem after being directed to robots.txt by Jim. I found that the WordPress XML sitemap creator plugin for SEO was the problem, because it created a virtual robots.txt file, which sent me on a wild goose chase looking for a robots.txt file that didn't exist on disk. Creating a real robots.txt file allowing all seems to be the solution, in case anyone else has this same problem.
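For anyone hitting the same thing, an allow-all robots.txt is short. Placed at the web root, something like this (the Sitemap line is optional but worth adding) should do it:

```
User-agent: *
Disallow:

Sitemap: http://marketalert.ca/sitemap.xml
```

An empty Disallow line means nothing is blocked, and a physical file at the root is served before WordPress's virtual one gets a chance to answer, which is why the real file fixed it here.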
-
If you can, follow up either way - happy to help you get it debugged!
-
I was able to update my sitemap.xml with Google Webmaster Tools, no problem. I'm not 100% confident, though, that that means the entire site is searchable by the spiders. I guess I'll know for sure in a few days at most.
-
I agree with Jim. Update your sitemap.xml files with Google Webmaster Tools. That will also help you identify problems you might be missing.
-
I've done some more looking into it, and it seems to be a problem when WordPress uses the XML sitemap generator plugin. It creates a virtual robots.txt file, which is why I couldn't find the robots.txt file. Apparently the only fix is to replace it with an actual robots.txt file forcing it to allow all.
I just replaced the robots.txt file with a real one allowing all. SEOmoz estimates a few days for a test site crawl, and it's another 7 days before the next scheduled crawl. I'd kind of like to find out sooner if it's not going to work. There must be a faster test. I don't need a detailed test, just a basic one that says, yep, we can see this many pages, or something like that.
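One quick sanity check that doesn't wait for any crawler: parse the robots.txt with Python's standard-library robot parser and ask whether a sample URL is fetchable. A minimal sketch (the file contents are pasted in rather than fetched, so swap in whatever your server actually serves):

```python
from urllib.robotparser import RobotFileParser

# Paste in the contents served at http://marketalert.ca/robots.txt;
# this is the allow-all version described above.
robots_txt = """User-agent: *
Disallow:
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# An empty Disallow means every path is fetchable by every crawler.
print(parser.can_fetch("Googlebot", "http://www.marketalert.ca/index.html"))
```

This prints True for an allow-all file; to test the deployed version, replace the paste-in with parser.set_url("http://marketalert.ca/robots.txt") followed by parser.read(). Googlebot applies the same rules, so a True here is a good early sign.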
-
Hi,
Your robots.txt file is located here: http://marketalert.ca/robots.txt, which is the root of your website directory.
This is the actual location of your sitemap file: http://marketalert.ca/sitemap.xml. Does Google Webmaster Tools show any issues, such as the sitemap file not being found?
You might need to resubmit the sitemap file if there are any changes, of course with the updated version of your site.
Hope this helps.
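If a fresh sitemap ever has to be built by hand, the minimal valid form is small. A sketch with one illustrative URL (a real sitemap lists every indexable page):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.marketalert.ca/</loc>
  </url>
</urlset>
```

Since the www version is the one being kept here, every loc entry should use the www hostname so the sitemap and the redirects agree.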
Related Questions
-
Robots.txt and Magento
Hi, I am working on getting my robots.txt up and running, and I'm having lots of problems with the robots.txt my developers generated: www.plasticplace.com/robots.txt. I ran it through a syntax-checking tool (http://www.sxw.org.uk/computing/robots/check.html), and this is what the tool came back with: http://www.dcs.ed.ac.uk/cgi/sxw/parserobots.pl?site=plasticplace.com. There seem to be many errors in the file. Additionally, I looked at our robots.txt in WMT, and it said the crawl was postponed because the robots.txt is inaccessible. What does that mean? A few questions:
1. Is there a need for all the lines of code that have the "#" before them? I don't think it's necessary, but correct me if I'm wrong.
2. Why are we blocking so many things on our website? The robots can't get past anything that requires a password to access anyhow, but again, correct me if I'm wrong.
3. Is there a reason it can't just look like this?
User-agent: *
Disallow: /onepagecheckout/
Disallow: /checkout/cart/
I do understand that Magento has certain folders that you don't want crawled, but is this necessary, and why are there so many errors?
Technical SEO | EcomLkwd
-
Are nofollow, noindex meta tags healthy for this particular situation?
Hi mozzers, I am conducting an audit for a client and found over 200 instances of meta tags (nofollow, noindex). These were implemented essentially on blog category pages and tag pages, not blog posts, which is a good sign. I believe the tagged URLs aren't something to worry about, since they create tons of duplicates. The category pages, I feel, fall in the same basket as the tags, but I am not positive about that. Can someone tell me if these are fine to have noindex, nofollow? Also, on the website subnav related to the category pages mentioned previously, the webmaster has implemented noindex, follow (screenshot below), which seems OK to me; am I right? Thanks 8egLLbo.png?1
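For anyone comparing the two tags mentioned here, the difference is one token in the head of the page. Illustrative markup, not taken from the site in question:

```html
<!-- Keep the page out of the index but still follow its links -->
<meta name="robots" content="noindex, follow">

<!-- Keep the page out of the index and ignore its links -->
<meta name="robots" content="noindex, nofollow">
```

noindex, follow is the usual choice for category and tag archives: the duplicate pages stay out of the index while link equity still flows through them to the posts.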
Technical SEO | Ideas-Money-Art
-
Magento Robots & overly dynamic URL-s
How can I block all URLs on a Magento store that have 2 or more dynamic parameters in them, since all the parameters have the attribute name in them and not some uniform ID? Would something like Disallow: /?&* work, since the only thing that is constant throughout all the custom parameters is that they are separated with "&"? Thanks 🙂
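Google and Bing (though not every crawler) honor * wildcards in robots.txt, so a rule keyed on the & that only appears once a second parameter is added is one way to sketch this; an illustrative fragment, worth verifying in Webmaster Tools' robots.txt tester before relying on it:

```
User-agent: *
# Block any URL whose query string contains an & (i.e. two or more parameters)
Disallow: /*?*&
```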
Technical SEO | tilenkrivec
-
Matching C Block
Hi guys, we have 2 sites that are in the same niche and competing for the same keywords. The sites are on separate domains; one is UK and one is .com. They have their own IPs, but both have the same C block. We have noticed that when the rankings for one site improve, the other's drop. Could the C block be causing this?
Technical SEO | EwanFisher
-
How to block "print" pages from indexing
I have a fairly large FAQ section and every article has a "print" button. Unfortunately, this is creating a page for every article which is muddying up the index - especially on my own site using Google Custom Search. Can you recommend a way to block this from happening? Example Article: http://www.knottyboy.com/lore/idx.php/11/183/Maintenance-of-Mature-Locks-6-months-/article/How-do-I-get-sand-out-of-my-dreads.html Example "Print" page: http://www.knottyboy.com/lore/article.php?id=052&action=print
Technical SEO | dreadmichael
-
Duplicate Meta Description in GWMT
We've just discovered that there are multiple duplicate URLs indexed for a site that we're working on. It seems that when new versions of the site were developed in the last couple of years, new page names and URL structures were used. All of these show up as Duplicate Meta Descriptions in Google's WMT, which is not surprising, as they are basically the same page with the same content, just sitting on different page names/URLs. This is an example of the situation, where URL 5 is the current version. Note: all the others are still live and resolve, although they are not linked to from the current site.
URL 1: www.example.com/blue-tshirts.html (Version 1 - January 2010)
URL 2: www.example.com/blue-t-shirts.html (Version 2 - July 2010)
URL 3: www.example.com/blue_t_shirts.html (Version 3 - November 2010)
URL 4: www.example.com/buy/blue_tshirts.html (Version 4 - January 2011)
URL 5: www.example.com/buy/bluetshirts.html (Version 5 - April 2011)
Presumably, this is a clear case of duplicate content. QUESTION: In order to solve it, shall we 301 all of the previous URLs to the current one, i.e. redirect URLs 1-4 to URL 5? Or should some of them be noindexed? To complicate matters, there is pagination on most of them. For example:
URL 1: www.example.com/blue-tshirts.html (Version 1 - January 2010)
URL 1a: www.example.com/page-1/blue-tshirts.html
URL 1b: www.example.com/page-2/blue-tshirts.html
URL 1c: www.example.com/page-3/blue-tshirts.html
URL 4: www.example.com/buy/blue_tshirts.html
URL 4a: www.example.com/buy/page-1/blue_tshirts.html
URL 4b: www.example.com/buy/page-2/blue_tshirts.html
URL 4c: www.example.com/buy/page-3/blue_tshirts.html
URL 5: www.example.com/buy/bluetshirts.html
URL 5a: www.example.com/buy/page-1/bluetshirts.html
URL 5b: www.example.com/buy/page-2/bluetshirts.html
URL 5c: www.example.com/buy/page-3/bluetshirts.html
Since URL 5 is the current site, we are going to 'noindex, follow' URLs 5a, 5b and 5c, which is what we understand to be the correct thing to do for paginated pages.
QUESTION: What shall we do with URLs 1a, 1b and 1c? Should we apply the same "noindex, follow", or should they be 301'd to their respective counterparts in 5a, 5b and 5c?
QUESTION: In the same way, since URL 4 is the version just before the current live Version 5, does it make a difference whether the paginated pages (i.e. 4a, 4b and 4c) are noindexed or 301'd?
Thanks in advance for all responses and suggestions; it's greatly appreciated.
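One way to express the "301 everything old to URL 5" option is with Apache Redirect directives. A sketch against the example paths above (a real site would map its actual URLs, and old paginated pages can be pointed at their paginated counterparts the same way):

```apache
# Old single-page variants all point at the current URL 5
Redirect 301 /blue-tshirts.html     /buy/bluetshirts.html
Redirect 301 /blue-t-shirts.html    /buy/bluetshirts.html
Redirect 301 /blue_t_shirts.html    /buy/bluetshirts.html
Redirect 301 /buy/blue_tshirts.html /buy/bluetshirts.html

# Paginated old pages map to their counterparts, e.g. URL 1a -> URL 5a
Redirect 301 /page-1/blue-tshirts.html /buy/page-1/bluetshirts.html
```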
Technical SEO | orangechew
-
Blocking other engines in robots.txt
If the primary target of your business is not in China, is there any benefit to blocking Chinese search robots in robots.txt?
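For reference, blocking a single engine while leaving everything else open is a two-rule file. A sketch using Baidu's documented user-agent token (any well-behaved crawler's token can be swapped in):

```
# Block Baidu's crawler entirely
User-agent: Baiduspider
Disallow: /

# Everyone else may crawl everything
User-agent: *
Disallow:
```

Worth noting that robots.txt is only honored by well-behaved bots, so this won't save bandwidth from crawlers that ignore it.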
Technical SEO | Romancing
-
Does RogerBot read URL wildcards in robots.txt
I believe that the Google and Bing crawl bots understand wildcards in the "Disallow" URLs in robots.txt; does Roger?
Technical SEO | AspenFasteners