Login webpage blocked by robots
-
Hi, the SEOmoz crawl diagnostics report shows that this page:
www.tarifakitesurfcamp.com/wp-login.php is blocked (noindex, nofollow).
Is there any problem with that?
-
thanks!
-
Unless you have relevant information for your users on the login page (i.e. it's for your private use), it's probably a good idea not to index it!
-
Nope, that's perfectly fine, since that's your login page for WordPress.
If you're linking to the page from anywhere on your site (which you really shouldn't be), you could update the meta robots tag to (noindex, FOLLOW), but since it looks like the page has no links, it shouldn't be necessary.
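For reference, the tags being discussed look like this (a sketch of the tags themselves, not WordPress's exact markup):

```html
<!-- what wp-login.php typically outputs: kept out of the index, links not followed -->
<meta name="robots" content="noindex, nofollow">

<!-- alternative if the page is linked internally: still not indexed, but link equity flows -->
<meta name="robots" content="noindex, follow">
```

The second form only matters if crawlers actually reach the page through internal links; otherwise the default is fine.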
Related Questions
-
Login Page Outranking Homepage
Hi all, I have a subscription-based website with a members login. For our branded terms, this login page is outranking the homepage, and we're unsure what, if anything, to do about it. One suggestion was to deindex it, but the counter-argument was that we'd be taking away one of our spots on the first page. From a user-experience standpoint, I feel it would be best to deindex it and try to get the homepage ranked #1, since the login page is not optimised towards new visitors. Another suggestion was to just optimise the title and description to make it look a little nicer. I would love to hear your thoughts. Thanks!
On-Page Optimization | CupidTeam
Need suggestion: Should the user profile link be disallowed in robots.txt
I maintain a myBB-based forum here. The user profile links look something like this: http://www.learnqtp.com/forums/User-Ankur Now in my GWT, I can see many 404 errors for user profile links. This is primarily because we keep tight control over spam and auto-generated bot profiles: either our moderators or our spam-control software deletes such spammy member profiles periodically, but by then Google has already indexed them. I am wondering, would it be a good idea to disallow user profile links using robots.txt? Something like Disallow: /forums/User-*
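One detail worth noting: robots.txt Disallow rules are prefix matches, so the trailing * is unnecessary for parsers that don't support wildcards; `Disallow: /forums/User-` already covers every profile URL. A quick sketch with Python's standard-library parser (the URLs are the ones from the question):

```python
from urllib.robotparser import RobotFileParser

# Disallow is a prefix match, so no wildcard is needed for the profile URLs
rules = """\
User-agent: *
Disallow: /forums/User-
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

# Profile pages are blocked...
print(rp.can_fetch("*", "http://www.learnqtp.com/forums/User-Ankur"))      # False
# ...while the rest of the forum stays crawlable
print(rp.can_fetch("*", "http://www.learnqtp.com/forums/Thread-Example"))  # True
```

Keep in mind that disallowing crawling won't remove profiles Google has already indexed; it only stops them being recrawled.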
On-Page Optimization | AnkurJ
Using Robots Meta Tag on Review Form Pages
I have gone over this so many times and I just can't seem to get it straight, and I hope someone can help me out with a couple of questions. Right now, on the dynamically created pages generated by filters (located on the category pages), I am using rel="canonical" to point them to their respective category page. Should I also use the robots meta tag? Similarly, each product on my site has a review form on it and thus is getting indexed by Google. I have placed the same canonical tag on those pages as well, pointing them to the page with the review form on it. In the past I used robots.txt to block Google from the review pages, but this didn't really do much. Should I be using the robots meta tag on these pages as well? If I used the robots meta tag, should I use noindex, nofollow? Thanks in advance, Jake
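For context, the two tags in question would sit in the page's head like this (URLs hypothetical):

```html
<head>
  <!-- hypothetical filtered variant: /category?color=red -->
  <link rel="canonical" href="https://www.example.com/category/">
  <!-- keeps the variant out of the index, but crawlers still follow its links -->
  <meta name="robots" content="noindex, follow">
</head>
```

One caveat: many SEOs advise against combining rel="canonical" and noindex on the same page, since noindex can undermine the canonical hint; usually you pick one signal per page.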
On-Page Optimization | jake372
Can we list URLs on the website sitemap page which are blocked by robots.txt?
Hi, I need your help here. I have a website, and a few pages are created for specific countries (www.example.com/uk). I have blocked many country-specific pages via the robots.txt file. Is it advisable to list those URLs (blocked by robots.txt) on my website's sitemap page (the HTML sitemap page)? I really appreciate your help. Thanks, Nilay
On-Page Optimization | Internet-Marketing-Profs
70 Domain Names Point to 70 Nearly Identical Inner Webpages
I have a new SEO client; his website has never been optimized. There are 70 domain names involved with this one website. Each domain name points to an exact replica of the main page, other than the fact that a small content box has different info in it, and sometimes the header graphic is different. So, 70 webpages that are 5% different from each other and 5% different from the main page. How badly is this issue affecting this website's ability to rank well, and what is the best way to solve this?
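The usual fix for this is a site-wide 301 from every secondary domain to the main one, so the near-duplicates consolidate into a single ranking page instead of competing. A sketch for an Apache .htaccess, assuming mod_rewrite is enabled and using a hypothetical domain name:

```apache
RewriteEngine On
# Redirect any host other than the main domain to it, preserving the path
RewriteCond %{HTTP_HOST} !^www\.main-domain\.com$ [NC]
RewriteRule ^(.*)$ http://www.main-domain.com/$1 [R=301,L]
```

If some of the 70 domains target genuinely distinct keywords, those could instead be rebuilt with substantially unique content rather than redirected.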
On-Page Optimization | netsites
New CMS system - 100,000 old URLs - use robots.txt to block?
Hello. My website has recently switched to a new CMS. Over the last 10 years or so, we've used three different CMS systems on our current domain. As expected, this has resulted in lots of URLs. Up until this most recent iteration, we were unable to 301 redirect or use any page-level indexation techniques like rel="canonical". Using SEOmoz's tools and GWMT, I've been able to locate and redirect all pertinent, PageRank-bearing "older" URLs to their new counterparts. However, according to Google Webmaster Tools' 'Not Found' report, there are literally over 100,000 additional URLs out there it's trying to find. My question is: is there an advantage to using robots.txt to stop search engines from looking for some of these older directories? Currently, we allow everything, only using page-level robots tags to disallow where necessary. Thanks!
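If the old CMSs lived under their own directories, a directory-level disallow is compact (the paths below are hypothetical). Note the trade-off, though: robots.txt blocks crawling, not indexing, and once blocked, Google can never see a 301 or 404 on those URLs, so they may linger in reports longer:

```
User-agent: *
Disallow: /old-cms-2004/
Disallow: /old-cms-2008/
```

Letting the dead URLs return 404/410 and be recrawled is often the cleaner way to get them dropped for good.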
On-Page Optimization | Blenny
Does Google respect User-agent rules in robots.txt?
We want to use an inline linking tool (LinkSmart) to cross link between a few key content types on our online news site. LinkSmart uses a bot to establish the linking. The issue: There are millions of pages on our site that we don't want LinkSmart to spider and process for cross linking. LinkSmart suggested setting a noindex tag on the pages we don't want them to process, and that we target the rule to their specific user agent. I have concerns. We don't want to inadvertently block search engine access to those millions of pages. I've seen googlebot ignore nofollow rules set at the page level. Does it ever arbitrarily obey rules that it's been directed to ignore? Can you quantify the level of risk in setting user-agent-specific nofollow tags on pages we want search engines to crawl, but that we want LinkSmart to ignore?
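On the robots.txt side, user-agent groups are independent: a crawler obeys only the group that matches its own user-agent token, so a LinkSmart-only block doesn't restrict Googlebot at all. A sketch using Python's standard-library parser to check the behaviour (the "LinkSmart" agent token and URL are assumptions for illustration):

```python
from urllib.robotparser import RobotFileParser

# Each crawler obeys only the group matching its user-agent token;
# the "LinkSmart" token here is an assumption for illustration.
rules = """\
User-agent: LinkSmart
Disallow: /

User-agent: *
Disallow:
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

print(rp.can_fetch("LinkSmart", "http://example.com/article"))  # False
print(rp.can_fetch("Googlebot", "http://example.com/article"))  # True
```

Robots.txt rules like this are a lower-risk way to scope out a third-party bot than agent-targeted meta tags, since a typo in a meta tag name could silently apply to the wrong crawler.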
On-Page Optimization | lzhao
How can I reduce my webpage load time?
According to Google: 'On average, pages in your site take 4.6 seconds to load (updated on Apr 3, 2011). This is slower than 71% of sites. These estimates are of low accuracy (fewer than 100 data points). The chart below shows how your site's average page load time has changed over the last few months. For your reference, it also shows the 20th percentile value across all sites, separating slow and fast load times.' My website: http://ablemagazine.co.uk I've installed cache plugins and minify plugins and reduced the number of posts on my main page, but my website is still taking too long to load and I'm afraid I'm being penalised for it. Any tips?
On-Page Optimization | craven22