Allow only Rogerbot, not Googlebot or other undesired access
-
I'm in the middle of site development and wanted to start crawling my site with Rogerbot, but prevent Googlebot and similar crawlers from accessing it.
Currently my site is protected with a login (a basic Joomla offline site; username and password required), so I thought a good solution would be to remove that limitation and use .htaccess to password-protect it for all users except Rogerbot.
Reading here and there, it seems that practice is not really recommended, as it could open security holes: any other user could see the allowed agents and emulate them. OK, maybe you'd need to be a hacker/cracker, or an experienced developer, to get that info, but I was unable to find clear information on how to proceed securely.
The other option was to keep using Joomla's access restriction for everyone, again, except Rogerbot. I'm still not sure how feasible that would be.
Mostly, my question is: how do you work on your site before you want it indexed by Google or similar, whether or not you use a CMS? Is there some other way to do it?
I would love to have my site ready and crawled before launching it, and avoid fixing issues afterwards... Thanks in advance.
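For reference, the user-agent exception described above is usually done with a small .htaccess rule layered on top of basic auth. This is only a sketch: it assumes Apache 2.4 syntax, the `.htpasswd` path and realm name are placeholders, and, as noted, the User-Agent header is trivially spoofable, so this keeps out casual crawlers, not anyone determined:

```apache
# Password-protect the whole site, but let requests whose
# User-Agent contains "rogerbot" through without credentials.
# NOTE: User-Agent strings are trivially spoofable; this is not
# real access control. Apache 2.4 syntax assumed.
SetEnvIfNoCase User-Agent "rogerbot" allow_rogerbot

AuthType Basic
AuthName "Site under development"
AuthUserFile /path/to/.htpasswd

<RequireAny>
    Require valid-user
    Require env allow_rogerbot
</RequireAny>
```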
-
Great, thanks.
With those two recommendations I have more than enough for the next crawl. Thank you both!
-
Hi, thanks for answering.
Well, it looks doable. I'll try to do it on the next scheduled crawl, trying to minimize the exposed time.
By the way, your idea seems very compatible with my first approach: maybe I could also allow Rogerbot through .htaccess, limiting the others, and only for that day remove the username/password restriction (from Joomla) and leave only the .htaccess limitation. (I know I'm maybe a bit paranoid; I just want to be sure to minimize any side effects...)
*Maybe the ability to access restricted sites would be a good feature for Moz...
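As an aside, the robots.txt equivalent of "allow only Rogerbot" would look like the sketch below. Keep in mind this is purely advisory: only well-behaved bots honor robots.txt, so it is a courtesy request, not access control like the password approaches discussed in this thread:

```txt
# Allow Moz's crawler (an empty Disallow means "allow everything")
User-agent: rogerbot
Disallow:

# Ask all other crawlers to stay out entirely
User-agent: *
Disallow: /
```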
-
Hi,
I ran into a similar issue while we were redesigning our site. This is what we did: we unblocked our site (we also had a username and password to keep Google from indexing it) and added the link to a Moz campaign. We were very careful not to share the development URL or put it anywhere Google might find it quickly; remember, Google finds links by following other links. We did not submit the development site to Google Webmaster Tools or Google Analytics. Then we watched and waited for the Moz report to come in, and when it did, we blocked the site again.
Hope this helps
Carla
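If you do go the robots.txt route rather than block/unblock with passwords, you can sanity-check the rules locally before exposing the site. A minimal sketch using Python's standard-library parser (the rules and URLs here are illustrative, not Moz's official guidance):

```python
# Verify that a robots.txt blocking everyone except rogerbot
# parses the way we expect, using only the Python stdlib.
from urllib import robotparser

rules = """\
User-agent: rogerbot
Disallow:

User-agent: *
Disallow: /
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# Rogerbot is allowed everywhere; everyone else is asked to stay out.
print(rp.can_fetch("rogerbot", "https://example.com/any-page"))   # True
print(rp.can_fetch("Googlebot", "https://example.com/any-page"))  # False
```

In production you would point `RobotFileParser.set_url()` at the live file instead of parsing a string, but checking the rules offline first avoids publishing a typo that blocks the wrong bot.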