Block Googlebot from submit button
-
Hi,
I have a website where many searches are made by the googlebot on our internal engine. We can make noindex on result page, but we want to stop the bot to call the ajax search button - GET form (because it pass a request to an external API with associate fees).
So we want to stop the bot from crawling the form's submit action without noindexing the search page itself. The "nofollow" attribute doesn't seem to apply to a form's submit button.
Any suggestion?
-
Hey Olivier,
You could detect the user agent server-side and hide the button from Googlebot. The difference isn't substantial enough to count as cloaking.
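Something along these lines on the server (a minimal sketch only — the function names are made up, and in a real app you'd read the `User-Agent` header from the incoming request):

```typescript
// Sketch: hide the search button from Googlebot based on the User-Agent header.
// The pattern below only targets Googlebot; broaden it if other bots hit the form too.
const GOOGLEBOT_PATTERN = /Googlebot/i;

function isGooglebot(userAgent: string): boolean {
  return GOOGLEBOT_PATTERN.test(userAgent);
}

// Hypothetical template helper: render the button only for non-crawler visitors.
function renderSearchButton(userAgent: string): string {
  return isGooglebot(userAgent)
    ? "" // omit the button entirely for Googlebot
    : '<button type="submit">Search</button>';
}
```

Note that Google has historically verified Googlebot by reverse DNS rather than UA string alone, but for hiding a button (not serving different content) a UA check is enough.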
Or you could make the trigger not an actual button tag, but another element that traps clicks with a JS event listener. I'm not sure Google's headless renderer is smart enough to automate that. I'd try this first, and if it doesn't work, switch to the user-agent detection idea.
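A rough sketch of what I mean (element IDs and the endpoint path are placeholders, not your actual markup):

```typescript
// Sketch: no <form> and no <button> — just a styled element whose click
// handler fires the AJAX search. With nothing crawlable in the markup,
// the bot has no form action or link to follow.
function buildSearchUrl(query: string): string {
  return "/internal-search?q=" + encodeURIComponent(query);
}

// Wire it up only in a real browser (the guard keeps this snippet runnable elsewhere).
if (typeof document !== "undefined") {
  const trigger = document.getElementById("search-trigger"); // e.g. a <span>, not a <button>
  const input = document.getElementById("search-input") as HTMLInputElement | null;
  trigger?.addEventListener("click", () => {
    if (!input) return;
    fetch(buildSearchUrl(input.value))
      .then((res) => res.json())
      .then((results) => {
        // render results into the page here
      });
  });
}
```

Either way, if the search endpoint lives at its own path, adding a robots.txt Disallow for that path is the usual belt-and-braces measure alongside this.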
Let us know how it goes!
-Mike
-
-
You can always implement the search in a way bots can't use, or hide it behind a login, etc.
I'll also give you the following for consumption:
http://moz.com/blog/12-ways-to-keep-your-content-hidden-from-the-search-engines
Good luck!
-
Hi Bernard
Are you able to provide a link to the web form containing the submit button?
Peter