How to add a disclaimer to a site but keep the content accessible to search robots?
-
Hi,
I have a client with a site regulated by the UK FSA (Financial Services Authority). They have to display a disclaimer which visitors must accept before browsing. This is for real, not like the EU cookie compliance debacle.
Currently the site 302-redirects anyone who isn't already cookied (as having accepted) to a disclaimer page/form. Do you have any suggestions or examples of how to require acceptance while keeping the content accessible to search robots?
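Roughly, the current gate works like this (a simplified Flask-style sketch only; the framework, route and cookie names are just for illustration, not what the site actually runs):

```python
# Simplified sketch of the current gate; cookie name and routes are illustrative.
from flask import Flask, request, redirect

app = Flask(__name__)

ACCEPT_COOKIE = "disclaimer_accepted"  # hypothetical cookie set on acceptance

@app.before_request
def require_disclaimer():
    # Let the disclaimer page itself and static assets through
    if request.path.startswith(("/disclaimer", "/static")):
        return None
    # Everyone without the acceptance cookie gets a temporary (302) redirect
    if request.cookies.get(ACCEPT_COOKIE) != "yes":
        return redirect("/disclaimer", code=302)
    return None
```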
I'm not sure just using a jQuery lightbox would meet the FSA's requirements, as it wouldn't be shown if JavaScript were disabled.
Thanks,
-Jason
-
Joshua, thanks for your suggestions.
The fixed div idea is good, but I'm not sure it will pass FSA compliance.
The Google Search Appliance config article is interesting and provides some ideas, but I'm not sure how to go about implementing it for Googlebot.
I suppose a reverse DNS lookup (http://support.google.com/webmasters/bin/answer.py?hl=en&answer=80553) may provide a solution. I was hoping someone who had implemented something similar might share their experience.
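For anyone following along, my understanding of the verification described in that Google article is roughly this (a sketch only; the googlebot.com / google.com suffixes come from the doc, everything else is illustrative):

```python
import socket

def is_verified_googlebot(ip_address):
    """Verify a claimed Googlebot visit per Google's reverse-DNS guidance.

    Step 1: reverse DNS - the host name should end in googlebot.com or google.com.
    Step 2: forward DNS - that host name should resolve back to the same IP.
    """
    try:
        hostname, _, _ = socket.gethostbyaddr(ip_address)
        if not hostname.endswith((".googlebot.com", ".google.com")):
            return False
        forward_ips = socket.gethostbyname_ex(hostname)[2]
        return ip_address in forward_ips
    except (socket.herror, socket.gaierror):
        return False

# A verified bot could then be exempted from the disclaimer redirect, e.g.:
# if is_verified_googlebot(request.remote_addr): serve the page as normal
```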
Cheers.
-
That is rough. Maybe this is a legitimate situation for user-agent sniffing (albeit fraught with danger)? If you can't rely on JavaScript, then it would seem that any option will have significant downsides.
This may be a hare-brained suggestion, but what about appending a server parameter to all links for visitors who do not have the cookie set? If the user agent is Google or Bing (or any other search bot), the server could ignore that parameter and send them on their way to the correct page; if the user agent is not a search engine, they would be forced to the disclaimer page.
This would allow a user to see the initial content (which may not be allowed?) but not navigate the site. It would also let you present the same information to both users and bots while still making the user accept the terms, as in the sketch below.
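A minimal sketch of that idea, assuming a Flask-style server (the parameter name, cookie name and bot list are made up, and user-agent strings can be spoofed, so you'd still want the reverse-DNS verification mentioned above on top of this):

```python
# Sketch only: parameter name, bot list and cookie name are illustrative.
from flask import Flask, request, redirect

app = Flask(__name__)

KNOWN_BOTS = ("googlebot", "bingbot", "slurp")  # assumed UA substrings
GATE_PARAM = "gate"                             # hypothetical parameter appended to links

def looks_like_search_bot():
    user_agent = request.headers.get("User-Agent", "").lower()
    return any(bot in user_agent for bot in KNOWN_BOTS)

@app.before_request
def gate_uncookied_visitors():
    if request.cookies.get("disclaimer_accepted") == "yes":
        return None                  # visitor has accepted, carry on
    if looks_like_search_bot():
        return None                  # bots ignore the parameter and get the real page
    if GATE_PARAM in request.args:
        return redirect("/disclaimer", code=302)  # humans following a gated link get the form
    return None
```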
Alternatively, serve non-cookied visitors a version of the page in which the div containing the disclaimer form expands to fill the whole viewport, with its style set to position: fixed. That should keep the visitor from scrolling past the div, but the content below the viewport still renders. Cookied visitors don't see a form, while non-cookied visitors get the same page content but can't scroll to it until they accept the form (mobile does weird things with position: fixed, so this again might not work, and a savvy user could get around it). A rough sketch follows.
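Something along these lines (a sketch only; the template markup, copy and cookie name are made up — the point is the full-viewport position: fixed overlay served only to non-cookied visitors while the content stays in the HTML):

```python
# Sketch only: markup, copy and cookie name are illustrative.
from flask import Flask, request, render_template_string

app = Flask(__name__)

PAGE = """
<!doctype html>
<title>Example page</title>
{% if show_disclaimer %}
<div style="position: fixed; top: 0; left: 0; width: 100%; height: 100%;
            background: #fff; z-index: 9999;">
  <form method="post" action="/accept-disclaimer">
    <p>FSA disclaimer text goes here.</p>
    <button type="submit">I accept</button>
  </form>
</div>
{% endif %}
<h1>Normal page content</h1>
<p>The content is always in the HTML, so crawlers can read it, but an
   uncookied visitor can't scroll past the fixed overlay.</p>
"""

@app.route("/")
def index():
    accepted = request.cookies.get("disclaimer_accepted") == "yes"
    return render_template_string(PAGE, show_disclaimer=not accepted)
```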
Edit: I just found an article which looks promising. It is a Google doc on how to allow crawls on a cookied domain (https://developers.google.com/search-appliance/documentation/50/help_gsa/crawl_cookies) and might solve the problem in a more elegant, safe way.
I would be interested to hear what you come up with. If you could rely on JavaScript, there would be many ways to do it.
Cheers!