Trying to determine if either of these is considered cloaking
-
Option 1) In the browser, we use JavaScript to determine if you meet the redirect conditions (referrer is not mydomain.com and no bypass query string is present). If so, we direct your browser to the subdomain.mydomain.com URL. Googlebot would presumably get the original page.
Option 2) In the browser, we use JavaScript to determine if you meet the redirect conditions. If so, we trigger different CSS that hides certain components of the page and use JavaScript to load in extra ads. Googlebot would get the unaltered page.
In both scenarios the page content does not change; only the presentation differs. The idea is that, under certain conditions, users are redirected to a page with more ads. The ads on the redirected page are not aggressive enough to cause an above-the-fold penalty. That said, would either option be considered cloaking by Google?
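To make the question concrete, here is a minimal sketch of the client-side condition check both options describe. The domain, the `nosub` bypass parameter, and the function names are assumptions for illustration, not the actual code:

```javascript
// Hypothetical sketch of the redirect-condition check described above.
// The 'nosub' bypass parameter name is an assumption.
function meetsRedirectConditions(referrer, search) {
  var internal = /(^|\.)mydomain\.com$/i;
  var referrerHost = '';
  try {
    referrerHost = new URL(referrer).hostname;
  } catch (e) {
    // Empty or malformed referrer: treat as external traffic.
  }
  var fromOwnSite = internal.test(referrerHost);
  var hasBypass = /[?&]nosub=1(&|$)/.test(search);
  return !fromOwnSite && !hasBypass;
}

// Option 1: redirect the browser to the subdomain.
// if (meetsRedirectConditions(document.referrer, location.search)) {
//   location.replace('https://subdomain.mydomain.com' + location.pathname);
// }

// Option 2: same check, but toggle CSS and load extra ads instead.
// if (meetsRedirectConditions(document.referrer, location.search)) {
//   document.documentElement.className += ' ad-heavy';
// }
```

Note that either way the check runs client-side for every visitor, including Googlebot when it renders the page.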
-
Matt Cutts has discussed this pretty well. Cloaking specifically means showing something different to Google than to the user. Hidden content is not cloaking per se; it is a different issue altogether.
If I had to choose between the two options, I would choose the second. The redirect can be problematic. You should assume that Google is going to find BOTH of these and execute the JavaScript appropriately. Just don't make anything on your site behave differently specifically for Google.
LivingSocial and Groupon both do JavaScript redirects and are not suffering any consequences, so I think you should be fine too.