Technical SEO question re: java
-
Hi,
I have an SEO question that came my way, but it's a bit too technical for me to handle. Our entire e-commerce site is in Java, which apparently writes to the page after it has loaded and is not SEO-friendly.
I was presented with a work-around that would basically consist of us pre-rendering an HTML page for search engines and leaving the Java page for customers. It sounds like Google's definition of "cloaking" to me, but I wanted to know if anyone has any other ideas or work-arounds (if there are any) for making the Java-based site more SEO-friendly.
Any thoughts/comments you have would be much appreciated. Thanks!!
-
Oooh no thank you - I'm not a big risk-taker when it comes to SEO. he-he. Thanks again for your help!
-
With the AJAX crawlability guide implementation, Google knows they're requesting a different page than the one being shown to users, so it's not quite the same as cloaking. That being said, you could go black hat and return a completely different page, but Google has their ways of finding these things out.
-
Hi John, one more question for you if you don't mind: is creating an HTML snapshot (as noted in the AJAX link) different from "serving up an HTML page" as described here: http://support.google.com/webmasters/bin/answer.py?hl=en&answer=66355 ?
-
Thought you were talking about "Java". Needless to say JavaScript can cause all sorts of issues with SEO.
-
Hi John,
You're right, I meant JavaScript. Thank you so much for the response. This definitely helps!
-
Hi, thanks for the reply. To begin, if you turn off JavaScript you can't see any of the content on our pages - not the text, navigation, etc. I'm trying to figure out how to make the content visible without having to redo the entire system (which isn't feasible).
Does that make sense? The site is improvementscatalog.com if you want to see it. We're in the process of building content for it, but we just recently switched platforms and these new issues popped up.
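If re-platforming is off the table, the usual direction is progressive enhancement: have the server emit the important text in the initial HTML, and let JavaScript enhance it afterwards, so a crawler (or a browser with JS off) still sees content. A minimal sketch - the product names and markup below are invented for illustration, not taken from improvementscatalog.com:

```python
# Sketch: render the same product data on the server that the client-side
# JavaScript would otherwise inject, so the initial HTML already contains
# crawlable text. Product names here are made up for illustration.

def render_product_html(products):
    """Return an HTML fragment listing each product as plain markup."""
    items = "".join(f"<li>{name}: {desc}</li>" for name, desc in products)
    return f'<ul id="products">{items}</ul>'

products = [("Folding Ladder", "Compact aluminum ladder"),
            ("Garden Hose Reel", "Wall-mounted reel")]
html = render_product_html(products)
print(html)
```

The JavaScript layer can then attach behavior to the `#products` list instead of creating it from scratch.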
-
I think you mean JavaScript and not Java. What you're suggesting is what Google recommends in their AJAX crawling guide here http://code.google.com/web/ajaxcrawling/. They want you to create a static HTML page to serve to Googlebot instead of your regular page.
Google is getting better at crawling JavaScript content that's loaded asynchronously, so you might want to dedicate your resources elsewhere. On one of my sites, Google is indexing text that's loaded asynchronously (Bing isn't yet), and Matt Cutts has said that Google is crawling some comments that are loaded asynchronously, like Facebook comments (see http://www.searchenginejournal.com/google-indexing-facebook-comments/35594/).
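For reference, the URL mapping that scheme uses can be sketched like this - assuming the hash-bang (`#!`) convention from the crawlability guide; example.com and the category parameter are placeholders:

```python
# Sketch of the URL mapping from Google's AJAX crawling scheme: a "pretty"
# URL containing #! is requested by the crawler with the hash-bang part
# moved into an _escaped_fragment_ query parameter, which the server then
# answers with a static HTML snapshot. example.com is a placeholder domain.
from urllib.parse import quote

def escaped_fragment_url(pretty_url):
    if "#!" not in pretty_url:
        return pretty_url  # nothing for the crawler to rewrite
    base, fragment = pretty_url.split("#!", 1)
    sep = "&" if "?" in base else "?"
    # Special characters in the fragment are percent-encoded.
    return f"{base}{sep}_escaped_fragment_={quote(fragment, safe='=')}"

print(escaped_fragment_url("http://example.com/products#!category=ladders"))
# http://example.com/products?_escaped_fragment_=category=ladders
```

Because Google itself requests the `_escaped_fragment_` form, serving the snapshot there is a documented mechanism rather than cloaking - as long as the snapshot reflects what users actually see.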
-
Yeah, I agree - the work-around sounds like it may be interpreted as black-hat cloaking and get you in trouble.
Can you explain further how your application works and why it's not SEO-friendly?
Cheers
Related Questions
-
Homepage 301 and SEO Help
Hi All, Does redirecting alternate versions of my homepage with a 301 only improve reporting, or are there SEO benefits as well? We recently changed servers and this wasn't set up as before, and I've noticed a drop in our organic search traffic - i.e. there was no 301 sending mywebsite.com traffic to www.mywebsite.com. Thanks in advance for any comments or help.
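The redirect itself is normally configured at the server level, but the decision logic is simple enough to sketch - www.mywebsite.com below just follows the placeholder host from the question:

```python
# Sketch: decide the 301 target for alternate homepage hosts so that link
# equity consolidates on one canonical host. "www.mywebsite.com" follows
# the placeholder in the question; swap in the real canonical host.
from urllib.parse import urlsplit, urlunsplit

CANONICAL_HOST = "www.mywebsite.com"

def redirect_target(url):
    """Return the canonical URL to 301 to, or None if already canonical."""
    parts = urlsplit(url)
    if parts.netloc == CANONICAL_HOST:
        return None
    return urlunsplit((parts.scheme, CANONICAL_HOST, parts.path,
                       parts.query, parts.fragment))

print(redirect_target("http://mywebsite.com/"))  # http://www.mywebsite.com/
```

Beyond reporting, the 301 consolidates links pointing at the bare host onto the www version, which is why dropping it can coincide with a traffic dip.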
Technical SEO | | b4cab0 -
Site Migration Questions
Hello everyone, We are in the process of going from a .net to a .com, and we have also done a complete site redesign as well as refreshed all of our content. I know it is generally ideal not to do all of this at once, but I have no control over that part. I have a few questions and would like any input on avoiding losing rankings and traffic.

One of my first concerns is that we have done away with some of our higher-ranking pages and combined them into one parallax scrolling page. Basically, instead of having a product page for each product, they are now all on one page. This has caused some difficulty because search terms we were using for the individual pages no longer apply.

My next concern is that we are adding keywords to the ends of our URLs in an attempt to raise rankings. So an example: website.com/product/product-name/keywords-for-product. If a customer deletes keywords-for-product, they end up being redirected back to the page again. Since the keywords cannot be removed, is a redirect the best way to handle this? Would a canonical tag be better? I'm trying to avoid duplicate content, since my request to remove the keywords in URLs was denied.

Also, when a customer deletes everything but website.com/product/, it goes to the home page and the URL turns to website.com/product/#. Will those pages with # at the end be indexed separately, or does Google ignore that?

Lastly, how can I determine what kind of loss in traffic we are looking at upon launch? I know some is to be expected, but I want to avoid it as much as I can, so any advice for this migration would be greatly appreciated.
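On the # question specifically: crawlers treat the fragment (everything after #) as part of the same resource, not a separate URL, so website.com/product/# should not be indexed apart from website.com/product/. A quick sketch with Python's urldefrag, using the placeholder URL from the question:

```python
# Sketch: the fragment (everything after "#") identifies a position within
# a page, not a different page, so website.com/product/# is the same
# resource as website.com/product/. urldefrag strips the fragment off.
from urllib.parse import urldefrag

url, fragment = urldefrag("http://website.com/product/#")

print(url)       # http://website.com/product/
print(fragment)  # empty - the bare # carries no fragment at all
```

For the keyword-suffix variants, a rel=canonical on each variant pointing at the one URL you want indexed is the standard way to consolidate them without fighting the platform's redirect behavior.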
Technical SEO | | Sika220 -
When to re-submit for reconsideration?
Hi! We received a manual penalty notice. We had an SEO company a couple of years ago build some links for us on blogs. Currently we have only about 95 of these links, which are pretty easily identifiable by the anchor text used and the blogs or directories they originate from. So far, we have seen about 35 of those removed and have made two contacts to each one via removeem.com.

So, how many contacts do you think need to be made before submitting a reconsideration request? Is two enough?

Also, should we use the disavow tool on these remaining 65 links? Every one of the remaining links is from either a Filipino blog page or a random article directory.

Finally, do you think we are still getting juice from these links? I.e., if we do remove or disavow these anchor-text links, are we actually going to see a negative impact? Thanks for your help and answers!! Craig
Technical SEO | | TheCraig0 -
Duplicate Content on SEO Pages
I'm trying to create a bunch of content pages, and I want to know if the shortcut I took is going to penalize me for duplicate content.

Some background: we are an airport ground transportation search engine (www.mozio.com), and we constructed several airport transportation pages with the providers in a particular area listed. However, the problem is that sometimes multiple of the same providers serve the same places in a certain region. For instance, NYAS serves both JFK and LGA, and obviously SuperShuttle serves ~200 airports. So this means every airport's page has the SuperShuttle box. All the provider info is stored in a database with tags for the airports they serve, and then we dynamically create the page. A good example follows:

http://www.mozio.com/lga_airport_transportation/
http://www.mozio.com/jfk_airport_transportation/
http://www.mozio.com/ewr_airport_transportation/

All 3 of those pages have a lot in common. Now, I'm not sure, but they started out working decently; as I added more and more pages, though, their efficacy went down on the whole. Does what I've done qualify as "duplicate content", and would I be better off getting rid of some of the pages or somehow consolidating the info into a master page? Thanks!
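One way to put a rough number on "have a lot in common" before deciding whether to consolidate is a simple text-similarity check on the rendered page copy. The snippets below are invented stand-ins, not the actual Mozio pages:

```python
# Sketch: a rough way to gauge how similar two generated pages are. The
# strings below are made-up stand-ins for the rendered text of two pages.
from difflib import SequenceMatcher

page_lga = "SuperShuttle serves LGA. NYAS serves LGA. Book a shared ride."
page_jfk = "SuperShuttle serves JFK. NYAS serves JFK. Book a shared ride."

ratio = SequenceMatcher(None, page_lga, page_jfk).ratio()
print(round(ratio, 2))  # a value near 1.0 signals heavily duplicated text
```

If most page pairs score very high, adding more airport-specific copy (or consolidating into fewer pages) is worth considering; if the shared provider boxes are a minority of each page's text, they're less likely to be the problem.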
Technical SEO | | moziodavid0 -
Robots.txt Question
In the past, I had blocked a section of my site (i.e. domain.com/store/) by placing the following in my robots.txt file: "Disallow: /store/". Now I would like the store to be indexed and included in the search results. I have removed "Disallow: /store/" from the robots.txt file, but approximately one week later a Google search for the URL produces the following meta description in the search results: "A description for this result is not available because of this site's robots.txt – learn more".

Is there anything else I need to do to speed up the process of getting this section of the site indexed?
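You can sanity-check the before/after rules offline with Python's urllib.robotparser - a sketch using the paths from the question:

```python
# Sketch: verify with urllib.robotparser that the /store/ section is no
# longer blocked once the Disallow line is removed. The two rule sets
# mirror the before/after robots.txt described in the question.
from urllib.robotparser import RobotFileParser

def can_crawl(robots_txt, path, agent="*"):
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return parser.can_fetch(agent, path)

before = "User-agent: *\nDisallow: /store/"
after = "User-agent: *\nDisallow:"          # empty Disallow allows everything

print(can_crawl(before, "/store/widgets"))  # False
print(can_crawl(after, "/store/widgets"))   # True
```

Note that once the disallow is gone, Googlebot still has to recrawl the pages before the "description not available" snippet is replaced, so some delay after the change is normal.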
Technical SEO | | davidangotti0 -
Feedback on the on-page SEO for this site
Hi, Can the SEO gurus here suggest whether any on-page factors are hurting my site? http://www.ridpiles.com/ Recently I added the following post to the main home page: http://www.ridpiles.com/2012/02/different-types-of-cures-for-piles/. This page is somewhat different from the title keyword: the main page title is "hemorrhoids treatment", while the newly created blog post is on "cure for piles". Does this blog post have any effect on the on-page factors due to the different title? And do I need to make any changes to the on-page SEO? Will be waiting for your replies.
Technical SEO | | Indexxess0 -
Pinterest SEO
Has any testing been done to determine if Pinterest helps a website's rankings?
Technical SEO | | StreetwiseReports0 -
Robots.txt question
Hello, What do the following directives mean?

User-agent: *
Allow: /

Does it mean that we are blocking all spiders? Is Allow supported in robots.txt? Thanks
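Those two lines permit everything rather than block it: "Allow" is a widely supported extension to the original robots.txt convention (Google and Bing honor it, and so does Python's parser), and "Allow: /" under "User-agent: *" tells every compliant crawler it may fetch any path. A quick sketch:

```python
# Sketch: check what "User-agent: *" + "Allow: /" actually does. It permits
# all compliant crawlers to fetch everything - the opposite of blocking.
from urllib.robotparser import RobotFileParser

parser = RobotFileParser()
parser.parse("User-agent: *\nAllow: /".splitlines())

print(parser.can_fetch("*", "/"))          # True
print(parser.can_fetch("*", "/any/page"))  # True
```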
Technical SEO | | seoug_20050