No JavaScript, No Content..?
-
Hello Mozzers!
I have a question for you: I am working on a site, and while doing an audit I disabled JavaScript via the Web Developer plugin for Chrome. The result is that instead of seeing the page content, I see the typical "loading circle" and nothing else. I imagine this is not a good thing, but what does it imply technically from a crawler's perspective?
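To illustrate the pattern: a JavaScript-rendered page usually looks something like the skeleton below, and a <noscript> block is one common way to give non-JS visitors (and simple crawlers) real content instead of a spinner. This is a hypothetical sketch, not our actual code:

```html
<!-- Hypothetical skeleton: the app container shows a spinner until
     JavaScript renders the content; the <noscript> block gives
     non-JS visitors (and simple crawlers) real HTML instead. -->
<div id="app">
  <div class="loading-circle">Loading...</div>
</div>
<noscript>
  <h1>Page title</h1>
  <p>The key page content, rendered as plain HTML.</p>
</noscript>
```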
Thanks
-
Hello Jimmy and Andy,
Thank you for your answers! Yes, Google's cached version displays correctly, and the text snippets are indexed, so everything looks fine and is working as it should.
Thanks
-
I would suspect that, unless something has been done differently, Google can read what is there; but as Jimmy said, check the page cache to see what Google can see.
You can also do a search for a snippet of text from the page and see if Google has it indexed.
-Andy
-
Hi,
The best way to see what the crawlers are seeing is to check their cache.
If you go to the cached page on Google, there is a text-only link; through that you will be able to see exactly what content Google has cached for the page. Usually, text that appears in the source code of the page is visible to the search engines, but without knowing what the JavaScript does, it is hard to say whether it would affect this.
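As far as I know, you can also reach the text-only version directly by adding strip=1 to the cache URL; the pattern is roughly this (treat it as a sketch, as Google may change the parameters):

```
http://webcache.googleusercontent.com/search?q=cache:www.example.com/page&strip=1
```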
Hope this helps.
Kind Regards
Jimmy
Related Questions
-
Issue with duplicate content
Hello guys, I have a question about duplicate content. Recently I noticed that Moz's system reports a lot of duplicate content on one of my sites. I'm a little confused about what I should do with it, because this content is created automatically. All the duplicate content comes from a subdomain of my site where we share cool images with people. This subdomain actually points to our Tumblr blog, where people re-blog our posts and images a lot. I'm really confused about how all this duplicate content is created and what I should do to prevent it. Please tell me whether I need to "noindex"/"nofollow" that subdomain (something like the snippet below?), or suggest something better to resolve the issue. Thank you!
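In case it helps frame the question, I assume a "noindex" would mean a tag like this in the <head> of every page on the subdomain (hypothetical snippet):

```html
<!-- Hypothetical: asks engines not to index the subdomain's pages
     while still following the links on them. -->
<meta name="robots" content="noindex, follow">
```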
Technical SEO | odmsoft
-
Handling of Duplicate Content
I just recently signed up and joined the moz.com system. The initial report for our web site shows we have lots of duplicate content. The web site is real estate based, and we load IDX listings from other brokerages into our site. Even though these listings look alike, they are not the same: each has its own photos, description, and address. So why do they appear as duplicates? I would assume they are all too closely related: lots for sale, primarily, where it looks like lazy agents with 4 or 5 lots input the same description for each. Unfortunately for us, part of the IDX agreement is that you cannot pick and choose which listings to load, and you cannot change the content. You are either all in or you cannot use the system. How should one manage duplicate content like this? Or should we ignore it? Out of 1,500+ listings on our web site, it shows 40 of them are duplicates.
Technical SEO | TIM_DOTCOM
-
Duplicate content and rel canonicals?
Hi. I have a question relating to 2 sites that I manage, with regards to duplicate content. These are two separate companies, but the content comes from the same database (in other words, it is identical). In terms of rel=canonical, how would we implement it (I sketched my guess below) so that Google does not penalise either site but can still crawl the content on both, or is this just a dream?
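To make the idea concrete: rel=canonical can point across domains, so I assume the tag would look something like this in the <head> of the duplicate page on the second site, pointing at the preferred version (hypothetical URLs):

```html
<!-- Hypothetical: in the <head> of the duplicate page on site B,
     declaring the matching page on site A as the canonical version. -->
<link rel="canonical" href="http://www.site-a.example/products/widget/">
```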
Technical SEO | ProsperoDigital
-
Duplicate Content Due to Pagination
Recently our newly designed website has been suffering a rankings loss. While I am sure a number of factors are involved, I'd like to know if this scenario could be harmful... Google is showing a number of duplicate content issues within Webmaster Tools. Some of what I am seeing is duplicate meta titles and meta descriptions for page 1 and page 2 of some of my product category pages. So if a category has many products spread across 4 pages, it is effectively showing the same page title and meta description on all 4 pages (see the markup sketch below). I am wondering if I should let my site show, say, 150 products per page to get them all on one page instead of the current 36 per page. I use the Big Commerce platform. Thank you for taking the time to read my question!
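For reference, the rel=prev/next markup I've read about for paginated series would presumably look like this on page 2 of a 4-page category (hypothetical URLs; I'd still need to check what Big Commerce allows in the template <head>):

```html
<!-- Hypothetical: on page 2 of a 4-page category, telling engines
     the pages form one paginated series. -->
<link rel="prev" href="http://www.example.com/category?page=1">
<link rel="next" href="http://www.example.com/category?page=3">
```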
Technical SEO | josh330
-
Remotely Loaded Content
Hi Folks, I have a two-part question. I'd like to add a feature to our website where people can click on an ingredient (we manufacture skin care products) and a tool-tip style box pops up displaying information about that ingredient. Because many products share some of the same ingredients, I'm going to load this data from a source file via AJAX (a rough sketch of what I mean is below). My questions are: 1) Does this type of remotely fetched content have any effect on how search engines view and index the page, and can it help contribute to the page's search engine ranking? 2) If multiple pages fetch the same piece of remote content, will it be seen as duplicate content? Thanks! Hal
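Here is the rough sketch mentioned above: a minimal, hypothetical version of the AJAX lookup, with a made-up ingredients.json data file. Note that content fetched this way after page load is typically not part of the initial HTML a crawler receives:

```html
<span class="ingredient" data-key="aloe-vera">Aloe Vera</span>
<div id="tooltip" hidden></div>
<script>
  // Hypothetical data file: /data/ingredients.json maps keys to text,
  // e.g. { "aloe-vera": "A soothing plant extract..." }
  document.querySelectorAll('.ingredient').forEach(function (el) {
    el.addEventListener('click', function () {
      var xhr = new XMLHttpRequest();
      xhr.open('GET', '/data/ingredients.json');
      xhr.onload = function () {
        var data = JSON.parse(xhr.responseText);
        var tip = document.getElementById('tooltip');
        tip.textContent = data[el.dataset.key] || 'No description found.';
        tip.hidden = false; // show the tool-tip box
      };
      xhr.send();
    });
  });
</script>
```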
Technical SEO | AlabuSkinCare
-
Duplicate Page Content Report
In the Crawl Diagnostics Summary, I have 2,000 duplicate page content errors. When I click the link, my WordPress site returns "page not found," and I see the URL is not indexed by Google; I could not find the issue in Google Webmaster Tools either. So where does this link come from?
Technical SEO | smallwebsite
-
Multiple URLs and Dup Content
Hi there, I know many people might ask this kind of question, but nevertheless... 🙂 In our CMS, one single URL (http://www.careers4women.de/news/artikel/206/) has been produced nearly 9,000 times with strings like this: http://www.careers4women.de/news/artikel/206/$12203/$12204/$12204/ and this: http://www.careers4women.de/news/artikel/206/$12203/$12204/$12205/ and so on and so on... Today, I asked our IT department to either a) delete the pages with the "strange" URLs or b) 301-redirect them to the "original" page. Do you think this was the best solution? What about implementing rel=canonical on these pages (sketched below)? Right now, only the "original" page is in the Google index, but who knows? And I don't want users on our site to see these URLs, so I thought deleting them (they have existed for only a few days!) would be the best answer... Do you agree, or do you have other ideas if something like this happens next time? Thanks in advance...
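To illustrate the rel=canonical option mentioned above: each generated variant would carry a tag like this in its <head>, declaring the clean URL (sketch only, not our actual template):

```html
<!-- Hypothetical: in the <head> of every /news/artikel/206/$.../$.../
     variant, pointing engines back at the clean URL. -->
<link rel="canonical" href="http://www.careers4women.de/news/artikel/206/">
```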
Technical SEO | accessKellyOCG
-
Dismal content rankings
Hi, I realize this is a very broad question, but I am going to ask it anyway in the hope that someone might have some insight. I have created a great deal of unique content for the site http://www.healthchoices.ca. You can select a video category from the top dropdown, then click on a video beside the provider box to view it. The articles I've written are accessible via the View Article tab under each video. I have worked hard to make the articles informative, and they are all unique, with quotes from expert physicians. Even for rare health conditions that don't have a lot of competition, I don't see us appearing. Our search results are quite dismal for the amount of content we have. I guess I'm checking to see if anyone is able to point me in the right direction at all? If anything jumps out... Thanks, Erin
Technical SEO | erinhealthchoices