Why is OSE showing no data for this URL?
-
Hi all,
Does anyone have any ideas as to why OSE might not have any data for this URL:
http://www.ccisolutions.com/StoreFront/product/shure-slx24-sm58-wireless-microphone-system-j3
It is not a new page at all. It's been on the site for years.
Is OSE being quirky? Or is there an underlying problem with this page?
Thanks in advance for any light you can shed on this,
Dana
-
Hi Paul,
We discovered that the problem was being caused by a trailing comma at the end of the keyword string we once used to populate the meta keywords tag. Unfortunately, the keyword information in those fields is still being parsed, and the parser did not know what to do when it encountered a comma followed by nothing.
We did run a query and found that this problem was affecting 128 of our product pages and had been for a long time. We haven't been populating the keywords for almost a year now, so the problem is at least that old.
The commas are now gone.
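For anyone hunting a similar bug, here's a minimal sketch (in Python, with made-up sample keywords) of the failure mode: splitting a keyword string that ends in a trailing comma produces an empty token, which a naive parser can choke on. Dropping empty tokens after the split avoids it.

```python
def parse_keywords(raw: str) -> list:
    """Split a comma-separated keyword string, dropping empty entries."""
    return [token.strip() for token in raw.split(",") if token.strip()]

# A naive split leaves a trailing empty token when the string ends in a comma:
print("wireless microphone, shure, slx24,".split(","))
# → ['wireless microphone', ' shure', ' slx24', '']

# Filtering out empty tokens handles the trailing comma gracefully:
print(parse_keywords("wireless microphone, shure, slx24,"))
# → ['wireless microphone', 'shure', 'slx24']
```

The safer long-term fix is what Dana did: clean the data itself so the trailing commas are gone at the source.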
Thanks again to you and Andrew!
-
Glad I could help, Dana.
And yes, "borked" is a technical term. It's defined as existing in a badly broken state as a result of an inexperienced/inattentive user making unauthorised/incorrect changes to a website's code or content.
Can also be used as a verb: "he borked the database so badly the whole site went 503".
Not that it's ever been applied to me or anything.
And yeah - sometimes our tools can mislead us, even when the info they provide is "technically" correct.
Suggestion for a fast way to test the rest of the site for this kind of error: Use the paid version of Screaming Frog to set up a custom search for a snippet of code that should be in the content area of every product page. Limit the crawl to the product pages category. (Or whatever sections of the site you're worried about.)
You could search for something as simple as class="productExtendedDescription", which would at least ensure the content container was there. That still wouldn't prove there was any content in it, but if you wanted to get fancy with regex, you could do that too. You could also search for the closing </html> tag, which would indicate that the rest of the page's code likely exists.
Just an idea to speed up the testing process.
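If you'd rather script it than use Screaming Frog, here's a rough stand-alone sketch of the same checks in Python. The class name and URL come from this thread; treat them as examples and adapt both to your own site.

```python
import urllib.request

def audit_html(html: str) -> dict:
    """Flag the two markers suggested above: the product content
    container and a closing </html> tag."""
    return {
        "has_content_container": 'class="productExtendedDescription"' in html,
        "has_closing_html_tag": "</html>" in html.lower(),
    }

def audit_url(url: str) -> dict:
    """Fetch a page and run the marker checks on its source."""
    with urllib.request.urlopen(url, timeout=15) as resp:
        return audit_html(resp.read().decode("utf-8", errors="replace"))

# Example usage (fetches the live page, so run it against your own URL list):
# print(audit_url("http://www.ccisolutions.com/StoreFront/product/"
#                 "shure-slx24-sm58-wireless-microphone-system-j3"))
```

Loop that over an exported list of product URLs and any page where either flag comes back False is worth a closer look.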
Paul
-
Thanks so much Paul,
Yes, when I ran a "Fetch as Googlebot" it returned a "Success" message, but when I looked at what Google is seeing, there was no content on the page.
"borked" - great term...I am definitely going to have to file that one away for future use!
If the problem is isolated to this page, that's one thing. I am more concerned that this problem is affecting a larger number of pages.
Once I figure it out, I'll come back here and post what we found/fixed.
I really appreciate the comments from you and Andrew very much!
-
Dana, there's no content on that page.
The massive head section with all its JavaScript is there, making it look like there's lots of code, but the actual body content has somehow been deleted.
This is all I see in the actual body of the page:
|
<form name="headerForm" action="IAFDispatcher" onsubmit="return submitQuery()" method="post">
</form>
|
That's it. There's no actual content, no footer, no closing </body> or </html> tag, which makes me think someone's actually deleted the content part of the code by accident.
Good luck figuring out who borked it!
Paul
-
I just ran the source code for this page through the validator at: http://validator.w3.org/
There are a multitude of problems that need to be addressed. Thanks very much Andrew. I do have enough HTML knowledge to provide guidance to our IT manager on how to fix the problems. I don't have access to much of the source code, so it will certainly be a "project" to fix the issues.
I am sure these problems exist all over the site, as many people with very little coding and design experience have had their hands in the pot (so to speak) over the years.
At least this will allow me to prove to our CEO that our underlying code is indeed presenting a problem for indexing and crawling.
-
I did some comparisons with other pages, and it doesn't seem that the drop-down frequency selector is the culprit. This page also has one: www.ccisolutions.com/StoreFront/product/shure-slx24-sm58-wireless-microphone-system-h5
but Google's cache seems to be fine for that page, and OSE displays data for it just fine.
-
Could the coding issue be related to the drop down box that's located just above the pricing on the right hand side? That is one thing that makes this product page different from others on our site.
Thoughts?
-
I also see what you mean that there is a problem with Google's cache. The cache date is really old (April 11) and there is no preview of the page.
Can anyone point me in the right direction?
-
Thanks so much for responding, Andrew. I have suspected problems with our code for a long time, but I am not a coder, so it's been a challenge to attempt to identify the specific problem.
I believe this is not just a problem with this page, but could be a problem across many pages on our site.
Can you or any of my fellow Mozzers point to what you are seeing in the source code that leads you to believe it is corrupted?
Many thanks for any help. I truly appreciate it!
Dana
-
Hi Dana,
I think your page is corrupted. Here is a link to the source code I am seeing: http://pastebin.com/BRfFT4RR
It looks like Google Cache is also having problems with this page. Perhaps OSE had trouble too and so skipped the page?
- Andrew