Why is OSE showing no data for this URL?
-
Hi all,
Does anyone have any ideas as to why OSE might not have any data for this URL:
http://www.ccisolutions.com/StoreFront/product/shure-slx24-sm58-wireless-microphone-system-j3
It is not a new page at all. It's been on the site for years.
Is OSE being quirky? Or is there an underlying problem with this page?
Thanks in advance for any light you can shed on this,
Dana
-
Hi Paul,
We discovered that the problem was being caused by a trailing comma at the end of the keyword string that we once used to populate the meta keywords tag. Unfortunately, the keyword information in those fields is still being parsed, and the parser did not know what to do when it encountered a comma followed by nothing.
We did run a query and found that this problem was affecting 128 of our product pages and had been for a long time. We haven't been populating the keywords for almost a year now, so the problem is at least that old.
The commas are now gone.
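For anyone who hits the same thing on their own platform: a keyword parser can be made tolerant of trailing commas by dropping empty entries after splitting. A minimal Python sketch (the sample keyword string below is made up for illustration, not our actual data):

```python
def parse_keywords(raw):
    """Split a meta-keywords string on commas, dropping the empty
    entries left behind by trailing or doubled commas."""
    return [kw.strip() for kw in raw.split(",") if kw.strip()]

# A trailing comma like the one that broke our pages is now harmless:
print(parse_keywords("wireless microphone, shure slx24, sm58,"))
# → ['wireless microphone', 'shure slx24', 'sm58']
```

Of course, the cleaner fix is what we did: remove the stray commas at the source.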
Thanks again to you and Andrew!
-
Glad I could help, Dana.
And yes, "borked" is a technical term. It's defined as existing in a badly broken state as a result of an inexperienced/inattentive user making unauthorised/incorrect changes to a website's code or content.
Can also be used as a verb: "he borked the database so badly the whole site went 503".
Not that it's ever been applied to me or anything.
And yeah - sometimes our tools can mislead us, even though the info they provided was "technically" correct.
Suggestion for a fast way to test the rest of the site for this kind of error: use the paid version of Screaming Frog to program a search for a snippet of code that should be in the content area of every product page. Limit the crawl to the product pages category (or whatever sections of the site you're worried about).
You could search for something as simple as class="productExtendedDescription", which would at least ensure the content container was there. It still wouldn't prove there was any content in it, but if you wanted to get fancy with regex, you could do that too. You could also search for the closing </body> tag, which would indicate that the rest of the page's code likely exists.
Just an idea to speed up the testing process.
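If Screaming Frog isn't an option, the same check is easy to script. A rough Python sketch of the core test (the class name is the one from this thread; the sample HTML snippets are invented, and in practice you'd fetch each product URL and run the check on the response body):

```python
def page_has_product_content(html, marker='class="productExtendedDescription"'):
    """Return True if the product-content container appears in the page source.
    A plain substring check - crude, but fine for triage."""
    return marker in html

# In practice you'd fetch each product URL first, e.g.:
#   import urllib.request
#   html = urllib.request.urlopen(url).read().decode("utf-8", "replace")
sample_ok = '<div class="productExtendedDescription">Shure SLX24/SM58 details...</div>'
sample_broken = '<form name="headerForm" action="IAFDispatcher"></form>'
print(page_has_product_content(sample_ok))      # True
print(page_has_product_content(sample_broken))  # False
```

Like the Screaming Frog search, this only proves the container exists, not that it has content in it.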
Paul
-
Thanks so much Paul,
Yes, when I ran a "Fetch as Googlebot" it returned a "Success" message, but when I looked at what Google is seeing, there was no content on the page.
"borked" - great term...I am definitely going to have to file that one away for future use!
If the problem is isolated to this page, that's one thing. I am more concerned that this problem is affecting a larger number of pages.
Once I figure it out, I'll come back here and post what we found/fixed.
I really appreciate the comments from you and Andrew very much!
-
Dana, there's no content on that page.
The massive head section with all its JavaScript is there, making it look like there's lots of code, but the actual body content has somehow been deleted.
This is all I see in the actual body of the page:
<form name="headerForm" action="IAFDispatcher" onsubmit="return submitQuery()" method="post">
</form>
That's it. There's no actual content, no footer, no closing </body> or </html> tag, which makes me think someone's actually deleted the content part of the code by accident.
Good luck figuring out who borked it.
Paul
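To catch pages like this in bulk, one rough heuristic is to extract the visible text inside the body and flag pages that fall below a minimum length. A sketch using only the Python standard library (the 50-character threshold is an arbitrary guess - tune it to your templates):

```python
from html.parser import HTMLParser

class BodyTextExtractor(HTMLParser):
    """Collect text that appears inside <body>, skipping <script>/<style>."""
    def __init__(self):
        super().__init__()
        self.in_body = False
        self.skip_depth = 0
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag == "body":
            self.in_body = True
        elif tag in ("script", "style"):
            self.skip_depth += 1

    def handle_endtag(self, tag):
        if tag == "body":
            self.in_body = False
        elif tag in ("script", "style") and self.skip_depth:
            self.skip_depth -= 1

    def handle_data(self, data):
        if self.in_body and not self.skip_depth:
            self.chunks.append(data)

def body_text_length(html):
    """Length of the whitespace-normalised visible body text."""
    parser = BodyTextExtractor()
    parser.feed(html)
    return len(" ".join("".join(parser.chunks).split()))

# A page like the one above: huge head, essentially empty body.
empty_page = ('<html><head><script>var x=1;</script></head>'
              '<body><form name="headerForm"></form></body></html>')
print(body_text_length(empty_page) < 50)  # True - flags the page as suspiciously empty
```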
-
I just ran the source code for this page through the validator at: http://validator.w3.org/
There are a multitude of problems that need to be addressed. Thanks very much Andrew. I do have enough HTML knowledge to provide guidance to our IT manager on how to fix the problems. I don't have access to much of the source code, so it will certainly be a "project" to fix the issues.
I am sure these problems are everywhere all over the site, as many people with very little experience in coding and design have had their hands in the pot (so to speak) over the years.
At least this will allow me to prove to our CEO that our underlying code is indeed presenting a problem for indexing and crawling.
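The W3C validator is the authoritative check, but for quick triage across many pages a crude tag-balance counter can spot gross structural breakage like a missing closing body tag. A hedged Python sketch (it only special-cases a hard-coded list of void elements, so treat any nonzero result as a hint to validate properly, not a verdict):

```python
from html.parser import HTMLParser

# HTML void elements never take a closing tag, so don't count their opens.
VOID_TAGS = {"area", "base", "br", "col", "embed", "hr", "img",
             "input", "link", "meta", "param", "source", "track", "wbr"}

class TagBalanceChecker(HTMLParser):
    """Count opens and closes per tag; a nonzero balance hints at breakage."""
    def __init__(self):
        super().__init__()
        self.balance = {}

    def handle_starttag(self, tag, attrs):
        if tag not in VOID_TAGS:
            self.balance[tag] = self.balance.get(tag, 0) + 1

    def handle_endtag(self, tag):
        self.balance[tag] = self.balance.get(tag, 0) - 1

def unbalanced_tags(html):
    """Return {tag: open-minus-close count} for every unbalanced tag."""
    checker = TagBalanceChecker()
    checker.feed(html)
    return {t: n for t, n in checker.balance.items() if n != 0}

# A page truncated mid-body, like the one in this thread:
truncated = "<html><head><title>x</title></head><body><div>content"
print(unbalanced_tags(truncated))  # {'html': 1, 'body': 1, 'div': 1}
```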
-
I did some comparisons with other pages and it doesn't seem that the drop-down frequency selector is the culprit. This page also has one: www.ccisolutions.com/StoreFront/product/shure-slx24-sm58-wireless-microphone-system-h5
but the cache in Google seems to be fine for this page and OSE displays data for it just fine.
-
Could the coding issue be related to the drop-down box located just above the pricing on the right-hand side? That is one thing that makes this product page different from others on our site.
Thoughts?
-
I also see what you mean that there is a problem with Google's cache. The cache date is really old (April 11) and there is no preview of the page.
Can anyone point me in the right direction?
-
Thanks so much for responding Andrew. I have suspected problems with our code for a long time, but I am not a coder, so it's been a challenge to attempt to identify the specific problem.
I believe this is not just a problem with this page, but could be a problem across many pages on our site.
Can you or any of my fellow Mozzers point to what you are seeing in the source code that leads you to believe it is corrupted?
Many thanks for any help. I truly appreciate it!
Dana
-
Hi Dana,
I think your page is corrupted. I've put a copy of the source code I am seeing here: http://pastebin.com/BRfFT4RR
It looks like Google Cache is also having problems with this page. Perhaps OSE had trouble too and so skipped the page?
- Andrew