Are W3C Validators too strict? Do errors create SEO problems?
-
I ran an HTML markup validation tool (http://validator.w3.org) on a website. There were 140+ errors and 40+ warnings. The IT team says "W3C Validators are overly strict and would deny many modern constructs that browsers and search engines understand."
What a browser can understand and display to visitors is one thing, but what search engines can read depends entirely on the code.
My question is this: if a search engine crawler is reading through the code and comes upon an error like this:
…ext/javascript" src="javaScript/mainNavMenuTime-ios.js"> </script>');}
The element named above was found in a context where it is not allowed. This could mean that you have incorrectly nested elements -- such as a "style" element in the "body" section instead of inside "head" -- or two elements that overlap (which is not allowed).
One common cause for this error is the use of XHTML syntax in HTML documents. Due to HTML's rules of implicitly closed elements, this error can create cascading effects. For instance, using XHTML's "self-closing" tags for "meta" and "link" in the "head" section of an HTML document may cause the parser to infer the end of the "head" section and the beginning of the "body" section (where "link" and "meta" are not allowed; hence the reported error).
...and this:
…t("?");document.write('>');}
(followed by the same "element found in a context where it is not allowed" explanation as the first error)
Does this mean that the crawlers don't know where the code ends and the body text begins, or what they should and shouldn't be focusing on?
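For readers who want to see the pattern the validator is describing, here is a minimal sketch (an illustrative example, not the asker's actual markup) of XHTML-style self-closing tags inside a document declared as plain HTML 4.01:

    <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
    <html>
    <head>
      <title>Example</title>
      <!-- The trailing "/" below is XHTML syntax; under HTML 4.01 parsing rules the
           validator may treat it as implicitly ending the head, so later head-only
           elements get reported as "not allowed" in the body. -->
      <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
      <link rel="stylesheet" href="styles.css" />
    </head>
    <body>
      <p>Page content starts here.</p>
    </body>
    </html>

A browser will usually shrug this off, but it is exactly the kind of ambiguity the two error messages above are warning about.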
-
Google is a special case to run through the validator. I actually read an article on why Google's sites do not validate: they serve so much traffic that leaving out markup that doesn't matter, like the self-closing / on an img tag, actually saves them a good amount of bandwidth and money.
While I do not think that validation is a ranking factor, I wouldn't totally dismiss it. It makes code easier to maintain, and it has actually gotten me jobs before. Clients have run my site through a validator and then hired me.
Plus, funny little things work out too: someone tested my site on Nibbler and it came back as one of the top 25 sites. I get a few hundred hits a day from that. I'll take traffic anywhere I can get it.
-
I agree with Sheldon, and, just for perspective, try running http://www.google.com through the same W3C HTML validator. That should be an excellent illustration: a page with almost nothing on it, coded by the brilliant folks at Google, still shows 23 errors and 4 warnings. I'd say don't obsess over this too much unless something is interfering with how the page renders or with your page load speed.
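If you'd rather quantify that than eyeball the results page, the validator also has a machine-readable interface. A minimal sketch (Node 18+, saved as an .mjs module; it assumes the Nu checker still accepts GET requests with a doc parameter and out=json):

    // Count validator errors for a page via the W3C Nu checker's JSON output.
    const target = 'https://www.google.com/';
    const url = 'https://validator.w3.org/nu/?doc=' + encodeURIComponent(target) + '&out=json';

    const res = await fetch(url, { headers: { 'User-Agent': 'validation-spot-check' } });
    const report = await res.json();

    // The report is a flat list of messages; "error" entries are what the
    // HTML report highlights in red.
    const errors = report.messages.filter(m => m.type === 'error');
    console.log(errors.length + ' errors out of ' + report.messages.length + ' messages for ' + target);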
Hope that helps!
Dana
-
Generally speaking, I would agree that validation is often too strict.
Google seems to handle this well, however. In fact, I seem to recall Matt C. once saying that the VAST majority of websites don't validate. I think he may have been talking strictly about HTML, though.
Validation isn't a ranking factor, of course, and most prevalent browsers will compensate for minor errors and render a page, regardless. So I really wouldn't be too concerned about validation just for validation's sake. As long as your pages render in most common browsers and neither page functionality nor user experience is adversely affected, I'd consider it a non-issue. As to whether a bot could be fooled into thinking the head had ended and the body had begun, I suppose it's possible, but I've never seen it happen, even with some absolutely horrible coding.
Related Questions
-
Plugins and SEO
I'd like some expert guidance. I've searched for a theme that does what I want and finally found something I like, but I'm wondering what you all think I should do to increase its searchability. The plugin has all the listings and styling; all I need to do is paste the code into the WordPress site and voila, I have a page. Using the widget lets me allow upvotes, provide a map, etc., but it means the content is inside the widget instead of on the page. What would you modify if you wanted to keep the theme and widget and still get the best results? http://best-of-sacramento.com/dentists is my staging site.
Technical SEO | julie-getonthemap
-
SEO for subdomains
I've recently started to work on a website that has previously targeted subdomain pages for its SEO and has some OK rankings. To better explain, let me give an example: a site is called domainname.com and has subdomains that are targeted for SEO (i.e. pageone.domainname.com, pagetwo.domainname.com, pagethree.domainname.com). The site is going through a redevelopment and can reorganise its pages under other URLs. What would be the best way to approach this situation for SEO? Ideally, I'm tempted to recommend that new targeted pages be created - domainname.com/pageone, domainname.com/pagetwo, domainname.com/pagethree, etc. - with 301 redirects from the old pages (sketched below). Does a subdomain page structure (e.g. pageone.domainname.com) have any negative effects on SEO? Also, is there a good way to track rankings? I find that a lot of rank checkers don't pick up subdomains. Any tips on the best approach to take here would be appreciated. Hope I've made sense!
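For what it's worth, a hedged sketch of those 301s as an Apache mod_rewrite rule, using the hypothetical hostnames from the question (repeat per subdomain, or generalise the pattern):

    RewriteEngine On
    # Send pageone.domainname.com/anything to domainname.com/pageone/anything with a 301
    RewriteCond %{HTTP_HOST} ^pageone\.domainname\.com$ [NC]
    RewriteRule ^(.*)$ http://domainname.com/pageone/$1 [R=301,L]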
Technical SEO | Gavo
-
Posting on Forums for SEO benefit
Hi, is it beneficial to participate on forums in our marketplace to help us become an authority in Google's eyes? I assume that even if we don't get links, it's still useful in terms of co-citation... Thanks
Technical SEO | jj3434
-
Roger has detected a problem
SEOmoz says Roger has detected a problem: "We have detected that the domain www.romancebookstore.com.au does not respond to web requests. Using this domain, we will be unable to crawl your site or present accurate SERP information." What is wrong with this domain?
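(You can reproduce Roger's check yourself from a terminal; both commands below are standard tools, and the hostname is the one from the question:

    dig +short www.romancebookstore.com.au     # does the name resolve to an IP at all?
    curl -I http://www.romancebookstore.com.au # does the server answer, and with what status?

If the first returns nothing, it's a DNS problem; if the second hangs or errors, the web server isn't responding.)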
Technical SEO | damientown
-
How to fix this 404 error (4XX Client Error)
In my report this URL shows a 404 error: http://www.thexxxhouse.com/what_sets_us_aparat.html. This web page was removed from the server. How do I fix this in an SEO-friendly way?
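One common SEO-friendly fix, sketched as an .htaccess rule: 301-redirect the removed URL to its closest live replacement. The target path below is hypothetical; point it at whatever page actually supersedes the old one:

    Redirect 301 /what_sets_us_aparat.html /about-us.html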
Technical SEO | innofidelity
-
Tracking a Crawl error
Hi all, if you find a crawl error in your report, how do you track it down? The error only gives the URL that is wrong, not the location where it appears. Can I drill down and find out more information? Thank you!
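One way to drill down, assuming you have shell access to the site's source or templates, is simply to search for the flagged URL; the path below is a placeholder for whatever URL your report listed:

    grep -rn "/the-broken-url.html" /path/to/site/source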
Technical SEO | wedmonds
-
SEO tips for RSS feeds?
What SEO advice do you have for RSS feeds? Specifically, does the URL structure matter? Should feeds be noindex, follow or noindex, nofollow? Any other advice?
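Since a feed is XML rather than HTML, there's no meta robots tag to set; the usual mechanism is the X-Robots-Tag HTTP header. A hedged sketch for Apache with mod_headers enabled; the file-extension match is an assumption about how your feed URLs look:

    <FilesMatch "\.(rss|atom)$">
      Header set X-Robots-Tag "noindex, follow"
    </FilesMatch>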
Technical SEO | nicole.healthline