Are W3C Validators too strict? Do errors create SEO problems?
-
I ran an HTML markup validation tool (http://validator.w3.org) on a website and got 140+ errors and 40+ warnings. Our IT team says, "W3C validators are overly strict and would reject many modern constructs that browsers and search engines understand."
What a browser can understand and display to visitors is one thing, but what search engines can read has everything to do with the code.
My question is this: what happens if the search engine crawler is reading through the code and comes upon an error like this:
…ext/javascript" src="javaScript/mainNavMenuTime-ios.js"> </script>');}
The element named above was found in a context where it is not allowed. This could mean that you have incorrectly nested elements -- such as a "style" element in the "body" section instead of inside "head" -- or two elements that overlap (which is not allowed).
One common cause for this error is the use of XHTML syntax in HTML documents. Due to HTML's rules of implicitly closed elements, this error can create cascading effects. For instance, using XHTML's "self-closing" tags for "meta" and "link" in the "head" section of an HTML document may cause the parser to infer the end of the "head" section and the beginning of the "body" section (where "link" and "meta" are not allowed; hence the reported error).
...and this:
…t("?");document.write('>');}
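To make the validator's explanation concrete, here is a minimal, hypothetical document (not taken from the site in question) that can trigger this cascading error. Under HTML 4.01's SGML-based rules, the XHTML-style trailing slash can be misread, which is the mechanism the error message describes:

```html
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN">
<html>
<head>
  <title>Example</title>
  <!-- XHTML-style self-closing tags in an HTML 4.01 document: under SGML
       rules the trailing "/>" can be read as closing the tag at "/" and
       leaving a stray ">" text character. Text is not allowed inside
       <head>, so the validator infers that <head> has ended and <body>
       has implicitly begun. -->
  <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
  <link rel="stylesheet" href="main.css" />
  <!-- From the validator's point of view we are now inside <body>, so
       this <link> -- and every head-only element after it -- is reported
       as "found in a context where it is not allowed". One stray slash
       can therefore cascade into dozens of reported errors. -->
</head>
<body>
  <p>Visible page content starts here.</p>
</body>
</html>
```

This is often why an error count looks alarming: many of the 140+ errors may trace back to a handful of root causes near the top of the document.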
Does this mean that the crawlers don't know where the code ends and the body text begins -- that is, what they should and shouldn't be focusing on?
-
Google is a different case when run through the validator. I actually read an article on why Google's sites do not validate: they serve so much traffic that omitting tags that don't matter -- things like the self-closing / on an img tag -- actually saves them a good amount of bandwidth and money.
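For what it's worth, many of those "missing" tags aren't errors at all in HTML5, which makes a number of closing tags (and even html, head, and body) optional. A small sketch of markup that looks incomplete but is conforming HTML5:

```html
<!DOCTYPE html>
<title>Optional tags</title>
<ul>
  <li>First item   <!-- </li> may be omitted before the next <li> -->
  <li>Second item  <!-- ...and before the parent's closing </ul> -->
</ul>
<p>Paragraphs can also omit their closing tag before the next block element.
<p>And <img src="logo.png" alt="logo"> is a void element: it never takes a
closing tag, and the XHTML-style trailing slash is simply unnecessary.
```

So the savings come from dropping bytes the parser would reconstruct anyway, not from shipping genuinely broken markup.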
While I do not think that validation is a ranking factor, I wouldn't totally dismiss it. It makes code easier to maintain, and it has actually gotten me jobs before: clients have run my site through a validator and then hired me.
Plus, funny little things work out too: someone tested my site on Nibbler and it came back as one of the top 25 sites. I get a few hundred hits a day from it. I'll take traffic anywhere I can get it.
-
I agree with Sheldon, and, just for perspective, try running http://www.google.com through the same W3C HTML validator. That should be an excellent illustration: a page with almost nothing on it, coded by the brilliant folks at Google, still shows 23 errors and 4 warnings. I'd say not to obsess over this too much unless something is interfering with the rendering of the page or your page load speed.
Hope that helps!
Dana
-
Generally speaking, I would agree that validation is often too strict.
Google seems to handle this well, however. In fact, I seem to recall Matt C. once saying that the VAST majority of websites don't validate. I think he may have been talking strictly about HTML, though.
Validation isn't a ranking factor, of course, and most prevalent browsers will compensate for minor errors and render a page regardless. So I really wouldn't be too concerned about validation just for validation's sake. As long as your pages render in most common browsers and neither page functionality nor user experience is adversely affected, I'd consider it a non-issue.
As to whether a bot could be fooled into thinking the head had ended and the body had begun, I suppose it's possible, but I've never seen it happen, even with some absolutely horrible coding.