Are W3C Validators too strict? Do errors create SEO problems?
-
I ran an HTML markup validation tool (http://validator.w3.org) on a website. There were 140+ errors and 40+ warnings. IT says "W3C validators are overly strict and would deny many modern constructs that browsers and search engines understand."
What a browser can understand and display to visitors is one thing, but what search engines can read depends entirely on the code.
My question is this: if a search engine crawler is reading through the code and comes upon an error like this:
…ext/javascript" src="javaScript/mainNavMenuTime-ios.js"> </script>');}
The element named above was found in a context where it is not allowed. This could mean that you have incorrectly nested elements -- such as a "style" element
in the "body" section instead of inside "head" -- or two elements that overlap (which is not allowed).
One common cause for this error is the use of XHTML syntax in HTML documents. Due to HTML's rules of implicitly closed elements, this error can create
cascading effects. For instance, using XHTML's "self-closing" tags for "meta" and "link" in the "head" section of a HTML document may cause the parser to infer
the end of the "head" section and the beginning of the "body" section (where "link" and "meta" are not allowed; hence the reported error).
...and this:
<code class="input">…t("?");document.write('>');}</code>
(The validator repeats the same explanation for this second error.)
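To see why this pattern trips up a strict parser, here is a small sketch using Python's built-in `html.parser` (a much simpler parser than a browser's or a crawler's, so treat it only as an illustration; the file name `x.js` is made up). The literal `</script>` inside the `document.write(...)` string is read as the close of the real `script` element, which is exactly the kind of premature boundary the validator is complaining about:

```python
from html.parser import HTMLParser

class TagLog(HTMLParser):
    """Records every start and end tag the parser sees."""
    def __init__(self):
        super().__init__()
        self.events = []

    def handle_starttag(self, tag, attrs):
        self.events.append(("start", tag))

    def handle_endtag(self, tag):
        self.events.append(("end", tag))

# A script that writes out another script tag, as in the errors above.
broken = '<script>document.write(\'<script src="x.js"></script>\');</script>'
p = TagLog()
p.feed(broken)
print(p.events)
# The literal </script> inside the string closes the real script element
# early, so the parser reports two end tags for a single start tag,
# and the leftover ');  is treated as stray page text.

# The usual workaround: escape the slash so a scan of the markup never
# sees a literal closing tag inside the string.
fixed = '<script>document.write(\'<script src="x.js"><\\/script>\');</script>'
p2 = TagLog()
p2.feed(fixed)
print(p2.events)
# Now the parser sees exactly one start tag and one end tag.
```

The same idea is behind the common `'</scr' + 'ipt>'` trick: either way, the script's source no longer contains a literal `</script>`, so the element boundary stays where the author intended.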
Does this mean that the crawlers don't know where the code ends and the body text begins, and what they should and shouldn't be focusing on?
-
Google is a different case when run through the validator. I actually read an article on why Google's sites do not validate: they serve so much traffic that omitting markup that does not matter, such as the self-closing / on an img tag, saves them a good amount of bandwidth and money.
While I do not think that validation is a ranking factor, I wouldn't totally dismiss it. It makes code easier to maintain, and it has actually gotten me jobs before: clients have run my site through a validator and hired me on the strength of it.
Funny little things work out, too: someone tested my site on Nibbler and it came back as one of the top 25 sites. I get a few hundred hits a day from that. I'll take traffic anywhere I can get it.
-
I agree with Sheldon, and, just for perspective, try running http://www.google.com through the same W3C HTML validator. That should be an excellent illustration: a page with almost nothing on it, coded by the brilliant folks at Google, still shows 23 errors and 4 warnings. I'd say not to obsess over this too much unless something is interfering with the rendering of the page or your page-load speed.
Hope that helps!
Dana
-
Generally speaking, I would agree that validation is often too strict.
Google seems to handle this well, however. In fact, I seem to recall Matt C. once saying that the VAST majority of websites don't validate. I think he may have been talking strictly about HTML, though.
Validation isn't a ranking factor, of course, and most prevalent browsers will compensate for minor errors and render a page, regardless. So I really wouldn't be too concerned about validation just for validation's sake. As long as your pages render in most common browsers and neither page functionality nor user experience is adversely affected, I'd consider it a non-issue. As to whether a bot could be fooled into thinking the head had ended and the body had begun, I suppose it's possible, but I've never seen it happen, even with some absolutely horrible coding.