HTML Encoding Error
-
Okay, so this is driving me nuts because I should know how to find and fix this but for the life of me cannot. One of the sites I work for has a long-standing crawl error in Google Webmaster Tools for the URL /a%3E that appears on nearly every page of the site. I know that a%3E is an improperly encoded > but I can't seem to find where exactly in the code it's coming from. So I keep putting it off and coming back to it every week or two only to rack my brain and give up on it after about an hour (since it's not a priority and it's not really hurting anything). The site in question is https://www.deckanddockboxes.com/ and some of the pages it can be found on are /small-trash-can.html, /Dock-Step-Storage-Bin.html, and /Standard-Dock-Box-Maxi.html (among others). I figured it was about time to ask for another set of eyes to look at this for me. Any help would be greatly appreciated. Thanks!
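(For anyone following along: %3E is the percent-encoding of the > character, so /a%3E is literally the URL /a>. A quick sanity check in Python confirms the round trip:)

```python
from urllib.parse import quote, unquote

# %3E is the percent-encoding of ">", so the crawl-error URL
# /a%3E decodes to the literal path "/a>".
decoded = unquote("/a%3E")
print(decoded)  # /a>

# Round-tripping confirms it: ">" isn't safe in a URL path,
# so it gets encoded back to %3E (the "/" is left alone).
print(quote(decoded))  # /a%3E
```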
-
Could be, I suppose. But it's been happening on and off for months now. I just mostly stop caring after a bit, clear out the errors, and get annoyed when I see it pop up again. It's one of those things that doesn't actually cause a problem, but I can't help feeling irked by its existence. All in all, I'm perfectly fine with the solution being "Google is wrong, leave it alone"... that's basically what I've been doing anyway.
-
I did a Screaming Frog crawl of your site, but didn't see any malformed links. Maybe it was a temporary issue that just hasn't been cleared from Google's cache.
-
Sorry, I wasn't getting email notifications that people had answered. I checked with our remaining coder, who said that was there on purpose (much like Highland stated). He's going to take a deeper look into it once he has the chance, but doesn't know why it's showing up like that.
-
In XHTML (which he's using), it is proper formatting to add a closing slash to tags that have no closing tag, so br, hr, input, etc. all need that slash. HTML5 also allows the slash on those tags, though it has no effect there.
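(To illustrate the point: an HTML-aware parser treats the plain and self-closing forms of a void element as the same tag. A small sketch with Python's built-in html.parser, where the trailing slash only changes which handler fires:)

```python
from html.parser import HTMLParser

# Minimal sketch: <hr> and <hr /> are the same void element to the
# parser; the trailing slash only routes it to handle_startendtag.
class TagLogger(HTMLParser):
    def __init__(self):
        super().__init__()
        self.seen = []

    def handle_starttag(self, tag, attrs):
        self.seen.append(("start", tag))

    def handle_startendtag(self, tag, attrs):
        self.seen.append(("startend", tag))

p = TagLogger()
p.feed("<hr><hr /><br/>")
print(p.seen)  # [('start', 'hr'), ('startend', 'hr'), ('startend', 'br')]
```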
-
This is a bit of a long shot, Mike, but it's such a weird error that long shots might pay off.
In your code around line 769 you have a horizontal rule inserted, which has an extra, unneeded "/" before the final ">" of the tag. I can only assume that Googlebot is interpreting that as an attempt at a relative URL?
Cart is empty
[code sample not preserved: the cart markup's hr tag with the stray "/"] // This may be the problem?
You wouldn't have noticed it as the horizontal rule is still appearing as expected.
Like I said, long shot, but since the cart appears on nearly every page, that could explain it.
Dying to know if that's it, so lemme know either way?
Paul
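(If the stray-slash theory is right, one way to hunt for it without eyeballing the source is to scan each page's href values for anything containing a raw >, since a broken attribute like href="/a> would be crawled as /a%3E. A rough sketch, assuming you have the page HTML in hand; the sample markup below is made up for illustration:)

```python
import re

# Rough sketch: flag href values that contain a raw ">" character,
# since a broken attribute such as href="/a> would get crawled as /a%3E.
HREF_RE = re.compile(r'href\s*=\s*"([^"]*)"', re.IGNORECASE)

def suspicious_hrefs(html: str) -> list:
    """Return href values containing a raw '>' character."""
    return [h for h in HREF_RE.findall(html) if ">" in h]

# Example with a deliberately malformed anchor:
sample = '<a href="/small-trash-can.html">ok</a> <a href="/a>">broken</a>'
print(suspicious_hrefs(sample))  # ['/a>']
```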
-
The page is being linked only from internal pages on the site, not from any outside websites or scrapers. Some of the pages WMT says the incorrect page is being crawled from are listed above.
-
Where are you seeing the error in Webmaster Tools?
If it's in the Crawl Errors section, you can click on one of the links and then the "Linked From" tab, which will show you what pages are linking to the malformed URL. A lot of the time these turn out to be external scraper sites linking to your site improperly.