HTML Encoding Error
-
Okay, so this is driving me nuts because I should know how to find and fix this but for the life of me cannot. One of the sites I work on has a long-standing crawl error in Google WMT for the URL /a%3E that appears on nearly every page of the site. I know that %3E is the URL-encoded form of >, but I can't seem to find where exactly in the code it's coming from. So I keep putting it off and coming back to it every week or two, only to rack my brain and give up after about an hour (since it's not a priority and it's not really hurting anything). The site in question is https://www.deckanddockboxes.com/ and some of the pages it can be found on are /small-trash-can.html, /Dock-Step-Storage-Bin.html, and /Standard-Dock-Box-Maxi.html (among others). I figured it was about time to ask for another set of eyes to look at this for me. Any help would be greatly appreciated. Thanks!
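(For anyone following along: %3E is just the percent-encoded form of >, so the URL Google is reporting is literally /a>. A quick sketch of the round trip using Python's standard library, purely as illustration:)

```python
from urllib.parse import quote, unquote

# Decode the path that Google Webmaster Tools is reporting
print(unquote("/a%3E"))  # -> /a>

# Encoding a path containing a stray ">" produces the same report
print(quote("/a>"))      # -> /a%3E
```

So the real question is where a bare "a>" is leaking into a link on the pages.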
-
Could be, I suppose. But it's been happening on and off for months now. I just mostly stop caring after a bit, clear out the errors, and get annoyed when I see it pop up again. It's one of those things that doesn't actually cause a problem, but I can't help feeling irked by its existence. All in all, I'm perfectly fine with the solution being "Google is wrong, leave it alone"... that's basically what I've been doing anyway.
-
I did a Screaming Frog crawl of your site, but didn't see any malformed links. Maybe it was a temporary issue that just hasn't been cleared from Google's cache.
-
Sorry, I wasn't getting email notifications that people had answered. I checked with our remaining coder, who said that it was there on purpose (much like Highland stated). He's going to take a deeper look into it once he has the chance, but doesn't know why it's showing up like that.
-
In XHTML (which he's using) and HTML5, it is proper formatting to add a closing slash to void elements, i.e. tags that don't have a closing tag. So br, hr, input, etc. can all take that closing slash; XHTML actually requires it, while in HTML5 it's optional but allowed.
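(If it helps to see that the trailing slash is harmless on its own, here's a minimal sketch using Python's built-in html.parser, just as an illustration and not the site's actual stack: XHTML-style self-closing void tags are recognized as ordinary empty elements, not as broken markup.)

```python
from html.parser import HTMLParser

class TagLogger(HTMLParser):
    """Records how the parser classifies each tag it sees."""
    def __init__(self):
        super().__init__()
        self.events = []

    def handle_starttag(self, tag, attrs):
        self.events.append(("start", tag))

    def handle_startendtag(self, tag, attrs):
        # Called for XHTML-style self-closing tags like <br />
        self.events.append(("startend", tag))

parser = TagLogger()
parser.feed('text<br />more<hr /><input type="text" />')
print(parser.events)
# -> [('startend', 'br'), ('startend', 'hr'), ('startend', 'input')]
```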
-
This is a bit of a long shot, Mike, but it's such a weird error that long shots might pay off.
In your code around line 769 you have a horizontal rule inserted, which has an extra, unneeded "/" before the final ">" of the tag. I can only assume that Googlebot is thinking that's an attempt at a relative URL?
Cart is empty
<hr />  // This may be the problem?
You wouldn't have noticed it as the horizontal rule is still appearing as expected.
Like I said, long shot, but since the cart appears on nearly every page, that could explain it.
Dying to know if that's it, so lemme know either way?
Paul
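(Paul's theory lines up with how relative URL resolution works: if a crawler mistakes a stray "a>" fragment for a relative href, resolving it against any page on the site yields /a>, which then shows up percent-encoded as /a%3E. A rough sketch with Python's urllib, using one of the pages Mike listed:)

```python
from urllib.parse import urljoin, quote

base = "https://www.deckanddockboxes.com/small-trash-can.html"

# If a crawler misreads a stray "a>" as a relative link on this page,
# standard URL resolution turns it into a root-level /a> URL...
resolved = urljoin(base, "a>")
print(resolved)  # -> https://www.deckanddockboxes.com/a>

# ...which gets reported percent-encoded, matching the WMT error
print(quote("/a>"))  # -> /a%3E
```

Since the cart markup is on nearly every page, that would also explain why the error shows up site-wide.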
-
The page is being linked from only internal pages on the site, not from any outside websites or scrapers. Some of the pages WMT says the incorrect page is being crawled from are listed above.
-
Where are you seeing the error in Webmaster Tools?
If it's in the Crawl Errors section, you can click on one of the links and then click the "Linked From" tab, which will show you what pages are linking to the malformed URL. A lot of times these will just be external scraper sites that are linking to your site improperly.