Unknown "/" added causing 404 error
-
I have four 404 url redirect errors that I cannot sort out.
It tells me the referring URL:
www.homedestination.com/calculator-mortgage-resources.html has a "/" on the end.
Cannot find:
www.homedestination.com/calculator-mortgage-resources.html
I cannot figure out where this referring URL is coming from; the file in the root folder does not have a "/" on the end. Could the bad link be on a page somewhere? All my Dreamweaver page link tests come back OK.
I must be missing something simple and would value help from anyone who may spot it.
Thanks!
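One quick way to hunt for the stray slash is to search your local site files for any link that ends with it. This is only a sketch: it assumes a Unix-like shell and that a local copy of the site is the current directory (adjust the path and filename pattern to match your setup).

```shell
# Recursively search all local HTML files for a link to the page
# with a trailing slash, printing the file name and line number.
grep -rn --include='*.html' 'calculator-mortgage-resources.html/' .
```

Any line this prints is a candidate for the bad link; remove the trailing slash there and the referring 404 should stop.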
-
Dan,
Thanks for the helpful answer.
Sorting through that many pages looking for a stray "/" will take time. Are there any shortcuts for finding it? I do not get the error in Google crawl checks; I do find it in my deeper SEOmoz advanced CSV files. Do I still need to fix it, then?
You are right about the canonical tag. For whatever reason, I get an error in SEOmoz that says:
Appropriate Use of Rel Canonical
Moderate fix
<dl>
<dt>Canonical URL</dt>
<dd>"http://www.homedestinantion.com/calculator-mortgage-resources.html"</dd>
<dt>Explanation</dt>
<dd>If the canonical tag is pointing to a different URL, engines will not count this page as the reference resource and thus, it won't have an opportunity to rank. Make sure you're targeting the right page (if this isn't it, you can reset the target above) and then change the canonical tag to reference that URL.</dd>
<dt>Recommendation</dt>
<dd>We check to make sure that IF you use canonical URL tags, it points to the right page. If the canonical tag points to a different URL, engines will not count this page as the reference resource and thus, it won't have an opportunity to rank. If you've not made this page the rel=canonical target, change the reference to this URL. NOTE: For pages not employing canonical URL tags, this factor does not apply.</dd>
</dl>
-
Where are you getting your 404 error report? If it is from somewhere other than Google Webmaster Tools, it is probably not a worry.
That page is not indexed in Google (and if they did index it, they took out the slash, so you don't need to do a redirect). You probably have a link on your site where someone accidentally added the slash to the href. Just find the bad link on your site and take out the slash.
The only other thing that could be causing this is a backlink, but Open Site Explorer does not show any backlinks for that page yet.
The best place to check for 404s that need attention is Google Webmaster Tools.
-Dan
PS - You should really have a canonical tag on your site.
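For reference, a canonical tag goes in the `<head>` of each page and points at the one preferred URL for that page. A minimal sketch for the page discussed in this thread (the URL is the one shown above; adjust it for each page):

```html
<!-- Place inside the <head> of calculator-mortgage-resources.html -->
<link rel="canonical" href="http://www.homedestination.com/calculator-mortgage-resources.html" />
```

With this in place, even if the trailing-slash variant gets crawled, search engines are told which version of the URL to treat as the reference.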
-
My guess would be that it is coming from an external site.
-
Jordan,
Thanks for your response.
Yes, I did that some months back. My problem is with URL redirects that have the "/" on the end, and with all my .aspx files: Adobe Business Catalyst does not let me set up a 301 redirect on them. But that is another subject.
Since none of my pages ends in a slash, I don't know where this came from or what to do, given that the traditional 301 URL redirect fails.
-
I am not sure how to go about finding the broken link, but if you want a simple fix you could add a 301 redirect so that the 404 error won't happen anymore. Do you know how to do that?
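On an Apache server, that redirect can be set up in the site's .htaccess file. This is only a sketch: it assumes mod_rewrite is enabled and .htaccess overrides are allowed, which (as noted elsewhere in the thread) may not be the case on hosted platforms like Business Catalyst.

```apache
# 301-redirect the trailing-slash variant back to the real page.
RewriteEngine On
RewriteRule ^calculator-mortgage-resources\.html/$ /calculator-mortgage-resources.html [R=301,L]
```

Anyone hitting the bad URL then lands on the correct page, and search engines consolidate the two URLs.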