4XX Errors - Adding %5c%5c to Links
-
Hi all
Hope someone can help me with this.
The internal links on my hubby's business site occasionally break, endlessly adding %5c%5c%5c to the end of the URL - like this:
site.com/about/hours-of-operation/\\\\\\\\%
I cannot for the life of me figure out why it is doing this and while it has happened to me from time to time, I can't recreate it.
My crawl diagnostics here in my SEOmoz campaign show 19-20 URLs doing this - it's nuts.
Any insight?
Thank you!!
Jennifer
~PotPieGirl
-
Both Shane and I looked and neither of us saw any / vs \ issues.
Then, I just took a peek at my source code and look what I saw:
http://screencast.com/t/y02R4RS2L
Think that is it?
Thanks for replying!
Jennifer
-
Sent you a PM, Shane - thanks so much for the offer!
By "occasionally break", I mean that every now and again, any link on the site will freak out and add the %5c gibberish to the end.
Jennifer
-
It would really have to be a backslash somewhere...
The only reason for a %5C to appear is the use of a backslash - %5C is literally the URL-encoded form of the backslash character.
**"business site occasionally break"**
What do you mean by "break"?
**"My crawl diagnostics here in my SEOmoz campaign show 19-20 urls doing this - it's nuts."**
Is the SEOmoz report the only place this issue shows up?
If you would like to PM me the site I can attempt to profile it
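For what it's worth, one hypothetical way a URL can grow junk endlessly like that is a *relative* href containing the encoded backslash: each click resolves it against the current page, so the segment piles up. A minimal sketch with Python's standard library (the URL and href are placeholders, not taken from your site):

```python
from urllib.parse import urljoin

# Hypothetical: a theme emits a relative link like "%5c%5c%5c/".
# Because it is relative and ends in "/", every navigation appends
# one more copy of the segment to the current path.
url = "http://site.com/about/hours-of-operation/"
for _ in range(3):
    url = urljoin(url, "%5c%5c%5c/")

print(url)
# http://site.com/about/hours-of-operation/%5c%5c%5c/%5c%5c%5c/%5c%5c%5c/
```

If that's the mechanism, the fix is making the offending href absolute (or root-relative) rather than relative.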
-
Thanks for replying so quickly, Shane!
I don't believe it's a / vs \ issue. It's a WordPress site. Key pages are in the top nav bar (all URLs correct) and all the sidebar links are widgets created by WordPress.
For some reason I suspect a theme issue, but if I can't recreate the error, I'll have no way of knowing whether changing the theme solves the problem.
The site has been online since 2010 with no issues... this is a new issue (past couple of months, according to crawl diagnostics).
Thanks!!!
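Since the bad links only show up intermittently, one way to check is to scan a page's rendered HTML for any href containing a raw backslash or its %5C encoding. A quick standard-library sketch (the sample HTML here is made up for illustration - feed it your live page source instead):

```python
from html.parser import HTMLParser

class BackslashLinkFinder(HTMLParser):
    """Collects hrefs that contain a raw backslash or its %5C encoding."""
    def __init__(self):
        super().__init__()
        self.suspects = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name == "href" and value and ("\\" in value or "%5c" in value.lower()):
                self.suspects.append(value)

finder = BackslashLinkFinder()
# In practice, fetch the live page and feed its HTML here.
finder.feed('<a href="/about/">ok</a> <a href="\\about\\">bad</a>')
print(finder.suspects)  # ['\\about\\']
```

Running that against the pages SEOmoz flags would narrow down whether the theme (or a widget) is emitting the backslashes.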
-
Somewhere in your URL markup a backslash (\) has been used instead of a forward slash (/), which is not valid in a URL.
-
It appears from research that the culprit is \ (backslash), not / (slash).
So possibly somewhere in your site you have used \ instead of /. Of course this is just a possibility, but %5C is the percent-encoded representation of the \ character in a URL.
Hope this helps
PS a quick check at http://www.w3schools.com/tags/ref_urlencode.asp confirmed that %5C is the URL encoding for a backslash - it's treated as a special character - so somewhere in your CMS or code a backslash has been used instead of a slash.
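You can verify the encoding yourself with Python's standard library (a minimal, self-contained check - nothing here is site-specific):

```python
from urllib.parse import quote, unquote

# Backslash is a reserved/special character in URLs, so it gets
# percent-encoded as %5C (0x5C is its ASCII code point).
encoded = quote("\\")           # -> '%5C'
decoded = unquote("%5c%5c%5c")  # -> three literal backslashes
print(encoded, repr(decoded))
```

So a URL ending in %5c%5c%5c is exactly what you'd see if a string of backslashes got percent-encoded somewhere in the link-building code.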