Where do these URLs come from?! (Indexation issues)
-
We have an international webshop with languages in the URLs. Our URLs are now set up as follows:
http://thermalunderwear.eu/eng/category/product
Now, we know that there's some kind of strange redirect problem hurting our indexation; this is a technical issue that should be fixed soon. But I don't know whether it's the cause of some other strange problems. I'd be happy with any help/advice/tips.
1. The SEOmoz site crawler starts at http://thermalunderwear.eu. This does not yet redirect to http://thermalunderwear.eu/eng like we want it to, but all the links on the page do include the default language code, so every link on the page looks like http://thermalunderwear.eu/eng/category etc. However, apart from those URLs, the crawler finds many URLs in the form http://thermalunderwear.eu/category/product, i.e. without the language segment. Where it gets these I don't know, and since these URLs don't exist and the webshop simply shows the homepage for them, they all generate 50+ duplicate title/content warnings. Why oh why?
2. If I do a Google search for indexed URLs with English as the language, I get many results formatted like this:
Coldpruf Enthusiast mens thermal shirt - Thermal wear for men ...
thermalunderwear.eu/eng/men/coldpruf-enthusiast-mens-thermal-shirt 170+ items – Fine-ribbed longsleeve thermal shirt men from Enthusiast ... {$SCRIPT_NAME} eng/men/coldpruf-enthusiast-mens-the {$ajax_url} http://thermalunderwear.eu/ajax
What are those variables doing there? It looks like Google is picking up something from our Smarty debug console, which is hidden but still active in the source code, as well as the AJAX URL, which lives in a completely different location. What is Google trying to show here?
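For the missing root redirect in point 1, a minimal sketch of what the 301 could look like, assuming an Apache/.htaccess setup and "eng" as the default language (the exact rule depends on how the CMS routes requests):

```apache
# Permanently redirect the bare root to the default language version
RewriteEngine On
RewriteRule ^$ /eng/ [R=301,L]
```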
-
It sees it as a list; it's like rich snippets. It's a large amount of your content, and Google thinks it is the main content.
See these results: the "40+" is a list I have on my page, and it shows a few samples.
-
I guess that is the only solution then. I don't quite understand why Google picks that information to show in the SERP text (as well as the "170+ items"), but we'll try disabling Smarty debugging when we're not actively using it. I hope it helps!
-
I looked in the source code of this page:
http://thermalunderwear.eu/eng/men/devold-alpine-knee-thermal-socks-electric-blue
And I found {$SCRIPT_NAME} eng/men/coldpruf-enthusiast-mens-the
Your debug code is in the source code. You need to get rid of it, or disable it somehow. I have not used Smarty debug, so I can't help much.
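If it helps, Smarty's debug console can be switched off from the PHP side; a minimal sketch using Smarty's documented settings (check the property names against your Smarty version):

```php
// Disable the Smarty debug console so {$SCRIPT_NAME} etc. never reach the output
$smarty->debugging = false;
// Also prevent debugging from being re-enabled via the SMARTY_DEBUG URL parameter
$smarty->debugging_ctrl = 'NONE';
```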
-
Ah, thanks Alan! It looks like there is a problem in the code that generates the breadcrumb URLs. We will get that fixed ASAP, which should lower the number of duplicate content warnings considerably.
-
Your first problem
Look at this page,
http://thermalunderwear.eu/eng/kids-thermal-underwear/coldpruf-enthusiast-kids-thermal-shirt
You will see a link to http://thermalunderwear.eu/kids-thermal-underwear/coldpruf-enthusiast-kids-thermal-shirt
I will look at your other problem in a few minutes.
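As a sketch of the kind of fix the link/breadcrumb generation needs (the helper name is hypothetical; the real fix belongs wherever the CMS builds those hrefs):

```python
def with_lang_prefix(path: str, lang: str = "eng") -> str:
    """Ensure an internal link path carries the language segment."""
    if path == f"/{lang}" or path.startswith(f"/{lang}/"):
        return path  # already prefixed, leave it alone
    return f"/{lang}{path}"

# with_lang_prefix("/kids-thermal-underwear/coldpruf-enthusiast-kids-thermal-shirt")
# -> "/eng/kids-thermal-underwear/coldpruf-enthusiast-kids-thermal-shirt"
```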
Related Questions
-
How to choose the best canonical URL
In a duplicate content situation, and assuming that both rel=canonical and a 301 redirect pass link equity (I know there is still some speculation on this), how should you choose the "best" version of the URL to establish as the redirect target or authoritative URL? For example, we have a series of duplicate pages on our site. Typically we choose the "cleanest" or shortest non-trailing-slash version of the URL as the canonical, but what if those pages are already established and have varying page authority/backlink profiles? The URLs are:
example.com/stores/locate/index?parameters=tags - PA = 54, Inbound Links = 259
example.com/stores/locate/index - PA = 60, Inbound Links = 302
example.com/stores/ - This is the version that currently ranks. PA = 42, Inbound Links = 3
example.com/stores - PA = 40, Inbound Links = 8
This might not really even matter, but in the interests of conserving as much SEO value as possible, which would you choose as either the 301 redirect target and/or the canonical version? My gut is to go with the URL that's already ranking (example.com/stores/), but I'm curious if PA, backlinks, and trailing slashes should be considered also. We of course would not 301 the URL with the tracking parameters. 🙂 Thanks for your help!
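For reference, declaring whichever version is chosen looks like this (example.com/stores/ is used here only because it's the version described as currently ranking; this is an illustration, not a recommendation):

```html
<!-- Placed in the <head> of every duplicate version of the page -->
<link rel="canonical" href="http://example.com/stores/">
```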
Moz Pro | | Critical_Mass0 -
Site explorer Issue
Hello, I'm looking to see in Site Explorer the links coming from directories such as BOW, Yahoo, etc. I've been listed there for almost a year and these links are not shown; the same goes for my competitors. Am I missing something? Thank you, Claudio
Moz Pro | | SharewarePros0 -
What should I put in 'Define Branded Keyword Rules'? (Starting a Campaign)
Hello, I am a new user here (this seems really interesting!), but my English level is not very good (I am Spanish) and I don't understand what 'Define Branded Keyword Rules' means. I hope someone can explain it to me in easy words so I can understand. Thank you very much! 1362443047.jpeg
Moz Pro | | matiw0 -
Issue Using Mozscape with SEOGadget's Links Excel Extension
Howdy everyone! Since I saw the MozCon presentation by Richard Baxter at SEOGadget, I've been completely amazed by the Links extension for Excel. I gave it a try this morning. Unfortunately, I cannot get it working, and was hoping that someone here could help me out. I know it's out of the usual "realm" of questions, but I figure it's worth a try 🙂 I successfully installed the add-in and entered my access ID (as "member-xxxxxxxxxx"; "member" is in my ID) and the secret key. I then downloaded the "OSE" Excel spreadsheet just to make sure I get all the calls right (as I know the doc works). Once I do this and enter anything in, I get "an error occurred: the remote server returned an error: 401 unauthorized". I then went into the config file (as the setup doc suggests) and disabled run-time caching (or at least set "SEOMOZ_API_use_cache_YN":"N") and raised SEOMOZ_API_timeout to 100000 from 60000. I have also tried uninstalling and reinstalling the add-in, along with regenerating the Mozscape API key. Anyway, I'm not an Excel wiz, and would appreciate any help that I could get on this. I'm also about to experiment with SEO Tools for Excel if anyone wants to check that out. Thanks in advance, Zach Russell
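For what it's worth, a 401 from Mozscape usually means the signed authentication is off rather than the caching or timeout settings. A minimal sketch of how the Signature parameter is built under the legacy Mozscape signing scheme (HMAC-SHA1 over "accessID\nexpires", base64, then URL-encoded); the credentials below are placeholders:

```python
import base64
import hashlib
import hmac
import time
from urllib.parse import quote

def mozscape_signature(access_id: str, secret_key: str, expires: int) -> str:
    """Build the URL-encoded Signature parameter for a Mozscape request."""
    message = f"{access_id}\n{expires}".encode("utf-8")
    digest = hmac.new(secret_key.encode("utf-8"), message, hashlib.sha1).digest()
    return quote(base64.b64encode(digest).decode("utf-8"), safe="")

# Placeholder credentials; a real request appends
# ?AccessID=...&Expires=...&Signature=... to the API URL
expires = int(time.time()) + 300
sig = mozscape_signature("member-xxxxxxxxxx", "your-secret-key", expires)
```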
Moz Pro | | Zachary_Russell0 -
Total noob here: when adding competitors, why can't I add a sub-directory or specific URL?
I tried to add my competitors but failed; it seems only a subdomain can be added as a competitor. How can I add a sub-directory or a specific URL as a competitor?
Moz Pro | | lfproseo0 -
Where does the crawler find the URLs?
The SEOmoz crawler has found a number of 500 error pages, 404s, etc., which is very useful 🙂 However, some of the URLs are in weird/broken formats we don't recognise and nobody remembers ever using - not weird enough to imply hacking, but something broken in the CMS. Is there any way to find out where the crawler found these URLs? I can patch up and redirect the end result as best I can, but I would prefer to plug the leak. Thanks 🙂
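One way to hunt the source down yourself is to fetch your own pages and record every href each one emits, so a mystery URL can be traced back to the page linking to it. A minimal sketch using only the standard library (fetching and looping over your page list is left out):

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect every <a href> value found in a page's HTML."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def find_links(html: str) -> list[str]:
    parser = LinkCollector()
    parser.feed(html)
    return parser.links

# find_links('<a href="/eng/men">men</a><a href="/men">oops</a>')
# -> ['/eng/men', '/men']
```

Running this over each page and grepping the output for the broken URL shows which template is generating it.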
Moz Pro | | Fammy1 -
How come when I export an error list I can only export the first page?
I am working on fixing the 4xx errors. I have found the easiest way to do this would be to export the list, print it out, and check off the ones I've fixed. The site only lets me export the first page. We'd appreciate any help. Thanks, Ryan D. Gran --Not sure what category this question belongs in, so I selected SEOmoz Tools--
Moz Pro | | dggusmc0 -
SEOMoz site crawlers created an issue for our servers
I have set up a number of campaigns with your Pro tool. Unfortunately, we have 7 sites on our server, and our IT dept have said that we had an issue when your site crawler visited several of the sites at the same time. Is there any way that I can retain the campaigns but have the sites crawled on request rather than automatically?
Moz Pro | | StephenALee0