Is it OK to have a site that has some URLs with hyphens and other, older, legacy URLs that use underscores?
-
I'm working with a VERY large site that was recently redesigned and recategorized. They kept only about 20% of the URLs from the legacy site (the URLs that had revenue tied to them), and those URLs use underscores, whereas the new URLs created for the site use hyphens.
I don't think this would be an issue for Google as long as the pages are of quality, but I wanted to get everyone's opinion on it. Will it hurt me to have two different sets of URLs, some using hyphens and others using underscores?
-
Search engines handle underscores in different ways, so all else being equal it's best to make your URLs uniform and use hyphens; that appears to be best practice in Google's eyes, since Google treats a hyphen as a word separator but runs underscore-joined words together as a single token.
I wouldn't worry about it too much, though, and I would take vitalscom's advice if you were thinking about moving them over.
-
Absolutely not an issue; hyphens are just a little bit more SEO-friendly. A lot of sites choose to leave legacy URLs in place when there is a chance of losing revenue by changing them. 301 redirects do lose a little bit of PageRank, so it makes sense to be apprehensive. The mix just looks less uniform.
I would suggest moving a small handful of them over and running a ranking report on just those URLs to monitor any loss of rank, instead of moving them all in one shot and risking a loss of revenue. A quick way to sanity-check the test batch's redirects is sketched below.
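If you do test a batch, a minimal Python sketch like this one (using the requests library; the sample URLs and the underscore-to-hyphen mapping are hypothetical) can confirm that each legacy URL 301s to its hyphenated equivalent before you start watching rankings:

```python
import requests

# Hypothetical sample of legacy underscore URLs chosen for the test batch.
legacy_urls = [
    "https://www.example.com/mens_running_shoes",
    "https://www.example.com/womens_rain_jackets",
]

for old_url in legacy_urls:
    # Assumed mapping: the new URL is the old one with underscores
    # swapped for hyphens.
    new_url = old_url.replace("_", "-")
    resp = requests.head(old_url, allow_redirects=False, timeout=10)
    location = resp.headers.get("Location", "")
    if resp.status_code == 301 and location == new_url:
        print(f"OK    {old_url} -> {location}")
    else:
        print(f"CHECK {old_url}: status {resp.status_code}, Location {location!r}")
```

Run it once after setting up the redirects, then lean on the ranking report for the rank-monitoring side.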
-
Related Questions
-
We 410'ed URLs to decrease URLs submitted and increase crawl rate, but dynamically generated sub-URLs from pagination are showing as 404s. Should we 410 these sub-URLs?
Hi everyone! We recently 410'ed some URLs to decrease the URLs submitted and hopefully increase our crawl rate. We had some dynamically generated sub-URLs for pagination that are showing as 404s in Google. These sub-URLs were canonical to the main URLs and not included in our sitemap. Ex: we assumed that if we 410'ed example.com/url, then the dynamically generated example.com/url/page1 would also 410, but instead it 404'ed. Does it make sense to go through and 410 these dynamically generated sub-URLs, or is it not worth it? Thanks in advance for your help! Jeff
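If the cleanup is worth doing, a prefix rule is usually simpler than enumerating every paginated sub-URL. A minimal sketch in Flask (the /url prefix is hypothetical, standing in for the retired pages) that returns 410 for a retired page and anything beneath it:

```python
from flask import Flask, abort

app = Flask(__name__)

# Hypothetical prefixes standing in for the retired pages that were 410'ed.
RETIRED_PREFIXES = ["/url"]

@app.route("/<path:subpath>")
def serve(subpath):
    path = "/" + subpath
    for prefix in RETIRED_PREFIXES:
        # Catch the retired page itself and any dynamically generated
        # sub-URL beneath it, e.g. /url/page1.
        if path == prefix or path.startswith(prefix + "/"):
            abort(410)  # explicit "Gone" rather than the default 404
    return f"Content for {path}"
```

The same prefix-match idea carries over to whatever server or framework actually generates the pagination.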
Intermediate & Advanced SEO | jeffchen
-
New site. How important is traffic for a new site? And what about domain age?
Hi guys. I've been building a new site because I've seen a real SEO opportunity out there. I'm a mixing professional by trade, so I wanted to take advantage of SEO to help gain more work. Here's the site: www.signalchainstudios.co.uk. I'm curious about domain age. The site is fairly well optimised for my keywords and has pretty good content on it (I think so, anyway), but it's nowhere to be seen in the SERPs (like, at all). Is this just a domain age issue? I'd have thought it might be in the top 50, because my site's services are not hard to rank for at all! Also, what about traffic? Does Google want to see an 'active' site before it considers 'promoting' it up the ranks? Or are backlinks and good content the main factors in the equation? Thanks in advance. I love this community to bits 🙂 Isaac.
Intermediate & Advanced SEO | isaac663
-
Site: inurl: Search
I have a site that allows for multiple filter options, and some of the resulting URLs have been indexed. I am in the process of adding the noindex, nofollow meta tag to these pages, but I want an idea of how many of these URLs have been indexed so I can monitor when they have been recrawled and dropped. The structure for these URLs is: http://www.example.co.uk/category/women/shopby/brand1--brand2.html. The unique identifier for the multiple-filter URLs is --, but I've tried using site:example.co.uk inurl:-- and this doesn't seem to work. I have also tried using regex, but still no success. Is there a way around this so I can get a rough idea of how many of these URLs have been indexed? Thanks
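Part of the problem is that Google ignores most punctuation in queries, so the -- in an inurl: operator is simply dropped. As a rough workaround, you can build your own baseline from a crawl export or log dump and compare it against indexed counts over time; a minimal Python sketch, assuming a hypothetical urls.txt file with one URL per line:

```python
# urls.txt is a hypothetical export (from a crawler or server logs),
# one URL per line.
with open("urls.txt") as f:
    urls = [line.strip() for line in f if line.strip()]

# "--" is the unique identifier for the multiple-filter pages.
multi_filter = [u for u in urls if "--" in u]

print(f"{len(multi_filter)} multi-filter URLs out of {len(urls)} total")
```

That gives you the population of filtered URLs; the index status reported in Google Webmaster Tools then gives you the indexed side to compare against as the noindex takes effect.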
Intermediate & Advanced SEO | GrappleAgency
-
Why are these sites outranking me?
I am trying to rank for the phrase "a link between worlds walkthrough". I am on page 1, but there are several results that outrank me, and I cannot see any reason why they would. My site is hiddentriforce.com/a-link-between-worlds/walkthrough/ For that page I have 5 linking domains and varied anchor text, spanning from things like "here" to a variety of related phrases. All of the links come from really good sites. My page has 1400 likes, 90 shares, and about 20 each in tweets and +1's. DA of 44, PA of 37. The #4- and #5-ranked sites both have WAY fewer social interactions, lower PA and DA, fewer links, etc. Yet they outrank me. Why?
Intermediate & Advanced SEO | Atomicx
-
International URL Puzzle
Hello, I have 4 different URLs going to 4 different countries that all contain the same content, and Google is seeing them as duplicate pages. For ecommerce reasons I have to keep these 4 pages separate. Here is an example of the pages so you can see the URL structure: www.example.com/canada www.example.com/australia www.example.com/usa www.example.com/UK How do I fix this duplicate content problem? Thanks!
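For what it's worth, the usual fix when same-language content targets different countries is hreflang annotations, which tell Google the pages are regional alternates rather than duplicates. A minimal sketch of the link elements each of the four pages would carry in its head (the language-region codes are assumptions inferred from the countries in the URLs):

```html
<link rel="alternate" hreflang="en-ca" href="http://www.example.com/canada" />
<link rel="alternate" hreflang="en-au" href="http://www.example.com/australia" />
<link rel="alternate" hreflang="en-us" href="http://www.example.com/usa" />
<link rel="alternate" hreflang="en-gb" href="http://www.example.com/UK" />
```

Each page lists all four alternates, including itself, and the annotations must be reciprocal across the pages for Google to honor them.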
Intermediate & Advanced SEO | digitalops
-
Googlebot found an extremely high number of URLs on your site
I keep getting the "Googlebot found an extremely high number of URLs on your site" message in GWMT for one of the sites that I manage. The error is as below: Googlebot encountered problems while crawling your site. Googlebot encountered extremely large numbers of links on your site. This may indicate a problem with your site's URL structure. Googlebot may unnecessarily be crawling a large number of distinct URLs that point to identical or similar content, or crawling parts of your site that are not intended to be crawled by Googlebot. As a result Googlebot may consume much more bandwidth than necessary, or may be unable to completely index all of the content on your site. I understand the nature of the message: the site uses faceted navigation and is genuinely generating a lot of duplicate pages. However, to stop this from becoming an issue we do the following: we no-index a large number of pages using the on-page meta tag, and we use a canonical tag where appropriate. But we still get the error, and a lot of the example pages that Google says are affected are actually pages with the no-index tag. So my question is: how do I address this problem? I'm thinking that as it's a crawling issue the solution might involve the no-follow meta tag. Any suggestions appreciated.
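The catch is that noindex and canonical tags only take effect after Googlebot has already crawled a page, so they do nothing to reduce crawl volume; robots.txt is the mechanism that stops the facet space from being fetched at all. A minimal sketch, assuming (hypothetically) that the faceted URLs are identifiable by query parameters; Google supports the * wildcard used here:

```
User-agent: *
# Hypothetical facet parameters; substitute the site's real ones.
Disallow: /*?filter=
Disallow: /*&filter=
Disallow: /*?sort=
Disallow: /*&sort=
```

One trade-off to be aware of: a page blocked in robots.txt can never have its noindex tag seen, so pick one mechanism or the other for any given set of URLs.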
Intermediate & Advanced SEO | BenFox
-
What is the best way to allow content to be used on other sites for syndication without risking duplicate content filters?
Cookstr appears to be syndicating content to shape.com and mensfitness.com: a) they integrate their data into partner sites with an attribution back to their site, skinned with the partner's look; b) they link the image back to their image hosted on Cookstr; c) the syndicated page does not have microformats or as much data as their own page does, so their own page is better for SEO. Is this the best strategy, or is there something better they could be doing to safely allow others to use our content? We don't want to share the content if we're going to get hit by a duplicate content filter or have another site outrank us with our own data. Thanks for your help in advance! Their original content page: http://www.cookstr.com/recipes/sauteacuteed-escarole-with-pancetta Their syndicated content pages: http://www.shape.com/healthy-eating/healthy-recipes/recipe/sauteacuteed-escarole-with-pancetta http://www.mensfitness.com/nutrition/healthy-recipes/recipe/sauteacuteed-escarole-with-pancetta
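One option worth noting: Google supports cross-domain rel=canonical, so the partner copies could point back at the original, which addresses the duplicate-content risk more directly than an attribution link does. A sketch of the tag the shape.com and mensfitness.com pages would carry in their head:

```html
<link rel="canonical" href="http://www.cookstr.com/recipes/sauteacuteed-escarole-with-pancetta" />
```

Whether partners will agree to it is a business question, since it concedes the ranking to the original page.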
Intermediate & Advanced SEO | irvingw
-
Block all but one URL in a directory using robots.txt?
Is it possible to block all but one URL with robots.txt? For example, domain.com/subfolder/example.html: if we block the /subfolder/ directory, we want all URLs except for the exact-match URL domain.com/subfolder to be blocked.
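For Googlebot, yes: Google honors Allow and the $ end-of-URL anchor (extensions to the original robots.txt standard that other crawlers may ignore), and the most specific matching rule wins. A sketch for the case described, where only the exact URL domain.com/subfolder stays crawlable:

```
User-agent: *
# Allow only the exact URL; $ anchors the match to the end of the URL.
Allow: /subfolder$
# Block everything inside the directory.
Disallow: /subfolder/
```

Note that Disallow: /subfolder/ (with the trailing slash) never matches the bare /subfolder anyway, so the Allow line is belt-and-braces here; if instead the one URL to keep were /subfolder/example.html, you would swap that path into the Allow line.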
Intermediate & Advanced SEO | nicole.healthline