Trailing Slash Problems
-
Link juice is being split between the trailing-slash and non-slash versions of my URLs, i.e. ldnwicklesscandles.com/scentsy-uk and ldnwicklesscandles.com/scentsy-uk/.
I initially asked about this here and was told to add a rewrite rule to the .htaccess file.
I don't have access to that with Squarespace, nor can I add canonical tags on a page-by-page basis.
A 301 redirect from scentsy-uk to scentsy-uk/ didn't work either; the browser showed an error message saying the redirect wasn't completing.
Squarespace hasn't been very helpful at all.
My question is: is there another way to fix this, or should I just call it a day with Squarespace and move to WordPress?
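For reference, the .htaccess rewrite that was suggested would only work on a server you control, which Squarespace does not expose. A minimal sketch, assuming a self-hosted Apache server with mod_rewrite enabled:

```apache
# 301-redirect any URL with a trailing slash to the version without it
# (skip real directories so directory listings keep working).
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)/$ /$1 [R=301,L]
```

With this in place, a request for /scentsy-uk/ would return a 301 pointing at /scentsy-uk, consolidating the two versions.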
-
I know this is an old thread, but I'm just wondering if anyone ever found a solution in Squarespace, or did everyone just move over to WordPress?
-
You'll be hard-pressed to find a hosted platform that is technically optimized for search engines. Adobe Catalyst, Squarespace, Wix, etc. will all have minor (or major) issues. I don't know of too many really popular sites hosted on these platforms, but that's not to say those hosted sites won't rank well for their chosen keywords. Anyway, here's what Google has to say about it: http://www.youtube.com/watch?v=CTrdP7lJ2HU
-
Hi Christine:
Did you ever find a solution for this? I have a client whose Squarespace site shows rel=canonical issues in my recent crawl. And to your point, you can't implement that on a per-page basis. Squarespace hasn't responded (yet) to a service request. Any suggestions would be helpful. Thank you!
-
Is there a way to get around this without moving to WordPress? I'll only do that if there's absolutely no other way to help my site.
-
Looks like a move to WordPress is a safe bet then, as your platform seems very SEO-unfriendly.
When you do move to WordPress, be sure to check out the Yoast SEO plugin: http://yoast.com/wordpress/seo/
-
Aran, I can't add canonical tags on a page-by-page basis. x
-
Have you tried using the canonical tag?
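For anyone reading this on a plan or platform that does allow per-page header code injection, a canonical tag is a single line in the page's head. A hypothetical snippet using the URL from the original question:

```html
<!-- Tells search engines which version of the page is canonical,
     so the slash and non-slash variants consolidate to one URL. -->
<link rel="canonical" href="https://ldnwicklesscandles.com/scentsy-uk" />
```

Whether you can inject this per page depends entirely on the platform and plan; the poster above could not on Squarespace at the time.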
-
Essentially, it's only an issue when you have links to both the slash and non-slash versions. I would standardize on either having trailing slashes or not, and make sure all links on the site follow that standard. However, given that the platform lacks basic SEO controls, I would convert to WP.
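The "standardize on one form" advice above can be enforced wherever internal links are generated or audited. A small illustrative Python helper (not tied to any platform; the function name is my own) that normalizes URLs to the no-slash form while leaving the root path alone:

```python
from urllib.parse import urlsplit, urlunsplit


def normalize_trailing_slash(url):
    """Return url with any trailing slash stripped from the path.

    The root path "/" is kept as-is, and query/fragment are preserved.
    """
    scheme, netloc, path, query, fragment = urlsplit(url)
    if len(path) > 1 and path.endswith("/"):
        # Strip all trailing slashes; fall back to "/" if nothing remains.
        path = path.rstrip("/") or "/"
    return urlunsplit((scheme, netloc, path, query, fragment))
```

Running every internal href through a helper like this when building templates keeps the site consistent, so link equity never splits in the first place.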
Related Questions
-
On-Page Problem
Hello Mozzers, A friend has a business website and the on-page stuff is done really badly. He wants to rank for: conference room furnishing, video conference, digital signage. (Don't worry about the keywords, it's just made up for an example.) For these three services he has one page: hiswebsite.com/av. AV stands for audio and video and is the h1. If you click on one of the services, the URL doesn't change. For example, if you click on video conference, just the text changes; the URL stays /av. All his targeted pages got an F grade, and I am not surprised, since the service titles are in . Wouldn't it be a lot better to make a separate page for every service with a targeted keyword, like hiswebsite.com/video-conference? All this stuff is on /av, so how would a 301 redirect work to all the service pages? Does this make sense? Any help is appreciated! Thanks in advance!
Technical SEO | grobro1 -
Duplicate Content Problem!
Hi folks, I have a quite awkward problem. For a few weeks I've been getting a huge number of "duplicate content errors" in my Moz crawl reports. After a while of looking for the error, I thought of the domains I've bought additionally. So I went to Google and typed in site:myotherdomains.com. The result was, as I expected, that my original website got indexed under my new domains as well. That means: for example, my original website was indexed with www.domain.com/aboutus. Then I bought some additional domains which point to my / (root) folder. What happened is that I also get listed with: www.mynewdomains.com/com How can I fix that? I tried a normal domain redirect, but it seems this doesn't help, as when I visit www.mynewdomains.com the domain doesn't change in my browser to www.myoriginaldomain.com but stays as it is... I've been busy the whole day trying to find a solution and I'm kinda desperate now. If somebody could give me advice it would be much appreciated. Mike
Technical SEO | KillAccountPlease0 -
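On the duplicate-domains question above: when the extra domains can serve their own Apache configuration (rather than being DNS-pointed at the same folder, as the poster describes), a host-level 301 is the usual fix. A sketch using the placeholder domain names from the question, assuming mod_rewrite is available:

```apache
# In the site's .htaccess: 301-redirect any request whose Host header
# is not the canonical domain over to the canonical domain, keeping the path.
RewriteEngine On
RewriteCond %{HTTP_HOST} !^www\.myoriginaldomain\.com$ [NC]
RewriteRule ^(.*)$ https://www.myoriginaldomain.com/$1 [R=301,L]
```

A visitor hitting www.mynewdomains.com/aboutus would then land on www.myoriginaldomain.com/aboutus with the address bar updated, and the duplicate listings would drop out of the index over time.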
Why are my URLs with a trailing slash still getting indexed even though they are redirected in the .htaccess file?
My .htaccess file is set up to redirect a URL with a trailing / to the URL without the /. However, my SEOmoz crawl diagnostics report is showing both URLs. I took a look at my Google Webmaster account and saw some duplicate META title issues. Same thing: Google Webmaster is showing the URL with the trailing /. My website was live for about 3 days before I added the code to the .htaccess file to remove the trailing /. Is it possible that in those 3 days both versions were indexed and haven't been removed even though the .htaccess file has been updated?
Technical SEO | mkhGT0 -
Google+ Authorship, Rich Snippets and Three Names: a Problem?
Hello All, I have a conundrum that I thought I'd resolved, but it's popped its gnarly old head over the parapet again. I have a number of websites on which I'd like my ugly Google+ mug to show as author in the Google SERPs. I jumped through all the authorship verification hoops that Google threw at me and I thought I'd won. The problem? I have three names: Nick Beresford-Davies. One example of a page I'm trying to achieve authorship with is: http://www.graphic-design-employment.com/illustrator-how-to-make-a-pattern.html I have verified authorship of the above website on my Google+ profile: https://plus.google.com/u/0/107765436751760696335/about
Originally I footed the page with Nick Beresford-Davies (hyphenated), and the Structured Data Testing Tool ignored the hyphen and just saw Nick Beresford. So I tweaked my online name (to please Google!) to Nick Beresford Davies (no hyphen). Initially this seemed to work, but I just checked again and now Google, for reasons known only to itself, sees "nick davies" as the author, completely ignoring the name in the footer of the page (by Nick Beresford Davies) and the fact that the site has been verified by Google+. This is also the case for all other websites I contribute to, and not all the bylines are in the footer; some are by the headline. When I test pages in the Structured Data Testing Tool and enter my Google+ profile, it replies: "nick davies, we've found your name as one of the authors from the page. You can use the 'Authorship verification by email' method above to verify your authorship. Error: Author name found on the page and Google+ profile name do not match. Please consider adding markup to the site." Much as I would like to succeed in the Google SERPs, I draw the line at changing my name to keep this robot happy, so if anyone has any suggestions, or can see any obvious step I've missed, I'd be very grateful. I find it hard to believe that no other double-barrelled website author exists, so I'm hoping I'm not the only one to have experienced this... Thanks!
Technical SEO | Tinstar -
Duplicate content problem?
Hello! I am not sure if this is a problem or if I am just making something too complicated. Here's the deal. I took on a client who has an existing site in something called homestead. Files cannot be downloaded, making it tricky to get out of homestead. The way it is set up is new sites are developed on subdomains of homestead.com, and then your chosen domain points to this subdomain. The designer who built it has kindly given me access to her account so that I can edit the site, but this is awkward. I want to move the site to its own account. However, to do so Homestead requires that I create a new subdomain and copy the files from one to the other. They don't have any way to redirect the prior subdomain to the new one. They recommend I do something in the html, since that is all I can access. Am I unnecessarily worried about the duplicate content consequences? My understanding is that now I will have two subdomains with the same exact content. True, over time I will be editing the new one. But you get what I'm sayin'. Thanks!
Technical SEO | devbook90 -
Walking into a site I didn't build: is there an easy way to fix this # indexing problem?
I recently joined a team with a site that has a) no great content and b) not much of any search traffic. I looked, and all of their URLs are built this way: the links look normal, but clicking one doesn't actually load a new page; content loads behind a hash, like /#content-title. It also has no h1 tag, and the page doesn't refresh. My initial thought is to gut the site and rebuild it in WordPress, but first I have to ask: is there a way to make a site with /#/ content loading friendly to search engines?
Technical SEO | andrewhyde0 -
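On the hash-URL question above: the usual fix is to give each view a real, crawlable path instead of a fragment, e.g. via the browser History API. A hedged sketch of the idea; `hashToPath` is a hypothetical helper of my own, not part of any library:

```javascript
// Convert a fragment like "#content-title" or "#/content-title"
// into a real path like "/content-title".
function hashToPath(hash) {
  return "/" + hash.replace(/^#\/?/, "");
}

// In the browser, the client-side router would push the real path
// when a view loads, so each view gets a unique, indexable URL:
//   history.pushState(null, "", hashToPath(location.hash));
// The server must then also be able to render (or at least serve)
// each of those paths directly, or crawlers will still see one page.
```

Search engines generally don't treat everything after `#` as a distinct URL, which is why all of those views collapse into a single indexed page.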
Penalties on a brand-new site: sandbox time, or rather a problem with the site?
Hi guys, 4 weeks ago we launched a site, www.adsl-test.it. We just did some article marketing and developed a lot of functionality to test and share the results of the speed tests run through the site. We had been on the 9th page of the Google SERPs for weeks, then suddenly (on the 29th of February) we were on the second page for a day; the next day the website's home page disappeared, even for brand searches like "adsl-test". The actual situation is: it looks like we are not banned (site:www.adsl-test.it is still listed); GWT doesn't show any warnings and everything looks good there; and we are quite high on bing.it and yahoo.it (4th place on the first page) for the "adsl test" search. Can anybody help us understand? Another thing I thought of is that we create a unique ID for each test we run, and these tests are indexed by Google. E.g.: www.adsl-test.it/speedtest/w08ZMPKl3R or www.adsl-test.it/speedtest/P87t7Z7cd9 Actually, the content of these URLs is somewhat different (because the speed measured is different), but apart from the result, the other content on the page is pretty much the same. Could this be a possible reason? I mean, could Google think we are creating duplicate content, even if these are not really duplicates but just the results of different speed tests?
Technical SEO | codicemigrazione0 -
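On the speed-test question above: one common approach to near-identical generated pages (an assumption here, not something the poster confirmed trying) is to keep the individual test-result pages out of the index entirely with a robots meta tag in each result page's head:

```html
<!-- On each /speedtest/<id> result page: ask crawlers not to index
     this page, while still following its links. -->
<meta name="robots" content="noindex, follow">
```

The shared test pages stay functional for visitors, but thousands of boilerplate result pages stop competing with each other (and with the rest of the site) in the index.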
Problem with indexed files before domain was purchased
Hello everybody, We bought this domain a few months back and we're trying to figure out how to get rid of indexed pages that (I assume) existed before we bought it; the domain was registered in 2001 and has had a few owners. I attached 3 files from my Webmaster Tools. Can anyone tell me how to get rid of those "pages"? And, more important: aren't these kinds of "pages" the result of some kind of "sabotage"? Looking forward to hearing your thoughts on this. Thank you, Alex Picture-5.png Picture-6.png Picture-7.png
Technical SEO | pwpaneuro0