#hashtag Anchor text within content
-
Hi, I have a question about anchor links within my site's content.
A side navigation at the top of the page 'jumps' the visitor to content displayed further down the page. These links don't take you away to any other page; instead they take you further down the same page to the relevant content.
My question is this: I've noticed that the anchor name - #jumpnavlink - is appended to the end of the page's URL, like so:
www.mywebsite.com/example-page.php#jumpnavlink
Is this creating a duplicate content problem?
Is it creating a new URL for visitors to use?
Is it OK to have lots of these running throughout my site's content pages? Many thanks for any light that is shed on this one!
Cheers
Alex -
Hi Alex,
Matt is absolutely right - engines ignore everything to the right of the hash (properly called a "named anchor" or "URL fragment").
They ignore fragments for the reason you just described. Fragments have traditionally been used in web design to build "table of contents" navigation that drops a user down to a specific position on the page - a different section, a specific user's comment, etc. If a page has 10 sections with 10 corresponding fragments, the engine only wants to index the root URL, rather than waste time and money indexing all ten fragment URLs, which would be duplicates of the root URL.
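A minimal sketch of that "table of contents" pattern (the link and id names here are illustrative, not from your site):

```html
<!-- Side navigation at the top of the page -->
<nav>
  <a href="#section-1">Section 1</a>
  <a href="#section-2">Section 2</a>
</nav>

<!-- Targets further down the same page -->
<h2 id="section-1">Section 1</h2>
<p>First section's content...</p>

<h2 id="section-2">Section 2</h2>
<p>Second section's content...</p>
```

Clicking one of those links only scrolls the browser to the matching id and appends the fragment to the address bar; no new page is requested from the server.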
Fragments can be very useful for visitors who want to link to a specific part of a long article, or directly to a comment they made.
There is no problem whatsoever with using lots of fragments for in-page navigation.
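To illustrate the point above, here is a small sketch using Python's standard library: stripping the fragment from any number of in-page links leaves a single underlying URL, which is effectively what the engine indexes. The section names are made up for the example.

```python
from urllib.parse import urldefrag

# Ten in-page section links on the same page, each with its own fragment.
links = [
    f"http://www.mywebsite.com/example-page.php#section{i}"
    for i in range(1, 11)
]

# urldefrag() splits a URL into (url-without-fragment, fragment).
# Collecting the de-fragmented URLs into a set shows only one
# distinct document behind all ten links.
indexable = {urldefrag(link).url for link in links}
print(indexable)  # {'http://www.mywebsite.com/example-page.php'}
```

So however many jump links you add, they all collapse to the one root URL - there is nothing new for an engine to treat as duplicate content.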
-
Yup - search engines will see it as a single page:
www.mywebsite.com/example-page.php
Now, if you have numerous sections reachable only via hash fragments, it means you are cramming loads of content into the same page, www.mywebsite.com/example-page.php. That may be bad for users, and from an SEO perspective that single overloaded URL is bad, really bad.
To target different pages, you need to have URLs without the hash fragment.
-
As far as I am aware, Google doesn't index URLs with a fragment, as the fragment points to an internal part of the page - essentially it will only index the URL without the fragment, so it isn't creating a new URL. To be certain of this, I would run a site: query with your URL including the fragment and see if it appears. If, as I expect, it doesn't, then you shouldn't have a duplicate content issue across two URLs. As for whether they are OK for your SEO efforts, I don't see why there should be a negative impact: you are making your site more user-friendly by pointing to a specific part of a page that is already indexed. You are not duplicating that part of the page by creating an internal anchor to it.
Just to confirm: I have a page that uses an internal anchor (#), and when you do a site: search in Google, the page doesn't appear with the #. When you search for it without the #, it does...