Does Google index internal anchors as separate pages?
-
Hi,
Back in September, I added a function that sets an anchor on each subheading (h2-h6) and generates a table of contents linking to each of those anchors. These anchors did show up in the SERPs as jump-to links. Fine.
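For context, the function is essentially the following (a minimal sketch in plain JavaScript; the real implementation differs in detail, and the slug scheme here is made up):

```javascript
// Turn a heading's text into an anchor id, e.g. "First Section" -> "first-section".
function slugify(text) {
  return text.toLowerCase().trim()
    .replace(/[^a-z0-9]+/g, '-')
    .replace(/^-|-$/g, '');
}

// Given the subheading texts, return for each one the anchor id to set on
// the heading and the table-of-contents link pointing at it. The link is a
// "#fragment" on the same URL, not a separate page.
function buildToc(headings) {
  return headings.map(function (text) {
    var id = slugify(text);
    return { id: id, link: '<a href="#' + id + '">' + text + '</a>' };
  });
}
```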
Back then I also changed the canonicals to a slightly different structure, and since then there has been a massive increase in the number of indexed pages - way over the top - which I have since addressed by removing (410) a complete section of the site. However, there are still ~34,000 pages indexed where there really are more like 4,000 (all properly canonicalised). Naturally I am wondering what Google thinks it is indexing. The number is just way off and quite inexplicable.
So I was wondering:
Does Google save jump-to links as unique pages?
Also, does anybody know a method of actually getting all the pages in the Google index? (Not the pages that actually exist, via Screaming Frog etc., but the pages actually in the index - all the methods I found sadly do not work.)
Finally: does anybody have another explanation for the discrepancy between indexed and actual pages?
Thanks for your replies!
Nico
-
Thanks - so I have to continue the search for where a tenfold increase in indexed pages (according to Search Console) might come from. Sadly, the rest of your reply misses my problem; I was probably unclear.
The reason I was asking for a method to find out which pages ARE indexed is this: I seem to have no problem getting things indexed (crystal-clear sitemap with dates, clear link structure, etc.), but Google seems over-eager and indexes more than actually exists. If there is some technical problem, I'd like to fix it - but Google does not show anywhere which pages are actually indexed. There are lots of methods around - but none that I found works at the moment.
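What I can do, at least, is enumerate my own URLs from the sitemap and set that number against the index count Search Console reports. A rough sketch (fetching the sitemap XML is left out; the parsing is deliberately naive and assumes a standard `<urlset>` sitemap):

```javascript
// Extract every <loc> URL from a sitemap XML string, so the list can be
// compared against the indexed-page count reported by Search Console.
function sitemapUrls(xml) {
  var urls = [];
  var re = /<loc>\s*([^<]+?)\s*<\/loc>/g;
  var m;
  while ((m = re.exec(xml)) !== null) {
    urls.push(m[1]);
  }
  return urls;
}
```

If Search Console says ~34,000 indexed but the sitemap yields ~4,000 URLs, whatever makes up the difference is being discovered outside the sitemap (parameters, old URLs, duplicates, etc.).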
I have been well aware of jump-to links, as I stated, and they work nicely. No problem at all with "not enough" indexed pages - rather the opposite, with no idea what causes it.
Regards
Nico
-
I agree with Russ that the anchors are not going to be indexed separately... but I believe those anchors are kickass page optimization, second only to the title tag. More info here.
-
1. The anchors aren't going to be indexed as separate pages. If you are lucky, you might get a rich snippet from them in the SERPs, which would be nice. You can see an example of this if you search Google for "broken link building" and look at the top position.
2. Google likely has a crawl budget for sites based on a number of factors - inbound links, content uniqueness, etc. Your best bet is to make sure you have a strong link architecture, a complete and updated sitemap, and a good link profile.
3. Google can't index the whole web, nor would they want to. They just want to index pages that have a strong likelihood of ranking so they can build the best possible search engine.
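To illustrate point 1: the #fragment is never sent to the server and is not part of the URL that gets indexed - stripping it yields the same address, which is why an anchor cannot become a separate page. A quick sketch:

```javascript
// A URL with its #fragment removed identifies the same indexable document:
// https://example.com/page#section-2 and https://example.com/page are one page.
function canonicalForm(href) {
  var u = new URL(href);
  u.hash = '';  // drop the fragment
  return u.toString();
}
```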