Help, a certain directory is not being indexed
-
Before I start, don't expect this to be too easy. This really has me puzzled, and I'm surprised I still haven't found a solution for it. Get ready.
We have a WordPress website, launched over six months ago, and we have never had an issue getting content such as pages, posts and categories indexed. However, somewhat recently (about two months ago) I installed a directory plugin (Business Directory Plugin) which lists businesses via unique URLs accessible from a subfolder. It's these business listings that I absolutely cannot get indexed.
The index page of the directory, which links to the business pages, is indexed; however, for some reason Google is not indexing the listing pages linked from it. I don't think it's an issue of the content being uncrawlable: when I run crawlers such as XML sitemap generators against the site, they find all the pages, including the directory pages, so I'm sure it's not a case of the search engines being unable to find the content.
I have created XML sitemaps and uploaded them to Webmaster Tools. Tools recognises that there are many pages in the sitemap, but Google continues to index only a small percentage (everything but my business listings).
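For context, a listing entry in the sitemap looks something like this (a minimal sketch of a single entry using one of the real directory URLs; the actual file contains many more URLs and optional fields such as lastmod):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- Each business listing is submitted as an absolute URL -->
  <url>
    <loc>http://www.smashrepairbid.com.au/our-shops/1263/bakker-towing/</loc>
  </url>
</urlset>
```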
The directory has been there for about eight weeks now, so I know there is an issue; it should have been indexed by now.
See our main website at www.smashrepairbid.com.au and the business directory index page at www.smashrepairbid.com.au/our-shops/
To throw in a curve ball: while looking into this issue and setting up Tools, we noticed a lot of 404 error pages (nearly 4,000). We were very confused about where these were coming from, as they were only being generated by search engines; humans could not reach the 404s, so we are guessing the spiders were firing some JavaScript code or something else weird. We could see the 404s in the logs, so we know they were legitimate, but again it seemed to be search engines only. This was validated when we added some rules to robots.txt and saw the errors stop appearing in the logs. We put the rules in the robots.txt file to try to stop Google from indexing the 404 pages, as we could not find any way to fix the site/code (no idea what is causing them). If you do a site: search in Google you will see all the pages that are omitted from the results.
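For reference, the rules were of this general shape (the Disallow paths here are illustrative; the real patterns would match whatever URLs show up as 404s in your own logs):

```
# robots.txt -- illustrative sketch only
User-agent: *
Disallow: /shops/
```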
Since adding the rules to robots.txt, our impressions shown in Tools have jumped right up (a fivefold increase), which we took as a good sign of improvement, but we are still not getting the results we want.
Does anyone have any clue what's going on, or why Google and the other search engines are not indexing this content? Any help would be greatly appreciated, and if you need any other information to assist, just ask.
I really appreciate anyone who can spare the time to help; I sure do need it.
Thanks.
-
OK issue resolved!
Lynn, thank you. It was the relative URL in the canonical tag that was playing havoc; changing it to absolute is now getting the pages indexed.
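For anyone else hitting the same thing, the change was along these lines (a sketch using one of the real listing URLs; the exact markup your theme or plugin emits may differ):

```html
<!-- Before: relative URL in the canonical tag (problematic) -->
<link rel="canonical" href="/our-shops/1263/bakker-towing/" />

<!-- After: absolute URL, so there is no ambiguity about host or protocol -->
<link rel="canonical" href="http://www.smashrepairbid.com.au/our-shops/1263/bakker-towing/" />
```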
Lesson learnt.
-
Hey Kane,
The /shops URL was an old URL that had a directory in it. We blocked it in robots.txt as it was generating tons of 404 errors. In Webmaster Tools we can see thousands of 404 errors within that directory, so we deleted it all and tried to stop the search engines from triggering the errors (as I described in the initial post).
A number of those listings do have very little information, but there are a bunch with great content, which is why I am not sure that is the cause. I will keep an eye on this though, and will also check the logs and let you know what they say.
-
Thanks Lynn.
I have taken your recommendation on board and changed the canonical tags to absolute URLs. Thanks for your help; we will see how it goes.
-
As Lynn said, relative canonical tags could absolutely cause issues. That said, I'm seeing absolute URLs in the canonical tag now, so you may have fixed that in the past few days.
Also, I do see the Our Shops pages indexed when I search for site:smashrepairbid.com.au, but I don't see any other pages in the /our-shops/ directory aside from www.smashrepairbid.com.au/our-shops/?action=search
Your robots.txt is currently blocking /shops/. I don't think that would cause an issue, but it would be nice to remove that rule if it's not needed.
There's almost zero content on the pages I glanced at, e.g. http://www.smashrepairbid.com.au/our-shops/1263/bakker-towing/ and http://www.smashrepairbid.com.au/our-shops/1616/coastal-towing-service/. From Google's perspective, there's very little value being added by these pages: no unique photos, no phone number, no website, etc. There are a million local business scrapers with more content than this, so why should Google bother indexing these pages?
Try pulling up your logs and seeing if these URLs have been requested by Google's spiders. Here's a good guide from Ian Lurie on how to do that in Excel: http://www.portent.com/blog/analytics/how-to-read-a-web-site-log-file.htm
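If you'd rather do it from the command line than Excel, a sketch like this pulls the same information (the log path, date stamps, and combined-log format are assumptions; adjust to your server's actual log):

```shell
# Build a tiny sample access log so the commands below are self-contained;
# in practice you'd point them at your real access.log instead.
cat > sample_access.log <<'EOF'
66.249.66.1 - - [10/May/2012:10:00:00 +1000] "GET /our-shops/1263/bakker-towing/ HTTP/1.1" 200 5120 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
203.0.113.5 - - [10/May/2012:10:01:00 +1000] "GET /our-shops/ HTTP/1.1" 200 8100 "-" "Mozilla/5.0"
66.249.66.1 - - [10/May/2012:10:02:00 +1000] "GET /shops/old-page HTTP/1.1" 404 512 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
EOF

# Googlebot requests against the listing pages -- if these show up,
# the spiders are crawling the shop URLs even though they aren't indexed:
grep "Googlebot" sample_access.log | grep "/our-shops/"

# Count Googlebot 404s (field 9 is the HTTP status in combined log format),
# the kind of errors described earlier in the thread:
grep "Googlebot" sample_access.log | awk '$9 == 404' | wc -l
```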
If the spiders are crawling those shop URLs but aren't indexing them, I think the first thing to do is add way more content to the pages.
-
Hi Trent,
Having a quick look, I saw that you have relative URLs in your canonical tags, and this could be problematic. I think it would be worth making those URLs absolute to avoid any confusion on Google's part in determining which page or page version should be indexed.
I can't say for sure that this is the problem, but it's worth looking into.
Hope that helps!