Reason for robots.txt file blocking products on category pages?
-
Hi
I have a website with thousands of products. On the category pages, all the products are linked to with the parameter “?cgid” in the URL. But “?cgid” is also blocked in the robots.txt file for some reason. So I'm thinking it's stopping all my products getting crawled by Google.
Am I right here? Is there any reason why a website would want to block so many URLs? I'm only here a week and the site's getting great traffic, so I don't want to go breaking it!
Thanks
-
Thanks again AL123al!
I would be concerned about my internal linking because of this problem. I've always wanted to keep important pages within 3 clicks of the homepage. My worry here is that while a user can reach these products within 3 clicks of the homepage, they're blocked to Googlebot.
So the product URLs are only getting discovered through the sitemap, which would be hugely inefficient? I think I have to decide whether opening up these pages will improve my linking structure for Google to crawl the product pages, but is that more important than the crawl budget wasted by increasing the number of pages it's able to crawl?
-
Hello,
The canonical product URLs will be getting crawled just fine, as they are not blocked in the robots.txt. Without understanding your problem completely, I think the people before you were trying to stop all the duplicate parameter URLs from being crawled and leave Google to crawl just the canonicals, which is what you want.
If you remove the rule from robots.txt then Google will crawl everything, including the parameter URLs. This will waste crawl budget, so it's better that Google is only crawling the canonicals.
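For what it's worth, the directive producing this behaviour is probably something along these lines (a sketch, since we haven't seen the actual file; the real rule may be written differently):

```text
# Hypothetical robots.txt excerpt
User-agent: *
Disallow: /*?cgid
```

In Google's syntax, `*` matches any run of characters, so this blocks every URL whose query string starts with `?cgid`, while leaving the clean canonical URLs crawlable.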
Regarding the sitemap: being present in the sitemap helps Googlebot decide what to prioritise crawling, but it won't stop it finding other URLs if there is good internal linking.
-
Thanks AL123al! The base URLs (www.example.com/product-category/ladies-shoes) do seem to be getting crawled here and there, and some are ranking, which is great. But I think the only place they can get discovered is the sitemap, which has over 28,000 URLs on one page (another thing I need to fix)!
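As an aside on that 28,000-URL sitemap: the sitemaps.org protocol allows up to 50,000 URLs per file, so it's technically valid, but splitting it with a sitemap index file makes it easier to manage and to spot indexing gaps per section. A minimal sketch (the filenames are placeholders, not your real files):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <sitemap><loc>https://www.example.com/sitemap-products-1.xml</loc></sitemap>
  <sitemap><loc>https://www.example.com/sitemap-products-2.xml</loc></sitemap>
</sitemapindex>
```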
So if Googlebot gets to a parameter URL through the category pages (www.example.com/product-category/ladies-shoes?cgid...) and sees it's blocked, I'm guessing it can't see that the page is important to us (from the website hierarchy) and can't see the canonical tag, so I'm presuming it's seriously damaging our power to get products ranked.
In Screaming Frog, 112,000 URLs get crawled and 68% are blocked by robots.txt. 17,000 are URLs which contain "?cgid", which I don't think is too many for Googlebot to crawl; the website has pretty good authority, so I think we get a pretty deep crawl.
So I suppose what I really want to know is: will removing "?cgid" from the robots.txt file really damage the site? In my opinion, I think it'll really help.
-
It looks like the product URLs are having a parameter ?cgid appended, and there may be other stuff attached to the end of each URL, like this:
e.g. www.example.com/product-category/ladies-shoes?cgid-product=19&controller=product etc
but canonical URL is www.example.com/product-category/ladies-shoes
These products may have a canonical pointing to the base URL, which means there won't be any problem with duplicates being indexed. So all well and good.
Except... Google has to crawl each of these parameter URLs to find the canonical. On a huge website this means that crawl budget is being consumed by unnecessary crawling of these parameterised URLs.
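For reference, the canonical on the parameterised pages would be a tag like this in the `<head>` (a sketch using the example URLs above):

```html
<!-- On www.example.com/product-category/ladies-shoes?cgid-product=19&controller=product -->
<link rel="canonical" href="https://www.example.com/product-category/ladies-shoes" />
```

Note the catch described above: Google only sees this tag if it is allowed to crawl the parameter URL in the first place, so a robots.txt block and a canonical are alternative fixes, not complementary ones.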
You can tell Google not to crawl the parameter URLs in Search Console (at least in the old version you can). But you can also stop Google crawling these URLs unnecessarily by blocking them in robots.txt, if you are sure that the parameters are not changing how the page looks.
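If you want to sanity-check which URLs a rule would block before touching the live file, here is a quick sketch. (Python's built-in `urllib.robotparser` doesn't fully support Google's `*` and `$` extensions, so this hand-rolls the match; the `/*?cgid` pattern is an assumption about what your real robots.txt contains.)

```python
import re

def google_style_blocked(pattern: str, path: str) -> bool:
    """Return True if `path` matches a robots.txt Disallow `pattern`,
    using Google's wildcard extensions: `*` matches any run of
    characters, and a trailing `$` anchors the end of the URL."""
    regex = re.escape(pattern).replace(r"\*", ".*")
    if regex.endswith(r"\$"):
        regex = regex[:-2] + "$"  # restore the end anchor
    return re.match(regex, path) is not None

# Hypothetical pattern; the real robots.txt may differ.
print(google_style_blocked("/*?cgid", "/product-category/ladies-shoes?cgid-product=19"))  # True
print(google_style_blocked("/*?cgid", "/product-category/ladies-shoes"))                  # False
```

Running it against a sample of your Screaming Frog export would tell you exactly which of the 17,000 `?cgid` URLs the rule is catching.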
So, long story short, that is why you may see the URLs with parameters being blocked in robots.txt. The canonical URLs will be getting crawled just fine, since they don't have any parameters and hence aren't being blocked.
Hope that makes sense?
-
Yes, it's in the robots.txt, that's the problem. Someone must have put it in there manually, but I've no idea why they would.
-
Did you check your robots.txt file? Or check whether a plugin is creating the problem?