Sitemaps and "noindex" pages
-
We're experimenting a bit to recover from Panda and have added a "noindex" tag to quite a few pages. Now we need Google to re-crawl them ASAP and de-index them. Should we leave these pages in the sitemaps (with an updated "lastmod") to encourage that, or just wait patiently?
What's the common/best approach?
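If you do keep the noindexed URLs in the sitemap to prompt a faster re-crawl, the two pieces involved are the robots meta tag on each page and a freshly bumped lastmod on its sitemap entry. A minimal sketch; the URL and date are placeholders, not taken from the thread:

<!-- On each catalog page that should drop out of the index but keep passing link equity -->
<meta name="robots" content="noindex, follow">

<!-- Matching sitemap entry, with lastmod updated so the change is picked up sooner -->
<url>
  <loc>https://www.example.com/catalog/page-1/</loc>
  <lastmod>2013-05-01</lastmod>
</url>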
-
These aren't bad pages as such; they are catalog pages that have to stay live, with noindex, follow tags.
Going grey isn't an option, unfortunately - there are ~50,000 of them.
-
If you have identified the pages that are causing the pain, and they are not essential to users, why not just remove them and 301 the old URLs?
You can change Google's crawl rate in Google Webmaster Tools:
http://support.google.com/webmasters/bin/answer.py?hl=en&answer=48620
Or you could be a little grey and ping the pages you want re-crawled by posting the links on a high-DA (domain authority) site like Twitter. The higher the domain authority, the faster Google crawls the site.
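If you take the remove-and-redirect route instead, a 301 can be one line per retired URL in the server config. A rough sketch for an Apache .htaccess file, assuming the site runs on Apache; the paths are made up for illustration:

# Permanently redirect a retired catalog page to its closest replacement
Redirect 301 /old-catalog-page/ https://www.example.com/replacement-page/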
Related Questions
-
NoIndex tag, canonical tag or automatically generated H1's for automatically generated enquiry pages?
What would be better for automatically generated accommodation enquiry pages for a travel company? NoIndex tag, canonical tag, automatically generated H1's or another solution? This is the homepage: https://www.discoverqueensland.com.au/ You would enquire from a page like this: https://www.discoverqueensland.com.au/accommodation/sunshine-coast/twin-waters/the-sebel-twin-waters This is the enquiry form: https://www.discoverqueensland.com.au/accommodation-enquiry.php?name=The+Sebel+Twin+Waters&region_name=Sunshine+Coast
Technical SEO | Kim_Lazaro -
Duplicate pages with "/" and without "/"
I seem to have duplicate pages like the examples below: https://example.com https://example.com/ This is happening on 3 pages and I'm not sure why or how to fix it. The first (https://example.com) is what I want and is what I have all my canonicals set to, but that doesn't seem to be doing anything. I've also set up 301 redirects for each page with "/" to be redirected to the page without it. That didn't seem to fix anything either: when I use the https://example.com/ URL it doesn't redirect to https://example.com like it's supposed to. This issue has been going on for some time, so any help would be much appreciated. I'm using Squarespace as the design/hosting platform.
Technical SEO | granitemountain -
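For reference, the canonical both versions should carry, pointing at the non-slash URL, would look like the line below (using the placeholder domain from the question); a canonical is only a hint, though, so the 301 still matters if the platform will allow it:

<link rel="canonical" href="https://example.com">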
Google Webmaster Tools is saying "Sitemap contains urls which are blocked by robots.txt" after Https move...
Hi Everyone, I really don't see anything wrong with our robots.txt file after our https move that just happened, but Google says all URLs are blocked. The only change I know we need to make is changing the sitemap URL to https. Anything you all see wrong with this robots.txt file?

# robots.txt
# This file is to prevent the crawling and indexing of certain parts of your
# site by web crawlers and spiders run by sites like Yahoo! and Google. By
# telling these "robots" where not to go on your site, you save bandwidth
# and server resources.
# This file will be ignored unless it is at the root of your host:
# Used:    http://example.com/robots.txt
# Ignored: http://example.com/site/robots.txt
# For more information about the robots.txt standard, see:
# http://www.robotstxt.org/wc/robots.html
# For syntax checking, see:
# http://www.sxw.org.uk/computing/robots/check.html

# Website Sitemap
Sitemap: http://www.bestpricenutrition.com/sitemap.xml

# Crawlers Setup
User-agent: *

# Allowable Index
Allow: /*?p=
Allow: /index.php/blog/
Allow: /catalog/seo_sitemap/category/

# Directories
Disallow: /404/
Disallow: /app/
Disallow: /cgi-bin/
Disallow: /downloader/
Disallow: /includes/
Disallow: /lib/
Disallow: /magento/
Disallow: /pkginfo/
Disallow: /report/
Disallow: /stats/
Disallow: /var/

# Paths (clean URLs)
Disallow: /index.php/
Disallow: /catalog/product_compare/
Disallow: /catalog/category/view/
Disallow: /catalog/product/view/
Disallow: /catalogsearch/
Disallow: /checkout/
Disallow: /control/
Disallow: /contacts/
Disallow: /customer/
Disallow: /customize/
Disallow: /newsletter/
Disallow: /poll/
Disallow: /review/
Disallow: /sendfriend/
Disallow: /tag/
Disallow: /wishlist/
Disallow: /aitmanufacturers/index/view/
Disallow: /blog/tag/
Disallow: /advancedreviews/abuse/reportajax/
Disallow: /advancedreviews/ajaxproduct/
Disallow: /advancedreviews/proscons/checkbyproscons/
Disallow: /catalog/product/gallery/
Disallow: /productquestions/index/ajaxform/

# Files
Disallow: /cron.php
Disallow: /cron.sh
Disallow: /error_log
Disallow: /install.php
Disallow: /LICENSE.html
Disallow: /LICENSE.txt
Disallow: /LICENSE_AFL.txt
Disallow: /STATUS.txt

# Paths (no clean URLs)
Disallow: /*.php$
Disallow: /*?SID=
disallow: /*?cat=
disallow: /*?price=
disallow: /*?flavor=
disallow: /*?dir=
disallow: /*?mode=
disallow: /*?list=
disallow: /*?limit=5
disallow: /*?limit=10
disallow: /*?limit=15
disallow: /*?limit=20
disallow: /*?limit=25
Technical SEO | vetofunk -
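For the robots.txt question above, the one edit the poster already identified is the sitemap reference; after the move that line would carry the https URL, along the lines below (and the file itself needs to be reachable at the https root). A blanket "all URLs blocked" report after an https migration is also worth re-testing in the robots.txt tester, since it often means the https version of the site is serving a different robots.txt than expected.

# Website Sitemap
Sitemap: https://www.bestpricenutrition.com/sitemap.xml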
How can Google interpret all "hreflang" links in the HTML code?
I've found the solution. The problem was that I had not put a closing tag in the HTML code....
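For anyone hitting the same issue, a well-formed hreflang block in the <head> uses one properly closed link element per language version; the URLs below are placeholders rather than the poster's site:

<link rel="alternate" hreflang="en" href="https://www.example.com/en/" />
<link rel="alternate" hreflang="es" href="https://www.example.com/es/" />
<link rel="alternate" hreflang="x-default" href="https://www.example.com/" />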
Technical SEO | Red_educativa -
Google caching the "cookie law message"
Hello! So I've been looking at the cached text-only version of our website (Google Eyes is a great add-on for this). One thing I've noticed is that Google caches our EU cookie law message. The message appears at the top of the page and Google is caching it. The message is enclosed within and but it still is being cached. I'm going to ask the development team to move the message to the bottom of the page and fix its position, but reviewing other websites with cookie messages, Google isn't caching theirs in the text-only versions. Any tips or advice?
Technical SEO | Bio-RadAbs -
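One common reason other sites' notices never show up in the text-only cache is that the banner is injected client-side instead of being served in the static HTML. The sketch below illustrates that approach with an invented element ID and wording, not anything taken from this site:

<div id="cookie-notice"></div>
<script>
  // Build the notice in the browser, so it is not part of the served HTML
  // that Google's text-only cache records.
  document.addEventListener('DOMContentLoaded', function () {
    document.getElementById('cookie-notice').textContent =
      'This site uses cookies. By continuing to browse you accept our cookie policy.';
  });
</script>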
Container Page/Content Page Duplicate Content
My client has a container page on their website; they are using SiteFinity, so it is called a "group page," in which individual pages appear and can be scrolled through. When links are followed, they first lead to the group page URL, on which the first content page is shown. However, when navigating through the content pages, the URL changes. When navigating BACK to the first content page, the URL is the content page's own URL, but to indexers it appears to be a duplicate of the group page, that is, of the URL that appeared when first linking to the group page. The client updates this regularly, so I need a solution that will let them keep adding pages, the newest one always becoming the top page, without requiring extra coding. For instance, I had considered integrating REL=NEXT and REL=PREV, but they aren't going to keep that up to date.
Technical SEO | SpokeHQ -
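One low-maintenance option that fits the no-extra-coding constraint is a canonical from the first content page's standalone URL back to the group page, since that is the duplicated pair. The URLs below are invented SiteFinity-style placeholders, not the client's actual structure:

<!-- On the first content page's own URL, pointing back to the group page it duplicates -->
<link rel="canonical" href="https://www.example.com/resources/group-page" />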
Help with pages Google is labeling "Not Followed"
I am seeing a number of pages that I am doing 301 redirects on coming up under the "Not Followed" category of the advanced Index Status report in Google Webmaster Tools. Google says this might be because the page is still showing active content or the redirect is not correct. I don't know how to tell if the page is still showing active content, so if someone can please tell me how to determine this it would be greatly appreciated. Also, if you can provide a solution for how to adjust my pages to make sure the content does not appear to be active, that would be amazing. Thanks in advance; here are a few links to pages that are experiencing this: www.luxuryhomehunt.com/homes-for-sale/sunnyisles.html www.luxuryhomehunt.com/homes-for-sale/summerield.html
Technical SEO | Jdubin -
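A quick way to tell whether a redirected URL is still serving active content is to request only the headers and check that the response is a single 301 with a Location header rather than a 200 with a body; for example, with curl against the first URL from the question (the sample response lines are illustrative):

curl -I http://www.luxuryhomehunt.com/homes-for-sale/sunnyisles.html

# A clean permanent redirect looks roughly like:
# HTTP/1.1 301 Moved Permanently
# Location: http://www.luxuryhomehunt.com/...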
Is using <p> tags instead of "<br>" tags good for SEO purposes?
I'm currently using <p> tags for paragraphs and came across an article that said it is better for search engines to see the <p> tag than the <br> tag to separate paragraphs.
Technical SEO | ibex
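For illustration, the two markup patterns being compared look like this: <p> wraps each paragraph as its own block element, while <br> only forces a line break inside a single block.

<!-- Paragraphs marked up with <p>, each one a distinct block -->
<p>First paragraph of copy.</p>
<p>Second paragraph of copy.</p>

<!-- The same text separated only with <br>: one block with forced line breaks -->
First paragraph of copy.<br><br>
Second paragraph of copy.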