Should I block .ashx files from being indexed?
-
I got a crawl issue reporting that 82% of my site's pages are missing title tags.
All of these pages are .ashx files (4,400 pages).
Would it be better to remove all of these files from Google? -
Thanks!
As simple as that
-
Are the pages useful to the user? Do you expect users to actively use these pages on your site? Do you want users to be able to find these pages when they search for their issues through Google?
If you've answered 'yes' to any of these questions, I wouldn't suggest removing them from Google. Instead, take your time and set a schedule to optimize each of these pages.
If these pages are not valuable to the user, unnecessary for Google to index, locked behind a membership gate, duplicates, or thin content, then those are good reasons to noindex them from all search engines.
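If you do decide to noindex the .ashx pages, note that handler responses aren't regular HTML templates, so the usual route is the X-Robots-Tag HTTP header rather than a meta robots tag. A minimal web.config sketch for an IIS/ASP.NET site is below; as written it applies the header site-wide, so in practice you would scope it with a `<location>` element (or set the header in the handler's own code) to hit only the .ashx URLs:

```xml
<!-- Sketch only: adds "X-Robots-Tag: noindex" to responses.
     As written this applies to ALL responses; wrap it in a
     <location path="..."> element to limit it to handler URLs. -->
<configuration>
  <system.webServer>
    <httpProtocol>
      <customHeaders>
        <add name="X-Robots-Tag" value="noindex" />
      </customHeaders>
    </httpProtocol>
  </system.webServer>
</configuration>
```

Google treats the header the same way it treats a meta robots noindex, which makes it a good fit for non-HTML resources and handlers like these.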
Related Questions
-
Robots.txt file issues on Shopify server
We have repeated issues with one of our ecommerce sites not being crawled. We receive the following message: Our crawler was not able to access the robots.txt file on your site. This often occurs because of a server error from the robots.txt. Although this may have been caused by a temporary outage, we recommend making sure your robots.txt file is accessible and that your network and server are working correctly. Typically errors like this should be investigated and fixed by the site webmaster. Read our troubleshooting guide. Are you aware of an issue with robots.txt on the Shopify servers? It is happening at least twice a month so it is quite an issue.
Moz Pro | | A_Q0 -
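While you can't fix Shopify's servers yourself, you can verify what a crawler sees once the file is reachable. Python's standard-library robot parser will evaluate a ruleset directly, so you can paste in the body of your store's robots.txt and check specific paths. A sketch (the rules shown are hypothetical placeholders, not Shopify's actual file):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt body - substitute the real content
# fetched from https://yourstore.com/robots.txt.
robots_txt = """\
User-agent: *
Disallow: /checkout
Disallow: /cart
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# Moz's crawler (rogerbot) falls under "User-agent: *" here.
print(parser.can_fetch("rogerbot", "/products/widget"))  # True
print(parser.can_fetch("rogerbot", "/checkout"))         # False
```

If the file itself intermittently returns a server error, no ruleset parsing will help — that is a hosting-side outage worth raising with Shopify support, since crawlers that can't fetch robots.txt will often postpone crawling the whole site.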
Large site with content silos - best practice for deep indexing silo content
Thanks in advance for any advice/links/discussion. This honestly might be a scenario where we need to do some A/B testing. We have a massive (5 million) content silo that is the basis for our long tail search strategy. Organic search traffic hits our individual "product" pages, and we've divided our silo with a parent category and then secondarily with a field (so we can cross-link to other content silos using the same parent/field categorizations). We don't anticipate, nor expect, top level category pages to receive organic traffic - most people are searching for the individual/specific product (long tail). We're not trying to rank or get traffic for searches of all products in "category X"; others are competing and spending a lot in that area (head). The intent of the site structure/taxonomy is to more easily enable bots/crawlers to get deeper into our content silos. We've built the pages for humans, but included link structure/taxonomy to assist crawlers. So here's my question on best practices: how should we handle categories with 1,000+ pages of pagination? With our most popular product categories, there might be hundreds of thousands of products in one category. My top level hub page for a category looks like www.mysite/categoryA, and the page build shows 50 products and then pagination from 1-1000+. Currently we're using rel=next for pagination, and for pages like www.mysite/categoryA?page=6 we make the page reference itself as canonical (not the first/top page www.mysite/categoryA). Our goal is deep crawl/indexation of our silo. I use ScreamingFrog and SEOmoz campaign crawls to sample (the site takes a week+ to fully crawl), and with each of these tools it "looks" like crawlers have gotten a bit "bogged down" in large categories with tons of pagination. For example, rather than crawl multiple categories or fields to get to multiple product pages, some bots will hit all 1,000 (rel=next) pages of a single category.
I don't want to waste crawl budget going through 1,000 pages of a single category versus discovering/crawling more categories, but I can't seem to find a consensus on how to approach the issue. I can't have a page that lists "all" - there's just too much - so we're going to need pagination. I'm not worried about category pagination pages cannibalizing traffic, as I don't expect any (should I make pages 2-1,000 noindex and canonically reference the main/first page in the category?). Should I worry about crawlers going deep into the pagination of one category versus getting to more top level categories? Thanks!
Moz Pro | | DrewProZ1 -
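One blunt option for steering crawl budget away from deep pagination is a robots.txt pattern on the pagination parameter. A hypothetical sketch (it assumes a `?page=` query parameter, which matches the URLs described above - adjust to your actual URL scheme):

```
# Hypothetical robots.txt rule: blocks crawling of paginated
# category URLs (assumes a ?page= query parameter).
# Caution: this stops CRAWLING, not indexing, and it also stops
# link discovery through those pages - which may defeat the goal
# of deep-indexing the silo. Test on a sample before rolling out.
User-agent: *
Disallow: /*?page=
```

Because the stated goal is deep discovery of product pages, blocking pagination outright can backfire; many sites instead keep pagination crawlable and rely on self-referencing canonicals (as already described) plus XML sitemaps to surface the deep product URLs directly.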
404 errors in crawl report - all pages are listed with index.html on a WordPress site
Hi Mozers, I have recently submitted a website using Moz, and the report has pulled up a second version of every page on the WordPress site as a 404 error with index.html at the end of the URL. e.g. Live page URL - http://www.autostemtechnology.com/applications/civil-blasting/ Report page URL - http://www.autostemtechnology.com/applications/civil-blasting/index.html The permalink structure is set as /%postname%/ For some reason the report has listed every page with index.html at the end of the page URL. I have tried a number of redirects in the .htaccess file, but they don't seem to work. Any suggestions will be strongly appreciated. Thanks
Moz Pro | | AmanziDigital0 -
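A common Apache .htaccess approach for this is a 301 rule that strips a trailing index.html back to the directory URL. A sketch, untested against this particular site (it matches on the raw request line via `THE_REQUEST` so it doesn't loop on internal rewrites, and it assumes mod_rewrite is enabled):

```apache
# Sketch: 301-redirect any request ending in index.html to its
# directory form, e.g. /applications/civil-blasting/index.html
# -> /applications/civil-blasting/
# Place above WordPress's own rewrite block in .htaccess.
RewriteEngine On
RewriteCond %{THE_REQUEST} \s/(.*)index\.html[\s?] [NC]
RewriteRule ^ /%1 [R=301,L]
```

Worth noting: if the index.html URLs genuinely return 404s and nothing on the site links to them, the crawl report itself may be the real puzzle; checking where the crawler discovered those URLs (internal links, sitemap, canonical tags) is a useful first step before adding redirects.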
Why would SEOmoz be blocked from a domain?
Am I missing something? I'm trying to do Link Research on www.weddingcarsofhampshire.co.uk but it seems to be blocked, any ideas why that would be the case?
Moz Pro | | JemRobinson0 -
How to remove /index.html that causes duplicated content
Hi, how do I remove the /index.html that causes duplicated content? My website's navigation links do not show the /index.html. However, when I run the SEOmoz crawl errors report, it shows duplicated content. Can anyone tell me how to fix it?
Moz Pro | | whitelies0 -
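There are two common fixes: a server-side 301 redirect from the /index.html URL to the bare directory URL, or a canonical tag so search engines consolidate the two versions onto one. A minimal canonical-tag sketch (example.com is a placeholder domain):

```html
<!-- Placed in the <head> of the page, so that both
     https://www.example.com/ and
     https://www.example.com/index.html
     point search engines at the same canonical URL.
     example.com is a placeholder - use your own domain. -->
<link rel="canonical" href="https://www.example.com/" />
```

The 301 redirect is generally the stronger fix because it removes the duplicate URL entirely; the canonical tag is the fallback when you can't touch the server configuration.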
Local search block seems to mess up ranking calculations!
Time and time again I see SEOmoz come out with a new rankings report and tell me that I've gone up or down in the rankings by 9, and I get really excited. Then I go look at the search and once again we are in the exact same position! What it seems like is that sometimes SEOmoz decides to count the local search block, and other times it decides not to. Is there any way to fix this? Making it always count the local search block would be preferred.
Moz Pro | | adriandg0 -
Best Practices for having Social Profiles indexed
There has been a lot of talk lately about social profiles potentially improving your brand as well as search. What I'd like to know are the best practices for getting those social profiles crawled and indexed so they actually provide a good link to my site. I'm also wondering about the difference between what Linkscape sees and what Google sees, and when I'm looking at Open Site Explorer's rankings for one of those social profiles, how I can be sure that Google sees it the same way. I ask this because a lot of these profiles are not well internally linked to. An example is about.me: it's a potentially great link, but it's essentially an island, and even after dropping a couple of Twitter links to my profile, Open Site Explorer shows a Page Authority of 1, and it's not even indexed with Google. What I did last night was put links to my about.me, Flickr, and WeddingWire profiles in my Connect drop-down menu on my site, to hopefully get them crawled soon. Are there other methods of getting those crawled and indexed so they start passing some juice? What do you guys do?
Moz Pro | | WilliamBay
Why is blocking the SEOmoz crawler considered a red "error?"
Please see attached image... Y3Vay.png
Moz Pro | | vkernel0
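If the block is coming from robots.txt, the usual fix is an explicit group for Moz's crawler (rogerbot) placed before the catch-all rules, since crawlers obey the most specific matching user-agent group. A hypothetical sketch (the /private/ rule is a placeholder for whatever your existing rules are):

```
# Sketch: explicitly allow Moz's crawler while keeping other
# restrictions. An empty Disallow means "allow everything" for
# that user-agent. /private/ is a hypothetical existing rule.
User-agent: rogerbot
Disallow:

User-agent: *
Disallow: /private/
```

It's flagged as a red error because a blocked crawler can't audit the site at all - every other check in the report depends on it.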