Should I block .ashx files from being indexed?
-
I got a crawl issue reporting that 82% of the site's pages are missing title tags.
All of these pages are .ashx files (4,400 pages).
Would it be better to remove all of these files from Google?
Thanks!
As simple as that
-
Are the pages useful to the user? Do you expect users to actively use these pages on your site? Do you want users to be able to find these pages when they search for their issues through Google?
If you've answered 'yes' to any of these questions, I wouldn't suggest removing them from Google. Instead, take your time and set a schedule to optimize each of these pages.
If these pages are not valuable to the user, don't need to be indexed by Google, are locked behind a membership gate, are duplicate pages, or contain thin content, then those are good reasons to noindex them from all search engines.
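If you do decide to noindex them, note that .ashx handlers often return raw responses with no HTML head, so a meta robots tag isn't always an option; the X-Robots-Tag HTTP header works regardless of content type. A minimal sketch for an IIS web.config, assuming (hypothetically) that the handlers live under a /handlers/ path on your site:

```xml
<!-- web.config: send "noindex" for everything under /handlers/
     (the path is an assumption; adjust to wherever your .ashx files live) -->
<configuration>
  <location path="handlers">
    <system.webServer>
      <httpProtocol>
        <customHeaders>
          <!-- Tells search engines not to index these responses -->
          <add name="X-Robots-Tag" value="noindex" />
        </customHeaders>
      </httpProtocol>
    </system.webServer>
  </location>
</configuration>
```

You can verify the header is actually being sent by fetching one of the .ashx URLs with `curl -I` before expecting crawlers to honor it.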
Related Questions
-
Duplicate content issues with file download links (diff. versions of a downloadable application)
I'm a little unsure how canonicalisation works in this case. 🙂 We have very regular updates to the application, which is available as a download on our site. Obviously, with every update the version number of the file being downloaded changes, and along with it the URL parameter included when people click the 'Download' button on our site, e.g.:
mysite.com/download/download.php?f=myapp.1.0.1.exe
mysite.com/download/download.php?f=myapp.1.0.2.exe
mysite.com/download/download.php?f=myapp.1.0.3.exe, etc.
In the Moz Site Crawl report all of these links are registering as duplicate content. There's no content per se on these pages; all they do is trigger a download of the specified file from our servers. Two questions: Are these links actually hurting our ranking/authority/etc.? Would adding a canonical tag to the head of mysite.com/download/download.php solve the crawl issues? Would this catch all of the download.php URLs? i.e. Thanks! Jon
Moz Pro | | jonmc
(not super up on php, btw. So if I'm saying something completely bogus here...be kind 😉 )
Mozscape index doesn't update
Last Mozscape index update: July 11, 2013. Next Mozscape index update: August 26, 2013. Today is August 26. Why hasn't Mozscape updated?
Moz Pro | | vahidafshari45
Why would SEOmoz be blocked from a domain?
Am I missing something? I'm trying to do Link Research on www.weddingcarsofhampshire.co.uk, but it seems to be blocked. Any ideas why that would be the case?
Moz Pro | | JemRobinson
Why am I not getting my allowance of 10,000 inbound links in the CSV download file? 370 out of 4,700??
Hi, I'm desperately trying to audit my backlinks to remove a Penguin penalty on my site livefit.co.uk. When I do the inbound link report, I'm not getting all the links in the download. I know there is a limit of 25 links from each linking site so we get the full picture of links, but: I have 4,700 links, so why does it need to limit them when we are supposed to see up to 10,000? When you check the link profile on the report, there don't seem to be many sites with anything close to 25, so surely that rule is invalid as an explanation here? Should I just work off OSE? But there is less useful info there than in the CSV. I'd be very grateful for your thoughts. Thanks! James
Moz Pro | | LiveFit
Does SEOmoz recognize duplicated URLs blocked in robots.txt?
Hi there: Just a newbie question... I found some duplicated URLs in the "SEOmoz Crawl Diagnostics reports" that should not be there. They are intended to be blocked by the site's robots.txt file. Here is an example URL (Joomla + VirtueMart structure): http://www.domain.com/component/users/?view=registration and here is the blocking rule in the robots.txt file:
User-agent: *
Disallow: /components/
Question is: Will this kind of duplicated URL error be removed from the error list automatically in the future? Should I keep track of which errors should not really be in the error list? What is the best way to handle this kind of error? Thanks and best regards, Franky
Moz Pro | | Viada
Getting your site totally indexed by SEOmoz
Hi guys! I just started using the SEOmoz software and wondered how it could be that my site has over 10,000 pages but the Pro Dashboard has only indexed about 1,500 of them. I've been waiting a few weeks now, but the number has been stable ever since. Is there a way to get the whole site indexed by the SEOmoz software? Thanks for your answers!
Moz Pro | | ssiebn7
How to get rid of the message "Search Engine blocked by robots.txt"
During the Crawl Diagnostics of my website, I got the message "Search Engine blocked by robots.txt" under Most common errors & warnings. Please let me know the procedure by which the SEOmoz PRO Crawler can completely crawl my website. Awaiting your reply at the earliest. Regards, Prashakth Kamath
Moz Pro | | 1prashakth