23,000 pages indexed, and I think that's bad
-
Thank you, thank you, Moz people!
I run a successful vacation rental company whose SEO has been terrible but is getting better. When I first ran the Moz crawler and page grader, I had 35,000 errors and all F's: tons of problems with duplicate page content and titles caused by inconsistent page names (mainly capitalization), plus rel=canonical errors. That said, I now see maybe two or three errors from time to time, which I fix every other day.
The Problem, Maybe
My sitemap in Google Webmaster Tools shows:
submitted: 1,155
indexed: 1,541
But Google's crawl shows 23,000 pages, probably because of the duplicate errors or possibly database-driven URL parameters...
1) How bad is this, and how do I get this number to be accurate? I have seen Google's URL removal tool, but I do not think that is the right solution.
2) I have hired a full-time content writer, and I hope this helps.
3) My site was indexed in Google as just domain.com, but I put a 301 redirect in place to www.domain.com because the www version had page authority while domain.com did not. In Webmaster Tools, however, I had only domain.com listed, so I changed the preferred domain to www.domain.com and, for the first time, requested a crawl of www.domain.com. Does anybody see any problems with this?
Thank you, Moz people! Nick
-
Hi Nick,
I do not think you have that bad of a problem just because Google is showing 23,000 pages even though you have only submitted 1,155. How are you getting the 23,000 number? Try using the site: operator, searching for "site:domain.com" in Google (if you have not already). This will show you all of your pages that are indexed.
As long as you are working on fixing your errors and making sure you have no duplicate content, by using canonical tags, removing pages that have thin content, or adding content to pages with little to none, you are going to be in good shape.
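For reference, a canonical tag is just a single `<link>` element in the `<head>` of each duplicate variant, pointing at the one version you want indexed. A minimal sketch (the URL below is a placeholder, not from Nick's site):

```html
<!-- On every variant of the page (different capitalization, URL parameters, etc.), -->
<!-- point the canonical at the single preferred URL. The URL here is hypothetical. -->
<head>
  <link rel="canonical" href="https://www.domain.com/rentals/beach-house/" />
</head>
```

With this in place, the parameterized and mixed-case duplicates consolidate to the preferred URL over time.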
In regards to your comment about Google Webmaster Tools: I have a lot of sites in GWMT where I add both the www and non-www versions, even though I have a 301 redirect in .htaccess. It will not hurt you to add the non-www version back (if you deleted it), as that one has historical data, while the one you have just added will not have data in it for a while.
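For completeness, a typical non-www to www 301 in .htaccess looks something like this; it's a sketch assuming an Apache server with mod_rewrite enabled, and "domain.com" is a placeholder for the real domain:

```apache
# .htaccess: 301-redirect all non-www requests to the www version.
# Requires mod_rewrite; "domain.com" is a placeholder.
RewriteEngine On
RewriteCond %{HTTP_HOST} ^domain\.com$ [NC]
RewriteRule ^(.*)$ http://www.domain.com/$1 [R=301,L]
```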
Best,
Cibble.
Related Questions
-
Which pages should I index or have in my XML sitemap?
Hi there, my website is ConcertHotels.com, a site which helps users find hotels close to concert venues. I have a hotel listing page for every concert venue on my site (about 12,000 of them, I think), and the same for nearby restaurants, e.g. https://www.concerthotels.com/venue-hotels/madison-square-garden-hotels/304484. Each of these pages lists the hotels near that concert venue. Users clicking on an individual hotel are brought through to a hotel (product) page, e.g. https://www.concerthotels.com/hotel/the-new-yorker-a-wyndham-hotel/136818. I made a decision years ago to noindex all of the /hotel/ pages, since they don't have a huge amount of unique content and aren't the pages I'd like my users to land on. The primary pages on my site are the /venue-hotels/ listing pages. I have similar pages for nearby restaurants, so there are approximately 12,000 venue-restaurants pages, again, one listing page for each concert venue. However, while all of these pages are potentially money-earners, in reality the vast majority of subsequent hotel bookings have come from a fraction of the 12,000 venues. I would say 2,000 venues are key money-earning pages, a further 6,000 have generated income at a low level, and 4,000 are yet to generate income. I have a few related questions: 1) Although there is potential for any of these pages to generate revenue, should I be brutal and simply delete a venue if it hasn't generated revenue within a time period, and just accept that, while it "could" be useful, it hasn't proven to be and isn't worth the link equity? Or should I noindex these poorly performing pages? 2) Should all 12,000 pages be listed in my XML sitemap? Or only the ones that are generating revenue, or perhaps just the ones that have generated significant revenue in the past and have proved to be most important to my business? Thanks, Mike
Technical SEO | mjk260
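One way to act on the sitemap question is to list only the pages you want prioritized for crawling. The structure is the standard sitemaps.org protocol; the single URL below is taken from the question, and which pages to include is the business decision being asked about:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Minimal XML sitemap listing only the key money-earning listing pages. -->
<!-- The URL selection here is illustrative, not a recommendation. -->
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.concerthotels.com/venue-hotels/madison-square-garden-hotels/304484</loc>
  </url>
</urlset>
```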
Removing a site from Google's index with noindex meta tags
Hi there! I wanted to remove a duplicated site from the Google index. I've read that you can do this by removing the URL in Google Search Console but, although I can't find the option in Google Search Console, Google keeps showing the site in the SERPs. So I wanted to add a "noindex" meta tag to the code of the site; however, I've only found out how to do this for individual pages. Can you do the same for an entire site? How can I do it? Thank you for your help in advance! L
Technical SEO | Chris_Wright1
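One common way to noindex an entire site without editing every page is an X-Robots-Tag response header set at the server level. A sketch, assuming the duplicated site runs on Apache with mod_headers enabled:

```apache
# Send a noindex directive on every response from this site.
# Requires mod_headers; place in the duplicated site's .htaccess or vhost config.
Header set X-Robots-Tag "noindex, nofollow"
```

Because the header is applied to every response, it covers all pages at once, which is what the question is asking for.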
Why are my images not being indexed?
I have submitted an image sitemap with over 2,000 images, yet only about 35 have been indexed. Could you please help me understand why Google is not indexing my images? www.creative-calendars.com
Technical SEO | nicole20140
Investigating a huge spike in indexed pages
I've noticed an enormous spike in pages indexed through WMT in the last week. Now, I know WMT can be a bit (OK, a lot) off base in its reporting, but this was pretty hard to explain. See, we're in the middle of a huge campaign against dupe content and we've put a number of measures in place to fight it. For example, we've implemented a strong canonicalization effort, NOINDEX'd content we know to be duplicate programmatically, and are currently fixing true duplicate content issues by rewriting titles, descriptions, etc. So I was pretty surprised to see the blow-up. Any ideas as to what else might cause such a counter-intuitive trend? Has anyone else seen Google suddenly glom onto a bunch of phantom pages?
Technical SEO | farbeseo0
How to Stop Google from Indexing Old Pages
We moved from a .php site to a Java site on April 10th. It's almost two months later and Google continues to crawl old pages that no longer exist (225,430 Not Found errors, to be exact). These pages no longer exist on the site and there are no internal or external links pointing to them. Google has crawled the site since the go-live, but continues to try to crawl these old pages. What are my next steps?
Technical SEO | rhoadesjohn0
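Since these pages moved rather than disappeared, the usual next step is to 301-redirect the old .php URLs to their new equivalents so Google stops recrawling 404s. A minimal sketch, assuming Apache and a simple one-to-one mapping from old to new paths (the pattern is hypothetical; the real mapping depends on the new site's URL scheme):

```apache
# 301-redirect old .php URLs to their new extensionless equivalents.
# The path pattern is hypothetical; adjust to the site's actual URL mapping.
RewriteEngine On
RewriteRule ^(.+)\.php$ /$1 [R=301,L]
```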
Pages extensions
Hi guys, we're in the process of moving one of our sites to a newer version of the CMS. The new version doesn't support page extensions (.aspx), but we'll keep them for all existing pages (about 8,000) to avoid redirects. The technical team is wondering about the new pages: does it make any difference if the new pages are without extensions, apart from usability? Thanks!
Technical SEO | | lgrozeva0 -
Pages crawled is only 23 even after 8 days??
Hello all, my site www.practo.com has more than 500 pages. Still, SEOmoz says only 23 have been crawled to date, even after 8-10 days of the trial period. Now, most of the pages on my site are in-site search pages; they appear when you search relevant terms with combinations, etc. Is that hindering the Moz crawler from finding those pages? Aditya
Technical SEO | shanky11
Non-www home page indexed, but www for rest of site
Hi there, grateful for any ideas on why this is happening: http://www.google.co.uk/search?q=site:www.vitispr.com vs http://www.google.co.uk/search?q=site:vitispr.com Google seems to be indexing and caching vitispr.com for our home page but the www versions for everything else. As you can see, the second query finds the home page. Any ideas why that might be? Other info that might be relevant: non-www URLs etc. are all 301'd to www versions; we moved domains/URLs around in March of this year, and for a week or so we were redirecting to the non-www version; Webmaster Tools says www is preferred. Thanks!
Technical SEO | JaspalX0