Sitemap generator only finding part of the website's URLs
-
Hi everyone,
When creating my XML sitemap, the generator only detects a portion of the website. I am missing at least 20 URLs (blog pages + newly created resource pages). I have checked those missing URLs: all of them are indexed, and none are blocked by robots.txt.
Any idea why this is happening? I need to make sure all the URLs I want are included in the generated XML sitemap.
Thanks!
-
Gaston,
Interestingly enough, by default the generator located only half of the URLs. I hope that one of those two fields will do the trick.
-
Hi Taysir,
I've never used that service, but I suspect that the section you refer to should do the trick.
I assume you know how many URLs there are in the whole site, so you can compare how many pro-sitemaps.com finds against your numbers.
Best of luck!
GR
-
Thanks for your response, Gaston. These pages are definitely not blocked by the robots.txt file, so I think it is an internal linking problem. I actually subscribed to pro-sitemaps.com and was wondering if I should use this section to add the remaining sitemap URLs that are missing: https://cl.ly/0k0t093f0Y1T
Do you think this would do the trick?
-
Google not only provides a basic template so you could build the sitemap manually if you wished, but this link also has Google listing several dozen open-source sitemap generators.
If Google Webmaster Tools can't fully read the one you generated, then an alternate generator should fix that for you. Good luck!
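Building a sitemap manually really is that simple: it's just an XML file listing one `<loc>` entry per URL. Here is a minimal sketch using Python's standard library, assuming you already have the full list of URLs (the example URLs are invented for illustration):

```python
# Minimal hand-rolled sitemap builder using only the Python stdlib.
# The URL list below is illustrative, not from the thread.
import xml.etree.ElementTree as ET

def build_sitemap(urls):
    """Return an XML sitemap string for the given list of page URLs."""
    ns = "http://www.sitemaps.org/schemas/sitemap/0.9"
    urlset = ET.Element("urlset", xmlns=ns)
    for url in urls:
        entry = ET.SubElement(urlset, "url")
        ET.SubElement(entry, "loc").text = url
    return ET.tostring(urlset, encoding="unicode")

sitemap = build_sitemap([
    "https://www.example.com/",
    "https://www.example.com/blog/post-1",
])
print(sitemap)
```

Write the result to `sitemap.xml` at the site root and submit it, and you've bypassed the generator entirely for the pages it keeps missing.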
-
Hi Taysir!
Have you tried any other crawler to check whether those pages can be found?
I'd strongly suggest the Screaming Frog SEO Spider; the free version allows you up to 500 URLs. It also has a feature to create sitemaps from the crawled URLs, though I don't know if that's available in the free version.
Here's some info about that feature: XML sitemap generator - Screaming Frog
The usual reasons pages aren't findable are:
- Poor internal linking
- Not having a sitemap (which is how you found this out)
- Blocked resources in robots.txt
- Blocked pages with robots meta tag
That being said, it's completely normal for Google to have indexed pages that you can't find in an ad-hoc crawl; Googlebot could have found those pages through external links.
Also keep in mind that blocking pages with robots.txt or a robots meta tag will not prevent those pages from being indexed, nor will adding blocking rules later get them deindexed.
Hope it helps.
Best luck
GR
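For anyone who wants to verify the poor-internal-linking point without a paid crawler: the links a crawler can follow from a page are just the `<a href>` values in its HTML, and those can be pulled with a few lines of stdlib Python. The HTML snippet here is invented for illustration:

```python
# Extract the <a href> links a crawler would see on a page (stdlib only).
# Feed it real page HTML to see which internal pages are actually linked.
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

html = '<nav><a href="/">Home</a> <a href="/blog/">Blog</a></nav>'
extractor = LinkExtractor()
extractor.feed(html)
print(extractor.links)  # ['/', '/blog/']
```

If the missing blog and resource pages never show up in the extracted links of any crawlable page, that confirms the internal-linking diagnosis: generators that work by crawling simply have no path to reach them.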