Crawl Results
-
How fresh are the SEOmoz crawl results?
On today's report I can see that my website's rankings for several keywords, when checked manually and individually on Google, Yahoo, and Bing, are better than what the SEOmoz report shows.
I have also noticed that the back link count in the SEOmoz report is significantly lower than the counts from other sites and software. Can someone advise me on this?
-
OK, got it. Thanks a lot for the explanation.
S.H
-
There are a couple of different components here. The back link count is updated every 2-4 weeks, when Mozscape is updated. We have our own crawler, just as other sites have their own crawlers, so we'll all have different numbers depending on how many URLs we've crawled that cycle.
-
Hi James,
Thank you for your answer.
The keywords I am searching for individually are the same keywords I entered in my SEOmoz campaign.
As for back links, Majestic SEO is one of the three tools I have used, and it shows a count about 8,000 higher than SEOmoz!
Sherif
-
SEOmoz rankings are updated weekly, I believe every Thursday. It's possible that you are getting skewed results when doing manual searches due to search personalization. I would attribute the difference in rankings to one of these two factors.
As for the back link count being lower than on other sites, it's best to consider Open Site Explorer as only one source of data for building your back link profile. To get a better picture of your full back link profile you should also pull your links from Webmaster Tools, Majestic SEO, and Ahrefs if possible. Together these will give you a fairly comprehensive look at your link profile.
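To illustrate that last point, here is a minimal sketch of how the separate exports might be merged into one de-duplicated list of linking URLs. The filenames and column names are hypothetical placeholders; each tool labels its CSV export differently, so they would need to be adjusted to the real headers.

```python
import csv

# Hypothetical export filenames mapped to the column that holds the linking URL.
EXPORTS = {
    "open_site_explorer.csv": "URL",
    "webmaster_tools.csv": "Links",
    "majestic_seo.csv": "SourceURL",
    "ahrefs.csv": "Referring Page URL",
}

def merged_backlinks(exports):
    """Collect the union of linking URLs across all exports."""
    links = set()
    for path, column in exports.items():
        with open(path, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                url = (row.get(column) or "").strip()
                if url:
                    links.add(url.rstrip("/").lower())  # crude normalisation
    return links

if __name__ == "__main__":
    links = merged_backlinks(EXPORTS)
    print(f"{len(links)} unique linking URLs across all sources")
```

The union will always be larger than any single tool's count, which is also why the numbers from Open Site Explorer, Majestic, and others rarely agree.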
Related Questions
-
Please Help! Crawl & Site Errors - Will This Impact My SEO?
Hello Moz, I need urgent help. I removed a tonne of product pages and put everything into one product page to deal with duplicate content. I thought this was a good thing to do until I got an email from Google saying: "Googlebot identified a significant increase in the number of URLs on ****.com that return a 404 (not found) error." I checked it out and found the problem: 4 Soft 404s and 41 Not Founds. What do I need to do to fix this? Is it a problem or should I just ignore it? I removed all the pages in WordPress, but do I need to do something manually through Google as well? I have worked so hard on my SERPs that this will destroy me if I'm penalised. Please can someone advise?
Technical SEO | crocman
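The usual fix for 404s like these is not something done manually inside Google but a 301 redirect from each removed product URL to the page that replaced it; Google then drops the old URLs on its own. Below is a minimal sketch that generates Apache-style redirect rules from a list of removed URLs. All paths are hypothetical placeholders, not URLs from the original question.

```python
# Hypothetical mapping: removed product URLs -> the consolidated product page.
REMOVED = [
    "/products/widget-red/",
    "/products/widget-blue/",
    "/products/widget-green/",
]
CONSOLIDATED = "/products/widgets/"

def htaccess_rules(removed, target):
    """Emit one Apache 'Redirect 301' rule per removed URL."""
    return [f"Redirect 301 {path} {target}" for path in removed]

for rule in htaccess_rules(REMOVED, CONSOLIDATED):
    print(rule)
```

On a WordPress site the same mapping could go into the .htaccess file or a redirection plugin.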
-
Inurl: search shows results without keyword in URL
Hi there, while doing some research on the indexation status of a client I ran into something unexpected. I have my hypothesis on what might be happening, but would like a second opinion on this. The query 'site:example.org inurl:index.php' returns about 18,000 results. However, when I hover my mouse over these results, no index.php shows up in the URL. So, Google seems to think these (then duplicate content) URLs still exist, but a 301 has changed the actual target URL? A similar thing happens for inurl:page. In fact, all the 'index.php' and 'page' parameters were removed over a year back, so in fact there shouldn't be any of those left in the index by now. The dates next to the search results are 2005, 2008, etc. (i.e. far before 2013). These dates accurately reflect the times these forum topics were created. Long story short: are these ~30,000 'phantom URLs' in the index, out of a total of ~100,000 indexed pages, hurting the search rankings in some way? What do you suggest to get them out? Submitting a 100% coverage sitemap (just a few days back) doesn't seem to have any effect on these phantom results (yet).
Technical SEO | Theo-NL
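One quick way to test the 301 hypothesis is to spot-check what a handful of those old-style URLs return today. A minimal sketch using the requests library; the example URLs below are hypothetical and would in practice come from the site: / inurl: results or old sitemaps.

```python
import requests

# Hypothetical old-style URLs with the parameters that were removed.
OLD_URLS = [
    "https://example.org/forum/index.php?topic=123",
    "https://example.org/forum/index.php?board=4&page=2",
]

for url in OLD_URLS:
    # Don't follow redirects, so the first status code returned is visible.
    resp = requests.get(url, allow_redirects=False, timeout=10)
    print(resp.status_code, url, "->", resp.headers.get("Location", ""))
```

If they consistently return 301 (or 410), the phantom results are just stale index entries that should eventually age out; anything still returning 200 points to a gap in the redirects.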
-
Google Search Results Flip-Flop
For a site we manage, Google can’t seem to decide which of two pages to present for a search for “skid steer attachments.” Almost weekly, it flip-flops from the home page to an interior page (which is a shopping cart category page that we have not actually optimized for the phrase). The site is berlon.com. Have any of you had a similar experience and, if so, how did you address it? I’ve attached a Moz screen shot that shows the changes.
Technical SEO | PKI_Niles
-
Does the order of results from "site:www.example.com" tell us anything?
Does Google rank in order of page authority with "site:www.example.com", or is it random? I ask because most of the results on the first six pages for our site are internal search results pages (e.g. www.example.com/search/product-results). The fact that search results are indexed at all is frustrating; they are not linked to internally or externally. Open Site Explorer does not show any back links for any of the search pages, and I checked the submitted sitemap and no search URLs are submitted, so I don't know how Google is finding the search URLs. I also tested some of the search URLs with Ahrefs and found no back links. But since Google is ranking the search pages ahead of the category (landing) pages with "site:", I'm worried that not only is it indexing the URLs, it is also giving them higher page authority.
Technical SEO | PaddyDisplays
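If the goal is to keep those internal search pages out of the index, the usual mechanism is a noindex signal (a robots meta tag or an X-Robots-Tag response header) rather than a robots.txt block, since a block alone leaves already-discovered URLs in the index. A quick sketch for spot-checking whether a search URL currently sends either signal; the URL is hypothetical.

```python
import re
import requests

# Hypothetical internal search URL -- substitute a real one from the site.
URL = "https://www.example.com/search/product-results?q=widgets"

resp = requests.get(URL, timeout=10)
meta = re.search(
    r'<meta[^>]+name=["\']robots["\'][^>]*content=["\']([^"\']+)',
    resp.text,
    re.IGNORECASE,
)
print("X-Robots-Tag header:", resp.headers.get("X-Robots-Tag", "(none)"))
print("robots meta tag:    ", meta.group(1) if meta else "(none)")
```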
-
Bing search results
Hi, I am getting little or no traffic via Bing. Why would this be? All my traffic is from Google and social media. My site is www.cocoonfxmedia.co.uk.
Technical SEO | Cocoonfxmedia
-
Avoiding Duplicate Content in E-Commerce Product Search/Sorting Results
How do you handle sorting on e-commerce sites? Does it look something like this? For example:
example.com/inventory.php
example.com/inventory.php?category=used
example.com/inventory.php?category=used&price=high
example.com/inventory.php?category=used&location=seattle
If not, how would you handle this? If so, would you just include a noindex tag on all sorted pages to avoid duplicate content issues? Also, how does pagination play into this? Would it be something like this? For example:
example.com/inventory.php?category=used&price=high
example.com/inventory.php?category=used&price=high&page=2
example.com/inventory.php?category=used&price=high&page=3
If not, how would you handle this? If so, would you still include a noindex tag? Would you include a rel=next/prev tag on these pages in addition to, or instead of, the noindex tag? I hope this makes sense. Let me know if you need me to clarify any of this. Thanks in advance for your help!
Technical SEO | AlexanderAvery
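One common pattern, sketched below, is to let only certain parameters (here, just category) define indexable landing pages, and to mark every sorted/filtered/paginated variant noindex,follow with a canonical pointing back at the clean category URL. The parameter rules are hypothetical, and this is one approach among several; URL parameter handling in Webmaster Tools and rel=next/prev pagination markup are the obvious alternatives.

```python
from urllib.parse import urlencode

# Hypothetical rule: 'category' defines real landing pages; sort, filter and
# pagination parameters only create variants that shouldn't compete with them.
INDEXABLE_PARAMS = {"category"}

def robots_and_canonical(path, params):
    """Return a meta-robots value and canonical URL for one parameter set."""
    extra = set(params) - INDEXABLE_PARAMS
    kept = {k: v for k, v in params.items() if k in INDEXABLE_PARAMS}
    canonical = path + ("?" + urlencode(kept) if kept else "")
    robots = "noindex,follow" if extra else "index,follow"
    return robots, canonical

print(robots_and_canonical("/inventory.php", {"category": "used"}))
# ('index,follow', '/inventory.php?category=used')
print(robots_and_canonical("/inventory.php", {"category": "used", "price": "high", "page": "2"}))
# ('noindex,follow', '/inventory.php?category=used')
```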
-
Trying to reduce pages crawled to within 10K limit via robots.txt
Our site has far too many pages for our 10K-page PRO account, most of which are not SEO-worthy. In fact, only about 2,000 pages qualify for SEO value. Limitations of the store software only permit me to use robots.txt to sculpt the rogerbot site crawl. However, I am having trouble getting this to work. Our biggest problem is the 35K individual product pages and the related shopping cart links (at least another 35K); these aren't needed as they duplicate the SEO-worthy content in the product category pages. The signature of a product page is that it is contained within a folder ending in -p. So I made the following addition to robots.txt:
User-agent: rogerbot
Disallow: /-p/
However, the latest crawl results show the 10K limit is still being exceeded. I went to Crawl Diagnostics and clicked on Export Latest Crawl to CSV. To my dismay I saw the report was overflowing with product page links, e.g. www.aspenfasteners.com/3-Star-tm-Bulbing-Type-Blind-Rivets-Anodized-p/rv006-316x039354-coan.htm. The value in the column "Search Engine blocked by robots.txt" is FALSE; does this mean blocked for all search engines? If so, then it's correct. If it means "blocked for rogerbot", then it shouldn't even be in the report, as the report seems to only contain 10K pages. Any thoughts or hints on trying to attain my goal would REALLY be appreciated; I've been trying for weeks now. Honestly - virtual beers for everyone! Carlo
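A likely explanation for the rule not working: robots.txt Disallow patterns match from the beginning of the URL path, so Disallow: /-p/ only blocks paths that literally start with /-p/, while these product pages merely contain a folder ending in -p. A wildcard pattern such as Disallow: /*-p/ would match them, assuming rogerbot honours Googlebot-style wildcards (an assumption; the thread doesn't confirm it). A toy matcher illustrating the difference:

```python
import re

def blocked(path, pattern):
    """Very simplified Googlebot-style matching: '*' matches any characters,
    and the pattern otherwise has to match as a prefix of the path."""
    regex = "".join(".*" if ch == "*" else re.escape(ch) for ch in pattern)
    return re.match(regex, path) is not None

product = "/3-Star-tm-Bulbing-Type-Blind-Rivets-Anodized-p/rv006-316x039354-coan.htm"
print(blocked(product, "/-p/"))   # False: only paths that start with /-p/ match
print(blocked(product, "/*-p/"))  # True: the wildcard allows anything before -p/
```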
-
Crawling image folders / crawl allowance
We recently removed /img and /imgp from our robots.txt file, thus allowing Googlebot to crawl our image folders. Not sure why we had these blocked in the first place, but we opened them up in response to an email from Google Product Search about not being able to crawl images - which can/has hurt our traffic from Google Shopping. My question is: will allowing Google to crawl our image files eat up our 'crawl allowance'? We wouldn't want Google to not crawl/index certain pages, and ding our organic traffic, because more of our allotted crawl bandwidth is getting chewed up crawling image files. Outside of the non-detailed crawl stat graphs from Webmaster Tools, what's the best way to check how frequently/deeply our site is getting crawled? Thanks all!
Technical SEO | evoNick
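Beyond the Webmaster Tools graphs, server access logs are the usual way to see exactly how often and how deeply Googlebot crawls, including how many hits now go to /img and /imgp. A rough sketch, assuming a standard combined-format log at a hypothetical path:

```python
import re
from collections import Counter

# Hypothetical path to an Apache/Nginx combined-format access log.
LOG = "access.log"

hits_per_day = Counter()
hits_per_section = Counter()

with open(LOG, encoding="utf-8", errors="replace") as f:
    for line in f:
        if "Googlebot" not in line:
            continue
        day = re.search(r"\[(\d{2}/\w{3}/\d{4})", line)
        path = re.search(r'"(?:GET|HEAD) ([^ ]+)', line)
        if day:
            hits_per_day[day.group(1)] += 1
        if path:
            # Bucket by first path segment, e.g. /img, /imgp, /products
            section = "/" + path.group(1).lstrip("/").split("/", 1)[0]
            hits_per_section[section] += 1

print("Googlebot hits per day:", hits_per_day.most_common(10))
print("Googlebot hits per section:", hits_per_section.most_common(10))
```

Comparing the per-section counts before and after the robots.txt change would show whether image crawling is actually displacing crawls of the HTML pages.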