Are there discrepancies between GWT and SEOMoz?
-
In our keyword rank tracking report, we've dominated a keyword in Google and have held the slot for years. All evidence points in this direction. In Google Webmaster Tools, however, this particular keyword averages a rank of 6.5. Is anyone else experiencing these kinds of discrepancies? What is your take on it?
-
That makes MUCH more sense. I'll Google it a bit and see what others have to say. Thanks Keri!
-
If I remember correctly, Google is doing an average of all of the pages that rank for that term. So if your privacy policy ranks in 20th position for your company name, that gets factored in. I think that's how it works, but could be wrong. It should be a place to start looking for information though.
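If that is indeed the mechanism, a toy calculation shows how one low-ranking page drags the average down. The pages and positions below are made up for illustration; this is not Moz or Google code:

```python
# Toy illustration: if several of your pages rank for the same query,
# an "average position" metric blends them all together.
# Pages and positions below are hypothetical.
rankings = {
    "/": 1,                 # homepage ranks #1 for the brand term
    "/privacy-policy": 20,  # privacy policy also ranks, way down at #20
}

average_position = sum(rankings.values()) / len(rankings)
print(average_position)  # 10.5 -- nowhere near the #1 you see manually
```

So a single stray page ranking for the term can pull a long-held #1 down to a reported average in the 6–10 range.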
-
Everything is way off. Even our company name, according to GWT, ranks at an average of 4.5. Does GWT also include universal results in its ranking report?
-
The scope of data Google has (I wouldn't expect them to expose all of it to users through GWT) and the data OSE has are not the same, so there will be some discrepancies here and there.
Related Questions
-
Possible problem with new site (GWT no queries/very low index vs. submitted)
Hi everyone, I recently launched a new website for a small business loan company in the Dallas area. The site has been live for roughly a month and a half. I submitted everything to GWT as usual, including my sitemap, but I'm not sure what's going on with the site: there is no activity from GWT in the impressions or queries, the submitted vs. indexed count is stuck at 24/3, and the queries graph on the overview stops at 3/18/2015.

On another note, when I go to Crawl > Sitemaps, it shows pages were indexed during the month of March, then on April 3 the count drops from 17 to 2 and never increases. Google says there are no errors or issues found, but I feel like there's something wrong. When I do a site: search, my URLs do pop up, which makes me believe there's just a problem with my GWT. With that being said, I'm not happy THINKING there's something wrong; I need to actually know what the problem is.

The only thing I can think of that I have done is purchase SSL for the site, but when I search what pages are indexed using www., it shows all the HTTPS URLs, so that would tell me that the site is getting indexed without a problem? Does anyone have a clue as to what might be happening? I will attach some screenshots so that you can get a better idea.
Intermediate & Advanced SEO | jameswesleyhunt
-
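One thing worth ruling out, since SSL was recently added: GWT tracks http:// and https:// versions of a site as separate properties, so impressions can accumulate in the property you aren't watching. A small stdlib sketch of the distinction (the domain is an example, not the poster's site):

```python
# GWT keys its data by URL prefix: the scheme + host pair.
# http:// and https:// versions of the same site are separate properties,
# so a site that migrated to HTTPS can look "dead" in the http:// property.
import urllib.parse

def gwt_property(url):
    """Return the scheme://host prefix that identifies a GWT property."""
    parts = urllib.parse.urlsplit(url)
    return f"{parts.scheme}://{parts.netloc}"

print(gwt_property("http://www.example.com/loans"))   # http://www.example.com
print(gwt_property("https://www.example.com/loans"))  # https://www.example.com
# These differ, so each needs to be verified and checked separately.
```

If only the HTTP property was verified, adding the HTTPS property may reveal the "missing" queries and impressions.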
GWT does not play nice with 410 status code approach to expire content? Use 301s?
We have been diligently managing our index size in Google for our sites, returning a 410 status code for pages that we no longer consider "up-to-date" but that still carry value for users to access, so that Google removes them from our index and keeps it lean. However, we have been receiving GWT warnings across sites because of the 410 status codes Google encounters, which makes us nervous that Google could interpret this approach as a lack of quality on our site. Does anyone have a view on whether the 410 approach is right for the given example, or should we consider simply using 301s or another status code to keep our GWT errors clean? Further notes: there is hardly ever any link juice being sent to those pages, so it is not like we are missing out on that, and the pages for which we return 410 are also marked as noindex and nofollow.
Intermediate & Advanced SEO | petersocapro
-
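For anyone weighing the same choice, here is a minimal sketch of the decision logic being described. The paths and rules are illustrative only, not Google guidance:

```python
# Sketch of the trade-off being discussed (illustrative, not official guidance):
# 410 = "gone on purpose", 301 = "moved; pass signals to the new URL".
EXPIRED = {"/jobs/posting-2012-001", "/jobs/posting-2012-002"}
REPLACEMENTS = {"/jobs/posting-2012-002": "/jobs"}  # some pages have a natural successor

def response_for(path):
    """Return (status_code, redirect_target) for a requested path."""
    if path in EXPIRED:
        new_home = REPLACEMENTS.get(path)
        if new_home:
            return (301, new_home)  # a close substitute exists: redirect there
        return (410, None)          # deliberately gone: tell crawlers explicitly
    return (200, None)              # live page

print(response_for("/jobs/posting-2012-001"))  # (410, None)
print(response_for("/jobs/posting-2012-002"))  # (301, '/jobs')
```

The idea: 301 only where a genuinely equivalent page exists; a blanket 301 of expired content to an unrelated page risks being treated as a soft error anyway, while 410 is an honest signal even if GWT surfaces it as a warning.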
Impact seomoz.org to moz.com?
Hi SEOmoz, yes, this question is for you :-), but it might come in handy for the rest of us. With the revamp of your service and website, you guys also decided to change your web address to Moz.com. I, and perhaps others here, am considering doing the same, yet I fear losses in the SERPs. Did you guys experience any loss in the SERPs for your most valuable keywords? What was your plan of action?
Intermediate & Advanced SEO | wellnesswooz
-
SEOMoz Keyword Ranking Accuracy
Getting some mixed messages on this. Is the keyword ranking information available in SEOMoz based on the data in Mozscape or on live Google data? Reason I ask is that my ranking for "crown moulding" for www.worldofmoulding.com differs: (1) based on today's scan, www.worldofmoulding.com ranks in position 40 in Google; (2) based on a manual check with a cleared cache and no local settings, the site is in 4th spot; (3) running a third-party check using the "Advanced Web Ranking" tool, it comes up in 9th spot, but that tool counts the local business SERPs, in which case it matches my manual search. It seems that SEOMoz is not in line with actual rankings. Any suggestions for what can be used to accurately assess SERP placement for terms?
Intermediate & Advanced SEO | VanadiumInteractive
-
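The gap between a 9th-spot and a 4th-spot reading often comes down to whether local/universal results are counted as positions. A made-up SERP illustrates the difference (the data below is hypothetical, not a real results page):

```python
# Illustration of why rank trackers disagree: some count local/universal
# results toward position, some count organic listings only. Data is made up.
serp = [
    ("maps-pack", "local result 1"),
    ("maps-pack", "local result 2"),
    ("maps-pack", "local result 3"),
    ("organic",   "competitor-a.com"),
    ("organic",   "worldofmoulding.com"),
]

def rank(site, results, count_local=True):
    """Position of `site`, optionally skipping local/universal listings."""
    position = 0
    for kind, url in results:
        if count_local or kind == "organic":
            position += 1
        if url == site:
            return position
    return None  # not found in this results page

print(rank("worldofmoulding.com", serp, count_local=True))   # 5
print(rank("worldofmoulding.com", serp, count_local=False))  # 2
```

Same SERP, two "correct" answers; before comparing tools, check which counting convention each one uses.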
SEOMoz Paid Directories List.... Are paid links always bad?
We all know paid links are bad... so it begs the question: why does the SEOMoz directory list have so many directories that only submit businesses for a fee, i.e. a paid link? Are they worth paying for or not? Interested to hear people's thoughts and experiences with some of the directories listed on the SEOMoz directory list.
Intermediate & Advanced SEO | JohnW-UK
-
Crawl errors in GWT!
I have been seeing a large number of access denied and not found crawl errors. I have since fixed the issues causing these errors; however, I am still seeing them in Webmaster Tools. At first I thought the data was outdated, but the data is tracked on a daily basis! Does anyone have experience with this? Does GWT really re-crawl all those pages/links every day to see if the errors still exist? Thanks in advance for any help/advice.
Intermediate & Advanced SEO | inhouseseo
-
Newbie to SEO and SEOMOZ help
Hey everyone, I just came across SEOmoz today. I have been building websites for 3 years now, but SEO has always been a scary topic to consider trying to master. I have made the decision to do this in 2012, and I have been looking for a software package that can steer me and teach me. I have been reading the site help today and I feel totally swamped! I have created my campaign, but a lot of the results don't make much sense to me and I am unsure how to fix the errors they found. For instance, the crawl diagnostics show I have 5 4xx client errors. They show me a link to the page where the error is, http://www.mydomain.com/category/latest-news/function.require, but when I go to see what this is I just find a 404 Not Found page. How do I go about removing this error if I have no idea where the problem is? I have started reading the SEO user guide and beginner's guide, and I know it is going to take me a long time to get used to this all, but I am struggling to find the starting point and hope someone can possibly help me find the first few steps. Thanks
Intermediate & Advanced SEO | buntrosgali
-
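A broken URL like /category/latest-news/function.require usually means some page's HTML contains that href (often a theme or plugin emitting a malformed link), and the crawl report shows the target rather than the source. One way to hunt down the source is to scan your pages' HTML for the bad href; a stdlib sketch with a made-up snippet:

```python
# Sketch: find which page actually contains the broken link. The crawler
# reports the 404 target (/function.require), but fixing it means finding
# the page whose HTML emits that href. The sample HTML here is made up.
from html.parser import HTMLParser

class LinkFinder(HTMLParser):
    """Collect every href from <a> tags in a page's HTML."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

page_html = '<p>News: <a href="/category/latest-news/function.require">more</a></p>'
finder = LinkFinder()
finder.feed(page_html)
broken = [link for link in finder.links if "function.require" in link]
print(broken)  # ['/category/latest-news/function.require']
```

Running this over each page of the site (or checking the referrer column in the crawl export, if available) points at the template that needs fixing.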
SEOMOZ duplicate page result: True or false?
SEOMoz says I have six (6) duplicate pages; a duplicate content checker tool says zero (0).

On the physical computer that hosts the website, each page exists as one file. The casing of the file name is irrelevant to the host machine; it wouldn't allow 2 files of the same name in the same directory. To reinforce this point, you can access said file by camel-casing the URI in any fashion (e.g. http://www.agi-automation.com/Pneumatic-grippers.htm). This does not bring up a different file each time; the server merely processes the URI as case-less and pulls the file by its name. What is happening in the example given is that some sort of indexer is being used to create a "dummy" reference of all the site files. Since the indexer doesn't have file access to the server, it does this by link crawling instead of reading files. It is the crawler that is making the assumption that the different casings of the pages are in fact different files. Perhaps there is a setting in the indexer to ignore casing. So the indexer is thinking that these are 2 different pages when they really aren't. This makes all of the other points moot, though they would certainly be relevant in the case of an actual duplicated page.

URL                                                    Page Authority   Linking Root Domains
http://www.agi-automation.com/                         43               82
http://www.agi-automation.com/index.html               25               2
http://www.agi-automation.com/Linear-escapements.htm   21               1
www.agi-automation.com/linear-escapements.htm          16               1
http://www.agi-automation.com/Pneumatic-grippers.htm   30               3
http://www.agi-automation.com/pneumatic-grippers.htm   16               1

The duplicate content tool checks the following: www and non-www header response; Google cache check; similarity check; default page check; 404 header response; PageRank dispersion check (i.e. whether the www and non-www versions have different PR).
Intermediate & Advanced SEO | AGIAutomation
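The casing issue described above is easy to reproduce: grouping the crawled URLs case-insensitively shows which "duplicates" are really one file reached via different casings. A sketch using the URLs from the report (one common fix is a 301 rule canonicalizing every URL to a single casing, but that depends on the server):

```python
# Sketch of what the crawler is doing: two URLs that differ only in casing
# are distinct strings to a link crawler, even if the server serves one file.
# Grouping case-insensitively reveals the "duplicate" pairs.
from collections import defaultdict

crawled = [
    "http://www.agi-automation.com/Pneumatic-grippers.htm",
    "http://www.agi-automation.com/pneumatic-grippers.htm",
    "http://www.agi-automation.com/Linear-escapements.htm",
    "http://www.agi-automation.com/linear-escapements.htm",
    "http://www.agi-automation.com/index.html",
]

groups = defaultdict(list)
for url in crawled:
    groups[url.lower()].append(url)

duplicates = {key: variants for key, variants in groups.items() if len(variants) > 1}
for canonical, variants in sorted(duplicates.items()):
    print(canonical, "->", variants)
# Two groups come out with 2 variants each: the grippers and escapements pages.
```

Since Google also treats URL paths as case-sensitive, picking one casing, linking to it consistently, and 301-redirecting the other casing to it resolves the report and consolidates any link equity.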