Page 2 to page 1
-
I've found that a lot of the time it does not take much activity to get a keyword ranking on page 3 of Google (or further down) up to page 2, but there seems to be a hurdle from page 2 to page 1. It is very frustrating to sit between positions 11 and 15 without being able to make that push to 9 or 10. Has anyone got, or seen, any data to justify this?
-
Yes, that's probably fair to say, although, in a lot of niches, you'll need more than just some of one or the other--it can take a lot of both.
-
Thanks Chris and ImWaqas, I'll keep plugging away. Is it fair to say that with some keyword optimisation to get to grade A and some internal linking it's easy to get to page 2, but to get to page 1 there needs to be some outreach or a social programme?
-
Well, it is very common for top-spot rankings to be difficult. It's usually easy to get keywords (local terms) from page 10 to page 2 in a month or so, and then it takes a lot of effort to get to page 1.
Likewise, it's easy to get to spot 8 or 9, but getting into the first 5 spots always takes a lot of effort.
So don't worry, keep doing what you are doing, and hopefully you'll be on page 1 soon.
Best of luck.
-
This is very common. Often the sites at the top of the search results are the ones whose owners have put considerable time and effort into optimizing, while the ones lower down in the results may have little or no optimization (this depends on the search term and the niche). Because of that, it can be very easy to pop up to page 2 with very little effort, but moving up from there to page 1 is often the hard part.
Related Questions
-
Is it necessary to have unique H1s for pages in a pagination series (i.e. a blog)?
A content issue we're experiencing is duplicate H1s within pages in a pagination series (i.e. a blog). Does each separate page within the pagination need a unique H1 tag, or, since each page has unique content (different blog snippets on each page), is it safe to disregard this? Any insight would be appreciated. Thanks!
Algorithm Updates | BopDesign
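A common way to satisfy the unique-H1 concern without rewriting content is to keep the topic constant and append the page number, so each page in the series carries a technically distinct heading. A minimal sketch, assuming a typical /blog/page/N/ URL structure (the example.com URLs and headings are made up for illustration):

    <!-- example.com/blog/ (page 1) -->
    <h1>Blog</h1>

    <!-- example.com/blog/page/2/ -->
    <!-- Same topic, but the page number makes the H1 unique -->
    <h1>Blog - Page 2</h1>
-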
Have you ever seen or experienced a page getting indexed from a website that is blocked by robots.txt?
Hi all, We use a robots.txt file and meta robots tags to block bots from crawling a website or individual pages. Mostly, robots.txt is used for the whole website, with the expectation that none of its pages will get indexed. But there is a catch: any page from a site blocked by robots.txt can still be indexed by Google if the crawler finds a link to that page somewhere else on the internet, as stated here in the last paragraph. I wonder if this is really how some webpages have got indexed. And if we use meta tags at page level, do we still need to block via the robots.txt file? Can we use both techniques at the same time? Thanks
Algorithm Updates | vtmoz
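On the last two questions in that post: robots.txt and a meta robots tag operate at different stages, and using both on the same page can backfire, because a crawler blocked by robots.txt never fetches the page and therefore never sees the noindex tag. That is also why a blocked URL can still end up indexed from external links. A minimal sketch of each mechanism (example.com and the /private/ path are hypothetical):

    # robots.txt - stops compliant bots from crawling, but does not
    # guarantee the URL stays out of the index if it is linked elsewhere
    User-agent: *
    Disallow: /private/

    <!-- In the page's <head> - removes the page from the index, but only
         works if the page can actually be crawled and fetched -->
    <meta name="robots" content="noindex">
-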
One of our most visited pages (the login page) is missing the primary keyword; does this drop our homepage's ranking for the same keyword?
Hi all, I have removed the "primary keyword" from our login page, which is the most visited page on our website, to avoid keywords on unrelated pages. I then noticed our homepage ranking dropped for the same "primary keyword". Visitors land on this login page directly, without searching for the "primary keyword". So how does removing it from such a page drop our ranking? Thanks
Algorithm Updates | vtmoz
-
Is it bad from an SEO perspective that cached AMP pages are hosted on domains other than the original publisher's?
Hello Moz, I am thinking about starting to utilize AMP for some of my website. I've been researching this AMP situation for the better part of a year and I am still unclear on a few things. What I am primarily concerned with, in terms of AMP and SEO, is whether or not the original publisher gets credit for the traffic to a cached AMP page that is hosted elsewhere. I can see the possible issues with this from an SEO perspective, and I am pretty sure I have read elsewhere about how SEOs are unhappy about this particular aspect of AMP. On the AMP project FAQ page you can find this, but with very little explanation: "Do publishers receive credit for the traffic from a measurement perspective? Yes, an AMP file is the same as the rest of your site – this space is the publisher's canvas." So, let's say you have an AMP page on your website example.com: example.com/amp_document.html And a cached copy is served with a URL format similar to this: https://google.com/amp/example.com/amp_document.html Then how does the original publisher get the credit for the traffic? Is it because there is a canonical tag from the AMP version to the original HTML version? Also, while I am at it, how does an AMP page actually get into Google's AMP Cache (or any other cache)? Does Google crawl the original HTML page, find the AMP version and then just decide to cache it from there? Are there any other issues with this that I should be aware of? Thanks
Algorithm Updates | Brian_Dowd
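On the discovery and attribution questions at the end: the AMP model ties the two versions together with a pair of link tags, so crawling the regular page reveals where the AMP version lives (which Google can then cache), and the AMP version points back at the original. A sketch using the URL from the question (the non-AMP file name document.html is an assumption, since the question doesn't name it):

    <!-- In the <head> of the regular page (e.g. example.com/document.html):
         tells crawlers where the AMP version lives -->
    <link rel="amphtml" href="https://example.com/amp_document.html">

    <!-- In the <head> of example.com/amp_document.html:
         points back at the original so signals consolidate on the publisher -->
    <link rel="canonical" href="https://example.com/document.html">
-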
Do pages with canonicals need meta data?
Page A has a canonical to Page B. Should Page A have meta data values such as description, keywords, Dublin Core values, etc.? If yes, should the meta data values be different on Page A and Page B?
Algorithm Updates | Shirley.Fenlason
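For what it's worth, a canonical tag doesn't make the rest of the head invalid; it mainly asks search engines to consolidate on Page B, so Page B's meta data is what tends to surface in results. A sketch of Page A under that reading (page-a.html and page-b.html are placeholder names):

    <!-- Page A: canonicalized to Page B -->
    <head>
      <link rel="canonical" href="https://example.com/page-b.html">
      <!-- Meta data here is harmless, but engines honoring the canonical
           will generally display Page B's title and description instead -->
      <meta name="description" content="Page A's own description">
    </head>
-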
Decline in ranking for a particular theme of keywords after Penguin 2.0
Hi everyone, Last month we found one of our clients' rankings seeing significant declines (like from 3 to 28). This occurred around the time of the Penguin 2.0 update. After further investigation we found that only one collection of keywords was affected. If we had been hit by the algorithm update we would expect to find all keywords declining. We have not been manually penalized, and other keyword themes are seeing moderate day-to-day ranking improvements. I know that rankings jump from day to day, but a sudden decline of around 20 places for a whole theme of keywords isn't what we would expect. Thanks all.
Algorithm Updates | PeteW
-
Has Google had problems indexing pages that use <base href=""> in the last few days?
Since a couple of days ago I have had the problem that Google Webmaster Tools is showing a lot more 404 errors than normal. If I go through the list I find very strange URLs that look like two paths put together. For example: http://www.domain.de/languages/languageschools/havanna/languages/languageschools/london/london.htm If I check on which page Google found that path, it shows me the following URL: http://www.domain.de/languages/languageschools/havanna/spanishcourse.htm If I check the source code of that page for the link leading to the London page, it looks like the following: <a href="languages/languageschools/london/london.htm">[...]</a> So to me it looks like Google is ignoring the <base href="..."> and putting the path together as follows: Part 1) http://www.domain.de/languages/languageschools/havanna/ instead of the base href. Part 2) languages/languageschools/london/london.htm The result is the wrong path: http://www.domain.de/languages/languageschools/havanna/languages/languageschools/london/london.htm I know finding a solution is not difficult; I can use absolute paths instead of relative ones. But: - Does anyone have the same experience? - Do you know other reasons which could cause such a problem? P.S.: I am quite sure that the CMS (Typo3) is not generating these paths randomly. I would like to be sure before we change the CMS's settings to absolute paths!
Algorithm Updates | SimCaffe
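For readers who haven't used it: <base href> changes the base URL against which every relative link on a page is resolved, which is exactly why ignoring it reproduces the doubled paths above. A sketch of the setup described (the base value http://www.domain.de/ is an assumption; the question doesn't quote the actual tag):

    <head>
      <base href="http://www.domain.de/">
    </head>
    <body>
      <!-- With the base honored, this relative link resolves to
           http://www.domain.de/languages/languageschools/london/london.htm -->
      <!-- If the base is ignored, it resolves against the current page's
           directory instead, producing the doubled 404 paths from the report -->
      <a href="languages/languageschools/london/london.htm">London</a>
    </body>
-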
Google said that low-quality pages on your site may affect rankings of other parts of the site
One of my sites got hit pretty hard during the latest Google update. It lost about 30-40% of its US traffic, and the future does not look bright considering that Google plans a worldwide roll-out. The problem is, my site is a six-year-old, heavily linked, popular WordPress blog. I do not know why Google would consider it low quality. The only explanation I have come up with is the statement that low-quality pages on a site may affect other pages (I think it was in the Wired article). If that is so, would you recommend blocking and de-indexing the WordPress tag, archive, and category pages from the Google index? Or would you suggest waiting a bit longer before doing something that drastic? Or do you have another idea of what I could do? I invite you to take a look at the site www.ghacks.net
Algorithm Updates | badabing
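On the de-indexing option raised above: the usual mechanism is a robots meta tag on the archive templates rather than a robots.txt block, so crawlers can still follow the internal links while the thin pages drop out of the index. A sketch of what the rendered head of a tag, category, or date archive would carry (most WordPress SEO plugins expose this as a setting):

    <!-- Rendered <head> of a tag/category/archive page -->
    <!-- noindex removes the page from the index; follow keeps
         its links crawlable so deeper posts are still reachable -->
    <meta name="robots" content="noindex, follow">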