Tuesday, July 12th: We suddenly lost all our top Google rankings. Traffic cut in half. Ideas?
-
The attached screenshot shows it all.
The Panda update hit us hard: we lost half our traffic.
Three months later, a Panda tweak gave us our traffic back.
Now, this past Tuesday, we lost half our traffic again, along with ALL our top-ranking keywords/phrases on Google (our keywords on all other search engines are holding rank fine).
Did they tweak their algorithm again? What are we doing wrong?
-
Couldn't they edit out the links back to your site?
-
Something to throw into the mix, from a guy I know who was also hit hard by the Panda update: check the CNAME of your incoming links. Do your incoming links all come from one particular source or host?
-
It's a possibility. There have been ongoing discussions on internal linking on most of the major SEO forums for quite some time.
My personal feeling is that most websites can effectively reduce their internal link count by auditing their link flow, identifying their "ideal" real estate, and making sure they aren't duplicating unnecessary links, which steal juice that might otherwise flow to the bigger pages.
Fewer than 5% of visitors ever see the footer on the average website, so my opinion has always been that the footer should contain supporting links that help the user in context. Contact, About, Sitemap, and Investors, for example, are classic links one might find there.
Big real estate - that is, the important pages on the website - should be linked from your main nav or from above-the-fold areas with lots of user exposure.
Keep in mind that changing and removing links is a process. Do not go in and remove all, or a significant portion, of your links in one week.
Make one or two good changes, wait a week or so, then make other small changes over time.
Hope this helps.
Todd
www.seovisions.com -
OK. Well, my suggested strategy would be:
- Go through the list Todd gave to make sure that there is nothing wrong on your site. If nothing turns up, you can probably assume it was a Google algorithm tweak, so proceed to...
- Work on improving your site, starting with any areas you know of that might be an issue. I would say any content that is not unique would be a good place to start.
-
If you think people are scraping your content, make sure you link back to your own pages within the body of your content, so that you always get links back when it's copied.
I noticed that when scraper sites pick up SEOmoz content, they don't pick up the footer with the author links back. So make sure to keep links in the actual body text.
I also saw a number of sites that pull SEOmoz content into an iframe on their site. Places like Twitter and Flickr detect when their pages are opened in an iframe and show an error message -> tabs.to/POL-Lp
-
Thanks - reading up on rel=author now. Looks kinda complicated from the Google instructions.
-
Great advice - I especially like the suspicious inbound links idea. Never thought of that.
We have had experts say that our site averages too many links per page, and that we should consolidate the links in the sitewide footer, since the footer adds so many links to every page.
Is this a good idea? Are we being hurt by having so many outbound links and such a link-heavy footer?
Thanks
-
Place links within your content too if people are taking your content, and also add the rel=author attribute.
I would also look at the link profile for your site - have you added any dodgy links recently?
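If you export your inbound links (from Webmaster Tools or a backlink tool), a quick grouping by referring host can show whether they are concentrated on one source, as raised earlier in the thread. A minimal Python sketch; the URLs are hypothetical placeholders for your real export:

```python
from collections import Counter
from urllib.parse import urlsplit

# Hypothetical export of inbound link URLs - swap in your real list.
inbound_links = [
    "http://blogspam.example/page1",
    "http://blogspam.example/page2",
    "http://blogspam.example/page3",
    "http://gardenforum.example/thread/42",
]

# Count links per referring host.
hosts = Counter(urlsplit(u).hostname for u in inbound_links)
total = sum(hosts.values())

for host, n in hosts.most_common():
    share = n / total
    flag = "  <-- concentrated, investigate" if share > 0.5 else ""
    print(f"{host}: {n} links ({share:.0%}){flag}")
```

Any host holding more than half your links is worth a closer look.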
Regards.
-
Drops such as the one you have experienced can be difficult to assess. I would advise the following procedures to rule out other issues first. As Egol correctly stated, it is important not to jump to fixes right away, for two reasons. First, the situation could be temporary and could revert. Second, changes you make will obscure the potential issues, making it more difficult to find the problem spots.
1. Check robots.txt to ensure no recently added rules are blocking major pages
2. Check the link canonical tag, if you use it, to ensure it isn't pointing at incorrect URLs
3. Check inbound links, using both inbound link tools and Webmaster Tools, for any suspicious bursts of links, or links that look dodgy that you can't account for
4. Run a sitewide check on all titles and meta descriptions and ensure everything is correct. There are software companies that offer fairly inexpensive options that will spider the entire website relatively quickly. Do this late at night, after the traffic swell
5. Use Xenu to find all broken links and fix them, even if there are only a few
6. Run Fetch as Googlebot in GWT and check for any instances of funny code or potential problems
7. Analyze your analytics to determine which keyword clusters lost the most positioning. This can often give you clues as to what might have happened.
Hope this helps.
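The robots.txt check in step 1 can be partly automated with Python's standard library parser. A minimal sketch; the robots.txt rules and page list here are hypothetical, so substitute your live file and your own major URLs:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt content - in practice, fetch your live file.
ROBOTS_TXT = """\
User-agent: *
Disallow: /admin/
Disallow: /search
"""

# Hypothetical list of pages that matter most for rankings.
MAJOR_PAGES = ["/", "/raised-garden-beds/", "/admin/settings"]

rp = RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

for path in MAJOR_PAGES:
    allowed = rp.can_fetch("Googlebot", "https://example.com" + path)
    print(("OK      " if allowed else "BLOCKED ") + path)
```

Any important page that prints BLOCKED points at a robots.txt rule worth removing.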
Todd
www.seovisions.com -
Study this... implement carefully....
http://www.google.com/support/webmasters/bin/answer.py?answer=1229920
-
So in the footer of any article we publish by another author, we should have a link to that author's original article with rel="author" in it?
-
Hold your fire. Your traffic might come back tomorrow.
However, it looks like you are on the edge of whatever Google does not like because you are flashing in and out.
Hang in there, keep working to improve and iron out any problems that you think might be causing this.
-
One thing that I would do is implement the rel="author" attribute in a link to your author pages if you have not already done that.
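As a quick sanity check that the attribute actually made it into your templates, a small stdlib parser can list every rel="author" link on a page. A sketch; the article markup and author URL are hypothetical:

```python
from html.parser import HTMLParser

class RelAuthorFinder(HTMLParser):
    """Collects hrefs of <a> or <link> tags carrying rel="author"."""
    def __init__(self):
        super().__init__()
        self.author_links = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag in ("a", "link") and "author" in (a.get("rel") or "").split():
            self.author_links.append(a.get("href"))

# Hypothetical article markup - the byline links back to the author page.
html = ('<article><p>Guest post by '
        '<a rel="author" href="https://example.com/authors/jane">Jane Doe</a>.'
        '</p></article>')

finder = RelAuthorFinder()
finder.feed(html)
print(finder.author_links)
```

An empty list on a page that should carry the attribute means the template change didn't take.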
-
That page has been on our site for 10 years, and as you can see, many, many people are ripping it off, along with all the other pages on our site.
Would you recommend an aggressive campaign of "cease and desist" emails to these plagiarizers, coupled with the addition of more unique content?
-
All of our top-ranked articles are unique, although many people rip them off, and it's hard as heck for us to track them all down and get them to remove our content.
I recently did this for our "Raised Garden Bed" page, which contributes a huge amount of our traffic. It was very labour-intensive.
Our blog has a lot of articles that other authors have written and that are republished elsewhere.
-
Adam is asking the right question. On http://eartheasy.com/live_water_saving.htm, I grabbed some text "Many beautiful shrubs and plants thrive with far less watering than other species. Replace herbaceous perennial borders with native plants. Native plants will use less water and be more resistant to local plant diseases." That text appears on a bunch of sites. I tried this with several other phrases from different pages on your site, and almost every time several other sites shared identical text to yours.
Google is penalizing you because your content is identical to a bunch of other sites. The more unique, original content you have, the more you should see your rankings rise.
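One low-effort way to repeat this check yourself is to pull the longest sentences from a page and search for each in quotes; identical hits on other domains are your scrapers. A rough Python sketch, using the snippet quoted above as sample text:

```python
import re

def distinctive_phrases(text, min_words=10, max_phrases=3):
    """Pick the longest sentences as exact-match search candidates."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    candidates = [s.strip() for s in sentences if len(s.split()) >= min_words]
    candidates.sort(key=len, reverse=True)
    return candidates[:max_phrases]

page_text = (
    "Many beautiful shrubs and plants thrive with far less watering than "
    "other species. Replace herbaceous perennial borders with native plants. "
    "Native plants will use less water and be more resistant to local plant "
    "diseases."
)

for phrase in distinctive_phrases(page_text):
    print(f'"{phrase}"')  # paste each into Google, quotes included
```

Longer sentences make better probes because short generic phrases match everywhere.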
-
Are your articles unique or are they syndicated?