How to stop downward drift
-
I have a site, spirithome.com (despite the name, a non-commercial site), that for reasons I don't know took a hit with Panda and has been drifting down ever since. What can someone do to stop an overall slow drift down? (PS: it just came off a predictable seasonal high; I'm talking about the overall picture.)
-
Also, don't wait for Google to tell you if you have duplicates.
That is what I was doing - silly me!
I did a comprehensive headline check and discovered more than a thousand duplicates in our system: a few years back, one of the editors was double-clicking the mouse when publishing stories, before I put in a mechanism to prevent it. I thought there were only occasional duplicates and that I'd fixed them, but it seems I hadn't!
So check every page on your site for duplicate titles, duplicate descriptions and duplicate content.
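That sweep can be scripted against a crawl or CMS export rather than done by hand. A minimal sketch in Python (the page list and URLs below are invented for illustration); the same function works for titles, descriptions, or body text:

```python
from collections import defaultdict

def find_duplicates(pages):
    """Group page URLs by an exact-match key (e.g. title or description).

    `pages` is an iterable of (url, text) pairs; returns a dict mapping
    each duplicated text value to the list of URLs that share it.
    """
    groups = defaultdict(list)
    for url, text in pages:
        # Normalize whitespace and case so trivial variations still match.
        key = " ".join(text.split()).lower()
        groups[key].append(url)
    return {key: urls for key, urls in groups.items() if len(urls) > 1}

# Example: one story published twice under the same headline.
pages = [
    ("/story-1", "Panda Update Hits Publishers"),
    ("/story-1-copy", "Panda  update hits publishers"),
    ("/story-2", "A Unique Headline"),
]
print(find_duplicates(pages))
# {'panda update hits publishers': ['/story-1', '/story-1-copy']}
```

Exact matching is enough to catch the double-click duplicates described above; near-duplicate body text would need fuzzier comparison (e.g. shingling), which this sketch does not attempt.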
-
My best guess at your problem is that your site has a lot of content that has been copied by other websites or has been republished from other websites.
The result is that you are being hit by the panda filter. That usually results in a site-wide reduction in search rankings.
I have seen this on one of my sites, where we republish articles at the request of educational institutions and government agencies. We removed a lot of that content from the Google index with the following line in the <head> of the HTML code:
<meta name="robots" content="noindex, follow" />
Rankings went back up after a lot of those pages were deindexed (though deindexing those pages cut a lot of our potential search traffic).
No guarantee that this is your problem. Just my best guess.
Read Alan Gray's answer and follow his 5 suggestions if you want other actionable ideas.
I believe that your problem will require surgery and hard work.
-
I'm still looking for some working answers.
-
"took a hit": like, permanently losing 25% directly upon Panda's US release, and another 12% upon its worldwide release. It was quite obvious.
SEOmoz and GA show no evidence that rewritten pages do any better than the more-or-less static ones (no pages have been totally static). The three pages that are less than a year old have pretty much failed to register at all with Google, even though they're in the index, have social links, and rate an A with SEOmoz page analysis. (This doesn't count devotionals.) Two of the pages are on what has become pretty much a hot topic in the media, too.
I'm hoping that the blog (coming in a few weeks) helps. But I still need to know how to kill the site's overall downdraft - roughly 80 fewer visitors a week, each of the past 4 weeks. And pages that have been top 10 are now dropping 10-20 positions from where they were, despite content and page-analysis improvements.
-
"indexing time stamp"
I can say with absolute confidence that an "indexing time stamp" either does not exist at Google, or it does not work at least half of the time.
From what I have seen, Panda is a domain-level throttle that impacts sites that trip an invisible wire. Duplicate content (you duped somebody or somebody duped you) and slapping visitors' faces with ads are possible locations for the wire.
Google has no reliable way to know the originator of content unless author pages and reciprocal rel="me" links are in place - and those can be forged by others.
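A reciprocal rel="me" pair is just two links pointing at each other: one on the content page, one on the author profile. A rough sketch of how you might verify such a pair with Python's stdlib parser (the URLs and markup are hypothetical):

```python
from html.parser import HTMLParser

class RelMeFinder(HTMLParser):
    """Collect href values of <a> or <link> elements carrying rel="me"."""
    def __init__(self):
        super().__init__()
        self.hrefs = []

    def handle_starttag(self, tag, attrs):
        if tag not in ("a", "link"):
            return
        attrs = dict(attrs)
        rels = (attrs.get("rel") or "").lower().split()
        if "me" in rels and attrs.get("href"):
            self.hrefs.append(attrs["href"])

def links_to(page_html, target_url):
    """True if the page contains a rel="me" link pointing at target_url."""
    finder = RelMeFinder()
    finder.feed(page_html)
    return target_url in finder.hrefs

# Reciprocal check: the article links to the profile, and vice versa.
article = '<a rel="me" href="https://profiles.example/author">My profile</a>'
profile = '<link rel="me" href="https://example.com/article" />'
reciprocal = (links_to(article, "https://profiles.example/author")
              and links_to(profile, "https://example.com/article"))
print(reciprocal)  # True
```

As noted above, none of this is proof of origin - a scraper can copy the markup too - but the reciprocal requirement at least raises the bar, since the profile side is under the author's control.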
-
"If you take a snippet of text that is keyword rich, and slap it on 10 other sites, they will almost always (in google search) outperform your whole unique story."
Alan, I'm missing something or not getting it (perhaps both).
I understood there was an "indexing time stamp" in Google that helped Google identify the original content and therefore punish those that scraped it.
Is this not the case? I thought Panda was supposed to enhance that ability rather than do just the opposite and punish the origins of the content.
-
Alan, that is part of what I believe Google really wants. With this and rel=author, there should be the ability to at least begin to mark ownership and provide a timestamp-type trail. By virtue of the content originally linking to EGOL, it is then "his." That won't stop duplication of the content, but it likely provides a point of comparison in the "who wrote it first" equation.
I just read through it the first time you posted and thought it was good. When you pulled it out in relation to EGOL's response, I thought, "Duhhh, I missed its importance in the moment." Great job, Alan.
-
Thank you Alan. I think that is a really good idea.
On the site with the long articles, all of the content that I have written appears there without attribution. I have hundreds of articles on the site and enjoy them being anonymous.
However, I know that your suggestion might fix or reduce the problem. I've thought about doing it in the past. I need to put some thought into claiming the content. It would probably increase my income.
Thanks for making me think about this again. You deserve more than one thumbs up for your reply.
-
Boodreaux
If you haven't already done this, consider doing my #5 in the answer below.
That should brand your ID into your text and photos.
Here is what happens with Google - which is not as smart as we (and they) think it is.
If you take a snippet of text that is keyword rich, and slap it on 10 other sites, they will almost always (in google search) outperform your whole unique story.
-
EGOL,
If you haven't already done this, consider my #5 in the answer below.
-
I have 540 sites that are grabbing my jobs and re-posting them without permission. I hope I am not in line to get hit by Panda.
-
I have some pages that once performed well and suddenly started to decline. They carried product descriptions that I had written a few years ago. I looked for dupe content and found that lots of domains featuring products "made in China" had copied my text. I don't know if that was the cause, but maybe.
I also have a couple of articles, a couple thousand words each, on a topic that is currently getting a lot of search. I noticed that my long-tail traffic for these pages was declining. I found that a lot of spam sites (dozens and dozens) had grabbed a few sentences from my site and a few from several other sites, slapped them together, and were now ranking for very long-tail queries.
-
Interesting. What would compel the Oracle to grab the block of text and do such a thing?
-
Robert,
EGOL and Robert are both right.
There are some things you can do to stop the drift, and possibly reverse it, but that is more effort.
I'm out with my iPad, so the formatting of this may not be good.
First, look for content that is 100% yours and see if that is falling too.
- let me know what you find.
Fix the red-group problems that SEOmoz tells you about.
Then:
1. Go to your WMT account and see if there are any problems listed - fix them.
2. Check to see if you have any outgoing links that are broken - fix them.
3. Check for duplicate titles, descriptions, and content - fix them, or delete and redirect.
4. Rewrite posts that are not uniquely yours - this is the hard work, especially if you have a lot of pages.
5. Get a G+ profile if you don't already have one, link your site to that, and link that to your site.
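Step 2 of the list above can be partly automated: extract the outgoing anchors from a page, then issue a HEAD request for each. A sketch in Python (the sample HTML is illustrative; a real crawl would add rate limiting, retries, and a fallback to GET for servers that reject HEAD):

```python
from html.parser import HTMLParser
from urllib.request import Request, urlopen
from urllib.error import URLError, HTTPError

class LinkExtractor(HTMLParser):
    """Collect absolute hrefs from <a> tags on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href and href.startswith("http"):
                self.links.append(href)

def check_link(url, timeout=10):
    """Return the HTTP status for `url`, or None if the request fails."""
    try:
        req = Request(url, method="HEAD")
        return urlopen(req, timeout=timeout).status
    except HTTPError as e:
        return e.code
    except URLError:
        return None

extractor = LinkExtractor()
extractor.feed('<p><a href="http://example.com/page">link</a></p>')
print(extractor.links)
# In a real run, report any link where check_link(link) is not 200.
```

Anything returning 404 (or None) is a candidate for fixing or removing.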
-
I visited random pages on the site, grabbed a random block of text, placed it between quotation marks in the Google search box, and submitted the query. I did this five times, and four of them returned duplicate content on other domains. These grabs of content were not Bible quotes. They were information that I would assume to be unique to this site, as they were unattributed.
A site that has a lot of duplicate content - whether duplicated with permission, under license, or plagiarized by others - can suffer from Panda.
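The grab-a-random-block check described above is easy to semi-automate: pick a run of consecutive words from a page's text and build the exact-match (quoted) Google query URL to paste into a browser. A sketch; the sample text and word count are arbitrary:

```python
import random
from urllib.parse import quote_plus

def quoted_search_url(text, words=10, seed=None):
    """Pick a random run of `words` consecutive words from `text` and
    return a Google search URL with the snippet in quotes, for an
    exact-match duplicate check."""
    tokens = text.split()
    rng = random.Random(seed)
    start = rng.randrange(max(1, len(tokens) - words + 1))
    snippet = " ".join(tokens[start:start + words])
    return "https://www.google.com/search?q=" + quote_plus(f'"{snippet}"')

page_text = "This is a block of unattributed text that should be unique to the site"
print(quoted_search_url(page_text, words=5, seed=1))
```

Running it a handful of times per page reproduces the five-sample spot check; any query whose top results are other domains is a lead worth chasing.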
-
Robert,
I am trying to help, so please do not take this as an affront. You are talking of a site as opposed to a page. What is ranked is a page; are you seeing each page slowly dropping, or are you seeing the home page drop and referring to that as the site dropping?
If it is the homepage that is at issue, for what keywords are you having an issue? I am curious about "took a hit with Panda" - what did you see that led to that assumption?
Generally, if a page is drifting downward, it is the result of stale content, no traffic, etc. I like to look at those issues with a methodology that runs from keyword query, to meta description, to on-site content. What are you seeing as to traffic over time?
I realize this is not an answer, but hopefully the clarification will assist in helping you.
Best