Did I get hit with a Panda update?
-
I have a site that is a marketplace. We don't own any of the items; the sellers fill everything out, and then it goes up on the site. Many of our sellers also have their own sites and just send us a spreadsheet with all of their items, which we bulk-upload. In that case, what we are putting up is very similar to what they already have on their own sites.
I used the Fruition penalty checker, and it seems to suggest that we got hit with penalties for Panda and quality content. With the Google algorithm, it is hard to know for sure what we got hit with.
Is it possible Google sees us as one of those crappy scraper sites? Is there anything we can do? We never see the items, so I can't add to people's descriptions.
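One way to gauge how close the bulk-uploaded copy is to the sellers' own pages is to compare the descriptions directly before publishing. A minimal sketch, assuming hypothetical feed rows and a hypothetical 0.9 similarity threshold (neither is from the original post), using Python's standard-library difflib:

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Return a 0-1 similarity ratio between two descriptions, compared word by word."""
    return SequenceMatcher(None, a.lower().split(), b.lower().split()).ratio()

# Hypothetical feed rows: (item_id, our_description, seller_site_description)
feed = [
    ("sku-1", "Warm down sleeping bag rated to -10C", "Warm down sleeping bag rated to -10C"),
    ("sku-2", "Lightweight two-person tent", "Compact tent that sleeps two, under 2 kg"),
]

# Flag listings whose copy is near-identical to the seller's own page.
THRESHOLD = 0.9
for item_id, ours, theirs in feed:
    if similarity(ours, theirs) >= THRESHOLD:
        print(f"{item_id}: near-duplicate description, consider rewriting")
```

An audit like this won't remove a Panda problem by itself, but it shows which listings are pure duplicates and which already differ from the seller's site.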
-
It isn't like we have no traffic; we still get thousands of organic visits a day. It just isn't as much as it used to be.
We are the largest in our industry, nowhere near the size of Amazon or eBay, but still pretty large. So we have customers who sell exclusively through us. Most of our sellers are small businesses and don't have in-house technical teams, so it is easier for them to work with us.
I understand content marketing, and we are starting to do it, but if we have been hit with Panda, I don't know if it matters.
-
If your site is selling the same products that many other sites are, then the only way you can expect to rank well is if you are adding significant value that isn't available anywhere else.
It's not enough to say that your added value is the fact that users can find everything in one convenient place.
Let's say that I had an ecommerce store that sold camping equipment. But, everything in my store was also available on many other websites. If someone was searching for a particular type of sleeping bag, for example, why would Google want to show them my store rather than the original merchant?
But, let's say that my page that sells that sleeping bag also offers information such as the following:
- A guide to choosing the sleeping bag
- Unique customer reviews
- A video showing how to roll the bag up
and so on.
Now that I've got some good, helpful content, users may decide that rather than buying the product from the manufacturer, they'd like to browse my site, get that helpful information, and possibly buy from me.
In the past, you could get away with a site that simply carried product feeds, and you could rank it well if you built enough links. But, self-made links are less and less effective as Google gets better at figuring out what is legitimate. So now, if you want to rank well, you have to have a site that is truly helpful and unique...not just unique in the words on the page, but unique in the type of value you offer, which is not always an easy task.
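If you do collect unique customer reviews, as suggested above, you can also expose them to search engines as structured data on the product page. A minimal sketch, where the product name, review text, and rating are all hypothetical placeholders, that builds schema.org Product/Review JSON-LD with Python's json module:

```python
import json

# Hypothetical review data for the sleeping-bag example above.
review_markup = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Alpine 3-Season Sleeping Bag",
    "review": {
        "@type": "Review",
        "author": {"@type": "Person", "name": "Example Customer"},
        "reviewBody": "Kept me warm at 20F and packs down small.",
        "reviewRating": {"@type": "Rating", "ratingValue": "5"},
    },
}

# Emit the body of a <script type="application/ld+json"> tag for the product page.
print(json.dumps(review_markup, indent=2))
```

Markup alone doesn't make thin content valuable, but when the reviews genuinely exist only on your site, it helps search engines see that added value.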