User generated content - manual warning from Google
-
Over the weekend our website received a large amount of spammy comments and user profiles on our forums. This has led to Google issuing us a partial manual action until we clear things up. So far we have:
- Cleared up all the spam, banned the offending user accounts, and temporarily enabled admin approval for new sign-ups.
We are currently investigating upgrading the forum software to the latest version in order to make the forums less susceptible to this kind of attack. Could anyone let me know whether they think it is the right time for us to submit a reconsideration request to get the manual action removed? Will the temporary actions we have taken be enough to get the ban lifted, or should we wait until the forum software has been updated?
I'd really appreciate any advice, especially from anyone here who has experienced this issue themselves.
-
Thanks both - I wasn't expecting that answer. I suppose you learn something every day. I have now submitted the reconsideration request so hopefully that will go through fine!
-
Hi,
Since you've fixed the issue, you can send the reconsideration request now. Make sure you explain, briefly, what you did and you should be fine.
Cheers.
-
My understanding is that you've now corrected your 'violation' (although you weren't in control of it) and therefore you can submit a reconsideration request. Google is unlikely to decline your reconsideration request based on the fact it might happen again, especially if it's your first warning.
Related Questions
-
Google Manual Penalty Lifted - Why is my website still decreasing on traffic?
Hi there, I was hoping somebody has a potential answer to this, or that someone else has experienced this issue. Our website was recently hit by a manual penalty (the structured data wasn't matching the content on the page). After working hard to fix the issue across the site, we submitted a reconsideration request, which Google approved a few days later. I understand that not all websites recover and that rankings aren't guaranteed to return to normal, but traffic seems to be continuing to drop at an even quicker rate. A number of small technical optimisations have been briefed into the dev team, such as redirecting duplicate versions and fixing redirects on internal links. There's also on-page work running in the background: fixing keyword cannibalization, consolidating content keyword mapping, and ensuring the internal link structure is sound. Has this happened to anyone else before? If so, how did you recover? Any suggestions/advice would be really appreciated. Thank you
Intermediate & Advanced SEO | | dbutler9120 -
How to fix Duplicate Content Warnings on Pagination? Indexed Pagination?
Hi all! So we have a WordPress blog that properly has pagination tags of rel="prev" and rel="next" set up for pages, but we're still getting crawl errors in Moz for duplicate content on all of our pagination pages. All of the paginated pages are also being indexed; I'm talking pages as deep as page 89 for the home page. Is this something I should ignore? Is it potentially hurting my SEO? If so, how can I start tackling a fix? Would "noindex" or "nofollow" be a good idea? Any help would be greatly appreciated!
Intermediate & Advanced SEO | | jampaper0 -
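Before deciding on noindex, it helps to confirm which signals each paginated URL actually emits in its `<head>`. Below is a minimal, standard-library Python sketch for auditing that (the sample markup and example.com URLs are hypothetical, and this is a local check, not Moz's crawler):

```python
from html.parser import HTMLParser

class PaginationAudit(HTMLParser):
    """Collects the pagination and indexing signals present in a page's <head>."""

    def __init__(self):
        super().__init__()
        self.signals = {"rel_prev": False, "rel_next": False, "robots": None}

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        # <link rel="prev"> / <link rel="next"> pagination hints
        if tag == "link" and attrs.get("rel") in ("prev", "next"):
            self.signals["rel_" + attrs["rel"]] = True
        # <meta name="robots" content="..."> indexing directive, if any
        if tag == "meta" and attrs.get("name", "").lower() == "robots":
            self.signals["robots"] = attrs.get("content")

def audit(html):
    parser = PaginationAudit()
    parser.feed(html)
    return parser.signals

# Hypothetical page 2 of a paginated blog archive
page2 = """<head>
<link rel="prev" href="https://example.com/blog/">
<link rel="next" href="https://example.com/blog/page/3/">
</head>"""
print(audit(page2))
```

Running this over each paginated URL (fetched however you like) shows at a glance which pages carry the prev/next tags and whether any robots directive is already present, before you change anything.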
Did Google Ignore My Links?
Hello, I'm a little new to SEO, but around 2 years ago my site was featured on some MAJOR tech blogs. For some reason, however, those links haven't been picked up in over 2 years - not even by Moz or other link-checker services. By now I should have had an amazing boost from this natural link building, but I'm not sure what happened. This was completely white hat, natural linking. The links were added after the articles were first published, though - would this affect things? Please let me know if you have any advice! Maybe I need to ping these somehow? Are these worthless? Thanks so much for your help! Here are some samples of the links that were naturally given to http://VaultFeed.com http://thenextweb.com/microsoft/2013/09/13/microsoft-posts-cringe-worthy-windows-phone-video-ads-mocking-apple/ http://www.theverge.com/2013/9/15/4733176/microsoft-says-pulled-iphone-parody-ads-were-off-the-mark http://www.theregister.co.uk/2013/09/16/microsoft_mocks_apple_in_vids_it_quickly_pulls/ http://www.dailymail.co.uk/sciencetech/article-2420710/Microsoft-forced-delete-cringe-worthy-spoof-videos-mocking-new-range-iPhones.html And a LOT more... Not sure if these links will never be valid, or maybe I'm doing something completely wrong? Is there any way for Google to recognize these now, so they'll also be seen by Moz and other tools? I've done a LOT of searching and I've seen no definitive advice for links that were added after the URL was first indexed by Google.
Intermediate & Advanced SEO | | DByers0 -
Scraped content ranking above the original source content in Google.
I need insights on how “scraped” content (exact copy-pasted version) can rank above the original content in Google. 4 original, in-depth articles published by my client (an online publisher) were republished by another company (which happens to be briefly mentioned in all four of those articles). We reckon the articles were re-published at least a day or two after the originals were published (the exact gap is not known). We find that all four of the “copied” articles rank at the top of Google search results, whereas the original content, i.e. my client's website, does not show up even in the top 50 or 60 results. We have looked at numerous factors such as Domain Authority, Page Authority, inbound links to both the original source and the URLs of the copied pages, social metrics etc. All of the metrics, as shown by tools like Moz, are better for the source website than for the re-publisher. We have also compared results in different geographies to see if any geographical bias was affecting results, the reason being that our client's website is hosted in the UK and the ‘re-publisher’ is from another country, but we found the same results. We are also not aware of any manual actions taken against our client's website (at least based on messages in Search Console). Are there any other factors that can explain this serious anomaly, which seems to be a disincentive for somebody creating highly relevant original content? We recognize that our client has the option to submit a ‘Scraper Content’ form to Google, but we are less keen to go down that route and more keen to understand why this problem could arise in the first place. Please suggest.
Intermediate & Advanced SEO | | ontarget-media0 -
Real Estate MLS listings - Does Google Consider duplicate content?
I have a real estate website. The site has all residential properties for sale in a certain state (MLS property listings). These properties also appear on hundreds of other real estate sites, as the data is pulled from a central place where all Realtors share their listings. Question: will having these MLS listings indexed and followed by Google increase the ratio of duplicate vs. original content on my website and thus negatively affect ranking for various keywords? If so, should I set the specific property pages to "noindex, nofollow" so my website will appear to have less duplicate content?
Intermediate & Advanced SEO | | khi50 -
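If the decision is to noindex the syndicated listing pages, a common pattern is "noindex, follow" rather than "noindex, nofollow", so crawlers still pass through the internal links on those pages. A small sketch of that rule (the page types and the `has_unique_content` flag are hypothetical illustrations, not real MLS fields):

```python
# Hypothetical page-type taxonomy; the rule itself is a common convention,
# not a documented Google requirement.
SYNDICATED_TYPES = {"mls_listing"}

def robots_meta(page_type, has_unique_content=False):
    """Return the robots meta tag to emit for a page.

    Syndicated MLS listings with no unique local commentary are noindexed
    but still followed, so internal link equity keeps flowing; pages with
    original content stay indexable.
    """
    if page_type in SYNDICATED_TYPES and not has_unique_content:
        return '<meta name="robots" content="noindex, follow">'
    return '<meta name="robots" content="index, follow">'

print(robots_meta("mls_listing"))
print(robots_meta("mls_listing", has_unique_content=True))
print(robots_meta("neighborhood_guide"))
```

The `has_unique_content` escape hatch matters: a listing page enriched with original photos, commentary, or neighborhood data arguably earns its place in the index even though the core MLS data is shared.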
What are your thoughts on Content Automation?
Hi, I want to ask forum members’ opinion on content automation. And before I raise the eyebrows of many of you with this question, I’d like to state that I am creating content and doing SEO for my own website, so I’m not looking to cut corners with spammy tactics that could hurt my website from an organic search perspective. The goal is to automate pages in the areas of headings, meta titles, meta descriptions, and perhaps a paragraph of content. More importantly, I’d like these pages to add value to the user's experience, so the question is: how do I go about automating the pages, and more specifically, how are meta titles, meta descriptions etc. automated? I’d also like to hear from people who recommend steering clear of any form of content automation. I hope my question isn’t too vague and I look forward to hearing from other Mozzers. Regards, Russell in South Africa
Intermediate & Advanced SEO | | Shamima0 -
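On the mechanics: automated meta titles and descriptions are usually rendered from templates over structured page data, with length capping so they don't truncate awkwardly in the SERP. A minimal sketch (the field names, brand, and the 60/155-character caps are illustrative assumptions; Google's real limits are pixel-based, not fixed character counts):

```python
def truncate(text, limit):
    """Trim on a word boundary so templated strings never exceed the cap."""
    if len(text) <= limit:
        return text
    return text[:limit].rsplit(" ", 1)[0].rstrip(",;:") + "…"

def build_meta(page):
    # Template the title and description from structured page data.
    title = f"{page['topic']} in {page['city']} | {page['brand']}"
    description = (f"Compare {page['topic'].lower()} in {page['city']}. "
                   f"{page['usp']}")
    return {"title": truncate(title, 60),
            "description": truncate(description, 155)}

# Hypothetical page record
page = {"topic": "Wedding Venues", "city": "Cape Town",
        "brand": "ExampleSite", "usp": "Updated weekly with verified reviews."}
print(build_meta(page))
```

The value-add question is separate from the templating: the data feeding the template (the `usp`, counts, freshness signals) is what makes each generated page worth indexing, not the template itself.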
Hit by Penguin, Can I move the content from the old site to a new domain and start again with the same content which is high quality
I need some advice please. My website got the unnatural links detected message and was hit by Penguin, hard. Can I move the content from the current domain to a new domain and start again, or does the content need to be redone as well? I will obviously turn off the old domain once it's moved. The other option is to try to identify the bad links and change my anchor profile, which is a hit-and-miss task in my opinion. Would it not be easier just to identify the good links pointing to the old domain and get those changed to point to the new domain with better anchors? Thanks, Warren
Intermediate & Advanced SEO | | warren0071 -
Getting a site to rank in both google.com and google.co.uk
I have a client who runs a yacht delivery company. He gets business from the US and the UK but due to the nature of his business, he isn't really based anywhere except in the middle of the ocean somewhere! His site is hosted in the US, and it's a .com. I haven't set any geographical targeting in webmaster tools either. We're starting to get some rankings in google US, but very little in google UK. It's a small site anyway, and he'd prefer not to have too much content on the site saying he's UK based as he's not really based anywhere. Any ideas on how best to approach this?
Intermediate & Advanced SEO | | PerchDigital0
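If the client ever does serve distinct UK and US page variants, the standard mechanism for that is reciprocal hreflang annotations rather than hosting location or a ccTLD. A sketch of generating the tags (the URL scheme is hypothetical, and this only applies once locale-specific URLs actually exist; a single set of URLs can't use hreflang):

```python
# Hypothetical URL scheme; hreflang values are ISO 639-1 language codes,
# optionally followed by an ISO 3166-1 region code.
ALTERNATES = {
    "en-us": "https://www.example.com/",
    "en-gb": "https://www.example.com/uk/",
    "x-default": "https://www.example.com/",  # fallback for everyone else
}

def hreflang_tags(alternates):
    """Every variant must emit the full set of alternates, including itself
    (hreflang annotations are only honored when they are reciprocal)."""
    return "\n".join(
        f'<link rel="alternate" hreflang="{lang}" href="{url}">'
        for lang, url in alternates.items()
    )

print(hreflang_tags(ALTERNATES))
```

Until then, a generic .com with no geotargeting set in Webmaster Tools is already the most country-neutral configuration available, and UK visibility has to come from relevance signals rather than targeting settings.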