Anyone drawn a Google Penguin?
-
Just a bit of light relief in the world of SEO. Hey, SEO people are funny and can laugh at themselves as well.
Our SEO team has had a go at drawing some penguins, and the results are funny to us as they are so childlike.
Anyone else fancy adding their drawing to the Flickr group?
This is just a bit of fun for us all!
-
I am waiting for someone to draw one with a monitor flying at him.
Related Questions
-
How Google could quickly fix the whole Links problem...
A Thursday morning brainstorm that hopefully an important Google manager will see... Google could quickly end all the problems of link buying, spammy links, and negative SEO with one easy step: only count the 100 best followed links to any domain. Ignore all the nofollows and everything beyond the 100 best. They can choose what "best" means. Suddenly links would be all about quality; quantity would not matter. Fiverr links, comment links, and all the other mass-produced spam links would literally be ignored. Unless that's all a domain had, in which case it would surely be stomped by any domain with 100 decent natural links. Would it be an improvement over today's situation?
Industry News | | GregB1230 -
How can I discover how many of my pages have been indexed by Google?
I am currently in the process of trying to produce a report for my corporation, and this is a metric that I cannot seem to find on OpenSiteExplorer. Could anyone help?
Industry News | | CF20150 -
Did Google Search Just Get Crazy Local?
Hey All, I think it's a known fact at this point that when you're signed into a personal Google account while searching, the results are heavily oriented around keywords and phrases you have already searched for, as well as your account's perceived location. For instance, when I wanted to check one of my own web properties in the SE listings, I would sign out, as otherwise it would likely appear first and give a false reading. Today I noticed something very interesting: even when not signed in, Google's listings were giving precedence to locality, and to a very extreme degree, as in when searching for "web design," a firm a mile away ranked higher than one 1.5 miles away, and so on. It would seem that the algos having this high a level of location sensitivity and preference would actually be a boon for the little guys, which I assume is why it was implemented. However, it brings up a couple of interesting questions for me.

1. How is this going to affect Moz (or any SE ranking platform, for that matter) reports? I assume that Google pulls locations from IP addresses, so would it not simply pull the local results most relevant to the Moz server(s) IP?

2. What can one do to rise above this aggressive level of location-based search? I mean, my site (which has a DA of 37 and a PA of 48) appears above sites like webdesign.org (DA of 82, PA of 85). Not that I'm complaining at the moment, but I could see this being a fairly big deal for larger firms looking to rank on a national level. What gives?

I'd love to get some opinions from the community here if anyone else has noticed this...
Industry News | | G2W1 -
Does Penguin Help Negative SEO?
With negative link targeting seeming to become 'standard practice' for more and more agencies and freelance SEOs (I, for one, have had to use the disavow tool far more than I ever thought I would), and with more "link building services" that really only build 'crap' links than there were back when that type of link building worked, I am honestly a bit afraid that Google is pushing SEOs to the 'dark side', or at least handing black-hat link builders a great tool for bringing down the competition. I had one SEO actually say to me, "If my client can't recover, then at least I can target everyone that jumped ahead of them and only spend around $300 on bad link building." This came from someone I NEVER thought would say anything of the sort, and it really got me thinking: "will this be the future of SEO?" I know the answer is no, but still, it seems more and more people are just throwing their hands up and targeting the competition rather than working on their own websites, and with updates like Penguin I am afraid that more of my time will be spent disavowing links than building them.
Industry News | | Vizergy0 -
Are WordPress sites being dinged by Google? I've read a few articles regarding this.
I read a couple of "SEO"-related articles claiming that sites built in WordPress are going to be dinged by Google because Google sees WordPress sites as simple to make, with a higher potential to be "spammy". Is there any truth to this? Your thoughts? I do give "thumbs up" and "best answer" marks and appreciate receiving thumbs up myself... Thanks
Industry News | | JChronicle1 -
My site was hit by the Penguin Update, Now What?
My site is very young, having only been up for about a month and a half. Despite being nascent, we were seeing a ton of organic traffic. Enter the Google Penguin update... Traffic is down 40% or so over the past week. So, assuming the damage has been done, the question is: what do we do next to start moving back towards where we were? If we're doing everything right, do we just chalk it up to the fact that our site is very new and stay the course on original content and link building? I know you can submit your site for reconsideration, which is something we are going to do this week, but I wonder what else we can do to start edging back to where we were pre-Penguin. Maybe this would be a good Whiteboard Friday topic.
Industry News | | knowyourbank0 -
Google Product Feeds - New Requirements
We are in the jewelry industry, and for Google product feeds, we list our products under "Apparel & Accessories > Jewelry". As of the new Google feed requirements, we have to choose a gender and color for each product in the Apparel category. While this makes sense for clothes, it doesn't exactly make sense for jewelry, because many items are for both men and women, and there's not always a color associated with each product. I can enter some of these fields manually, but with 5,000+ products, it makes it difficult with each update. Anyone have solutions for this? Or a way around it? Can we just include those fields but leave them blank? Any other solutions?
Industry News | | applesofgold1 -
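For the bulk-editing problem above, one workaround is to script the defaults into the feed instead of entering them by hand. A minimal sketch, assuming dict-style feed rows, and assuming "unisex" and "multicolor" are acceptable catch-all values for the gender and color attributes (check the current feed specification before relying on them):

```python
# Fill in gender/color defaults for an apparel product feed export.
# The field names follow Google's feed attribute names, but the
# dict-based rows and the default values chosen here are illustrative
# assumptions, not an official recommendation.
def fill_feed_defaults(rows, gender_default="unisex", color_default="multicolor"):
    for row in rows:
        # Only fill the attribute when it is missing or left blank.
        row["gender"] = row.get("gender") or gender_default
        row["color"] = row.get("color") or color_default
    return rows

feed = [
    {"id": "1", "title": "Gold ring", "gender": "", "color": "gold"},
    {"id": "2", "title": "Silver chain", "gender": "", "color": ""},
]
filled = fill_feed_defaults(feed)
print(filled[1]["gender"], filled[1]["color"])
```

A pass like this can run over the whole 5,000-product export on each update, so only genuinely gendered or colored items need manual values.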
What is the best method for getting pure JavaScript/AJAX pages indexed by Google for SEO?
I am in the process of researching this further and wanted to share some of what I have found below. Anyone who can confirm or deny these assumptions or add some insight would be appreciated.

Option 1: If you're starting from scratch, a good approach is to build your site's structure and navigation using only HTML. Then, once you have the site's pages, links, and content in place, you can spice up the appearance and interface with AJAX. Googlebot will be happy looking at the HTML, while users with modern browsers can enjoy your AJAX bonuses. You can use Hijax to help AJAX and HTML links coexist, and you can use meta nofollow tags etc. to prevent the crawlers from accessing the JavaScript versions of the page. Currently, webmasters create a "parallel universe" of content: users of JavaScript-enabled browsers see content that is created dynamically, whereas users of non-JavaScript-enabled browsers, as well as crawlers, see content that is static and created offline. In current practice, "progressive enhancement" in the form of Hijax links is often used.

Option 2: In order to make your AJAX application crawlable, your site needs to abide by a new agreement, which rests on the following. The site adopts the AJAX crawling scheme. For each URL that has dynamically produced content, your server provides an HTML snapshot, which is the content a user (with a browser) sees. Often, such URLs will be AJAX URLs, that is, URLs containing a hash fragment, for example www.example.com/index.html#key=value, where #key=value is the hash fragment. An HTML snapshot is all the content that appears on the page after the JavaScript has been executed. The search engine indexes the HTML snapshot and serves your original AJAX URLs in search results.

In order to make this work, the application must use a specific syntax in the AJAX URLs (let's call them "pretty URLs"; you'll see why shortly). The search engine crawler will temporarily modify these "pretty URLs" into "ugly URLs" and request those from your server. This request of an "ugly URL" indicates to the server that it should not return the regular web page it would give to a browser, but instead an HTML snapshot. When the crawler has obtained the content for the modified ugly URL, it indexes that content, then displays the original pretty URL in the search results. In other words, end users will always see the pretty URL containing a hash fragment. See the diagram summarizing the agreement in Google's Getting Started Guide. Make sure you avoid this:
http://www.google.com/support/webmasters/bin/answer.py?answer=66355

Here are a few example pages that are mostly JavaScript/AJAX: http://catchfree.com/listen-to-music#&tab=top-free-apps-tab https://www.pivotaltracker.com/public_projects This is what the spiders see: view-source:http://catchfree.com/listen-to-music#&tab=top-free-apps-tab

These are the best resources I have found regarding Google and JavaScript:
http://code.google.com/web/ajaxcrawling/ (step-by-step instructions)
http://www.google.com/support/webmasters/bin/answer.py?answer=81766
http://www.seomoz.org/blog/how-to-allow-google-to-crawl-ajax-content

Some additional resources:
http://googlewebmastercentral.blogspot.com/2009/10/proposal-for-making-ajax-crawlable.html
http://www.google.com/support/webmasters/bin/answer.py?answer=357690
Industry News | | webbroi
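The "pretty URL" to "ugly URL" handshake described in Option 2 can be sketched in a few lines. Note this is an illustrative helper, not Google's own code: the published AJAX crawling scheme keys on "#!" hash fragments and the _escaped_fragment_ query parameter.

```python
from urllib.parse import quote

def pretty_to_ugly(url):
    """Map an AJAX 'pretty URL' (with a #! hash fragment) to the
    'ugly URL' a crawler would request under the AJAX crawling
    scheme, using the _escaped_fragment_ query parameter."""
    if "#!" not in url:
        return url  # page has not opted in to the scheme
    base, fragment = url.split("#!", 1)
    separator = "&" if "?" in base else "?"
    # The crawler percent-encodes the fragment before requesting it.
    return base + separator + "_escaped_fragment_=" + quote(fragment, safe="=")

print(pretty_to_ugly("http://www.example.com/index.html#!key=value"))
# -> http://www.example.com/index.html?_escaped_fragment_=key=value
```

A server seeing the _escaped_fragment_ parameter returns the pre-rendered HTML snapshot instead of the normal page; the search results then show the original pretty URL.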