If Google ignores links from "spammy" link directories...
-
Then why does SEOmoz have this list:
http://www.seomoz.org/dp/seo-directory?
Included in that list are some pretty spammy looking sites such as:
http://www.site-sift.com/
http://www.2yi.net/
http://www.sevenseek.com/
http://greenstalk.com/
http://anthonyparsons.com/
http://www.rakcha.com/
http://www.goguides.org/
http://gosearchbusiness.com/
http://funender.com/free_link_directory/
http://www.joeant.com/
http://www.browse8.com/
http://linkopedia.com/
http://kwika.org/
http://tygo.com/
http://netzoning.com/
http://goongee.com/
http://bigall.com/
http://www.incrawler.com/
http://rubberstamped.org/
http://lookforth.com/
http://worldsiteindex.com/
http://linksgiving.com/
http://azoos.com/
http://www.uncoverthenet.com/
http://ewilla.com/
-
Sounds like a loophole to me, but I'll take it!
Thanks for the advice!
-Storwell
-
I know what you mean and I agree, but the distinction lies in whether the directory charges for its editors' time to review your listing and site.
So it isn't technically a paid link.
It's just like a sponsorship: how could Google penalize you if you sponsored your local football team and they gave you a banner on their site as part of the deal?
-
But surely Google frowns on paid links, no?
100% of the directories listed above are paid.
-
The problem is that no directory is ever going to contain reams of pages full of excellent content.
Definition: a book listing individuals or organizations alphabetically or thematically, with details such as names, addresses, and telephone numbers.
So from another point of view: how can Google rank a directory at all?
The number of outgoing links has to be massive. If the directory does what it says on the tin and lists sites in the correct category, I don't see the problem.
-
Wow! I should have asked this question months ago!
As for "Define spammy" how about this:
A site that provides no actual service to the public, and purely exists to make money from manipulating search results.
Most of the sites in that list, including JoeAnt, look pretty useless to me. If someone sent me a link to one of those sites, I would assume they had a virus on their computer or something of the like. What actual purpose do these sites serve?
Do you honestly imagine a non-SEO visitor ever landing on one of these sites and saying to themselves, "Wow, I've found an excellent resource, I'm going to bookmark this page to help me find things in the future"?
-
I suppose, Ryan, the problem is how one classifies something as "spammy". As with all these things it can sometimes be quite subjective, and a few directories will fall into a potential grey area.
But by and large dodgy directories are easy to spot.
Common sense rules...
-
I agree with Gary.
What method did you use to classify these sites as "spammy"? JoeAnt is not spammy at all to my knowledge. I grabbed another directory from your list, anthonyparsons.com, and it does not seem even the slightest bit spammy.
-
I think the answer has to be: how do you judge what is and what isn't a crummy directory?
1. Does the directory give a full editorial review of all inclusions?
2. Does the site avoid outgoing links to Viagra, Cialis, etc.? (You get the picture.)
3. JoeAnt: good, right?
4. How relevant is the directory to your industry? Say I sell football kits: look for sports- and football-related directories. Listing your web page in a directory related to pharmaceuticals when you sell football kits is bad, right?
Use common sense and logic; when you land on the directory, look for the warning signs.
Don't use directories as your main source of links, but in my opinion a few good ones can help. They add to the diversity of your link profile.
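The checklist above could be sketched as a rough triage heuristic. This is purely illustrative: the function, its signals, and the spam keyword list are my own assumptions for demonstration, not any official Google criteria or a real tool's API.

```python
# Rough, illustrative heuristic for triaging a directory before submitting.
# All signals here are assumptions for demonstration, not official criteria.

SPAM_KEYWORDS = {"viagra", "cialis", "casino", "payday"}

def looks_worth_submitting(reviews_all_listings, outbound_anchor_texts,
                           directory_topics, my_topic):
    """Apply the basic sanity checks discussed above: editorial review,
    no pharma/casino spam among outbound links, topical relevance."""
    if not reviews_all_listings:                       # point 1: editorial review
        return False
    for anchor in outbound_anchor_texts:               # point 2: spammy neighbours
        if any(word in anchor.lower() for word in SPAM_KEYWORDS):
            return False
    return my_topic in directory_topics                # point 4: relevance

# Example: a sports directory considered by a site selling football kits
print(looks_worth_submitting(True, ["Local Football Clubs", "Sports Shops"],
                             {"sports", "football"}, "football"))  # True
```

In practice "common sense when you land on the directory" covers far more than three boolean checks, but the idea is the same: any single red flag is enough to skip the submission.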