DMOZ help
-
So yesterday I got a DMOZ editor account. I would like to know if Google indexes the editor profile pages on DMOZ:
http://www.dmoz.org/public/profile?editor=
here are some examples
http://www.dmoz.org/public/profile?editor=thehelper
http://www.dmoz.org/public/profile?editor=raph3988
http://www.dmoz.org/public/profile?editor=skasselea
I would like to know whether it is worthwhile to build up this page so it will pass link juice. And can anyone tell me how frequently Google crawls for new editors (if that's even possible to know)?
-
Hello,
I wouldn't bet on it, but there's no harm in trying.
-
You can confirm this yourself.
First, do a Google search for site:http://www.dmoz.org/public/profile?editor=
You see how the meta descriptions aren't indexed in the results? Instead, Google puts a default message, with a link to this page: https://support.google.com/webmasters/bin/answer.py?hl=en&answer=156449 - check that out. Note the paragraph:
"While Google won't crawl or index the content of pages blocked by robots.txt, we may still index the URLs if we find them on other pages on the web. As a result, the URL of the page and, potentially, other publicly available information such as anchor text in links to the site, or the title from the Open Directory Project (www.dmoz.org), can appear in Google search results."
So whilst they may appear in Google's index (and indeed the OSE one) because of the links pointing to them, the content isn't crawled at all (by any spiders that obey robots.txt).
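If you'd rather check this programmatically than eyeball the search results, here's a minimal sketch using Python's standard-library robots.txt parser (the editor URL is just one of the examples above):

```python
from urllib.robotparser import RobotFileParser

# Fetch and parse DMOZ's robots.txt
parser = RobotFileParser("http://www.dmoz.org/robots.txt")
parser.read()

# May a well-behaved crawler fetch an editor profile page?
editor_url = "http://www.dmoz.org/public/profile?editor=thehelper"
print(parser.can_fetch("Googlebot", editor_url))              # False if /public is disallowed
print(parser.can_fetch("Googlebot", "http://www.dmoz.org/"))  # True - the homepage is crawlable
```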
-
Oh yes, he is correct - good call Neil. I had no idea that the robots.txt would be publicly accessible. I've actually never seen a site have their robots.txt visible... I guess it's the "open source"...
-
Can anyone confirm this?
-
Take a look at their robots.txt - http://www.dmoz.org/robots.txt
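From memory, the relevant rules look something like this (abridged - check the live file for the full list):

```
User-agent: *
Disallow: /editors
Disallow: /public
```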
They disallow the /public and /editors subfolders. The editor pages, whilst indexed by Google, aren't crawled. So whilst the location of the pages themselves is indexed (because of links to those pages), the contents of the pages aren't indexed - and that obviously includes any links on the page.
For this reason, I don't agree with Reload Media. For me, there's no point expending any effort promoting the page for link equity benefit.
The fact they show good authority on OSE is something of an anomaly. They can accrue authority (and indeed Google PR) from their inbound links; however, they are a bit of a dead end, due to the fact that no actual content is indexed.
-
Hi Raphael,
Well done on getting an editor account. Remember, with great power comes great responsibility!
Yes, they do get indexed. The way to check this is to Google the URL in quotes, i.e. "http://www.opensiteexplorer.org/links?site=www.dmoz.org%2Fpublic%2Fprofile%3Feditor%3Dthehelper"
Some of those editor pages have great authority: http://www.opensiteexplorer.org/links?site=www.dmoz.org%2Fpublic%2Fprofile%3Feditor%3Dthehelper
If it's related to your niche, then it would be worth pursuing.
Hope that helps
Iain - Reload Media
-
Using http://pro.seomoz.org/tools/on-page-keyword-optimization, you can check individual pages. In the keyword field I put "thehelper" and in the URL field I put http://www.dmoz.org/public/profile?editor=thehelper... so it seems like it does get indexed :)
Related Questions
-
Please help us understand what we need to improve so that Google's crawler visits us more often to reindex pages from our domain
We are currently in the middle of a massive project which involves migrating our domain, and we realised that Google's crawler has not been crawling our pages very often. I have observed cases where Google crawled pages about 6 months back and then never visited them again, and we had to manually submit these pages for reindexing in some geographies. Can you please help us understand what we need to improve so that Google's crawler visits us more often to reindex pages from our domain?
Intermediate & Advanced SEO | bhaskaran
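A quick first step while diagnosing this is to re-ping Google whenever the sitemap changes. A minimal Python sketch, using Google's sitemap ping endpoint (supported at the time of writing) and a hypothetical sitemap URL:

```python
import urllib.parse
import urllib.request

# Hypothetical sitemap location - replace with your own
sitemap_url = "https://www.example.com/sitemap.xml"

# Google's ping endpoint takes the sitemap location as a query parameter
ping = "https://www.google.com/ping?sitemap=" + urllib.parse.quote(sitemap_url, safe="")
with urllib.request.urlopen(ping) as response:
    print(response.status)  # 200 means the ping was received
```
-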
Do outbound links to manufacturer specs, PDFs help or hurt SEO?
I am creating an e-commerce site. All the products have product certification documents/images, PDF docs for instructions, manufacturer specs, etc. Should I host all this content or simply link to the original documents and content? What is best for SEO? Thank you,
Intermediate & Advanced SEO | Jamesmcd03
-
HTTP to HTTPS Migration Gone Wrong - Please Help!
We have a large (25,000-product) ecommerce website, and we did an HTTP=>HTTPS migration on 3/14/17, and our rankings went in the tank, but they are slowly coming back. We initially lost 80% of our organic traffic; we are currently down about 50%. Here are some of the issues. In retrospect, we may have been too aggressive in the move:
We didn't post our old sitemaps on the new site until about 5 days into the move.
We created a new HTTPS property in Search Console.
Our redirects were 302, not 301.
We also had some other redirect issues.
We changed our URL taxonomy from http://www.oursite.com/category-name.html to https://www.oursite.com/category-name (removed the .html).
We changed our filters plugin. Proper canonicals were used, but the filters can generate N! canonical pages. I added some parameters (and posted to Search Console) and noindex for pages with multiple filter choices to cut down on our crawl budget yesterday.
Here are some observations:
Google is crawling like crazy. Since the move, 120,000+ pages per day. These are clearly the filtered pages, but they do have canonicals.
Our old sitemaps got error messages "Roboted Out". When we test URLs in Google's robots.txt tester, they test fine. Very odd.
At this point, in Search Console:
a. The HTTPS property has 23,000 pages indexed
b. The HTTP property has 7,800 pages indexed
c. The crawl of our old category sitemap (852 categories) is still pending, and it was posted and submitted on Friday 3/17.
Our average daily organic traffic in Search Console before the move was +/-5,800 clicks. The most recent Search Console data shows HTTP: 645 clicks, HTTPS: 2,000 clicks. Our rank tracker shows a massive drop over 2 days, bottoming out, and then some recovery over the next 3 days. The HTTP site is showing 500,000 backlinks; HTTPS is showing 23,000 backlinks. I am planning on resubmitting the old sitemaps today in an attempt to remap our redirects to 301s. Is this typical? Any ideas?
Intermediate & Advanced SEO | GWMSEO
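Swapping the 302s for 301s is the obvious first fix here. Assuming an Apache host (an assumption - the question doesn't say), a blanket permanent redirect to HTTPS in .htaccess looks something like this:

```apache
# Permanently (301) redirect every HTTP request to its HTTPS equivalent
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}/$1 [R=301,L]
```
-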
Avoiding Duplicate Content with Used Car Listings Database: Robots.txt vs Noindex vs Hash URLs (Help!)
Hi guys, we have developed a plugin that allows us to display used vehicle listings from a centralized, third-party database. The functionality works similarly to autotrader.com or cargurus.com, and there are two primary components:
1. Vehicle Listings Pages: this is the page where the user can use various filters to narrow the vehicle listings to find the vehicle they want.
2. Vehicle Details Pages: this is the page where the user actually views the details about said vehicle. It is served up via Ajax, in a dialog box on the Vehicle Listings Pages. Example functionality: http://screencast.com/t/kArKm4tBo
The Vehicle Listings pages (#1) we do want indexed and to rank. These pages have additional content besides the vehicle listings themselves, and those results are randomized or sliced/diced in different and unique ways. They're also updated twice per day.
We do not want to index #2, the Vehicle Details pages, as these pages appear and disappear all of the time, based on dealer inventory, and don't have much value in the SERPs. Additionally, other sites such as autotrader.com, Yahoo Autos, and others draw from this same database, so we're worried about duplicate content. For instance, entering a snippet of dealer-provided content for one specific listing that Google indexed yielded 8,200+ results: Example Google query.
We did not originally think that Google would even be able to index these pages, as they are served up via Ajax. However, it seems we were wrong, as Google has already begun indexing them. Not only is duplicate content an issue, but these pages are not meant for visitors to navigate to directly! If a user were to navigate to the URL directly from the SERPs, they would see a page that isn't styled right. Now we have to determine the right solution to keep these pages out of the index: robots.txt, noindex meta tags, or hash (#) internal links.
Robots.txt advantages:
Super easy to implement
Conserves crawl budget for large sites
Ensures the crawler doesn't get stuck. After all, if our website only has 500 pages that we really want indexed and ranked, and vehicle details pages constitute another 1,000,000,000 pages, it doesn't seem to make sense to make Googlebot crawl all of those pages.
Robots.txt disadvantages:
Doesn't prevent pages from being indexed, as we've seen, probably because there are internal links to these pages. We could nofollow these internal links, thereby minimizing indexation, but this would lead to 10-25 nofollowed internal links on each Vehicle Listings page (will Google think we're PageRank sculpting?)
Noindex advantages:
Does prevent vehicle details pages from being indexed
Allows ALL pages to be crawled (advantage?)
Noindex disadvantages:
Difficult to implement (vehicle details pages are served using Ajax, so they have no <head> tag). The solution would have to involve the X-Robots-Tag HTTP header and Apache, sending a noindex header based on querystring variables, similar to this stackoverflow solution. This means the plugin functionality is no longer self-contained, and some hosts may not allow these types of Apache rewrites (as I understand it)
Forces (or rather allows) Googlebot to crawl hundreds of thousands of noindex pages. I say "force" because of the crawl budget required. The crawler could get stuck/lost in so many pages, and may not like crawling a site with 1,000,000,000 pages, 99.9% of which are noindexed.
Cannot be used in conjunction with robots.txt. After all, the crawler never reads the noindex meta tag if it is blocked by robots.txt
Hash (#) URL advantages:
By using hash (#) URLs for links from Vehicle Listing pages to Vehicle Details pages (such as "Contact Seller" buttons), coupled with Javascript, the crawler won't be able to follow/crawl these links. Best of both worlds: crawl budget isn't overtaxed by thousands of noindex pages, and internal links used to index robots.txt-disallowed pages are gone.
Accomplishes the same thing as "nofollowing" these links, but without looking like PageRank sculpting (?)
Does not require complex Apache stuff
Hash (#) URL disadvantages:
Is Google suspicious of sites with (some) internal links structured like this, since it can't crawl/follow them?
Initially, we implemented robots.txt - the "sledgehammer solution." We figured that we'd have a happier crawler this way, as it wouldn't have to crawl zillions of partially duplicate vehicle details pages, and we wanted it to be like these pages didn't even exist. However, Google seems to be indexing many of these pages anyway, probably based on internal links pointing to them. We could nofollow the links pointing to these pages, but we don't want it to look like we're PageRank sculpting or something like that. If we implement noindex on these pages (and doing so is a difficult task in itself), then we will be certain these pages aren't indexed. However, to do so we will have to remove the robots.txt disallowal, in order to let the crawler read the noindex tag on these pages. Intuitively, it doesn't make sense to me to make Googlebot crawl zillions of vehicle details pages, all of which are noindexed, and it could easily get stuck/lost/etc. It seems like a waste of resources, and in some shadowy way bad for SEO. My developers are pushing for the third solution: using the hash URLs. This works on all hosts and keeps all functionality in the plugin self-contained (unlike noindex), and conserves crawl budget while keeping vehicle details pages out of the index (unlike robots.txt). But I don't want Google to slap us 6-12 months from now because it doesn't like links like these. Any thoughts or advice you guys have would be hugely appreciated, as I've been going in circles, circles, circles on this for a couple of days now. Also, I can provide a test site URL if you'd like to see the functionality in action.
Intermediate & Advanced SEO | browndoginteractive
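For reference, the X-Robots-Tag/Apache approach described above looks roughly like this - the "vehicle_id" querystring parameter is hypothetical, and mod_rewrite and mod_headers must be enabled:

```apache
RewriteEngine On
# Flag requests for vehicle details pages (hypothetical "vehicle_id" parameter)
RewriteCond %{QUERY_STRING} (^|&)vehicle_id= [NC]
RewriteRule .* - [E=NOINDEX_PAGE:1]

# Send a noindex header on flagged requests - no <head> tag required
Header set X-Robots-Tag "noindex" env=NOINDEX_PAGE
```
-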
Website is not indexed in Google, please help with suggestions
Our client's website was removed from Google's index. Can anybody recommend how to speed up the process of reindexing? So far:
Webmaster Tools - done
Social media (Twitter, FB) - done
sitemap.xml - done
Backlinks - in progress
PPC - done
robots.txt is fine
Any recommendations are welcome; the client is very unhappy. Thank you
Intermediate & Advanced SEO | ThinkBDW
-
End of March we migrated our site over to HubSpot. We went from page 3 on Google to nonexistent. Still found on page 2 of Yahoo and Bing. Beyond frustrated... HELP PLEASE "www.vortexpartswashers.com"
End of March we migrated our site over to HubSpot. We went from page 3 on Google to nonexistent. Still found on page 2 of Yahoo and Bing under the same keywords "parts washers". Beyond frustrated... HELP PLEASE "www.vortexpartswashers.com"
Intermediate & Advanced SEO | mhart
-
Category Pages up - Product Pages down... what would help?
Hi, I mentioned yesterday how one of our sites was losing rank on product pages. What steps do you take to improve the SERPs of product pages? In this case, home/category/product is the tree. There isn't really any internal linking, except one link from the category page to each product. Would setting up a host of internal links, perhaps "similar products" sections linking them together, be a place to start? How can I improve the ranking of these more deeply nested pages, beyond just internal links?
Intermediate & Advanced SEO | xoffie
-
Reverse Proxies - Lost On Their Purpose To Help SEO
I read an article on SEOmoz - check the link below. When should we use a reverse proxy, and is it really worth the trouble at all? Why create subdomains vs. subfolders when organizing different sections of the website? http://www.seomoz.org/blog/what-is-a-reverse-proxy-and-how-can-it-help-my-seo
Intermediate & Advanced SEO | helpwanted
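To make the subdomain-vs-subfolder point concrete: a reverse proxy lets you serve content hosted elsewhere (e.g. a blog platform) from a subfolder of your main domain rather than a subdomain, so its inbound links count toward the main domain. A hypothetical nginx sketch - the hostnames are made up:

```nginx
# Serve an externally hosted blog under /blog/ on the main domain
location /blog/ {
    # Trailing slash on proxy_pass strips the /blog/ prefix before forwarding
    proxy_pass https://blog-backend.example.com/;
    proxy_set_header Host www.example.com;
}
```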