Posts made by sprynewmedia
-
RE: Google Webmasters DNS error
OK, the domain scored 93.3% so you are fine. The triangle is just confusing UI from Google.
-
RE: Link juice and max number of links clarification
Agreed, the extreme repetition of the brand keywords and anchor text was one of my first arguments for dropping the section.
I think, from everything I've read so far, there appears to be additional juice loss at some point, but it would be highly dependent on the trust of the page and the nature of the links. Certainly not a strong enough correlation to make part of my case, however.
-
RE: Google Webmasters DNS error
Hmm, not sure I follow. Perhaps you could PM me the domain so I could take a look?
-
RE: Google Webmasters DNS error
Excellent. On the 8th you can see the pages indexed dropped to 0, but then picked up again the next day. This aligns with the DNS error graph. If your DNS report is above 80% (email scores are not relevant to indexing), you are fine.
-
RE: Google Webmasters DNS error
Could you attach the crawl stats for the same period?
Also, what does this site http://www.solvedns.com say about your domain?
Set up a monitor at pingdom.com and it will watch for DNS issues 24 hours a day.
-
RE: Google Webmasters DNS error
If the triangle is on the message, it is historical and you don't have to worry anymore.
Check Health -> Crawl Stats. You will see during which periods Googlebot was unable to index the site. If the most recent number is positive, the issue has been fixed.
-
RE: Link juice and max number of links clarification
The question comes from a circumstance where hundreds of links are contained in a supplemental tab on a product detail page. They link to applications of the product - each being a full product page. On some pages there are only 40 links; on others it can be upwards of 1,000, as the product is used as a replacement part for many other products.
I am championing the removal of the links, if not the whole tab. On a few pages it would be useful to humans, but clearly not on pages with hundreds.
But if Google followed them all, then conceivably it would build a stronger "organic" structure for the catalogue, as important products would get thousands of links and others only a few.
Whatever value this might have, it would be negated if juice leaked faster after 100+ links.
From Matt's article above, "Google might choose not to follow or to index all those links." He also mentions them being a spam signal, so I think it's still wise to keep them low even if the 100kb limit has been lifted. Clearly there are still ramifications - a concept reinforced by this site's reports and comments.
To my question: from what both of you have said, it doesn't appear there is strong evidence that a very high number of links directly causes an additional penalty as far as link juice is concerned.
For the record, I'm not calculating PR or stuck on exact counts - my focus always starts with the end user. But, I'd hate to have a structural item that causes undue damage.
-
RE: Link juice and max number of links clarification
The context is a parts page where potentially hundreds of links could be associated with the other parts the item fits. I'm looking to firm up my argument against the concept, so I want to better understand the true impact of the section.
If it were accelerating the decay of link juice, all the more reason. If not, the links may actually help certain products appear organically stronger (i.e. a part that fits a greater number of products will have more incoming links).
Navigation is actually quite tight (under 20 links) by modern standards.
-
RE: Link juice and max number of links clarification
I used 'PR' mainly because 'juice points' sounded stupid.
I'm more interested in what happens past the ~100 links.
Does the remaining juice get reallocated or does the page leak at a higher rate?
-
RE: Google Webmasters DNS error
Is the message dated? Perhaps this occurred before and is fixed now, but the message keeps popping up?
-
RE: Rich Snippets
As to rich snippet markup, I recommend starting with:
http://schema.org/BedAndBreakfast or
http://schema.org/LodgingBusiness
(there's a rough markup example at the end of this answer)
The rest of the question seems to be more about Google Sitelinks and Authorship.
Sitelinks are up to Google, and you need a certain amount of traffic before they will appear. When they do, they're based on linking structure and page popularity. The only thing you can do is tell Google which pages you don't want in the list.
Authorship may not apply here but there are many good articles about it on SEOmoz.
http://www.seomoz.org/ugc/google-authorship-and-the-fast-track-to-better-rankings-a-case-study
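For the markup itself, a starting point could look roughly like this (a sketch using microdata; all property values are placeholders):
<div itemscope itemtype="http://schema.org/BedAndBreakfast">
  <span itemprop="name">Example B&amp;B</span>
  <div itemprop="address" itemscope itemtype="http://schema.org/PostalAddress">
    <span itemprop="streetAddress">123 Example Lane</span>,
    <span itemprop="addressLocality">Exampleville</span>
  </div>
  <span itemprop="telephone">555-0100</span>
  <a itemprop="url" href="http://www.example.com/">Website</a>
</div>
Run whatever you end up with through Google's rich snippets testing tool to make sure it parses.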
-
Link juice and max number of links clarification
I understand roughly that "Link Juice" is passed by dividing PR by the number of links on a page. I also understand the juice available is reduced by some portion on each iteration.
- 50 PR page
- 10 links on page
- 5 * .9 = 4.5 PR goes to each link.
Correct?
If so, and knowing Google stops counting links somewhere around 100, how would having over 100 links impact the flow?
I.e.
- 50 PR page
- 150 links on the page
- .33 * .9 ≈ .3 PR to each link, BUT only for 100 of them.
After that, the juice is just lost?
Also, I assume Google, to the best of its ability, organizes the links in order of importance such that content links are counted before footer links etc.
-
RE: If you were guest blogging would you prefer a link or revenue share
Whoa, I should clarify.
I was not talking about news but the tons of other "entertainment" shows that have all sorts of intrusive sponsors. Even if money isn't exchanged, there are tons of ways to "pay" people in media.
I too wouldn't suggest taking money for back links directly but that doesn't shut the door on paid content. Again, back to the sponsorship concept where the high value content is made possible by taking money from a company that then gets credit for it.
I respect your attempt, Diane, to compensate authors more directly, though it is still a piece-rate method. Cracked uses the concept quite successfully: http://www.cracked.com/article_19955_we-want-to-pay-you-to-write-us.html
I don't think swapping GA accounts would be the vehicle I would use however.
-
RE: Outreach person different than guest post write
Sorry, I didn't see where you mentioned privacy but that's fair.
-
RE: If you were guest blogging would you prefer a link or revenue share
That is far from a universal rule. If there is a mutual benefit, then the barter system works just fine.
The same thing happens on TV shows: sometimes the story is sought after and the guest is paid, other times the publicist pays the show for the exposure.
Back links are a currency of the Internet (even if it annoys Google) and I would be happy to provide high value content to the right site for the links/exposure.
-
RE: Outreach person different than guest post write
What if they just want to use a pen name to protect their privacy? Hard to remember with modern life being so exposed and annotated, but there are lots of people who prefer to remain anonymous.
So long as the persona is well developed, I wouldn't have a problem with it given the content was high quality.
-
RE: If you were guest blogging would you prefer a link or revenue share
I would prefer the link only because it's simple and fundamental to the Web.
What happens if you decide to drop GA or their account gets banned? Too many things are taken out of your control there. The poster, too, has to trust you will always use their account, which isn't going to be an easy sell.
As to quality content, what topics?
-
RE: HTML5 Nav Tag Issue - Be Aware
Two weeks is a pretty short time for a new site to get accurate reports from GWT. The back links I found weren't valuable - none with a page authority over 1.
I would secure at least one high quality link and wait a few more weeks.
-
RE: HTML5 Nav Tag Issue - Be Aware
Appears I broke the site... sorry
-
RE: HTML5 Nav Tag Issue - Be Aware
Here is how one could test this to be sure:
- Create a site on a throwaway domain that includes:
- home page
- sub page (containing unique text in title and body)
- orphaned sub page
- Place the nav tag on all pages, with links to only the first two pages (see the sketch below).
- Add some dummy content but don't create any other links.
- Link to the orphaned page from a decently trusted and ranked page on another site.
- Wait 2-4 weeks.
- Search for the unique string and write a YouMOZ post about your findings.
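For the nav markup itself, something this minimal would do (the file names are just placeholders):
<nav>
  <a href="/index.html">Home</a>
  <a href="/sub-page.html">Sub page</a>
</nav>
<!-- no link anywhere on the site points to the orphaned page; it only gets the external link -->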
-
RE: HTML5 Nav Tag Issue - Be Aware
While I have found it does, you could always use a logo link to accomplish this.
-
RE: HTML5 Nav Tag Issue - Be Aware
To be sure I understand: you have a site-wide header <nav> section, but you are not seeing the backlinks from all the pages in the GWT internal links report?
(Incidentally, my experience has shown these links do count.)
Could we see the site?
How long ago did you post the nav element?
-
RE: HTML5 Nav Tag Issue - Be Aware
This seems reasonable and a good way to ensure the link is allocated correctly.
I presume your issue is that you have external links inside a <nav> container?
Follow-up: it appears the specification does suggest the nav element is for internal links - the element is primarily intended for sections that consist of major navigation blocks, and external links are generally not considered major navigation, no?
http://www.whatwg.org/specs/web-apps/current-work/multipage/sections.html#the-nav-element
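In other words, something along these lines (a rough sketch):
<nav>
  <a href="/">Home</a>
  <a href="/products/">Products</a>
  <a href="/contact/">Contact</a>
</nav>
<!-- external/partner links kept outside the nav element -->
<p>Our partner: <a href="http://www.example.com/">Example Co.</a></p>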
-
RE: Html text versus a graphic of a word
Short answer: fair amount and/or no point.
Alt tags have been abused, so they'd be on the short list of places to watch for keyword stuffing. H1s are also on that list but are fine within normal use (1-2 per page, highly visible, title-length content).
With modern web fonts, the only reason to use an image to replace words is a really fancy treatment of the font. Even then, use an image replacement technique so that Google has an easier time understanding the content.
http://www.zeldman.com/2012/03/01/replacing-the-9999px-hack-new-image-replacement/
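Something like this rough sketch of the technique in that link (the class name and image are placeholders):
<h1 class="fancy-heading">The Actual Words</h1>
<style>
  .fancy-heading {
    background: url(fancy-words.png) no-repeat;
    width: 300px;
    height: 60px;
    /* the real text stays in the markup for Google but is pushed out of view */
    text-indent: 100%;
    white-space: nowrap;
    overflow: hidden;
  }
</style>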
Save the alt text for describing actual content images.
-
RE: Website Design Structure
Responsive.
But... I prefer a server AND client approach. Pure media query techniques result in really heavy pages that make too many design compromises. I use server-side client-capability tables to narrow the range of stuff different clients get. However, they all use the same URL.
-
RE: Robots review
Thanks Aaron.
I will add the rules back, as I want Roger to have nearly the same experience as Google and Bing.
Is it best to add one at a time? That could take over a month to figure out what's happening. Is there an easier way to test? Perhaps something like the Google Webmaster Tools Crawler Access tool?
-
RE: Robots review
All URLs are rewritten to default.aspx?Tabid=123&Key=Var. None of these are publicly visible once the re-writer is active. I added the rule just to make sure the page is never accidentally exposed and indexed.
-
RE: Robots review
Actually, I asked the help team this question (essentially) first, and then the lady said she wasn't a web developer and that I should ask the community. I was a little taken aback, frankly.
-
RE: Robots review
Can't. Default.aspx is the root of the CMS, and the redirect would take down the entire website. The rule exists only because of a brief period when Google indexed the page incorrectly.
-
Robots review
Anything in this that would have caused Rogerbot to stop indexing my site? It only saw 34 of 5000+ pages on the last pass. It had no problems seeing the whole site before.
User-agent: Rogerbot
Disallow: /default.aspx?*
// Keep from crawling the CMS URLs, e.g. default.aspx?Tabid=234. Real home page is home.aspx
Disallow: /ctl/
// Keep from indexing the admin controls
Disallow: ArticleAdmin
// Keep from indexing the article admin page
Disallow: articleadmin
// Same in lower case
Disallow: /images/
// Keep from indexing CMS images
Disallow: captcha
// Keep from indexing the captcha image, which appears to crawlers as a page
General rules, lacking wildcards:
User-agent: *
Disallow: /default.aspx
Disallow: /images/
Disallow: /DesktopModules/DnnForge - NewsArticles/Controls/ImageChallenge.captcha.aspx
-
RE: Markup reference data using Scheme.org
Near as I can tell, reference material has not yet been addressed.
This means I have to extend the schema myself as outlined here: http://www.schema.org/docs/extension.html
Not 100% sure how to go about that in a way that will actually register properly. Good chance I will get it wrong.
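If I'm reading the extension mechanism right, it would end up looking something like this - an untested sketch, and "LegalDefinition" is a made-up extension name, not an official schema.org type:
<div itemscope itemtype="http://schema.org/CreativeWork/LegalDefinition">
  <span itemprop="name">Term being defined</span>
  <div itemprop="description">The definition text goes here.</div>
</div>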
-
RE: Markup reference data using Scheme.org
Thanks, but that's not really related to schema.org.
-
RE: Markup reference data using Scheme.org
Sorry, but that is literally no help. I've read the site a fair bit - a link to its sitemap doesn't get me any closer.
-
RE: Do drop caps impact the search value of your content?
Can't see it mattering, especially if the span is added with JS. But if you are worried, try pure CSS with the :first-letter pseudo-element: http://css-tricks.com/snippets/css/drop-caps/
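Roughly along these lines (values are just illustrative):
<style>
  /* drop cap on the first letter of the first paragraph; no extra span required */
  p:first-child:first-letter {
    float: left;
    font-size: 3em;
    line-height: 1;
    padding-right: 0.1em;
  }
</style>
<p>First paragraph of the article...</p>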
-
Markup reference data using Scheme.org
Can anyone point me to a page showing how to mark up reference data according to schema.org? I.e. a glossary or dictionary page.
-
RE: Block an entire subdomain with robots.txt?
Fact is, the robots file alone will never work (the link has a good explanation why - short form: all it does is stop the bots from crawling again; it doesn't remove what's already indexed).
Best to request removal then wait a few days.
-
RE: Block an entire subdomain with robots.txt?
You should do a removal request in Google Webmaster Tools. You have to verify the sub-domain first, then request the removal.
See this post on why the robots file alone won't work...
http://www.seomoz.org/blog/robot-access-indexation-restriction-techniques-avoiding-conflicts
-
RE: Advice regarding Panda
Wow, you have really highlighted how far the copying of the site has gone. This site was created in '96 and many of the definitions are over 10 years old now. I can guarantee the author didn't copy the content unless cited. But I can see how definitions of the same thing would come out very similar.
Take http://www.poole.it/cassino/ARCHIVE/TEXTS_legal/duhaime%20online%20legal%20dictionary.htm - it shows that even with a single (wrong) back link, this copy would be competing for the "original content" title, right?
Looks like we have to get busy with DMCA requests.
The next step for the citations is a separate domain, which is a shame - Google really needs to catch up to reference sites and stop treating all pages on the web the same. Sure, the citations pages don't have to rank at the top, but they shouldn't be hurting the rest of the content either.
-
RE: Block an entire subdomain with robots.txt?
Option 1 could come with a small performance hit if you have a lot of txt files being used on the server.
There shouldn't be any negative side effects to option 2 if the rewrite is clean (i.e. not accidentally a redirect) and the contents of the two files are robots compliant.
Good luck
-
RE: Should me URLs be uppercase or lowercase
The fact that caps create a 404 error on a LAMP site is a pet peeve of mine - so is the fact that Google treats mixed-case URLs on IIS as separate (thus duplicate) URLs.
Too arbitrary to be picky about, and it causes user frustration.
Thank goodness at least DoMaInS can be whatever.
-
RE: Removing Duplicate Page Content
You can also tell Google to ignore certain query string variables through Webmaster Tools.
For instance, indicate that "dir" and "mode" have no impact on content.
Other SEs have similar controls.
-
RE: Site recently updated- are my campaign results really accurate?
- Try logging out or using a completely different computer - Google modifies your results according to your past search behavior.
- Double-check you are comparing the same country version of Google. You may be searching Google.co.uk and tracking Google.com.
- It could be getting a boost from local search - try the search with a friend in a different part of the country.
-
RE: From your perspective, what's wrong with this site such that it has a Panda Penalty?
- Light content is an issue for many pages by the nature of the content. This is why we moved the entire Citations section to a subdomain. Combining them would be near impossible without diminishing the value to human visitors - lawyers rarely have time to wade through arbitrary lists. I really can't think of a way to combine the pages in a meaningful way.
We have combined other areas, such as the law quotations, and I will search for more candidates.
I will note that pages below a certain character threshold have a noindex tag on them now.
-
Above.
-
Actually, the pages have around 35 links each according to Googlebot. The menu and the footer are loaded via AJAX after the visitor interacts with the site. The home page is an anomaly.
-
RE: From your perspective, what's wrong with this site such that it has a Panda Penalty?
Hehe, caught me.
It's just that duplicate content isn't that big a factor for Panda, as far as I can see. It appears to focus on the quality of the content (as judged by humans in a study).
It may well be hurting the site in general, however.
-
RE: From your perspective, what's wrong with this site such that it has a Panda Penalty?
The site is a legal dictionary and reference, so there are literally thousands of legal definitions, topics and terms.
A targeted case could be made for "Legal Dictionary", but the site still gets OK results for that search. It was much better before Panda - most keywords are off by about 60% in terms of traffic.
-
RE: Block an entire subdomain with robots.txt?
Sounds like (from other discussions) you may be stuck requiring a dynamic robots.txt file that detects which domain the bot is on and changes the content accordingly. This means the server has to process all .txt files as (I presume) PHP.
Or, you could conditionally rewrite the /robots.txt URL to a different file according to the subdomain:
RewriteEngine on
RewriteCond %{HTTP_HOST} ^subdomain.website.com$
RewriteRule ^robots.txt$ robots-subdomain.txt
Then add:
User-agent: *
Disallow: /
to the robots-subdomain.txt file.
(untested)
-
From your perspective, what's wrong with this site such that it has a Panda Penalty?
For more background, please see:
http://www.seomoz.org/q/advice-regarding-panda
http://www.seomoz.org/q/when-panda-s-attack
(hoping the third time's the charm here)