Continued Lack of Google Indexing
-
I run a baseball site (http://www.mopupduty.com) that is in a very good link neighbourhood: ESPN, The Score, USA Today, MSG Network, The Toronto Star, Baseball Prospectus, etc.
New content has not been getting indexed on Google ever since the last update. The site has no duplicate content; it's 100% original. I can't think of any spammy links, and we get organic links day after day. In the past Google has indexed the site in minutes. It currently has expanded sitelinks within Google search.
Bing & Yahoo index the site in minutes.
Are there any quick fixes I can make to increase my chances of getting indexed by Google? Or should I just keep pumping out content and hope to see a change in the near future?
-
I'm not sure how to remove the 'answered' status.
I came across this link to my site via Google:
Is this auto-generated by my WordPress blog? I assume it is just a regular link, letting me know the source is Day Life.
I also ran my article text through Copyscape and came up with 10 different matches.
-
Yes, we do.
We get numerous retweets from regular readers and team-specific Twitter feed aggregators.
-
Have you tried seeding your new content through social media to see if that helps? (As soon as a new article is released, post it to your Facebook and Twitter accounts and get a few RTs/likes.)
-
If you're on WP, then I definitely recommend the Google XML Sitemaps plugin. It does all the heavy lifting for you.
-
Another good idea. Thanks again.
-
Thanks.
I was thinking about getting a WordPress plugin for the sitemap. It hasn't been an issue in the past, but then again, things are different now.
The meta tag trick is new to me; I'll give that a shot.
-
Regarding the source ownership factor, I'd also recommend using the Google Source Attribution tag on all your articles.
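As a minimal sketch, here's roughly what that could look like in the <head> of each article, assuming this refers to the syndication-source / original-source meta tags Google announced for syndicated content (the URL is a placeholder):

  <!-- Points Google at the preferred URL for a piece that gets syndicated elsewhere -->
  <meta name="syndication-source" content="http://www.mopupduty.com/example-article/">
  <!-- Identifies the URL that first published the story -->
  <meta name="original-source" content="http://www.mopupduty.com/example-article/">

Since your posts are the originals and the syndicated copies live on Yahoo!, USA Today, etc., both tags would point back at your own article URL.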
-
Your sitemap.xml file has only three entries. I recommend automating the URL generation in it so that new content is included as it's posted, with lastmod set to the date/time of posting. I'd then ping that file out to Google at least once a day if you've got new content every day, or even each time a new article is published.
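As a rough sketch, an automated entry for a new post would look something like this in sitemap.xml (the URL and date are placeholders):

  <?xml version="1.0" encoding="UTF-8"?>
  <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
    <url>
      <loc>http://www.mopupduty.com/example-new-article/</loc>
      <!-- lastmod in W3C datetime format, set to the publish date/time -->
      <lastmod>2011-04-05T14:30:00+00:00</lastmod>
    </url>
  </urlset>

The ping is then just an HTTP GET to Google's sitemap endpoint with your sitemap URL appended:

  http://www.google.com/ping?sitemap=http://www.mopupduty.com/sitemap.xml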
I'd also put a date/time stamp tag in the header, using the standard HTML meta tag format for dates.
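One common form, as a sketch (the exact meta name is a convention choice rather than a single standard; the date is a placeholder in W3C/ISO 8601 format):

  <!-- Publication date/time of the article -->
  <meta name="date" content="2011-04-05T14:30:00+00:00">
  <!-- Dublin Core equivalent, if you prefer that scheme -->
  <meta name="DC.date.issued" content="2011-04-05T14:30:00+00:00">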
That isn't guaranteed to help, but it has helped some sites I've worked on in the past.
-
Sorry, I can't seem to edit. I should add that the syndication/image situation has been the same for the past two years.
The link structure is very, very good.
For example, whenever the site updates, a small syndicated excerpt is updated on Yahoo! Sports (http://sports.yahoo.com/mlb/teams/tor/blogs).
-
Good news!
-
I can see where you're going, so I'll say:
Our content is written 100% by our writers. Our content is syndicated everywhere (USA Today, MSG, Yahoo! Sports, etc.). Could the sample sentences be an issue?
Image licensing is handled by my sports network (The Score is a cable TV channel in Canada).
Everything is on the up and up on the site. Anyone else have any ideas?
-
If I take random sentences from the site, I see those same sentences word-for-word on other websites. Is this your original content?
Do you own, or have a license to display, the great images that I see in many of the articles? There are no attributions.
These may not be the cause of your lack of Google indexing, but they could be, and they could also point to a larger problem.
Related Questions
-
My Website stopped being in the Google Index
Hi there, so my website is two weeks old. I published it and it was ranking at about page 10 or 11 for a week, maybe a bit longer. The last few days it dropped off the rankings, which I assumed was the Google algorithm doing its thing, but when I checked Google Search Console it says my domain is not in the index: 'This page is not in the index, but not because of an error. See the details below to learn why it wasn't indexed.' I click request indexing, and after a bit it goes green, saying it was successfully indexed. Then when I re-check the website it gives me the same message: 'This page is not in the index, but not because of an error. See the details below to learn why it wasn't indexed.' Not sure why it says this; any ideas or help are appreciated, cheers.
Technical SEO | sydneygardening0
-
Do you Index your Image Repository?
On our backend system, when an image is uploaded it is saved to a repository. For example, if you upload a picture of a shark it will go to oursite.com/uploads as shark.png. When you use that picture of the shark on a blog post, it will show the source as oursite.com/uploads/shark.png. This repository (/uploads) is currently being indexed. Is it a good idea to index our repository? Will Google not be able to see the images if it can't crawl the repository link? (We're in the process of adding alt text to all of our images.) Thanks
Technical SEO | SteveDBSEO0
-
Should you use the Google URL remover if older indexed pages are still being kept?
Hello, A client did a redesign a few months ago, resulting in 700 pages being reduced to 60, mostly due to a Panda penalty and just low interest in the products on those pages. Now Google is still indexing a good number of them (around 650) when we only have 70 on our sitemap. The thing is, Google now indexes our site for 115 URLs on average when we only have 60 URLs that need indexing and only 70 on our sitemap. I would have thought these URLs would be crawled and not found, but it is taking a very long time. Our rankings haven't recovered as much as we'd hoped, and we believe the indexed older pages are causing this. Would you agree, and do you think removing those old URLs via the remover tool would be the best option? It would mean using the URL remover tool for 650 pages. Thank you in advance
Technical SEO | Deacyde0
-
Duplicate pages in Google index despite canonical tag and URL Parameter in GWMT
Good morning Moz... This is a weird one. It seems to be a "bug" with Google, honest...
We migrated our site www.three-clearance.co.uk to a Drupal platform over the new year. The old site used URL-based tracking for heat map purposes, so for instance www.three-clearance.co.uk/apple-phones.html could be reached via www.three-clearance.co.uk/apple-phones.html?ref=menu or www.three-clearance.co.uk/apple-phones.html?ref=sidebar and so on. GWMT was told of the ref parameter, and the canonical meta tag was used to indicate our preference. As expected, we encountered no duplicate content issues and everything was good.
This is the chain of events: The site was migrated to the new platform following best practice, as far as I can attest to. The only known issue was that the verification for both Google Analytics (meta tag) and GWMT (HTML file) didn't transfer as expected, so between relaunch on the 22nd Dec and the fix on 2nd Jan we have no GA data, and presumably there was a period where GWMT became unverified. URL structure and URIs were maintained 100% (which may be a problem, now). Yesterday I discovered 200-ish 'duplicate meta titles' and 'duplicate meta descriptions' in GWMT. Uh oh, thought I. Expand the report out and the duplicates are in fact ?ref= versions of the same root URL. Double uh oh, thought I. Run, not walk, to Google and do some Fu: http://is.gd/yJ3U24 (9 versions of the same page in the index, the only variation being the ?ref= URI). Checked Bing and it has indexed each root URL once, as it should.
Situation now: The site no longer uses the ?ref= parameter, although of course there still exist some external backlinks that use it. This was intentional and happened when we migrated. I 'reset' the URL parameter in GWMT yesterday, given that there's no "delete" option. The "URLs monitored" count went from 900 to 0, but today it is at over 1,000 (another wtf moment). I also resubmitted the XML sitemap and fetched 5 'hub' pages as Google, including the homepage and HTML site-map page.
The ?ref= URLs in the index have the disadvantage of actually working, given that we transferred the URL structure and of course the webserver just ignores the nonsense arguments and serves the page. So I assume Google assumes the pages still exist and won't drop them from the index, but will instead apply a dupe content penalty. Or maybe call us a spam farm. Who knows.
Options that occurred to me (other than maybe making our canonical tags bold or locating a Google bug submission form 😄 ) include: A) robots.txt-ing .?ref=. , but to me this says "you can't see these pages", not "these pages don't exist", so it isn't correct. B) Hand-removing the URLs from the index through a page removal request per indexed URL. C) Applying a 301 to each indexed URL (hello Bing dirty sitemap penalty). D) Posting on SEOMoz because I genuinely can't understand this.
Even if the gap in verification caused GWMT to forget that we had set ?ref= as a URL parameter, the parameter was no longer in use, because the verification only went missing when we relaunched the site without this tracking. Google is seemingly 100% ignoring our canonical tags as well as the GWMT URL setting. I have no idea why and can't think of the best way to correct the situation. Do you? 🙂
Edited to add: As of this morning the "edit/reset" buttons have disappeared from the GWMT URL Parameters page, along with the option to add a new one.
There are no messages explaining why, and of course the Google help page doesn't mention disappearing buttons (it doesn't even explain what 'reset' does, or why there's no 'remove' option).
Technical SEO | Tinhat0
-
Crawling and indexing content
If a page element (a div, for example) is initially hidden and shown only on hover or by a JavaScript call, will Google crawl and index its content?
Technical SEO | Mont0
-
Google and QnA sites
My website has a Q&A section - a bit like this one, except it's not private to premium members. It is a page with a left column for category links and a list of recently asked questions; each question is a link to view the full question and answers, etc. Does Google know this is a Q&A page? Or will it say: hey, there are far too many links on this page, tut tut. Is there anything I can do to help it understand what the page is?
Technical SEO | borderbound0
-
About Google Spider
Hello, people! I have some questions regarding the Google spider. Many people say that "Google spiders only have US IP addresses." Is this really true? I also saw a video on Google's official blog that said "Google spiders come from all around the world." At this point I am really confused. Q1) From my research it seems like Google spiders only have US IP addresses. Then what exactly does "Google spiders come from all around the world" mean? Q2) If Google spiders only have US IP addresses, what happens to a site that uses IP delivery? Does this mean that the Google spider is always redirected to the US version of the site, since it only has a US IP? Can anyone help me understand? One more question: when Google analyzes a site for cloaking issues, do you think it analyzes while the spider crawls the site, or after it has crawled it?
Technical SEO | Artience0
-
Will using HTTP ping and lastmod increase our indexation with Google?
If Google knows about our sitemaps and they're being crawled on a daily basis, why should we use the HTTP ping and/or list the index files in our robots.txt? Is there a benefit (i.e. improving indexability) to using both ping and listing the index files in robots? Is there any benefit to listing the index sitemaps in robots if we're pinging? If we provide a decent <lastmod> date, is there going to be any difference in indexing rates between ping and the normal crawl that they do today? Do we need to do all of these to cover our bases? Thanks, Marika
Technical SEO | marika-1786190