Is this organic search sketchiness worth unwinding?
-
Started working on a site and learned that the person before me had done a fairly sketchy maneuver and am wondering if it's a net gain to fix it.
The site has pages it wanted third-party links pointing to. Thing is, those pages aren't easy to link to naturally, which makes boosting them in search hard.
So, the woman before me started a new blog site in the same general topic area as the first/main site. The idea was to build up even the smallest bit of authority for the new blog without tipping Google off to shared ownership. So the new blog has a different owner, address, registrar, and host, and no Google Analytics or Webmaster Tools account with shared access.
Then, as one method of adding links to the new blog, she took some links that originally pointed to the main site and redirected them to the blog site.
And voila! ...Totally controllable blog site with a bit of authority linking to select pages on the main site!
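One way to see the full scope of the maneuver described above is to audit the main site's redirects and flag any that resolve to a different host. A minimal sketch, assuming you can export the redirect map from the server config or logs (the domains and paths below are hypothetical placeholders, not the actual sites):

```python
from urllib.parse import urlparse

def is_offsite_redirect(source_url: str, location: str) -> bool:
    """Return True when a redirect's destination points to a different host."""
    src_host = urlparse(source_url).netloc.lower()
    dest_host = urlparse(location).netloc.lower()
    # A relative destination (empty netloc) stays on the same host.
    return bool(dest_host) and dest_host != src_host

# Hypothetical redirect map exported from the main site's server config.
redirects = {
    "https://main-site.example/landing-page": "https://satellite-blog.example/post",
    "https://main-site.example/old-page": "/new-page",
}

offsite = [src for src, dest in redirects.items() if is_offsite_redirect(src, dest)]
# `offsite` now lists every main-site URL whose redirect leaves the domain.
```

Running something like this over the real redirect rules would tell you exactly which third-party links are being siphoned to the satellite blog before you decide what to reverse.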
At this point, I could un-redirect those links that give the blog site some of its authority. I could delete the links to the main site on the blog pages.
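Un-redirecting is typically just a server-config change. A hedged sketch, assuming the redirects were implemented in an Apache `.htaccess` file (the actual mechanism on the main site could be different, and the paths and domains here are hypothetical):

```apache
# Hypothetical rule the previous SEO might have added: it 301-redirects
# a main-site URL (which third parties link to) over to the satellite blog.
Redirect 301 /landing-page https://satellite-blog.example/post

# To "un-redirect", delete the rule above so /landing-page resolves on the
# main site again, or point it back at a live main-site URL instead:
# Redirect 301 /landing-page https://main-site.example/landing-page-v2
```

Whichever way you go, removing the rule restores the original link equity flow to the main site; deleting the blog's outbound links is a separate edit on the blog itself.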
However, on some level it may have actually helped the pages linked to on the main site.
The whole thing is so sketchy I wonder if I should reverse it.
I could also just leave it alone and not risk hurting the pages that the blog currently links to.
What do you think? Is there a serious risk to the main site in this existing setup? The main site has hundreds of other links pointing to it, a Moz Domain Authority of 43, thousands of pages of content, eight years of history, and an Open Site Explorer Spam Score of 1. So it's not a trainwreck of sketchiness apart from this issue.
To me, the weird connection for Google is that third-party sites have links that (in the on-page code) still point to the main site, but that resolve, via the main site's redirects, to the blog site. BTW, the blog site also points to other established sites besides the main site, so it isn't linking exclusively to the main site.
Please let me know what you think. Thanks!
-
I agree with the two options that both you and Gaston have pointed out.
The downside to reversing those links is that the domain authority could drop a bit, which could impact rankings in the SERPs. If that happens, the client might think you did something wrong and caused their rankings to drop when, in theory, you were trying to clean up sketchy links. In my opinion, I'd keep them. They'll make your work perform better. Disavowing them could yield worse results than what their former SEO delivered, and if that happens, you're stuck playing defense and assigning blame.
Hope this helps!
-
Well, I like Gaston's answers on these boards, and at the same time I was curious whether that was the consensus: leave it alone, because there's no real risk.
-
Hi 94501! Did Gaston answer your question? If so, would you mind marking his response as a "Good Answer"?
Otherwise, how else can we help?
-
Thanks, Gaston!
Any other insights, folks?
Mike
-
Hi there,
There are two ways out here, and you've already pointed them out:
- Reverse those links.
- Leave everything as it is.
On one hand, if you aren't comfortable with those links, just reverse them all.
On the other hand, you've said that the main site has lots of links, so those 'unnatural links' won't do much harm, and the satellite blog has very few connections to it. I'd say there is almost no risk, so I'd leave it as it is.
Hope it helps.
GR.