How to add specific Tumblr blogs into a disavow file?
-
Hi guys,
I am about to send a reconsideration request and am still finalizing my disavow file. The format I'm using is
domain:badlink.com (stripped down to the root domain)
but what about toxic links hosted on Tumblr, such as badlink.tumblr.com? The issue is that we also have good Tumblr links, so I don't want to disavow all of tumblr.com. Do you think I will have issues submitting badlink.tumblr.com instead of tumblr.com?
Thank you!
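For reference, the disavow file format accepts both full URLs and domain: entries, and a domain: entry scoped to a subdomain covers only that subdomain, so a sketch might look like this (all domains below are placeholders):

```text
# Disavow file sketch -- all domains are placeholders
# Disavows every link from this domain and its subdomains
domain:badlink.com
# Disavows only this Tumblr subdomain; other *.tumblr.com blogs are unaffected
domain:badlink.tumblr.com
# Individual URLs can also be listed, one per line
http://spam.example.com/some-bad-page.html
```

Lines starting with # are treated as comments, which is handy for noting why each entry was added when you come back to the file later.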
-
You should be fine submitting the subdomain entry. I assume you have already taken steps to have the links removed?
-
Thanks Richard!
Yes, I did try to remove them first.
Related Questions
-
How to optimize Wordpress.org hosted blogs
I've got a blog I'm working on that isn't self-hosted. I've attached a screenshot of the message I get if I try to install any plugins. I really want to block archive, author, tag, and category pages, but I can't seem to find a way: there's no server access for me to use robots.txt or .htaccess, and I can't install the plugins that would give me the ability to do it. Any suggestions?
Technical SEO | ShawnW0
-
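On a self-hosted install, the usual route would be robots.txt rules or, better for actually removing pages from the index, a meta robots noindex tag in the theme. A hedged robots.txt sketch, assuming the default WordPress URL structure:

```text
# robots.txt sketch -- assumes default WordPress permalink structure
User-agent: *
Disallow: /tag/
Disallow: /category/
Disallow: /author/
```

Note that robots.txt only blocks crawling; pages that are already indexed are more reliably removed with a meta robots noindex tag, which on a hosted platform without theme or plugin access may simply not be possible.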
Referencing links in Articles and Blogs
Hi, I am wondering if the <sup> tag in HTML is picked up by Google as a reference point. I.e., when you put a superscript in Word, it puts a small number next to your sentence, and then you have a list of references at the end of the blog/article. Does Google recognise this?
Technical SEO | Cocoonfxmedia0
-
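As far as is documented, Google gives no special treatment to the <sup> tag itself; what it can crawl is an ordinary anchor link placed inside it. A minimal footnote pattern (ids and text are hypothetical):

```html
<!-- In the body text; ids are hypothetical -->
<p>Studies show this works.<sup><a id="note-1" href="#ref-1">1</a></sup></p>

<!-- Reference list at the end of the article -->
<ol>
  <li id="ref-1">Source of the claim. <a href="#note-1">Back to text</a></li>
</ol>
```

The in-page fragment links are followable, but there is no indication they carry any citation-style weighting.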
Log files vs. GWT: major discrepancy in number of pages crawled
Following up on this post, I did a pretty deep dive on our log files using Web Log Explorer. Several things have come to light, but one issue I've spotted is the vast difference between the number of pages crawled by Googlebot according to our log files and the number of pages indexed in GWT. Consider: number of pages crawled per the log files: 2,993; crawl frequency (i.e. number of times those pages were crawled): 61,438; number of pages indexed by GWT: 17,182,818 (yes, that's right: more than 17 million pages). We have a bunch of XML sitemaps (around 350) linked on the main sitemap.xml page; these pages have been crawled fairly frequently, and I think this is where a lot of links have been indexed. Even so, would that explain why we have relatively few pages crawled according to the logs but so many more indexed by Google?
Technical SEO | ufmedia0
-
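As a sanity check on numbers like these, you can count unique URLs hit by Googlebot straight from a raw access log. A minimal sketch, assuming the common Apache/Nginx combined log format and filtering on the user-agent string only (a thorough audit would also verify the requester's IP via reverse DNS, since anyone can spoof the Googlebot user agent):

```python
import re
from collections import Counter

GOOGLEBOT = re.compile(r"Googlebot", re.IGNORECASE)
# Combined log format: the request line is quoted, e.g. "GET /page HTTP/1.1"
REQUEST = re.compile(r'"(?:GET|POST|HEAD) (\S+)')

def googlebot_crawl_stats(lines):
    """Return (unique_pages, total_hits) for requests whose UA mentions Googlebot."""
    hits = Counter()
    for line in lines:
        if not GOOGLEBOT.search(line):
            continue
        m = REQUEST.search(line)
        if m:
            hits[m.group(1)] += 1
    return len(hits), sum(hits.values())
```

Run over the same period the GWT index count covers, the first number is comparable to "pages crawled" and the second to "crawl frequency" as reported above.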
How Does Dynamic Content for a Specific URL Impact SEO?
Example URL: http://www.sja.ca/English/Community-Services/Pages/Therapy Dog Services/default.aspx The above page is generated dynamically depending on which province the visitor visits from. For example, a visitor from BC would see something quite different from a visitor from Nova Scotia; the intent is that the information shown should be relevant to the user's province. How does this affect SEO? How (or from what location) does Googlebot decide to crawl the page? I have considered a subdirectory for each province, though that comes with its own challenges. One such challenge is duplicate content, since different provinces may have the same information on some pages. Any suggestions for this?
Technical SEO | ey_sja0
-
Google ranks my sitemap.xml instead of blog post
Hello, For some reason Google shows sitemap results when I search for my blog URL website.com/blog/post. Why is Google ranking my sitemap but not the post, especially when I search for the full URL? Thanks
Technical SEO | KentR0
-
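A common fix for this symptom is to serve the sitemap with a noindex HTTP header, so Google can still read it for discovery but stops showing it in search results. A hedged Apache .htaccess sketch (requires mod_headers; adjust the file pattern to your actual sitemap names):

```text
# .htaccess sketch: keep XML sitemaps crawlable but out of search results
<FilesMatch "sitemap.*\.xml$">
    Header set X-Robots-Tag "noindex"
</FilesMatch>
```

Using the X-Robots-Tag header rather than robots.txt matters here: a robots.txt block would stop Google reading the sitemap at all, which is the opposite of what you want.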
Blog RSS behind https
Hi, I'm finding our blog's RSS feed won't be recognized by various websites. The feed is on a https url. Is this the most likely problem? Thanks!
Technical SEO | OTSEO0
-
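Before blaming HTTPS, it's worth confirming the feed itself parses as valid XML, since many aggregators fail silently on malformed feeds. A minimal stdlib sketch that accepts either a feed URL or raw XML (the URL branch is an assumption about how you'd call it):

```python
import urllib.request
import xml.etree.ElementTree as ET

def check_feed(source):
    """Parse an RSS feed from a URL or a raw XML string; return the channel title."""
    if source.lstrip().startswith("<"):
        xml_text = source
    else:
        # urllib handles https URLs on any modern Python with SSL support
        with urllib.request.urlopen(source) as resp:
            xml_text = resp.read().decode("utf-8")
    root = ET.fromstring(xml_text)   # raises ParseError if the feed is malformed
    title = root.find("./channel/title")
    return title.text if title is not None else None
```

If this parses cleanly over https, the problem is more likely on the consuming site's side (e.g. an old fetcher that can't negotiate SSL) than in the feed itself.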
Backlink density & disavow tool
I am cleaning up my backlink profile for www.devoted2vintage.co.uk, but before I start removing links I wanted some advice on the following: I currently have over 2,000 backlinks from about 200 domains. Is this a healthy ratio, or should I prune it? Is there a recommended maximum number of backlinks per domain? And should I delete links to all or some of the spun PR articles (some of the article web pages have over 40 articles with links back to us)?
Technical SEO | devoted2vintage0
-
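The links-per-domain skew is easy to quantify from an exported link list. A sketch assuming a simple list of (source_url, source_domain) pairs, with the flag threshold as an arbitrary illustrative cutoff rather than any official limit:

```python
from collections import Counter

def domain_link_profile(links, flag_threshold=40):
    """links: iterable of (source_url, source_domain) pairs.
    Returns (average links per domain, domains at or above the threshold)."""
    per_domain = Counter(domain for _, domain in links)
    avg = sum(per_domain.values()) / len(per_domain)
    flagged = [d for d, n in per_domain.items() if n >= flag_threshold]
    return avg, flagged
```

Domains that account for a wildly disproportionate share of links (like the spun-article pages described above) are usually the first candidates for removal or disavow.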
Job/Blog Pages and rel=canonical
Hi, I know there are several questions and articles concerning rel=canonical on SEOmoz, but I didn't find the answer I was looking for... We have some job pages; the URLs are /jobs and then /jobs/2, /jobs/3, etc. Our blog pages follow the same pattern: /blog, /blog/2, /blog/3... Our CMS is self-built, and every job/blog page has the same title tag. According to SEOmoz (and Webmaster Tools), we have a lot of duplicate title tags because of this. If we put rel=canonical in each page's source code, the title tag problem will be solved for Google, right? Because they will just display the /jobs and /blog main pages. That would be great, because we don't want 40 blog pages in the index. My concern (a stupid question, but I'm not sure): if we put rel=canonical on the pages, will Google still crawl them and index our job links? We want to keep our rankings for the job offers on pages 2 and up. More simply: will we still find our job offers on /jobs/2, /jobs/3... in Google if these pages have rel=canonical on them? And one more: does the SEOmoz bot also follow rel=canonical and then reduce the number of duplicate title tags in the campaigns? Thanks...
Technical SEO | accessKellyOCG0
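A side note on the canonical approach: pointing every paginated page's canonical at /jobs tells Google the deeper pages are duplicates of page 1, which risks dropping the jobs listed on pages 2 and up from the index. A hedged sketch of an alternative head section for a paginated page (URLs and titles are hypothetical) that fixes the duplicate titles instead:

```html
<!-- head of /jobs/2 -- URLs and titles are hypothetical -->
<title>Job Offers - Page 2 | Example.com</title>
<link rel="canonical" href="https://www.example.com/jobs/2">
<link rel="prev" href="https://www.example.com/jobs">
<link rel="next" href="https://www.example.com/jobs/3">
```

Unique, page-numbered titles clear the duplicate-title warnings, the self-referencing canonical keeps each page indexable, and rel=prev/next signals that the pages form one paginated series.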