RSS Hacking Issue
-
Hi
I checked our original RSS feed - added it to Google Reader and all the links go to the correct pages. I have also set up the feed in FeedBurner, but when I click the links in FeedBurner (which should go to my own website's pages), they all go to spam sites, even though the title and excerpt of each link are correct.
This isn't a WordPress blog RSS feed either, and we are on a very secure server.
Any ideas whatsoever? There is no info about this online anywhere, and our developers haven't seen it before.
Thanks
-
Thanks so much for your help - I think this should fix it. You've saved me hours of time. It's our own CMS, so I should be able to fix it today.
-
I don't think you're being linked to spam specifically. What you're seeing is the FeedBurner page linking your post titles to feeds.feedburner.com/[whatever the guid of the post is] -- URLs of different feeds from different sites entirely.
I believe this is the problem referenced in the FeedBurner FAQ - http://www.google.com/support/feedburner/bin/answer.py?hl=en&answer=79014&topic=13190 - "Why don't my feed content item links work?"
In which case, the isPermaLink attribute on the feed's guids should be set to false. I'd post about this on the support forum for your CMS.
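To make the isPermaLink fix concrete, here's a minimal sketch (the item XML is a made-up stand-in, not your actual feed) of what a CMS feed template would need to change: when the guid is a bare database ID rather than a full URL, it must carry isPermaLink="false", or readers like FeedBurner will treat the raw value as the post's link.

```python
import xml.etree.ElementTree as ET

# Hypothetical feed item whose <guid> is a bare database ID; by default
# aggregators treat a guid as a permalink, so "129" becomes a broken link.
item_xml = """<item>
  <title>Sample post</title>
  <link>http://www.example.com/blog/sample-post</link>
  <guid isPermaLink="true">129</guid>
</item>"""

item = ET.fromstring(item_xml)
guid = item.find("guid")

# If the guid is not a full URL, it must declare isPermaLink="false";
# otherwise readers link the post title to the raw guid value.
if not guid.text.startswith(("http://", "https://")):
    guid.set("isPermaLink", "false")

print(ET.tostring(item, encoding="unicode"))
```

The real fix belongs in whatever template your CMS uses to emit the feed, but the attribute change itself is this small.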
-
Hmm - actually, maybe if I set the isPermaLink attribute on that guid entry that came up in the validator to false, that will fix it?
-
Some answers to your checks:
- Feed is correct - still my feed
- No FeedMedic reports - it says everything is fine
- The FeedBurner URL and the URL people are directed to from the blog are the same
- No malware reports
- Ran the scanner on the blog article page, the RSS feed, the FeedBurner page, and the FeedBurner article link page - it doesn't pick up any malware
- The validity check brings up one issue: "guid must be a full URL, unless isPermaLink attribute is false: 129"
- The current guid entry for one article is: <guid isPermaLink="true">129</guid>
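The validator's complaint above can be reproduced for every item at once. Here's a small sketch that scans a feed for guids that will be misread as permalinks; the feed XML is inlined for illustration (made-up titles and URLs), whereas in practice you'd fetch your real feed instead.

```python
import xml.etree.ElementTree as ET

# Illustrative feed: one item with a bare-ID guid, one with a proper URL guid.
feed_xml = """<rss version="2.0"><channel>
  <title>Example blog</title>
  <item><title>Post A</title><guid isPermaLink="true">129</guid></item>
  <item><title>Post B</title><guid>http://www.example.com/post-b</guid></item>
</channel></rss>"""

bad = []
for item in ET.fromstring(feed_xml).iter("item"):
    guid = item.find("guid")
    if guid is None:
        continue
    # Per the RSS 2.0 spec, isPermaLink defaults to "true" when omitted.
    is_permalink = guid.get("isPermaLink", "true") == "true"
    if is_permalink and not guid.text.startswith(("http://", "https://")):
        bad.append((item.findtext("title"), guid.text))

print(bad)  # each entry here needs isPermaLink="false" or a full-URL guid
```

Note that leaving the attribute off entirely doesn't help: the spec's default is true, so a bare ID without isPermaLink="false" is still treated as a link.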
Sure, here's the feed: http://feeds.feedburner.com/EnjoyTravelBlog (check in Chrome or IE, as for some reason someone looking in Firefox didn't see the problem)
Here are screencasts of what I see if I click on any of the article titles:
- http://screencast.com/t/PNvrItea3ky - see articles 1 & 2
- http://screencast.com/t/bZI8qlg74 - what I see if I click on article 1 - clicking on link goes to spam site
- http://screencast.com/t/cER9Fm9RTunm - what I see if I click on article 2
It's like this for every single article - I've even got links to Baidu, eBay and all sorts in there.
Would welcome suggestions on other forums to post on if this goes beyond technical SEO!
-
A few avenues to check out:
- Log into your FeedBurner account and make sure the feed it's processing is still your blog's actual feed.
- Under FeedBurner's "Troubleshootize" tab, check whether there are any FeedMedic reports, and under Tips and Tools run the feed validity checks.
- Check that the FeedBurner URL shown in your account is the same one people are being directed to on the blog.
- Go to Google Webmaster Tools. Under Diagnostics, check and see if there are any malware reports.
- Run a malware scan on the site URL and the Feedburner URL through a tool like http://sitecheck.sucuri.net/scanner/
Can you provide more information - screenshots showing the links and the URLs they direct you to?
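One of the checks above - whether the feed's links still point at your own site - can also be done mechanically. This is a hedged sketch, not a definitive test: the feed XML and hostnames are invented placeholders, and you'd point it at your real (FeedBurner-processed) feed and your real blog host.

```python
import xml.etree.ElementTree as ET
from urllib.parse import urlparse

# Compare the host of every item <link> against your blog's host.
expected_host = "www.example.com"  # placeholder: your blog's hostname
feed_xml = """<rss version="2.0"><channel>
  <item><title>Good post</title><link>http://www.example.com/good</link></item>
  <item><title>Hijacked post</title><link>http://spam.example.net/casino</link></item>
</channel></rss>"""

suspect = [
    (item.findtext("title"), item.findtext("link"))
    for item in ET.fromstring(feed_xml).iter("item")
    if urlparse(item.findtext("link")).netloc != expected_host
]
print(suspect)
```

Any entry this flags is an item whose link resolves off-site, which is exactly the symptom described in the original question.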