WordPress Plugin Causing Mobile Switcher 404 Errors
-
Hi All,
Has anyone seen this 404 error before? About six months ago I installed a WordPress plugin that allows users to switch between a mobile and a desktop theme, and I thought nothing of it.
Since I'm new to SEOmoz, I have only just become aware of this lovely problem. After setting up my campaign, I now have 1,600-something 404 errors due to this plugin. It looks like the plugin creates four to five links for each of my posts, and they all show up as 404 errors.
Example:
http://frogfanreport.com/football/page/36/
or
http://frogfanreport.com/football/page/36/?wpmp_switcher=desktop
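For illustration, one common mitigation (a sketch, not a verified fix, and it assumes the wpmp_switcher parameter serves no unique content) is an htaccess rule that 301s any URL carrying that parameter back to its clean equivalent:

```apache
# Sketch: 301 any request carrying a wpmp_switcher query parameter back
# to the same path with the query string stripped (the trailing "?" in
# the substitution removes it). Assumes the parameter adds no content.
RewriteEngine On
RewriteCond %{QUERY_STRING} (^|&)wpmp_switcher= [NC]
RewriteRule ^(.*)$ /$1? [R=301,L]
```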
I just noticed this morning in Google Webmaster tools that the errors are starting to show up there.
Has anyone seen this? Or know if this is a problem and what to do?
zach
-
Works perfectly, thanks. Guess we will see over time if it stops the errors from occurring.
-
We use WPTouch, which is pretty easy to use and seems to work well. It was having issues with WP Super Cache.
-
I was using WordPress Mobile Pack. It is currently deactivated, but I would like to find something that works. Any recommendations?
Also, the only cache that I know of off the top of my head is with the XML sitemap generator. I can enable that. Is that what you are talking about, or should I look somewhere else?
Thanks for responding...
z
-
Not sure what plug-in you're using, but my non-profit's website had a similar issue earlier this year where some of the mobile traffic would randomly 404 or show a non-mobile person the mobile site. We found that it was a conflict between that plug-in and another we were using to cache the site. We went ahead and got rid of the cache plug-in, which had numerous issues including slowing down our site, and that seemed to solve it.
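If you'd rather keep a caching plugin than drop it, the usual fix is to make the cache bypass requests from visitors who have toggled themes. With W3TC-style rewrite rules that is a cookie exclusion along these lines (a sketch only; wptouch_switch_toggle is WPtouch's cookie, and the other names depend on which switcher plugin you run):

```apache
# Sketch: only serve the cached copy when no theme-switcher cookie is
# present, so mobile/desktop visitors get freshly generated pages.
RewriteCond %{HTTP_COOKIE} !(wptouch_switch_toggle|wpmp_switcher) [NC]
```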
Related Questions
-
Htaccess and robots.txt and 902 error
Hi, this is my first question in here and I truly hope someone will be able to help. It's quite a detailed problem and I'd love to be able to fix it through your kind help. It concerns htaccess files, robots.txt files, and a 902 error.
In October I created a WordPress website from what was previously a non-WordPress site, which was quite dated. I built the new site on a subdomain I created on the existing site, so that the live site could remain live whilst I worked on the subdomain. The site I built on the subdomain is now live, but I am concerned about the continued existence of the old htaccess and robots.txt files, and I wonder if I should just delete the old ones to leave only the new files on the new site. I created new htaccess and robots.txt files on the new site and have left the old htaccess files where they were. Just to mention that all the old content files are still sat on the server under a folder called 'old files', so I am assuming these aren't affecting matters. I access the htaccess and robots.txt files by clicking on 'public html' via FTP.
I did a Moz crawl and was astonished to get a 902 network error saying that it wasn't possible to crawl the site, but then I was alerted by Moz later on to say that the report was ready. I see 641 crawl errors (449 medium priority | 192 high priority | zero low priority). Please see the attached image. Each of the errors seems to have status code 200, and this applies mainly to the images on each of the pages, e.g. domain.com/imagename.
The new website is built around the 907 Theme, which has page sections and parallax sections on the home page and throughout the site. To my knowledge the content and the images on the pages are not duplicated, because I have made each page as unique and original as possible. The report says 190 pages have been duplicated, so I have no clue how this can be or how to approach fixing it.
Since October, when the new site was launched, approx 50% of incoming traffic has dropped off at the home page, and that is still the case, but the site still continues to get new traffic according to Google Analytics. However, Bing, Yahoo, and Google show a low level of indexing and exposure, which may be indicative of the search engines having difficulty crawling the site. In Google Webmaster Tools, the screen reports no crawl errors.
W3TC is a WordPress caching plugin which I installed just a few days ago to speed up page loads, so I am not querying anything here about W3TC unless someone spots that it might be a problem; but, like I said, there have been problems with traffic dropping off when visitors arrive on the home page. The Yoast SEO plugin is also being used. The pages on the subdomain are pointing to the live domain, as has been explained to me by the person who did the site migration.
I'd like the site to be free of pages and files that shouldn't be there, and I feel that the site needs a clean-up, as well as knowing whether the robots.txt and htaccess files that are included in the old site should actually be there or should be deleted. OK, here goes with the information in the files. Site 1) refers to the current website, site 2) to the subdomain, and site 3) to the folder that contains all the old files from the old non-WordPress file structure.
**************** 1) htaccess on the current site: *********************
# BEGIN W3TC Browser Cache
<IfModule mod_deflate.c>
  <IfModule mod_headers.c>
    Header append Vary User-Agent env=!dont-vary
  </IfModule>
  <IfModule mod_filter.c>
    AddOutputFilterByType DEFLATE text/css text/x-component application/x-javascript application/javascript text/javascript text/x-js text/html text/richtext image/svg+xml text/plain text/xsd text/xsl text/xml image/x-icon application/json
    <IfModule mod_mime.c>
      # DEFLATE by extension
      AddOutputFilter DEFLATE js css htm html xml
    </IfModule>
  </IfModule>
</IfModule>
# END W3TC Browser Cache
# BEGIN W3TC CDN
<FilesMatch "\.(ttf|ttc|otf|eot|woff|font.css)$">
  <IfModule mod_headers.c>
    Header set Access-Control-Allow-Origin "*"
  </IfModule>
</FilesMatch>
# END W3TC CDN
# BEGIN W3TC Page Cache core
<IfModule mod_rewrite.c>
  RewriteEngine On
  RewriteBase /
  RewriteCond %{HTTP:Accept-Encoding} gzip
  RewriteRule .* - [E=W3TC_ENC:_gzip]
  RewriteCond %{HTTP_COOKIE} w3tc_preview [NC]
  RewriteRule .* - [E=W3TC_PREVIEW:_preview]
  RewriteCond %{REQUEST_METHOD} !=POST
  RewriteCond %{QUERY_STRING} =""
  RewriteCond %{REQUEST_URI} \/$
  RewriteCond %{HTTP_COOKIE} !(comment_author|wp-postpass|w3tc_logged_out|wordpress_logged_in|wptouch_switch_toggle) [NC]
  RewriteCond "%{DOCUMENT_ROOT}/wp-content/cache/page_enhanced/%{HTTP_HOST}/%{REQUEST_URI}/_index%{ENV:W3TC_PREVIEW}.html%{ENV:W3TC_ENC}" -f
  RewriteRule .* "/wp-content/cache/page_enhanced/%{HTTP_HOST}/%{REQUEST_URI}/_index%{ENV:W3TC_PREVIEW}.html%{ENV:W3TC_ENC}" [L]
</IfModule>
# END W3TC Page Cache core
# BEGIN WordPress
<IfModule mod_rewrite.c>
  RewriteEngine On
  RewriteBase /
  RewriteRule ^index\.php$ - [L]
  RewriteCond %{REQUEST_FILENAME} !-f
  RewriteCond %{REQUEST_FILENAME} !-d
  RewriteRule . /index.php [L]
</IfModule>
# END WordPress
....(((I have 7 301 redirects in place for old page URLs to link to new page URLs)))....
# Force non-www:
RewriteEngine on
RewriteCond %{HTTP_HOST} ^www.domain.co.uk [NC]
RewriteRule ^(.*)$ http://domain.co.uk/$1 [L,R=301]
**************** 1) robots.txt on the current site: *********************
User-agent: *
Disallow:
Sitemap: http://domain.co.uk/sitemap_index.xml
**************** 2) htaccess in the subdomain folder: *********************
# Switch rewrite engine off in case this was installed under HostPay.
RewriteEngine Off
SetEnv DEFAULT_PHP_VERSION 53
DirectoryIndex index.cgi index.php
# BEGIN WordPress
<IfModule mod_rewrite.c>
  RewriteEngine On
  RewriteBase /WPnewsiteDee/
  RewriteRule ^index\.php$ - [L]
  RewriteCond %{REQUEST_FILENAME} !-f
  RewriteCond %{REQUEST_FILENAME} !-d
  RewriteRule . /subdomain/index.php [L]
</IfModule>
# END WordPress
**************** 2) robots.txt in the subdomain folder: *********************
(this robots.txt file is empty)
**************** 3) htaccess in the Old Site folder: *********************
Deny from all
**************** 3) robots.txt in the Old Site folder: *********************
User-agent: *
Disallow: /
I have tried to be thorough, so please excuse the length of my message. I really hope one of you great people in the Moz community can help me with a solution. I have SEO knowledge and I love SEO, but I have not come across this before and I really don't know where to start with this one. Best regards to you all, and thank you for reading this.
(Attached image: moz-site-crawl-report-image_zpsirfaelgm.jpg)
Moz Pro | SEOguy1
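One clean-up step worth sketching, since the pages built on the subdomain are now meant to live on the main domain: once the migration is done, the build subdomain can 301 every request to the equivalent live URL so any stray indexed subdomain pages consolidate. Host names below are placeholders (the real domains are masked in the question):

```apache
# Sketch: 301 all requests that still hit the build subdomain over to
# the same path on the live domain. "sub" and domain.co.uk are
# hypothetical stand-ins for the masked names above.
RewriteEngine On
RewriteCond %{HTTP_HOST} ^sub\.domain\.co\.uk$ [NC]
RewriteRule ^(.*)$ http://domain.co.uk/$1 [R=301,L]
```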
Canonical URLs all show trailing slash on main site pages - using Yoast SEO for WordPress - how to correct
We are using Yoast for a number of our sites, with the naked domain as the canonical. I have noticed in the header tags that all our sites show canonical URLs with a trailing slash. Example: http://foxspizzajc.com — when I look at the source code, the canonical shows as http://foxspizzajc.com/. Of course, it is much more likely that sites linking to us will not use the trailing slash, so preferably we do not want that to be the canonical, among other reasons. Does this need to be fixed so that the trailing slash is removed? I cannot see how to do this in Yoast SEO or in the WordPress permalink settings. Sorry for my ignorance, and thanks for any help.
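Two notes on this. For the bare domain, http://foxspizzajc.com and http://foxspizzajc.com/ are the same resource — a browser always requests the path "/" for a bare host — so a root canonical with a slash costs nothing. For inner pages, slash-less canonicals normally follow from a WordPress permalink structure without a trailing slash; if you also want the server to enforce it, a sketch (only safe if permalinks are configured without trailing slashes):

```apache
# Sketch: 301 away the trailing slash on URLs that are not real
# directories. Only apply this if your permalink structure omits the
# trailing slash, or you will create a redirect loop.
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.+)/$ /$1 [R=301,L]
```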
Moz Pro | Adam_RushHour_Marketing1
How to Avoid Duplicate Page Content errors when using WordPress Categories & Tags?
I get a lot of duplicate page errors in my crawl diagnostics reports from 'categories' and 'tags' on my WordPress sites. The post lives at one link, and then the content is 'duplicated' on the 'category' or 'tag' archive the post is added to. Should I exclude the tags and categories from my sitemap, or are these issues not that important? Thanks for your help. Stacey
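Two usual approaches here: set the taxonomy archives to noindex (Yoast has a setting for tag/category archives), or, more bluntly, block crawling of those paths in robots.txt. A sketch of the blunt option, assuming the default /tag/ and /category/ permalink bases (note it blocks crawling, not indexing, so the noindex route is usually the better tool):

```
# Sketch: keep crawlers out of tag and category archives.
# Assumes the default WordPress taxonomy URL bases.
User-agent: *
Disallow: /tag/
Disallow: /category/
```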
Moz Pro | skehoe1
The crawl report shows a lot of 404 errors
They are inactive products, and I can't find any active links to these product pages. How can I tell where the crawler found the links?
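If you have server log access, one way is to grep the access log for the dead URLs and print the Referer field, which shows where each 404'd request came from. A self-contained sketch on a made-up sample log line (the paths and entries are hypothetical; point awk at your real access log, e.g. /var/log/apache2/access.log):

```shell
# Build a tiny sample log so the pipeline below is runnable as-is.
cat > /tmp/sample_access.log <<'EOF'
1.2.3.4 - - [10/Jan/2012:10:00:00 +0000] "GET /old-product HTTP/1.1" 404 512 "http://example.com/catalog" "Mozilla/5.0"
1.2.3.4 - - [10/Jan/2012:10:00:01 +0000] "GET /home HTTP/1.1" 200 1024 "-" "Mozilla/5.0"
EOF
# Split on double quotes; for lines with a 404 status, the 4th quoted
# field is the referrer that linked to the dead page.
awk -F'"' '$0 ~ / 404 / {print $4}' /tmp/sample_access.log
# -> http://example.com/catalog
```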
Moz Pro | shopwcs0
Error on SEOMoz When Trying to Track Website. Please Advise
Hi, I'm trying to start a new campaign for a root domain, but I'm getting the "Roger found an error" message and am not sure what to make of it. Error #1: "You've decided to set up a root domain campaign, but entered the subdomain path: www.siteurl.com. Don't worry, we'll switch that for you and crawl everything on the subdomain: www.siteurl.com. If you meant to set this up to only crawl pages in the root domain, click 'Go back and Change' and enter a root domain URL in step 1." Error #2: "Oops! The root domain siteurl.com redirects to a domain that is not within the specified root domain (www.siteurl.com). This will cause us to stop crawling as the first discovered page falls outside of the root domain you've defined. Please make sure you enter a root domain that resolves to a page that is under the root domain." What does this mean? Is there something I am doing wrong? The first error was returned when I entered www.siteurl.com; the second when I entered just siteurl.com. I didn't include the exact URL for privacy reasons, but if you really do want to help me out, PM me and I can give you the real URL. Thanks in advance!
Moz Pro | locallyrank0
Duplicate Content being caused by home page?
Hello everyone, I am new to SEOmoz and SEO in general and I have a quick question. When running an SEO web crawler report on my URL, I noticed in the report that my home page (also known as my index page) was listed twice. Here is what the report was showing: www.example.com/ and www.example.com/index.php. So are these two different URLs? If so, is this considered duplicate content, and should I block crawler access to index.php? Thanks in advance for the help!
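Yes, those resolve as two distinct URLs even though they serve the same page, so a crawler can count them as duplicates. Rather than blocking index.php, the usual fix is a 301 that consolidates it into the root. A sketch for Apache (adjust if your server differs):

```apache
# Sketch: 301 direct requests for /index.php to / so only one URL
# serves the home page. THE_REQUEST matches the raw request line, so
# internal WordPress rewrites to index.php are unaffected.
RewriteEngine On
RewriteCond %{THE_REQUEST} ^[A-Z]+\ /index\.php[?\s] [NC]
RewriteRule ^index\.php$ / [R=301,L]
```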
Moz Pro | threebiz0
Crawl went from a few errors to thousands when I added Blog
I am new here. I recently got the errors from the SEOmoz crawl of my site down to just a handful from a couple of hundred. So I took the leap and moved my blog to www.mysitename.com/blog (which I see recommended here), and now my errors are in the thousands. My blog, which was previously at a separate URL, has pages going back to 2007. I am not sure if it is appropriate to post my site URL in a question here. One error that really stands out is this. Description: "Using rel=canonical suggests to search engines which URL should be seen as canonical." On my root page I have: <link rel="canonical" href="http://www.mysitename.com" />. Thanks for any help...
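On the canonical point, each blog post should emit its own self-referencing canonical rather than every page pointing at the root. A sketch of what a single post's head section would carry (the URL is hypothetical):

```html
<!-- Sketch: a self-referencing canonical on one blog post.
     The path is a made-up example; each page gets its own URL here. -->
<link rel="canonical" href="http://www.mysitename.com/blog/sample-post/" />
```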
Moz Pro | CMCD0
Fixing errors from SEOmoz diagnostic survey
I just ran a report from the SEOmoz diagnostic survey and was surprised to see errors. How do I fix these errors? Thanks in advance for your help; I have been pleasantly surprised by the thoughtfulness and responsiveness of this community. Errors:
5XX (server error)
Overly Dynamic URL
302 Temporary Redirect
Too many on-page links (how many is ideal?)
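Most of these have standard fixes: 5XXs are server-side and need log investigation, overly dynamic URLs usually call for rewriting or canonical tags, and a 302 can generally be promoted to a 301 once the move is permanent. A sketch of that last one with hypothetical paths:

```apache
# Sketch: replace a temporary (302) redirect with a permanent (301)
# so link equity passes to the destination. Paths are made up.
Redirect 301 /old-page/ http://www.example.com/new-page/
```

On the link count, the old rule of thumb for on-page links was to stay under roughly 100 per page, though that is a guideline rather than a hard limit.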
Moz Pro | TheVolkinator0