Experience with 307 HTTP status code
-
Hello,
is there anybody who has experience with the 307 HTTP status code?
We would like to use the 307 status code (temporary redirect) to temporarily disable some of our shop categories where all products are out of stock.
Generally, the products are back in stock a few days or hours later, and the category page comes back as well. Is it a good idea to use a 307, since the link should only be disabled temporarily, or should we use a 301 instead?
Best regards
Steffen
-
Hello,
It is definitely a bad idea, simply because Google cannot detect that your page is down due to a supply shortage; it just sees that the page is sometimes present and sometimes gone. Now ask yourself: would you refer a friend to somebody who sometimes does the work he is asked to do and sometimes quits in the middle? You would rather refer someone who is always there and can be trusted. The same goes for web pages: no matter how good your page is, if it is unavailable for shorter or longer periods on a regular basis, it is not trustworthy. You should instead keep the category page live and mark its products as out of stock. Not to mention that this is much easier to handle in your CMS.
Related Questions
-
Does link position matter in the content/html code
My question is: if I have several links going to different landing pages, will the one at the top of the content pass more value than the ones at the bottom? (Assume there is no more than one instance of the same link in the content.) The ultimate question is whether link position in the content/HTML code makes a difference in how much value it passes. This question comes in response to this Whiteboard Friday: https://www.youtube.com/watch?v=xAH762AqUTU Rand talks about how, if there are two links going to the same URL from the same content page, Google will only inherit the anchor-text value from the first link on the page and not from both of them, meaning that Google will treat the second link as if it doesn't exist. There are lots of resources showing this was true, but there isn't much content newer than 2010 saying it still is, and we all know that things have changed a lot since then. Does that make sense?
Intermediate & Advanced SEO | | 97th_Floor0 -
Description tag in code is different from what is shown in SERPS...
Hi there: We have a client whose website we built in WP, using Yoast Pro as our SEO plugin. I was reading some reports (actually coming out of SEMrush, but we use Moz as well) and I am getting really varying results in the description area of the SERPs. Even though I see the copy we wrote in Yoast in the description tag code, the SERP is showing an excerpt from the copywriting on the site. What's even weirder is that SEMrush is pulling an entirely DIFFERENT description. I'm obviously missing out on the finer points of description tags, as Google clearly does not always choose to feature what is actually written in the description tag itself. Can someone explain to me what might be going on here? Thanks in advance,
Intermediate & Advanced SEO | | Daaveey1 -
Domain Level Redirects - HTTP and HTTPS
About 2 years ago (well before I started with the company), we did an http=>https migration. It was not done correctly. The http=>https redirect was never inserted into the .htaccess file. In essence, we have 2 websites. According to Google search console, we have 19,000 HTTP URLs indexed and 9,500 HTTPS URLs indexed. I've done a larger scale http=>https migration (60,000 SKUs), and our rankings dropped significantly for 6-8 weeks. We did this the right way, using sitemaps, and http and https GSC properties. Google came out recently and said that this type of rankings drop is normal for large sites. I need to set the appropriate expectations for management. Questions: How badly is the domain split affecting our rankings, if at all? Our rankings aren't bad, but I believe we are underperforming our backlink profile. Can we expect a net rankings gain when the smoke clears? There are a number of other technical SEO issues going on as well. How badly will our rankings drop (temporarily) and for how long when we add the redirect to the .htaccess file? Is there a way to mitigate the rankings impact? For example, only submitting partial sitemaps to our GSC http property? Has anyone gone through this before?
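For reference, the rule that was never added would typically look something like the following in .htaccess (a sketch assuming Apache with mod_rewrite enabled; the exact placement relative to any existing rewrite rules depends on the server setup):

```apache
# Force all HTTP requests to HTTPS with a permanent (301) redirect.
# Place this before other rewrite blocks so it runs first.
RewriteEngine On
RewriteCond %{HTTPS} !=on
RewriteRule ^(.*)$ https://%{HTTP_HOST}/$1 [L,R=301]
```

Once this is in place, each indexed HTTP URL should resolve to its HTTPS counterpart with a single 301 hop.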
Intermediate & Advanced SEO | | Satans_Apprentice0 -
Help with force redirect HTTP to HTTPS
Hi, I'm unsure of where I should be putting the following code for one of my WordPress websites so that it redirects all HTTP requests to HTTPS:

RewriteEngine On
RewriteCond %{HTTPS} !=on
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

This is my current htaccess file: *missing
Intermediate & Advanced SEO | | Easigrass0 -
Domain Authority... http://www.domain.com/ vs. http://domain.com vs. http://domain.com/
Hey Guys, Looking at Page Authority for my site and ranking the URLs in descending order, I see these three:

http://www.domain.com/ | Authority 62
http://domain.com | Authority 52
http://domain.com/ | Authority 52

Since the first one listed has the highest Authority, should I be using 301 redirects on the lower-ranking variations (which I understand how to do), or should I be using rel="canonical" (which I don't really understand)? Also, if this is a problem that I should address, should we see a significant boost once it's fixed? Thanks ahead of time for anyone who can help a lost sailor who doesn't know how to sail and probably shouldn't have left shore in the first place. Cheers ZP!
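For the host consolidation described above, a 301 in .htaccess is the usual approach. As a sketch (assuming Apache with mod_rewrite, and that the www version is the preferred host), the rule might look like:

```apache
# Send bare-domain requests to the www host with a permanent redirect,
# consolidating link equity on one canonical hostname.
RewriteEngine On
RewriteCond %{HTTP_HOST} ^domain\.com$ [NC]
RewriteRule ^(.*)$ http://www.domain.com/$1 [L,R=301]
```

By contrast, rel="canonical" is a hint placed in the page's head that tells search engines which URL should receive credit without redirecting visitors; when the host variants serve identical content, the 301 is generally the cleaner fix.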
Intermediate & Advanced SEO | | Mr_Snack0 -
Faulty title, meta description and version (https instead of http) on homepage
Hi there, I am working on a client site (http://minibusshuttle.com/) whose homepage is not indexed correctly by Google. In detail, the title and meta description are taken from another website (http://planet55.co.uk/). In addition, the homepage is indexed as https instead of http. The rest of the URIs are indexed correctly (titles, meta descriptions, http, etc.). planet55.co.uk used to be hosted on the same server as minibusshuttle.com, and an SSL certificate was activated for that domain. I have tried several times to manually "fetch as Google" the homepage, to no avail. The rest of the pages are indexed/refreshed normally, and Google responds very fast when I perform any kind of change there. Any suggestions would be highly appreciated. Kind regards, George
Intermediate & Advanced SEO | | gpapatheodorou0 -
Importance of Unique Content Location in Source Code
How much does Google value the placement of unique content in the source code versus where it is visually displayed? I have a case where my unique content displays high on the page for the user, but in the source code the unique, quality content sits below duplicate-type content that appears across many other domains (think e-commerce category thumbs on the left side of the screen and the unique material in the 80% of the screen to the right). I have the impression I am at a disadvantage because these pages have the unique/quality content lower in the source code. Any thoughts on this?
Intermediate & Advanced SEO | | khi50 -
Use "If-Modified-Since HTTP header"
I'm working on an online Brazilian marketplace (similar to Etsy in the US) and we have a huge number of pages. I've been studying this a lot, and I was wondering about using If-Modified-Since so Googlebot could check whether our pages have been updated; if a page has not changed, there is no reason for Googlebot to fetch a new copy, since it already has a current one in its index. The mechanism uses a 304 status code: "if a search engine crawler sees a web page status code of 304, it knows that web page has not been updated and does not need to be accessed again." Someone quoted before me: "Since Google spiders billions of pages, there is no real need to use their resources or mine to look at a webpage that has not changed. For very large websites, the crawling process of search engine spiders can consume lots of bandwidth and result in extra cost," and Googlebot could spend more time on pages that actually changed or on new stuff! However, I've checked Amazon, Rakuten, Etsy, and a few other competitors, and none of them use it! I'd love to know what you folks think about it 🙂
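To illustrate the mechanism being discussed, here is a minimal sketch of server-side conditional-GET handling in Python (hypothetical handler code, not taken from any of the sites mentioned): the server compares the crawler's If-Modified-Since header with the page's last-modified time and answers 304 with an empty body when nothing has changed.

```python
from datetime import datetime, timezone
from email.utils import format_datetime, parsedate_to_datetime


def conditional_get(page_last_modified, if_modified_since=None):
    """Return (status, headers, body) for a conditional GET.

    page_last_modified: aware UTC datetime when the page content last changed.
    if_modified_since: raw If-Modified-Since header value from the client, or None.
    """
    last_mod_header = format_datetime(page_last_modified, usegmt=True)
    if if_modified_since:
        try:
            client_time = parsedate_to_datetime(if_modified_since)
        except (TypeError, ValueError):
            client_time = None  # malformed header: fall through to a full 200
        # HTTP dates have whole-second resolution; drop sub-second drift.
        if client_time and page_last_modified.replace(microsecond=0) <= client_time:
            # Unchanged since the crawler's copy: no body, just a 304.
            return 304, {"Last-Modified": last_mod_header}, b""
    # Page is new or has changed: send the full response.
    return 200, {"Last-Modified": last_mod_header}, b"<html>...full page...</html>"


# Example: a crawler revisits a page that has not changed since its last fetch.
updated = datetime(2024, 1, 15, 12, 0, 0, tzinfo=timezone.utc)
status, headers, body = conditional_get(updated, "Mon, 15 Jan 2024 12:00:00 GMT")
```

In the example, the crawler's cached timestamp matches the page's last-modified time, so the handler returns 304 and skips the body entirely, which is exactly the bandwidth saving described above.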
Intermediate & Advanced SEO | | SeoMartin10