Posts made by effectdigital
-
RE: I am temporarily moving a site to a new domain. Which redirect is best?
If it's 2-3 months the 302s might decay. Personally, I'd probably 301 over (and then 301 back again when the site returns), especially if it's nearer the full 12 weeks / 3 months. Wait to see what others say though
-
RE: 304 "If Modified Header" Triggers Error in Google Ads?
This means that Google believe there is an error with the landing pages which you have entered for your ads. 9 times out of 10, this comes down to someone putting the wrong URL into the Google Ads back-end and then having issues. This is especially the case for 3XX errors which usually indicate that redirects are at play. In your case, the story seems to be a bit different
Google is telling you that when they visit your landing URLs for your ads, instead of the ad connecting straight to the page, Google gets redirected:
"The HTTP
**304 Not Modified**
client redirection response code indicates that there is no need to retransmit the requested resources. It is an implicit redirection to a cached resource."So basically, Google cannot load your landing page because they are being redirected to a cache of the landing page instead of the live URL. This is unlikely to be down to someone entering the URL incorrectly in Google Ads (as that would be more likely to spawn a 301, 302 or 307). If you keep serving Google a cached version of your landing page, they can't know what's underneath of that redirect. As such, they can't trust the contents of the web-page
Some unscrupulous marketers might use a 304 to get Google to evaluate a fake landing page, then send actual users to a completely different landing page (other than the one which Google evaluated). My guess is that this is a vulnerability which Google are unwilling to tolerate on their ads platform
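If you want to see the 304 mechanism for yourself, here's a minimal sketch (using Python's requests library against a hypothetical URL) of the conditional request exchange that produces a 304:

```python
# Minimal sketch of a conditional request; the URL is a placeholder
import requests

url = "https://example.com/landing-page/"  # hypothetical landing page

first = requests.get(url)
last_modified = first.headers.get("Last-Modified")

if last_modified:
    # Ask the server to only resend the page if it changed since then;
    # a server honouring conditional requests answers 304 with no body
    second = requests.get(url, headers={"If-Modified-Since": last_modified})
    print(second.status_code)  # 304 if unchanged, 200 otherwise
else:
    print("No Last-Modified header; this server won't produce a 304 this way")
```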
-
RE: Can Yoast plugin change back old title automatically?
Does it change back to the old title just on Google, or on your actual CMS?
-
RE: New websites issues- Duplicate urls and many title tags. Is it fine for SEO?
For the first issue, you don't have multiple Page Titles. Only the text inside the <title> tag is your Page Title. On the other lines you have an OG title (which is for OpenGraph / social and messenger sharing). You also have a Meta title, which is just another form of title. It should be fine to have these three essentially the same, though you'd probably want the ability to customise each of them if desired (sometimes the same link text that compels people to click on Google isn't so effective on Facebook or WhatsApp, so at the least you'd want to be able to specify a custom OG title vs your regular <title> tag)
The second issue could be a real problem, especially if there are internal links which point to paginated URLs which don't exist, which also don't 404. If you can only create these infinite pagination URLs via manual browsing, it shouldn't be too big of a deal. If the internal link structure 'creates' them, that's another kettle of fish (and I'd take immediate action)
-
RE: I'm not ranking locally.....
What keyword are you searching to try and find your site? I am assuming your site is this one. If we can recreate the search and see the issue, we might be able to scan for stuff that the competitors are doing which you aren't doing (e.g: more local links / citations, schema, GMB usage)
-
RE: Disallow doesnt disallow anything!?
In this case you would do well to read this documentation:
https://mza.seotoolninja.com/help/moz-procedures/crawlers/rogerbot
This forum sees a lot of general SEO queries and is a bit of a broader community, not just for Moz products (though we do see lots of Moz product questions as well!)
Can you just deploy Meta no-index through the HTTP header instead of through the HTML?
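For clarity on what that looks like: instead of a Meta tag in the HTML, the server sends an `X-Robots-Tag: noindex` response header. A quick sketch to check whether it's in place (assuming Python's requests library; the URL is made up):

```python
# Check for a header-level no-index directive; URL is a placeholder
import requests

resp = requests.head("https://example.com/some-page/", allow_redirects=True)
print(resp.headers.get("X-Robots-Tag"))  # e.g. "noindex" - or None if absent
```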
-
RE: Duplicate content in Shopify - subsequent pages in collections
The advice is no longer current. If you want to see what Google used to say about rel=next/prev, you can read that on this archived URL: https://web.archive.org/web/20190217083902/https://support.google.com/webmasters/answer/1663744?hl=en
As you say, Google are no longer using rel=prev/next as an indexation signal. Don't take that to mean that Google are now suddenly blind to paginated content. It probably just means that their base-crawler is now advanced enough not to require in-code prompting
I still don't think that de-indexing all your paginated content with canonical tags is a good idea. What if, for some reason, the paginated version of a parent URL is more useful to end-users? Should you disallow Google from ranking that content appropriately by using canonical tags? (Remember: a page whose canonical tag points elsewhere cites itself as non-canonical, making it unlikely that it could be indexed)
Google may not find the parent URL as useful as the paginated variant which they might otherwise rank, so using canonical tags in this way could potentially reduce your number of rankings or ranking URLs. The effect is likely to be very slight, but personally I would not recommend de-indexation of paginated content via canonical tags (unless you are using some really weird architecture that you don't believe Google would recognise as pagination). The parameter-based syntax of "?p=" or "&p=" is widely adopted; Google should be smart enough to think around this
If Search Console starts warning you of content duplication, maybe consider canonical deployment. Until such a time, it's not really worth it
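If you do want to audit what your paginated URLs are currently doing, here's a rough sketch (Python's requests plus a naive regex; the category URL is a placeholder) that reports where each page's canonical points:

```python
# Report the canonical target of each paginated URL; all URLs are made up
import re
import requests

base = "https://example.com/collection/"  # placeholder category URL
# Naive regex: assumes rel appears before href in simple, well-formed markup
canonical_re = re.compile(
    r'<link[^>]*rel=["\']canonical["\'][^>]*href=["\']([^"\']+)["\']',
    re.IGNORECASE)

for p in range(1, 6):
    html = requests.get(f"{base}?p={p}").text
    match = canonical_re.search(html)
    print(f"?p={p} ->", match.group(1) if match else "no canonical tag found")
```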
-
RE: Is there a way to get a list of all pages of your website that are indexed in Google?
And if people don't have Search Console access, they can always try queries like:
https://www.google.com/search?q=site%3Abbc.co.uk
... however, due to the difficulty in exporting all the listed URLs, and due to some anti-scraping measures by Google, this data tends to be inferior to that supplied by Search Console. Always use Google's official tools first
-
RE: Is there any set benefit in using a URL tracking engine on a domain for passing link juice?
For SEO it's always better to link directly, rather than linking to a page which then redirects. Redirects only pass large amounts of SEO authority to 'similar' pages (there are checks and balances on this). You could experiment with something like this, but I doubt it will result in much uplift really
-
RE: URLs too long, but run an eCommerce site
So in July (2019) John Mu from Google stated that URLs are generally ok up to 1,000 characters:
Google 'can' crawl and index URLs over 1,000 characters long (up to about 2k characters) but best practice seems to be up to 1,000 characters
Due to this, I personally don't agree with Moz's evaluation of when a URL is getting too long. Your example URL is nowhere close to 1,000 characters long. Where Moz and Google disagree, I tend to side with Google's info
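If you want to sanity-check your own URLs against those numbers, a trivial sketch (the URL list is a placeholder; in practice you'd feed in a crawl export):

```python
# Flag URLs against the ~1,000 / ~2,000 character thresholds discussed above
urls = [
    "https://mysite.com/downloads/category/premium-downloads/sub-category/",
]  # placeholder list; load your real crawl export here

for url in urls:
    if len(url) > 2000:
        print(f"{len(url):>5} chars  LIKELY UNCRAWLABLE  {url}")
    elif len(url) > 1000:
        print(f"{len(url):>5} chars  over best practice  {url}")
    else:
        print(f"{len(url):>5} chars  fine                {url}")
```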
That being said, your URL has redundant layers. Why even have "/category/" in the URL? Just go like this:
mysite.com/downloads/premium-downloads/sub-category/
People aren't so stupid that they need a fake URL layer called "/category/" to know that the following URL layer 'is' a category. IMO that's redundant architecture which is getting in your way for no reason
If you don't perform your redirects properly and you change the architecture of the site, you absolutely could see a negative impact on your rankings. Unless you're confident in terms of crawling your whole site, performing very granular redirects with high accuracy and missing nothing - I'd just leave it as it is
-
RE: Display Networks Less Expensive Than the Google Display Network/GDN
I'm not a massive display network person but AFAIK, Google's is one of the cheapest
-
RE: Redirects and site map isn't showing
That attachment shows that non-HTTPS and non-WWW URLs are being 301 redirected to the HTTPS-WWW version(s). That's what you want, right? From your screenshot it seems like it is working how you want
Just so you know, when you put one architecture into Screaming Frog (e.g: you put in HTTP with no WWW), it doesn't limit the crawl to that specific architecture. If the crawler is redirected from non-WWW non-HTTPS to HTTPS with WWW, then the crawler will carry on crawling THAT version of the site
If you wanted to crawl all of the old HTTP-non-WWW URLs, you would need to list all of them for SF in list mode and alter the crawler's settings to 'contain' it to just the list of URLs which you entered. I'm pretty sure you would then see that most of the HTTP-non-WWW URLs are properly redirecting as they should be
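As a lightweight sanity check along the same lines, a sketch like this (Python's requests; the old URLs are placeholders) will show you where each legacy URL actually ends up and via which status codes:

```python
# Trace redirect chains for a list of legacy HTTP-non-WWW URLs (placeholders)
import requests

old_urls = [
    "http://example.com/",
    "http://example.com/about/",
]

for url in old_urls:
    resp = requests.get(url, allow_redirects=True)
    hops = [r.status_code for r in resp.history]  # e.g. [301] or [301, 301]
    print(url, "->", resp.url, hops if hops else "(no redirect)")
```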
As for the XML thing, it's very common, especially for people using Yoast. I think Yoast is really good by the way, but for some reason, on some hosting environments the XML sitemap starts blank-rendering. Most of the time hosting companies say they can't fix it and it's Yoast's fault, but I don't really believe that. If a file (e.g: sitemap.xml) cannot be created, it's more likely that someone went in via FTP and changed some file read/write permissions, and due to it being more locked down, the XML cannot be created anymore. If you were hacked by malware, whoever locked your site back down afterwards was likely over-zealous and it's causing problems for your XML feed(s)
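A quick way to confirm whether the sitemap is genuinely blank-rendering versus malformed (a sketch using Python's requests and the standard library XML parser; the URL is a placeholder):

```python
# Distinguish a blank sitemap from a malformed one; URL is a placeholder
import requests
import xml.etree.ElementTree as ET

resp = requests.get("https://example.com/sitemap.xml")

if not resp.text.strip():
    print("Sitemap is rendering blank (likely the file isn't being written)")
else:
    try:
        root = ET.fromstring(resp.content)
        print("Sitemap parses OK; top-level entries:", len(root))
    except ET.ParseError as err:
        print("Sitemap exists but is malformed XML:", err)
```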
-
RE: What are SEO best practices for Java Language Redirections?
Quite an old style of architecture; it's a shame it cannot be changed. Just so others understand a bit more, what you refer to as a 'suffix' is actually called parameters. In a URL, anything following "?" is parameters
If the language on the root is dynamic (it changes) then it's very difficult for you to hreflang to it effectively, as it will conflict with the parameter URLs (which are language based) AND, additionally, you won't know for certain what language to hreflang to. That also makes canonicals to the root quite tricky IMO
I think what you are already doing, is the best of a bad situation. At least the parameter-based URLs set a designated language which you can rely on to be the same
If you look at this official URL from Google: https://support.google.com/webmasters/answer/182192?hl=en
Scroll down until you find the heading "Using locale-specific URLs" and look at the table underneath of that heading
Parameter-based geo-targeting is actually the only one of multiple architectures which Google put in red text and explicitly warn people away from. Since the site you are looking at has crossed that red line, you may need to manage expectations about results. If they're going to pick the worst possible format and stick with it, without asking you as a consultant what is best, they've kind of shot themselves in the foot there
P.S: Regarding 'actual' redirects, not canonicals. For sites that have a proper sub-folder structure, usually you redirect users based on their location, but allow them to use a selector to 'escape' the redirections (which can sometimes go wrong). You also usually exempt Google's user-agent ('googlebot') from regional redirects, as they can only crawl from one location at once and otherwise they think areas of your site keep going up and down due to all your redirects. But with your structure, I'm not sure I would even touch redirects. It's in enough of a state as it is without rolling those dice
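To illustrate the general pattern being described (not something I'd deploy on this particular site), here's a hedged sketch in Flask; the route, the escape parameter name and the geo-lookup are all invented for illustration:

```python
# Sketch of geo-redirects with a Googlebot exemption and a user 'escape' flag
from flask import Flask, redirect, request

app = Flask(__name__)

def country_for(ip_address):
    """Placeholder geo-IP lookup; swap in a real database or service."""
    return "GB"

@app.route("/")
def home():
    user_agent = request.headers.get("User-Agent", "").lower()
    escaped = request.args.get("noredirect") == "1"  # invented flag name

    # Never geo-redirect Googlebot (it crawls from one region at a time),
    # and let users who opted out stay where they are
    if "googlebot" in user_agent or escaped:
        return "Default international page"

    if country_for(request.remote_addr) == "FR":
        return redirect("/fr/", code=302)

    return "Default international page"
```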
-
RE: Bogus Search Results
These invasive URL hacks can be very tricky to get under control. Sometimes if you're lucky, it's just a matter of erasing the pages and stuff. In other circumstances scripts are injected which recreate the bad URLs (and links to them) when you take them down. You certainly want to be on the lookout for unknown scripts on your site, or links to external scripts which didn't exist before
-
RE: Which product URL to include in Sitemaps?
It could happen, but it's unlikely to happen. If Google hasn't discovered that URL at all and then you make it easier for Google to discover, they may then apply internal authority to the page through your internal link structure. That could cause the page to rank higher. But the SEO authority wouldn't come 'from' the XML sitemap, only the 'discovery' of the URL would be impacted by your XML feed. If Google has already known of the page for a while (as is probably the case), then XML tweaks likely won't make any difference
-
RE: Which product URL to include in Sitemaps?
Nope, Sitemap.XML is just for indexation. It doesn't ever boost any page's authority in any way
-
RE: Which product URL to include in Sitemaps?
As far as most of us know in SEO, canonical tags don't consolidate SEO authority (like 301 redirects do). They merely alleviate duplicate content risk. Usually having a divided structure is the issue, not the usage of canonical tags (or not)
-
RE: Should I disable the indexing of tags in Wordpress?
Personally I usually do this as well as robots.txt blocking them to save on crawl allowance, but you should no-index first: if Google is blocked from crawling (robots.txt) then how will they find the no-index tags? So it needs to be staggered
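As a pre-flight for that staggered approach, something like this sketch (Python's requests plus a naive regex; the tag URL is a placeholder) confirms a URL is already serving no-index before you block it in robots.txt:

```python
# Confirm tag URLs carry a Meta no-index before robots.txt-blocking them
import re
import requests

noindex_re = re.compile(
    r'<meta[^>]*name=["\']robots["\'][^>]*content=["\'][^"\']*noindex',
    re.IGNORECASE)  # naive check; assumes name appears before content

tag_urls = ["https://example.com/tag/blue-widgets/"]  # placeholder list

for url in tag_urls:
    html = requests.get(url).text
    status = "noindex present" if noindex_re.search(html) else "NOT noindexed"
    print(url, "-", status)
```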
I find that the tag URLs result in quite messy SERPs so I prefer to de-index those and then really focus on adding value to 'actual' category URLs. Because categories have a defined structure they're better for SEO (IMO)
Categories are usually good for SEO if you tune and tweak them up (and if their architecture is linear) but tags are very messy
-
RE: Multiple CMS on one website / domain & SEO
When you blend different architectures you WILL get contextual tag collisions (e.g: canonical tags don't work the same across both architectures, or Meta robots tags, or hreflangs). What you are suggesting is probably the best thing you can do; I stand behind WordPress quite firmly, even when it's used to augment another CMS (instead of replacing it). That being said, if you believe you are going to be able to predict all the issues which arise in advance - think again!
Do your best on that front (obviously) but you need to manage your client's expectations. There is going to be some fallout. There is going to be analysis needed. There is going to be dev work needed to create a seamless integration. It's good that you're being diligent and trying to predict everything, but with the complexities of the modern web (the same thing can be coded in thousands of different ways) you really need to accept there will be some problems that will need fixing. You'll need to hold some budget back for that stuff, or you'll be up a gum tree
-
RE: Bogus Search Results
Are you sure that this:
http://mechfieldfm.com.au/JCB-BACKHOE-IGNITION-SWITCH-WITH-2-KEYS-PART-NO-70180184-70145500-372796/... is an example of a bogus or hacked page? Because when I check Google's cache for the URL:
https://d.pr/i/6sioLy.png (screenshot)
It looks like it was once a 'real' page. Google's cached version only looks so messed up as they failed to capture the CSS properly. It just seems weird that someone would hack your client's site and just leave info for construction-related products on your client's domain. Usually with hacks it's a phishing / political agenda or something like that (or some kind of scam)
Still, having said that: although your client is in construction, they don't seem to list any products of any kind (your client appears to be service-based). You have to admit though, to an outsider it looks more like 'dev-gone-wrong' than a hack. It's just really weird, at the very least
Can't you just write a redirect rule that 301s or 302s all of those bogus URLs somewhere else? Those URLs shouldn't be accessible anyway. Your client has a valid SSL certificate and the site loads just fine on HTTPS: https://mechfieldfm.com.au - so another question is, why was HTTP left exposed? The bogus URLs are on HTTP, maybe they found an entry-point because that architecture was left exposed instead of being properly redirected
Even Google mostly link to the site on HTTPS:
https://www.google.co.uk/search?q=site%3Amechfieldfm.com.au
https://d.pr/i/XfbNF7.png (screenshot)
Where they're not doing this, it's because they are linking to more examples of 'bogus' URLs. Personally I'd just slap no-index tags and status code 410 (gone) on all of those problematic URLs. If you're worried you can't use no-index as the HTML of those pages is gone, you can still fire Meta no-index directives through the HTTP header using X-Robots, so be sure to look that up
It will probably take a while (some weeks) for Google to digest all the disruption and fixes. In the future, ensure that disused architectures (like HTTP) are made physically inaccessible via redirects. Leaving old architectures active is a recipe for disaster. Once this is all cleaned up, be sure to properly implement HTTP -> HTTPS redirects
Here's something odd. On your client's homepage, the HTTP->HTTPS redirect is properly implemented. Somehow these 'hacked' URLs are evading that redirect. You need to at least lock that down or it could happen again
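To sketch the clean-up I'm describing (Flask is used purely for illustration; the bogus-URL test is invented and you'd match the real injected patterns instead):

```python
# Serve 410 Gone plus a header-level no-index for known-bogus URL patterns
from flask import Flask

app = Flask(__name__)

def looks_bogus(slug):
    """Placeholder test; match the actual injected URL patterns here."""
    return "part-no" in slug.lower()

@app.route("/<path:slug>")
def catch_all(slug):
    if looks_bogus(slug):
        # 410 tells Google the page is gone for good; X-Robots-Tag carries
        # the no-index directive even though there is no HTML left to tag
        return "Gone", 410, {"X-Robots-Tag": "noindex"}
    return "Normal page handling continues here", 200
```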
-
RE: Links to sub pages
No, a domain is something like yoursite.com. Referral means coming from outside. If you have a link from yoursite.com pointing to another page on yoursite.com, in terms of SEO that is not counted as a 'referring' page (or domain). I don't know how Moz's tool views this, but in terms of how you should look at it, it's not to be considered a referring domain when you are doing domain-internal linking
Internal site-link structure can help to push more deeply nested URLs up Google's rankings for more niche keywords (to the point where they should rank appropriately, not beyond that point). Keep in mind though, that links move SEO authority from one place on the web to another. A fraction of the linking URL's PageRank will be lost as it is transmitted to lower-level pages, so don't overplay this tactic or run with it too far, or your linking address(es) may start to slip down (slightly)
If your site is big and well known with lots of SEO authority and trust, then there's not a lot to worry about. If your site is relatively green, you might see no benefit from the work or you might see higher level pages bleeding out
-
RE: H1 and H2 copy
That depends upon the page which you have produced. If for example, your target keyword was "mortgage calculator" and you had created an actual mortgage calculator for your site, it would be fine to just have the keyword by itself. Let's imagine a different situation, where you still want to target the same keyword ("mortgage calculator") but you have instead written a blog post comparing different mortgage calculators. Someone searching for "mortgage calculator" would want to find an actual calculator (quickly) and not your post, so using the keyword on its own would be a little manipulative. Instead you'd use an H1 / H2 more like "Mortgage Calculators 2019: Ten Calculators Compared and Reviewed"
Basically it comes down to how closely your content and URL(s), match the intent of the searcher - and how much effort has been invested in your attempt to rank. If you're targeting big, competitive keywords with blog posts then your targeting is wrong and you should look to make it more long-tail in terms of targeting. If you have something new and exciting to add to the web (where new functionality forms part of the content, remember content ISN'T 'just text') - then maybe targeting larger keywords is somewhat feasible
-
RE: Duplicate Footer Content Issue
If you're searching for external duplicate content, like if someone has stolen your content, you can use Copyscape - https://www.copyscape.com/
If you're looking for internally duplicated content:
You can use SEMRush - https://www.semrush.com/ - their on-site (technical) auditing tool also commonly comments on internal content duplication 'as' a technical SEO issue. You have to create a project for your / your client's site and then run an on-site crawl, 20,000 URL limit recommended
http://www.siteliner.com/ - Built specifically for checking internally duplicated content on your site
... or do it yourself in Excel. This commonly involves creating a grid of your site's / your client's site's content, then running Boolean string similarity with conditional formatting to generate a heat-map of content duplication. Doing the last one is a little more complex, but it can come up with some super cool visuals. Right now it relies upon the pwrSIMILARITY formula, which is not native to Excel (it needs this plugin: https://officepowerups.com/downloads/excel-powerups-premium-suite/). If you don't want to pay for that, I'm pretty sure someone will have made a VBA script that does something similar
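On that last point, here's a rough equivalent in Python rather than VBA (standard library only; the page content is placeholder text you'd replace with each URL's extracted copy):

```python
# Pairwise content similarity scores, the data behind a duplication heat-map
from difflib import SequenceMatcher
from itertools import combinations

pages = {  # placeholder extracts; load your real page copy here
    "/page-a/": "Example body copy for page A, mostly shared boilerplate.",
    "/page-b/": "Example body copy for page B, mostly shared boilerplate.",
    "/page-c/": "Completely different editorial text about something else.",
}

for (url_a, text_a), (url_b, text_b) in combinations(pages.items(), 2):
    score = SequenceMatcher(None, text_a, text_b).ratio()  # 0.0 to 1.0
    flag = "  <-- possible duplication" if score > 0.8 else ""
    print(f"{url_a} vs {url_b}: {score:.0%}{flag}")
```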
-
RE: Google-selected canonical: the homepage?
That is a very weird situation and I think that adding some canonical tags may not meet with your expectations. If Google have strong views about what is / is not a canonical URL, they will usually override canonical tags anyway (which is usually when you get the error you have described). I think we'd really need to see some examples of actual URLs to help you
-
RE: For an e-commerce product category page that has several funnels to specific products, for SEO purposes does it matter whether the category page's overview content is above or below those funnels?
That depends on your prior setup and backlink distribution. If most of your backlinks are connected to the pages which you moved, then those links will now be pushed through redirects (if you even did those). Depending on your deployment, that could potentially disturb some of your rankings. In reality, when you make larger structural changes like this, you often plan for a small dip in the hope that the efficiency / UX boosts will cause the pages to rise higher than their previous level(s) of performance. A lot of SEO is long-term thinking, making small concessions for larger future gains. Sometimes when you just add to a site without moving or removing stuff, the gains seem more immediate - but that doesn't seem to be the position which you are in
-
RE: What you think is thje best way to build links for my site?
I back what EGOL has said
-
RE: Duplicate Footer Content Issue
It might be seen as a spammy attempt to manipulate search rankings. The fact that the content is in your footer will tell Google that you don't really want users to read it and don't value it much at all. The content that is most prominent for users is also the content which makes the largest difference in SEO
Since the weighting of footer links and content is so low, I doubt that you'd get a duplicate content penalty or anything like that. It's more likely that Google would just de-value the content and its links or (instead of seeing it as duplication) see it as spam instead (which could be potentially negative)
If you don't really want users to see something and cram it in your footer, there's a good chance that you did it 'just for SEO'. That's exactly the kind of thing Google doesn't like. It's not about duplicate content though, it's about trying to mislead Google with blocks of text that users probably won't see and read
-
RE: Disallow doesnt disallow anything!?
In addition to what Dalerio-Consulting has said, be wary of your deployment. Robots.txt doesn't affect indexation, it affects crawling. If Google can't crawl the pages, how can they find the contained no-index tags?
-
RE: PDF best practices: to get them indexed or not? Do they pass SEO value to the site?
PDF documents aren't written in HTML so you can't put canonical tags into PDFs. So that won't help or work. In fact, if you are considering any types of tags of any kind for your PDFs, stop - because PDF files cannot have HTML tags embedded within them (the nearest equivalent would be directives sent via the HTTP header, in the same vein as X-Robots)
If your PDF files have landing pages, just let those rank and let people download the actual PDF files from there if they choose to do so. In reality, it's best to convert all your PDFs to HTML and then give a download link to the PDF file in case people need it (in this day and age though, PDF is a backwards format. It's not even responsive for people's phones - it sucks!)
The only canonical tags you could apply would be on the landing pages (which do support HTML) pointing to the PDF files. Don't do that though, it's silly. Just convert the PDFs to HTML, then leave a download button for the old PDFs in case anyone absolutely needs them. If the PDF and the HTML page contain similar info, it won't affect you very much.
What will affect you, is putting canonical tags on the landing pages thus making them non-canonical (and stopping the landing pages from ranking properly). You're in a situation where a perfect outcome isn't possible, but that's no reason to pick the worst outcome by 'over-adhering' to Google's guidelines. Sometimes people use Google's guidelines in ways Google didn't anticipate that they would
PDF documents don't usually pass PageRank at all, as far as I know
If you want to optimise the PDF documents themselves, the document title which you save them with is used in place of a <title> tag (which, since PDFs aren't in HTML, they can't use). You can kind of optimise PDF documents by editing their document titles, but it's not super effective and in the end HTML conversions usually perform much better. As stated, for the old fossils who still like / need PDF, you can give them a download link
In the case of downloadable PDF files with similar content to their connected landing pages, Google honestly don't care too much at all. Don't go nutty with canonical tags, don't stop your landing pages from ranking by making them non-canonical
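If you do go down the document-title route, a sketch of how you might batch it (using the pypdf library; the file names and title are invented):

```python
# Rewrite a PDF's document title, which acts in place of a <title> tag
from pypdf import PdfReader, PdfWriter

reader = PdfReader("old-brochure.pdf")          # placeholder input file
writer = PdfWriter()

for page in reader.pages:
    writer.add_page(page)

writer.add_metadata({"/Title": "Premium Widgets Brochure"})  # invented title

with open("old-brochure-retitled.pdf", "wb") as output:
    writer.write(output)
```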
-
RE: Will YouTube Popularity Improve Website Google Rank?
It could well positively benefit your SEO, but the effect is likely to be indirect. If the videos gain enough exposure, then there will surely be some kind of digital press coverage. If they (editors) link to your site, in addition to your YouTube channel (some will, some won't) then you'll likely see some positive impact. The way you can influence things more directly, is getting in contact with people linking to the videos and asking the editorial teams if they wouldn't mind also linking to some pages on your website. Some will be annoyed you even contacted them, some won't reply, some will help you out. "Don't ask, don't get"
-
RE: Would you get a backlink from a site that doesn't have many ranking keywords?
First I'd confirm that the site actually has no ranking keywords. Moz's keyword index isn't as strong as its backlink index, so you might want to scan the site (at the domain level) in tools like Ahrefs keyword explorer or in SEMRush (to help confirm whether there's a real lack of ranking keywords or not). If the site really doesn't rank for anything, I'd question whether they had manipulated their DA and I probably wouldn't go for the backlink
-
RE: Empty href lang
That is technically an error. If a page or post doesn't exist in a certain language, hreflangs pointing to it (from other pages) shouldn't exist. When the post is published, then the hreflangs should be written. It's probably not going to destroy your site's rankings by itself, but in SEO, the second you start making concessions on one front, people argue that you should make more concessions in other areas. Before you know it, you have a big tangle of loads of different errors. Optimisation is about making things 'optimum'. It's not optimum to have hreflang errors like that
-
RE: Images on their own page?
You need to clarify whether you mean images on their own page, or images on their own URL (two different things)
This is an image on its own page:
https://www.bloodstock.uk.com/events/boa-2019/gallery
Depending upon the nature of the page, you may or may not want to de-index URLs like this. In the example of Bloodstock festival, it would be crazy to de-index their gallery images which many people are explicitly looking for. In other circumstances, you get 'weird' pages which end users are never meant to see. Fragments which have minimal styling and just the image, those can usually be de-indexed. Sometimes an image on a single page is very useful for users (imagine if Pinterest banned all actual pins from the SERPs) but other times they're just back-end fragments which have escaped. Know the difference
This is an image on its own URL (a direct link to the image file itself, with nothing around it). When you load one of those up, there's no container. No HTML, no site at all - JUST the image on its own. Don't de-index these, or Google can't see your images - even when they're embedded on a web-page!
Hope that helps
-
RE: Creating multiple GMB listings same address?
In my experience, Google quite often see this (rightly or wrongly) as an illegitimate attempt to increase your search footprint. Quite often some listings get banned, denied, removed, or increased scrutiny is put on your original listing
Google really wants separate listings for separate businesses, not separate listings for different services. That's why it's called Google My Business and not Google My Service. The core remit of the platform is to list businesses, not fragmented services. For those who try to derail this, sometimes it can work pretty well - but usually (in the end) it causes more problems than it solves
You could label your services as different businesses and give them different internal addresses (e.g: Unit 1, Unit 2 etc). If that seems really inaccurate, then it probably says that your attempt to diversify online is jumping the gun a little (and Google may not embrace it)
-
RE: Moz reports way fewer backlinks than google search?
Firstly, PA / DA are shadow metrics. In the old days SEOs used to use TBPR (Toolbar PageRank) to evaluate the worth of pages, which was a publicly available (yet very watered down) variant of Google's official PageRank algorithm metric
When Google took away TBPR, the search industry needed a new metric to evaluate the worth of web pages. As such data suppliers like Moz (PA / DA), Majestic SEO (CF / TF) and Ahrefs (URL Rating / Domain Rating) stepped up to fill that gap. It's important to remember that Google still uses 'actual' PageRank (which SEOs have never seen) in their ranking algorithm, and not Moz's PA / DA metrics. They are meant to give a rough approximation of web-page worth, but Moz doesn't have the same digital infrastructure which Google has. Google can find way more links, way more web-pages
Moz's index of backlinks is probably 1/10 the size of what Google can see, or less. But despite having loads of lucrative data, Google have been very stubborn in terms of making it usable for webmasters. As such we (in SEO) usually have to aggregate metrics from loads of backlink data suppliers (PA, DA, CF, TF, Domain Rating, URL Rating) and boil it down as we see fit. If you just go with one single data supplier you are usually operating your insights along very narrow data samples. It's only by drawing it all together that you can get real conclusions!
Most ranking factors are relatively equally balanced. There's stuff like EAT, Google penalties (which can annihilate your ranking power, even with strong links), content quality etc. To say that one ranking factor is significantly larger than the others is, IMO, totally inaccurate. SEO is more about effort than it is about isolating magic bullet solutions (which pretty much don't exist in modern SEO!)
If your DA is under-inflated as a result of Moz's smaller backlink index (smaller than Google's) then your competitors' DA should be suffering from the same thing and it should (roughly speaking) balance out. At the end of the day though, DA is merely an indicator. Google don't use Moz's DA in their ranking algorithms. What really matters is your SEO traffic intake and what you do with it (CRO / UX)
-
RE: Do self referencing links have any SEO importance?
Self-referencing hyperlinks (<a> tags) are pretty much pointless for SEO and in general
For specific sets of Meta links (like hreflangs or canonical tags) - self-referencing URLs are often a requirement and are part of SEO best practice
-
RE: Gradual traffic drop of personal finance website in the last three months
I think you tried to attach two charts there, but for me at least they look like exactly the same chart. Entrances isn't the metric you usually use to evaluate what people call 'traffic'; that's more commonly associated with the sessions metric
Let's look at what you said you did:
"1. Tried contacting website owner which we think spam and add all such domains to our disavow list" - this can and usually will only make your results go down further. Your view of what spam is, will not be perfectly aligned with Google's view. As such, some of the links which you think are spam, may have been giving you ranking power - which you have killed off using the disavow tool. If Google think a link is spammy, they nullify it themselves without disavow. If you mark a link as disavow, Google do not give you back the ranking power. Why would they? You are agreeing with them that the link is bad, so why would they give you ranking power for it? Disavows can (under normal circumstances) only make results dip or drop further. The use-case for a disavow, is if you believe your backlink profile is SO bad, that Google are about to give you a manual link penalty (which will kill ALL of your results). If you think that is about to happen, you can stop it from happening using a disavow submission. Disavow trades away (loses) some of your current performance, in exchange for future insulation against manual actions (which are WAY worse than algorithmic devaluation)
"2. We found little duplicate content on sites like Quora, we made those answers down by reporting to Quora"
This probably won't do anything for your SEO. Quora probably still won't delete the posts. If they're still there, it's still duplicate content (downvoted or not)
"3. Reported to DMCA on 3 articles articles(partial) from our website."
This is a good move, keep doing this. Actually, focus more on this if you can find more stolen content! Use CopyScape to track it down
"4. We are trying improving user experience"
This is always good and you should always be doing this
"5. Removed one of our page that shared by many people but our page was not indexed by Google."
I mean, it won't really help to do this as a one-off. If you have loads of pages that Google refuses to index, then action might be needed. But if it's just one page and people liked to share it, it seems kind of crazy to me to erase it
"6. Checked and modified content if any our articles are having more keywords than what SEO experts recommend."
Just so you know, keyword density is not an accepted measurement in modern SEO. If your articles previously read like they were really spammy, you did the right thing. If they were reading fine anyway AND getting the keywords in, you may have hurt yourself more by doing this
"7. We are working on researching more and figuring our what else can might have gone wrong with our traffic."
A focus on research and investigation is healthy
"8. Working on improving EAT "
A lot of sites got stung, are still getting stung and will continue to be hurt by this. If you're writing for finance this massively concerns you and should probably be your #1 thing, not your #8 thing. Really focus on this a lot
"I attached our traffic drop graph. I believe this drop is not natural it happened because of some issue at our end and we are not able to figure out the exact reasons."
It could be many things but poor EAT on a finance site will kill your site in 2019
"_Surprisingly another site with not so high quality content started ranking now in the top._"
That's your opinion and you are welcome to it, but the quality of your site and other sites is being determined by mathematical algorithms and not by human minds. What you think is quality content may be very far removed from what Google's mechanical mind perceives as high quality. Another thing: older sites which are more established can rank above yours with lower quality content than you have (as their SEO authority and trust is higher). You need to think about winning trust and links. Maybe some crappy sites do rank well, but when they were first made they filled a hole (in the query-verse) or they were good for their time. You have to be good in YOUR time. What they did to earn their success (which they ride along on) may have been drastically less than what you have to do in 2019. Never forget that. Comparative analysis-paralysis doesn't get you ahead, it holds you back. Vision is what's needed now
"I am here to get community members/experts help on this. I could provide you if you need any further details. Thanks a lot for your time. We really appreciate any tips that you can share with us. "
Not a problem, hopefully some of my comments have proven useful to you guys
-
RE: Landing page separate from product page
It's bad for SEO because Google might start ranking your landing pages instead of the actual product pages. Since it would take a user fewer clicks to convert from the product pages (shorter user journey) you really want to think - do I need these special landing pages? If my product pages aren't ranking, why? How can I take the best parts of my new landing pages and my product pages, and make one super product page that 'just works' for everyone?
Usually when people start producing additional versions of the same page, it's because something about their website or product template isn't right. Do you want multiple averagely ranking pages, or fewer pages which rank more highly? The second option there (smaller footprint, better ranking positions) is almost always better. To some degree, when you start writing loads of different pages about the same thing, it's a form of 'giving up' on the original instead of working hard to fix it up. It seldom yields good results
There is a caveat here. Sometimes you might create landing pages which exist for other traffic sources, other than SEO. You might create PPC landing pages or Facebook Ads landing pages, which are highly tailored to your paid ads. That's fine and you should create paid landing pages, but you shouldn't make them accessible to your normal users or allow them to be indexed on Google
Hope that helps
-
RE: Hreflang - Is it needed even if the site is only one language
Hreflang means the tags that go between different pages and show Google the way to the same content in other languages. There are also your standard language tags, which perform a very similar function to the self-referencing hreflang. IMO, it should be enough to just use language tags e.g:
<html lang="fr">
^ for French. You can even specify a region and a language like:
<html lang="fr-CA">
I would have thought this would be enough to tell Google what language and region a page is targeted at (you can come up with any number of combinations using the HTML ISO country codes and the HTML ISO language codes)
However, according to SEO best practice this is wrong. Even if you only target one language / country, and you do not require the utility of hreflang tags (pointing Google to other language variants of pages) - you are apparently still supposed to have a self-referencing hreflang tag on your pages (confirming the active language)
Since this is what language tags (which existed long before hreflangs) are supposed to do, it seems really stupid to me. But that's the way it is, for whatever reason
In summary, yes do use self referencing hreflangs AND the appropriate language tags in concert
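For a single-language page, that combination might be generated like this (a quick sketch; the URL and language code are placeholders):

```python
# Emit a language attribute plus a matching self-referencing hreflang
page_url = "https://example.com/some-page/"  # placeholder URL
lang = "en-GB"  # placeholder language-region code

print(f'<html lang="{lang}">')
print(f'<link rel="alternate" hreflang="{lang}" href="{page_url}" />')
```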
-
RE: For FAQ Schema markup, do we need to include every FAQ that is on the page in the markup, or can we use only selected FAQs?
I would gravitate towards marking everything up and letting Google decide what they want to show. Most of the time when you try to 'sculpt' what Google can see in terms of structured data, it results in a structured data spam action. Sometimes it can take weeks, months or years for that to happen - but Google always want to be given the full picture.
Google don't take too kindly to being funneled in a certain direction. Schema and rich snippet spam have been a big headache for Google since they started utilising structured data more; some stuff (like author avatars for posts in SERPs) has been entirely taken away in the past (though someone told me recently they have been seeing these again, for Google's mobile layout only).
Google do have some official guidance here:
https://developers.google.com/search/docs/data-types/faqpage
They give a microdata example of implementation: https://search.google.com/test/rich-results?utm_campaign=devsite&utm_medium=microdata&utm_source=faq-page
In their example, nothing is missing or has been left out. Since that's how Google have illustrated their example, that's what I'd aim for myself
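For completeness, a sketch of the same idea serialised as JSON-LD (built in Python; the questions and answers are placeholders) - marking up every FAQ on the page rather than a hand-picked subset:

```python
# Build FAQPage structured data covering all of the page's FAQs
import json

faqs = [  # placeholder Q&A pairs; include every FAQ shown on the page
    ("What is your returns policy?", "You can return items within 30 days."),
    ("Do you ship internationally?", "Yes, to most countries worldwide."),
]

faq_schema = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": question,
            "acceptedAnswer": {"@type": "Answer", "text": answer},
        }
        for question, answer in faqs
    ],
}

print(f'<script type="application/ld+json">{json.dumps(faq_schema)}</script>')
```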
-
RE: Should I buy an expired domain that has already been redirected to some other website? Can I use it or redirect it to my own site if I purchase it?
If the domain has already been redirected somewhere else and if the redirects were accepted by Google, much of the authority for that domain may now have moved to a new location. In modern times the practice of buying domains and 301 redirecting them for extra link juice, is ineffective unless you are operating under very specific circumstances (and even then it's usually considered black-hat)
Nowadays Google often checks to see if the new content and pages are similar to the old ones. If they're not then quite often the redirect doesn't work for SEO purposes
-
RE: Www or https only
Check which architecture receives your highest quality backlinks and then force that structure using 301 redirects
-
RE: Does having a subdomains UK affect SEO in UK google results?
Only if it's marked up properly and has distinct, unique content. Even if you do it right, the difference is only slight (depending on industry of course). If you're just thinking of clone stamping a US site with exactly the same content onto a UK subdomain and then expecting to get loads of free rankings, think again
-
RE: Hard to rank difficulty level 25 - 35 keywords for brand new blog?
Depends on how important Google thinks your site and blog are, and what the PageRank of those feeds is (not that you'll be able to get that information). Some content feeds exist on news sites and some posts gain thousands of keyword rankings, even for very competitive terms. It's usually because the content becomes popular and generates online buzz
If you don't have anything particularly unique, exciting and newsworthy to share, you can even struggle to rank (on page / scroll 1) for keywords with as few as 50 monthly searches
To be honest, it's getting harder and harder for blog posts to rank, as people create more in-depth, more adventurous content pieces like this one:
https://www.google.co.uk/search?q=ferrari+v12+story (the search)
https://jbrcapital.com/ferrari-finance/v12-ferraris-the-engine-that-made-ferrari/ (the content)
I picked up on this one a while ago as (at least temporarily) it was holding numerous Ferrari related search rankings
When people are creating stuff like this, 500 words of 'just some text' isn't very inspiring anymore (unless it's critically newsworthy or from a very inspiring person / source)
-
RE: A grade optimised posts not showing in SERPs
I heartily agree with this answer