At the time of writing, about 87% of UK adults have received at least one dose of a Covid-19 vaccine. The huge majority of mainstream scientific and journalistic sources report the vaccine efficacy as being very high, up to 95% depending on the specific vaccine and the specific measure in question. It may be somewhat lower with respect to some of the recent virus variants, particularly after just a single dose. But overall, we seem to be making great strides towards getting some level of control over the pandemic.
Or are we? A post that was doing the rounds on Instagram, and no doubt all other similar avenues of distribution, made this claim:
The Lancet peer-reviewed study confirms vaccine efficacy, not as 95% stated by the vaccine companies, but as: AstraZeneca 1.3%; Moderna 1.2%; J&J 1.2%; Pfizer 0.84%. They deceived everyone by reporting Relative Risk Reduction (RRR) rather than Absolute Risk Reduction (ARR).
First, let’s be clear: the way this post is presented is highly misleading, potentially dangerously so. It has officially been marked as “partly false” by Instagram, although versions of it are still readily available for viewing.
Obviously, “they deceived everyone” aims to give the impression of some big conspiracy, with the manufacturers deliberately spreading lies about the efficacy, presumably either to make big piles of cash or for even more nefarious reasons.
The lack of an actual citation to a study on a social media post can often be a sign that it’s pure and simple misinformation. Just because I can say “A peer-reviewed study in Nature says that humans are made of cheese 75.973% of the time” doesn’t make it true. But as it turns out, in this case the Lancet study they’re talking about does actually exist; and it does refer to the sort of numbers the post is talking about, even if it takes rather less of a conspiratorial bent.
Firstly, we might note that the “study” is actually a comment, rather than a new peer-reviewed study. And it doesn’t accuse anyone of lying about the 95% figure. Rather, it asks whether the calculation that leads to this 95% figure – known as the relative risk reduction (RRR) – is the only one of interest. The authors go on to calculate another measure of efficacy, the absolute risk reduction (ARR), which results in the much lower percentages from the Instagram post. So the debate here, to the extent that there is one, is really “should we also highlight the absolute risk reduction?”, not “why are the evil vaccine companies lying to us!?”.
Interestingly enough, the reason that the authors of the Lancet comment can calculate the ARR without conducting an independent study is that the data needed to calculate it is published in the same vaccine studies that were the source of the 95% claims. It’s not even a complicated calculation. So if there was a conspiracy to hide these facts then…it wasn’t very good.
The situation does, however, present an interesting case study in how what you choose to measure, and how you present it, can leave people with very different impressions, so I was interested to dig into it a bit. I am not an epidemiologist, so anything below should be double-checked by people with the relevant expertise if there’s any danger of it affecting anyone’s decisions. But it’s also a simple enough concept that I feel it’s worth detailing, even if I will likely make some simplifications along the way.
Let’s take the example of the Pfizer vaccine study which the Lancet paper cites. To summarise briefly, the study followed two randomised cohorts of volunteers for a median of two months after either receiving the Pfizer vaccine or a placebo.
- 18,198 participants received the vaccine, of which 8 got Covid in this period.
- 18,325 participants did not get the vaccine, of which 162 got Covid in the same period.
So the chances of a vaccinated participant getting Covid in this period were 8 divided by 18,198, which is around 0.04%. This is the “absolute risk” of getting Covid when vaccinated. As this is the group that was treated with the vaccine, we might call this the Absolute Risk of the Treated, or ART.
The chances of a non-vaccinated participant contracting the same disease were 162/18325, which is around 0.88%. This is the Absolute Risk of the Control, ARC.
So how many times more likely were you to get Covid if you weren’t vaccinated? That’s just 0.88/0.04, so you quickly see that in this group you were roughly 22x more likely to get Covid if you had not been vaccinated than if you had.
To get to the relative risk reduction, per the British Medical Journal’s best practice guidelines, we do this calculation: (ARC – ART) / ARC = (0.88 – 0.04) / 0.88 ≈ 95%.
This is where the 95% efficacy figure comes from. It’s calculated according to best practice and a very standard measure to report, with little-to-no mainstream controversy as to its worth.
So where does the 0.84% come from? That’s the “Absolute Risk Reduction”. This looks at the difference in absolute risks between the treatment and the control group. We saw that 0.88% of the control group got Covid over the studied 2 months vs 0.04% of the vaccinated group. So the difference in absolute risk was 0.88 – 0.04 = 0.84%. Here what we’re saying is that, considering the entire group, there were 0.84 percentage points more risk of contracting Covid in the control group than in the vaccinated group.
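Continuing with the same trial figures, the ARR is simply the difference of the two absolute risks rather than their ratio:

```python
# Same trial figures as before
art = 8 / 18_198     # absolute risk, vaccinated group (~0.04%)
arc = 162 / 18_325   # absolute risk, control group (~0.88%)

arr = arc - art      # Absolute Risk Reduction

print(f"ARR: {arr:.2%}")  # ~0.84%
```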
The intuition here should be along the lines of “if you wouldn’t have got the disease anyway then the vaccine doesn’t technically help you”. At the extreme, the reason we don’t vaccinate people against imaginary or eliminated diseases is because there’s no chance of benefit. A vaccine only helps someone at an individual level if it stops them getting a disease they would otherwise have gotten. In this study the majority of people didn’t get Covid even when unvaccinated. For a person that wouldn’t have contracted Covid anyway, having a vaccine, all else being equal, doesn’t help them. But, still, for the people who we can assume would have developed Covid, they were 95% less likely to develop it if they were vaccinated.
ARR and RRR are different measures. They are calculated differently and will usually produce very different numbers. The RRR can be thought of more as a biochemical property of the substance, at least to my amateur mind. If you get exposed to Covid in a way that would normally infect you, the virus doesn’t care how likely you were to have encountered it in the first place. You 100% get it, unless you’re protected. The RRR in theory remains the same whether 1 in a million people get exposed to the disease, or everyone on the planet does.
The ARR though is sensitive to the background rate of exposure to the disease, amongst other factors. If more people get Covid, or it becomes more contagious, or you look over a longer time frame than 2 months, then the ARR is likely to increase.
If we make the (extremely flawed) assumption that had this study followed the participants for 4 months instead of 2 it would have seen double the number of infections in each group, then 0.08% of the vaccinated group would have gotten Covid vs 1.76% of the control group. In that scenario the ARR would be 1.76 – 0.08 = 1.68% instead of 0.84%. But the RRR remains the same as before: (1.76 – 0.08) / 1.76 = 95%.
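This property is easy to demonstrate numerically. The sketch below scales the case counts in both groups by the same factor, as in the (admittedly crude) thought experiment above; the scaling factors are purely hypothetical, not extra data from the trial:

```python
def rrr_and_arr(cases_vax, n_vax, cases_ctrl, n_ctrl):
    """Return (RRR, ARR) from trial-style counts."""
    art = cases_vax / n_vax    # absolute risk, treated
    arc = cases_ctrl / n_ctrl  # absolute risk, control
    return (arc - art) / arc, arc - art

# Scale factor 2 corresponds to the crude "double the follow-up" assumption
for scale in (1, 2, 4):
    rrr, arr = rrr_and_arr(8 * scale, 18_198, 162 * scale, 18_325)
    print(f"x{scale}: RRR = {rrr:.0%}, ARR = {arr:.2%}")
```

The ARR doubles with each doubling of cases, while the RRR stays fixed at ~95%, because scaling both absolute risks by the same factor cancels out in the ratio.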
The ARR seen in the study is therefore only truly applicable to the participants of the study, for the period of the study. If you live in a different place, live through a different time period, exhibit different behaviours, see different people, intend to live for more than the number of months the study ran for (a median of 2), or otherwise exist in a world that, from the point of view of Covid exposure, isn’t identical to the 2-month snippet of the participants’ lives, then the ARR may not apply to you. That’s unless you can find a way to develop a personalised one, which would likely have a different value from that seen in the study. The RRR, by contrast, likely still does apply to you – at least if we set aside the added complication of how, for example, demographics might affect it in some cases. Either way, the RRR is stable within you, whereas the ARR depends on both you and your environment.
As with any two different measures, they are answering different questions. Taking values from a trial like this:
- The RRR answers something roughly akin to “How much less likely am I to get Covid if I am exposed to it?”
- The ARR is more along the lines of “How much less likely am I to be exposed to Covid and go on to contract it if I live the same life as the study participants for the same amount of time that the study followed them for?”.
Simplistically, the RRR is a property of the vaccine. The ARR is a mix of that and the parameters of the society and population that receive it.
The ARR concept certainly isn’t useless, and it may well legitimately influence your decisions. For instance, I, a work-from-home data analyst in a small village, would have estimated “my personal ARR” to be lower than my wife’s, a teacher forced to travel and be inside buildings in the vicinity of hundreds of people in the course of doing her job. So, all things being equal, it would likely have been a good idea to prioritise her vaccination over mine. Which actually did not happen in the UK – unlike in some other places, teachers were not one of the prioritised populations.
But my RRR is likely to be roughly the same as my wife’s. So because I have no realistic way of knowing whether I am going to be one of the lucky people who by happenstance will never be exposed to the Covid virus in the indefinite future, I eagerly received my vaccine as soon as possible, noting that if I was ever exposed – and over 5 million people in the UK have received positive diagnoses at this point in time – I would rather reduce my likelihood of developing the illness, any deleterious long-Covid effects, plus the risk of spreading the disease to other people, by 95% than 0%.
Especially considering that becoming infected with Covid puts other people at risk as well as yourself, I would certainly recommend the same decision for everyone else – with the exception of the small percentage of the population who have a genuine medical reason not to do so. These people are real, and the rest of us should do our best to protect them from harm, part of which is ensuring those of us who can reduce our chance of transmitting the disease go ahead and do so.
Sure, if you 100% know you are never ever going to be exposed to the virus, even in the case that it unfortunately turns out to be with us for the indefinite future, then there’s no reason to get the vaccine. But, honestly, unless you’re a self-sufficient hermit, that’s almost certainly not the case.
As a sidenote, I found the poster’s message that accompanied the version of this post I first saw rather questionable, but almost in the opposite direction. It stated:
“First off, this research paper declared no competing interests, meaning that they had no bias.”
This seems to illustrate a lack of understanding as to, firstly, whether a declaration of no competing interests means the authors in fact had none. It’s…interesting to see someone believe this self-reported claim verbatim whilst at the same time implying that the claims of the huge number of other professionals who have talked about the 95% figure are not to be trusted. There is previous work detailing the limitations of competing interest declarations, even when they are made perfectly in line with accepted standards.
But, perhaps more importantly, it suggests a lack of awareness that competing interests are not the only way bias can enter the scientific literature. The implied equation of “no competing interests” with “no bias” is false, as has been repeatedly demonstrated over time. A quick Google finds a paper that lists 11 potential sources of bias in clinical research, none of which refer to competing interests. For sure, competing interests are important to know about, and have historically been a problematic source of bias in some cases – perhaps most famously the tobacco companies’ corruption of science for their own interests. But especially for someone who seems to want to allege that mainstream science is “deceptive” on one key point, it’s odd to claim that, for other articles published in equivalent forums, a declaration of no competing interests equals no bias.
All this is more to hint at the presence of, perhaps, motivated reasoning in the original post rather than to sow further seeds of distrust in science. There are certainly flaws in how we have practiced science to date. Output at times has been problematically biased. Even when the process is followed well, there’s a small chance anything “statistical” may lead us in unfruitful directions, until the self-correcting nature of the practice works its magic.
Scientists surely suffer from the same biases as the rest of us do; but the scientific method itself largely exists precisely to try and do its best to reduce them. It remains, at least in my view, by far the best way of systematically generating critical knowledge; of answering important questions; of learning what’s true about the world.
The development of the Covid-19 vaccine simply serves to highlight how important it is, and the near-miracles of what can be done when research is prioritised, funded and shared. Nonetheless, without trying to cast doubt on the whole enterprise, we can all support efforts such as the open science movement which aims to help improve the veracity of scientific output ever further over time.