
Science | December 8, 2016

How to take the fight to bad science? By singing good science’s praises


In the face of everything from anecdote posing as evidence to bias peddlers to outright quackery, the best riposte is to champion good science. But how? Dr Jessica Berentson-Shaw offers seven tips.

Science and evidence get a pretty bad rap these days. Some of this bad rap is the science community’s responsibility to fix – we are not always great at communicating effectively with the non-science community. Sometimes people can doubt our agendas if scientists visibly align themselves with the powerful. And if we are gagged in what we can say, the public gets nervous when we do speak, wondering whether they are hearing the whole truth. These things can make the public a little suspicious and erode trust in what we say.

In an age when we are drowning in information, some people are simply overwhelmed by the deluge, especially when one study seems to contradict the one published last week. For some it seems safer to make their own judgments, to prioritise personal experience, to seek out studies that support how they already understand the world to work, or just to dismiss science as too biased. The net effect, unfortunately, is the rise of, and reliance on, “bad science” (see also “no science”).

The MMR vaccine: prevents measles, has no link to autism, according to actual, real scientists. Photo: Istock

The good news: we can help support Good Science by spreading the truth about the nerdy disciplines

Good Science, when effectively communicated and shown to be relevant to everyone’s life, is an empowering tool – one that can be used to eliminate inequality, cut through prejudice and bring a much fairer, kinder, more progressive and frankly more awesome society out of the shadows.

So how can we help those who may be naïve about the difference between Good Science and bad science and gently shepherd them to the lush pastures of understanding? First – don’t lose your shit. Instead, use these seven helpful conversation openers to empathise and talk about why scientific evidence can be trusted to help us make important decisions. It won’t always work and it won’t always feel good or satisfying, but sometimes it just might.

1. “Anecdote or fortune cookie – both are unlikely to give you accurate answers”

In a climate of fear and mistrust, trusting only your own experience (or a friend’s) seems reasonable. The problem is that the “N of 1” (as we call it) has a high chance of giving you wildly inaccurate information. Yep, using anecdote to establish whether an intervention – or anything else you are going to spend a lot of money on – will work has about as much chance of being accurate as consulting a fortune cookie: always a possibility of being right, but very unlikely.

For example, if a friend tells you they used a series of coffee enemas and their bowel problems “cleared up” – therefore coffee enemas fix a dodgy tum – they are most likely to be inaccurate (as well as amped up on caffeine) and here is why: your friend has no way of knowing whether it was the coffee up the bum or something else that led to the symptoms disappearing. Only a scientific experiment can tell us that for sure.

The reason we developed scientific experimentation was to test if sticking stuff up your bum was helpful to health (well obviously it wasn’t, but you would be surprised at how many people think the cure to an ill requires an enema of some sort). Actually, one of the reasons was to test if a relationship we think exists between two events that we observed once or twice is real. In this case it is to test whether an intervention (a coffee enema) causes an outcome (bowel symptom improvement) or whether it is due to some other factor entirely.

Other factors include the placebo effect (surprisingly powerful) and something we call “confounding variables”. The placebo effect is when someone can have a perceived or real improvement in their condition because they believe they are undertaking an effective treatment. A confounding variable is some other measured or unmeasured thing that is actually affecting the outcome, for example the person stopped smoking and the smoking was the real cause of the symptoms.

It is impossible for individuals to do such tests on themselves, so we use good scientific methods (we explain these methods below) to know whether a relationship, even a widely observed one, is real. A good example of a widely believed anecdote disproven by scientific methods is “miasma theory”.

Until the late 1800s, diseases like cholera were understood to be caused by a noxious or poisonous vapour that was identified by its foul smell. Outbreaks of disease in towns were observed by many people to be accompanied by this “miasma”. Of course, what we now know is that diseases like cholera are spread through contaminated water, and the foul smell was present because of poor sanitation (the real factor at play in the spread of the disease).

Anecdote is just far too prone to bias to be useful in informing you of the chances of something being true or not. Experience can, however, offer useful information in decision-making. For example, a thoughtful anecdote can help people, politicians and decision makers work through the impact of following the science.

The take-home here is that when making informed decisions, Good Science is a key factor that we should consider because it tells us what works. Experience is a different type of information, but it does not replace Good Science, because individuals cannot know and measure all the other factors that may explain their experience.

The picture below shows how people (including those who make policy) can use Good Science to make decisions.

[Diagram: how people, including policymakers, can use Good Science to make decisions]

Next let’s deal with those who believe that science is so full of bias and big industry influence that it is best ignored.

2. “Bad science is overcome by doing Good Science, not by ignoring science altogether”

I feel quite despondent that, when I write about what works, so many people reply that the entire scientific system is broken. People often point out bias in studies, publication problems or vested interests (I should note that these are all quite valid concerns). However, they then go on to say “no science can or should be believed”. They believe all science is broken.

The broken science belief can be countered with an acknowledgment that while there certainly is bad science and bias, the alternative isn’t no science at all. I would be pretty upset if I went to the doctor with a sore head and, because some poor-quality painkillers had been released onto the market, they suggested amputation of the affected area as the only option. Instead we can address the valid concerns about bias in science through a systematic and measured response – a counter to bad science. It is called Good Science.

3. “Did you know Good Science is a science too?”

Yep, that is right – scientists figured out how to overcome bad science using science. It works like this:

1. Use the right type of study for the question being asked.

This essentially means that for each type of question we want an answer to, there are specific methods of investigation (or study designs) that should be used. Using the wrong study design to answer a question is like using phone polls to predict voters’ behaviour in the US; it just won’t give you accurate information. While different types of study design can be used to get some answers, there is, however, always a best one (though sometimes pragmatics prevent it from being used).

2. Use the best study available to draw conclusions

When scientists are reviewing evidence, what they will look for is how confident they can be in the findings from a study. They will be a lot more confident if the best study design was used to answer the research question. So, for example, if we want to know whether a drug works to improve a particular health problem and what the side effects are, evidence from a Randomised Controlled Trial (RCT) is the best. If we want to know whether a disease is occurring at a higher rate in certain groups of people, a longitudinal study would be best. That is not to say we cannot use a longitudinal study to look at a drug’s effects; it is just not particularly strong evidence, and any findings are less likely to be accurate. We use something called the Hierarchy of Evidence to help us determine how strong a study’s findings are likely to be; we call it the “strength” of evidence. Good Science focuses on drawing conclusions from the strongest evidence out there.
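To give a rough idea of what that hierarchy looks like, here is one common, simplified version (the exact ordering varies a little between organisations), from strongest to weakest evidence:

  1. Systematic reviews and meta-analyses of well-conducted studies
  2. Randomised controlled trials
  3. Cohort and other longitudinal studies
  4. Case-control studies
  5. Case series and case reports
  6. Expert opinion and anecdote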

3. Actively look for bias in research

Those who work reviewing evidence (these specialists are called systematic reviewers) are trained to look specifically for the different types of bias in studies, and place less weight on the findings of studies that are biased.

4. Make efforts to get all research findings into the public domain

It is important that all results of important trials are made public, regardless of what they find. There are several ways being worked on to counter publication bias, including the use of unpublished data, public registers of clinical trials, and proposals to ensure all publicly funded research is freely available.

So the take home message is: Good Science follows clear methods designed to overcome bad science.

The very best way to identify what all the Good Science added together tells us about an intervention is to use what we call “a systematic review”.

4. “It’s natural to be drawn to research that confirms our views. Relying on a ‘body of evidence’ can help overcome this bias”


What researchers realised some time back was that single studies could woefully misrepresent the bigger picture about whether something works and could be extremely vulnerable to “confirmation bias”. In the 1970s, doctors favoured individual studies that supported their existing opinions of what worked. It was leading to practices that were not best for patients. In the maternity system, for example, all sorts of procedures for which there was no evidence were being done, often causing great distress and even harm. Routine enemas were undertaken in labour (yes, people really are obsessed with bum-related treatments) because clinicians believed they reduced the risk of infection (they do not). It was not until the late 1980s that a series of systematic reviews put a stop to this and many other risky procedures in labour.

The systematic review was invented to help overcome individual bias, to avoid ‘cherry picking’ and to look at a body of evidence in its entirety. It is a talented but little recognised and somewhat overly technical member of the scientific family. The systematic review is to science what Johnny Marr was to The Smiths.

I will save you death by scientific methods boredom and tell you the four things you need to know about a quality systematic review:

  1. The research question asked in a systematic review is very precisely and tightly defined. It includes set components to ensure the right studies are included, the right types of interventions and outcomes are located, and that all are compared in the same way.
  2. It needs to include a replicable analysis of both the quality of the data contained in the studies and the way studies were conducted, not just a narrative review of the selected studies.
  3. All the methods for the searching of evidence, assessing the quality, identifying bias, analysing the results and drawing conclusions must be detailed and replicable.
  4. It tells us what the whole body of evidence says, which gives us more confidence in the findings than a single study. Here is one we prepared earlier on a bit of witch doctoring touted in New Zealand to detect breast cancer.

If you want to turn up the nerd volume you can read more about Systematic Reviews here (PDF).

5. “’It works’ is different from ‘it’s risk free’. When something works it means the benefits outweigh the risks”

I sometimes hear people say that because an intervention does not work for everyone all the time, or because it has risks, the science must be wrong. Yet very few interventions that work do so 100% of the time or without small risks. Science works to constantly build on what we already know to improve effectiveness and reduce risk. Scientists do not (often) claim we have the complete solution – just the best one that works at the moment, with the least risk. Unfortunately, when people don’t understand this, it can lead to false dichotomies where people start to weigh up the risks of something that actually works against the imagined benefits of untested alternatives.

A great example of this is vaccinations. Just the other day a workmate complained that flu shots were a waste of time because his wife got one and she still got the flu. She would, he said, have been better off just taking regular vitamins. The thing is, flu shots do prevent the flu or lessen its severity, but they carry a risk of not working and of causing some side effects. When my workmate compared a flu shot to vitamins he was not comparing apples with apples. He was comparing the risks of an effective vaccination with the imagined benefits of vitamin pills. And all we know about those pills is that they will give you expensive wee.

The flip side of such risk/benefit comparisons is when people compare the risks of an intervention (like vaccination) only with the risk of contracting the disease, when the important comparison is with the outcomes you want to prevent (i.e. severe illness or death). For example, the risk of serious brain inflammation from a measles vaccination is one in 1 million doses given, while the risk of brain inflammation when a child gets measles is 2,000 in 1 million cases.
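To put those two figures side by side: 2,000 in a million versus one in a million means that, on those numbers, a child who catches measles is roughly 2,000 times more likely to develop brain inflammation than a child who receives the vaccine.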

The take-home message here is that even when there are small risks, it is important to remember that the benefits of “things that we know work” far outweigh the benefits of untested or ineffective treatments, which are zero. And the risks that come with the “things that work” are much lower than the risks that come with the things we are trying to avoid.

Which brings me to natural treatments and the idea that they are harmless or even a placebo.

6. “Did you know natural treatments have serious risks too?”

What does it matter if people recommend things that do not work, if people feel better using them? Well, the reality is that everything comes with a risk. People often view unproven or untested ‘natural therapies’ as a type of placebo. They are not – a placebo is a totally benign treatment (e.g. a sugar pill) with a sometimes-powerful effect and no risks whatsoever. People can incorrectly assume that natural therapies are harmless, so there is no loss if they don’t work. Such ‘treatments’ often cost people a lot of money, which, given their lack of scientific proof, is just unethical. Untested alternative treatments can also mean that people who really do need help avoid getting it in the belief the alternative will work for them; there are lots of sad cases of patients who have put their faith in untested alternative therapies and have suffered severe consequences from not seeking professional medical attention. Additionally, if such treatments have not been subject to scientific study then the risks can be totally unknown and dangerous.

St John’s Wort

Scientific studies have told us that many complementary treatments, while effective if used sensibly, can also have powerful side effects or interactions with other herbs or drugs. Ginger may help with nausea but also thins the blood. St John’s Wort, which has been shown to help people with depression, can interact with anti-HIV medication, rendering that medication ineffective. The way the combined oral contraceptive pill is absorbed by a woman’s body can be affected by the enzymes in grapefruit juice, meaning that it might not work. Other interventions don’t work and come with significant risks (e.g. amber beads for infant teething pain).

Take-home message: never assume “natural treatments” with little or no scientific testing are harmless and worth a crack.

This does not, of course, mean that scientists are anti non-conventional medicine – quite the opposite.

7. “Science is not ‘anti’ complementary or alternative medicine – it is just ‘anti’ stuff that does not work”

I read this recently on social media:

“It is interesting in this time of wider awareness around general health, nutrition, physical activity that there is such a vocal and vociferous push back against less invasive more natural remedies.”

What is interesting is that in an era in which scientists are constantly exploring the potential effectiveness of complementary therapies, those touting untested ones for profit claim they are the victims of an organised take-down. A quick scan through various systematic reviews shows a vast array of tested and proven effective complementary therapies. These include vitamin D treatments for preventing asthma attacks, magnesium supplements to reduce pain associated with periods, melatonin for sleep disorders, acupuncture in stroke recovery, Chinese mushroom extracts that may improve the efficacy of chemotherapy, and so the list goes on.

I should note that in the scientific community we ask some pretty hard questions about the need for certain tests and medications that are too commonly used. Antibiotics for viral infections, medications for infants with gastric symptoms and colic, testing for prostate cancer in men without symptoms – these are all medical technologies or treatments that scientists have argued should not be used as they are either not effective or the risks outweigh the benefits. This website helps inform people about unnecessary tests and treatments in medicine.

The take home message: science isn’t anti complementary medicine and pro conventional medicine – it’s interested in what works.

In conclusion…

Sometimes science is a bit scary; it tells us about risks we may not want to know about. It challenges our actions and beliefs. Scientists are not monsters; most of us just want to make the world a better place. We understand that evidence can be a bitter pill to swallow, but that funny taste in the mouth – of assumptions being challenged – does not render the scientific discipline broken. Bad science exists and Good Science is its alternative. When we choose to engage with it, we can use it as part of a thoughtful and effective decision-making process and gain a real sense of confidence in the choices we make without having to fight with others about theirs.

Special thanks to Anita Fitzgerald – Evidence Scientist of the highest order.


The Spinoff’s science content is made possible thanks to the support of The MacDiarmid Institute for Advanced Materials and Nanotechnology, a national institute devoted to scientific research.
