Digital health

Quackery and fakery: the challenge of health misinformation online



Maeve Walsh is a strategy and government relations consultant, with a background in digital and health policy, and is a member of the AXA Health Tech and You Expert Group. Here she argues that misinformation about health is reaching dangerous levels – and something needs to be done

Miracle cures and dodgy health claims have been around for centuries. The term ‘quack’ dates back to the mid-17th century, abbreviated from the medieval Dutch ‘quacksalver’ – a hawker of salves who sold his wares noisily in markets.

So, would you trust a modern-day quacksalver, ambushing you outside your GP surgery and urging you to try his cure-all dandelion weed rather than continuing with your planned appointment? Unlikely. Yet different rules apply in the 21st-century online marketplace: the most popular article on Facebook in 2017 with ‘cancer’ in the title was ‘Dandelion weed can boost your immune system and cure cancer’.

Just like market hawkers, those who shout loudest online get heard and liked – 1.4 million times in the dandelion case – then shared and liked some more. Authoritative, evidence-based information loses the battle for our attention to brasher, more opportunistic claims; indeed, researchers have found that fake news is 70 per cent more likely to be retweeted than verified news.

What to believe?

With fake health news, the combination of eye-catching headlines, aggressive algorithms and bot-driven spread makes it a serious – and potentially deadly – threat to public health. For as long as it has existed, the internet has been the first port of call for people checking their symptoms when they feel unwell. But search engines frequently lead them straight to the very worst-case scenario: ranking algorithms weigh how often a keyword recurs on a page and how many clicks that page has already received. The top-ranking sites get clicked repeatedly, and so they stay at the top of the results page, even if the diseases they suggest are rare or the information comes from a dubious source.
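
To see why that loop is so hard to escape, consider a toy model – a minimal sketch in Python, not a description of any real search engine, with all page names invented for illustration. If pages are ordered by accumulated clicks and top positions attract the most new clicks, an early lead compounds over time, whatever the quality of the page.

```python
# Toy model of click-driven ranking. Illustrative only: real search
# engines combine many more signals than raw click counts.
pages = {
    "sensational-diagnosis.example": 120,  # alarming page with an early lead
    "official-guidance.example": 100,      # authoritative page
    "evidence-review.example": 90,
}

for day in range(30):
    # Rank pages by accumulated clicks, highest first.
    ranking = sorted(pages, key=pages.get, reverse=True)
    # The page in position i receives a share of new clicks that
    # falls off steeply with rank, roughly proportional to 1 / (i + 1).
    for position, page in enumerate(ranking):
        pages[page] += int(100 / (position + 1))

# The page that started on top stays on top; nothing in the loop
# ever re-evaluates the accuracy of what it is promoting.
print(sorted(pages.items(), key=lambda kv: -kv[1]))
```

The point of the sketch is that popularity is the only input: accuracy is never measured, so the worst-case-scenario page holds its position for as long as people keep clicking.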

Self-diagnosis-by-search-engine may fuel anxiety and worry – but, in most cases, the cure lies in an informed diagnosis made by a trusted medical professional. (Unless, of course, the searcher is a cyberchondriac – a term recently coined for hypochondriacs whose condition is exacerbated by access to endless online information about ailments.) But when someone moves from self-diagnosis to self-treatment, the provenance and intent of health information online can get a whole lot murkier.

In recent years, unverified health information has proliferated online, promoting miracle cures, dangerous diets and alternative medical therapies – all published without the editorial or medical oversight applied to their offline equivalents. In a high-profile recent example, Gwyneth Paltrow’s ‘Goop’ lifestyle brand was reported to Trading Standards and the Advertising Standards Authority in the UK for promoting ‘potentially dangerous’ advice related to ‘unproven’ health products. These included a supplement for pregnant women containing 110 per cent of the ‘daily value’ dose of Vitamin A; NHS and WHO advice is explicitly that pregnant women should avoid supplements containing Vitamin A because of the risk of harm to the unborn baby.

The end of experts

Even more worrying is the growing evidence that disinformation, spread by bots across popular online platforms, is being used to deliberately undermine official public health campaigns. In a recent report, the LSE’s Trust, Truth and Technology Commission identified ‘irresponsibility’ as one of ‘five giant evils’ fuelling the current information crisis, saying:

‘irresponsibility arises because power over meaning is held by organisations that lack a developed ethical code of responsibility and that exist outside clear lines of accountability and transparency.’

When platforms fail to prevent the amplification of false health information by social media bots, it can have serious consequences: ‘The absence of transparent standards for moderating content and signposting quality can mean the undermining of confidence in medical authorities and declining public trust in science and research. This has been visible in anti-vaccination campaigns when Google search was found to be promoting anti-vaccine misinformation. All over Europe, the anti-vaccination movement, informed via social media, is leading to a measurable decline in the rate of vaccination.’

US research into the proliferation of anti-vaccine content online points to a self-reinforcing cycle: exposure to negative information on vaccination leads to increased hesitancy and delay amongst parents, who are then more likely to turn to the internet for information and less likely to trust healthcare providers and public health experts on the subject. In the UK, a recent Academy of Medical Sciences report found that only 37 per cent of the public trust evidence from medical research.

Vigilance and regulation

So, what can be done to address the impact of fake health news and to reduce our willingness to believe and share it? A much greater focus on media literacy and critical thinking is needed: equipping and empowering individuals to spot, critique and fact-check fake news when they come across it online. A new project by the Wellcome Trust investigating how parents receive and interpret information about vaccinations on social media will provide valuable insight into effective solutions in this particular area.

And regulatory change will also be required to force greater accountability on those who facilitate the spread of fake health news in the first place; the huge attention now being given to the impact of online disinformation on democracy makes this all but inevitable, and the final report from the wide-ranging, high-profile Select Committee inquiry is expected imminently. Notably for health, the Law Commission’s recent scoping report on Abusive and Offensive Online Communications considered whether the scale at which mass online communication can spread false health information raises a ‘legitimate question’: should the law on false claims now extend beyond traditional contexts such as ‘fraud, consumer protection and the administration of justice’?

We are now far removed from a time when quackery could be relatively easily spotted and debunked, with the extent of its harms limited by physical constraints. As the Law Commission concluded: ‘false health claims might have [once] been tolerated on the basis of a broader commitment to freedom of expression. Could it now be argued that the potential harm caused by such conduct is so great that it justifies criminalisation?’
