Will we ever trust algorithms more than doctors? 



Julie Bretland, CEO of Our Mobile Health and a member of the AXA Health Tech & You Expert Group, talks to Tina Woods about trust in the world of apps and AI.

Julie Bretland

Julie founded Our Mobile Health to improve quality and provide confidence in the digital health sector by identifying, assessing and curating health apps, enabling healthcare organisations to recommend, deploy and ultimately prescribe digital interventions. She is a regular speaker on health innovation at events such as Financial Times Live, was an expert advisor on digital health at the London Health Commission, is an active participant in the EU health working group and set up DigitalHealth.London. Julie studied for her MBA at London Business School as a recipient of the prestigious Celia Atkin Advent scholarship. She won the Deloitte Institute of Innovation and Entrepreneurship Founders Award and is a founding member of the GSM Association’s ‘Connected Women’ programme. More recently she was named ‘Innovator of the Year’ by Women in IT Excellence and was a finalist in the Everywoman awards for Innovative Use of Technology. So she’s perfectly placed to advise on the way forward for technology in health.

‘The digital marketplace has become more mature – with entrepreneurs and developers becoming more savvy – but the change has not been radical,’ says Julie Bretland. But what do ‘trust’ and ‘trustworthiness’ mean in health technology? Julie argues there are two key elements to trust. The first is whether the technology works: ‘that it does what it is supposed to do and doesn’t rip you off’. The second is credibility: that the innovation ‘abides by the law and complies with regulation, looking after people’s data, especially personal data, properly’.

Julie adds that everyone in the chain needs to be considered as well, not just the person or patient using the app. Right now, if something goes wrong, responsibility lies with the app developer from a legal standpoint. However, as apps become more integrated into the healthcare system, as supported in the new NHS Long Term Plan, ‘healthcare professionals are becoming increasingly anxious about the lines blurring over accountability for safety and effectiveness’ – especially if, for example, that healthcare professional has been the one recommending an app to an individual who trusts their judgement.

What is trust?

What are some examples of ‘trusted innovations’? AliveCor ECG monitoring is one, says Julie, as it offers medical-grade testing, is FDA-approved and has a CE mark in Europe. It is also small and convenient, and can be used in a GP surgery or by individuals at home. However, there are also a number of heart rate monitors available for free on the app stores which aren’t classified as medical devices.

As a result of the General Data Protection Regulation (GDPR), in force since last May, and the Facebook and Cambridge Analytica data scandal, ‘the general public has become much more aware of the issues at stake with their data, how their data is being used, and who to trust with their data,’ says Julie. App developers are having to respond to this and ‘manage risks and communicate benefits far better than previously’.

Consumers and health professionals often expect apps from mainstream sources to be free; however, if an app is free, one has to question how it makes money to sustain itself and to keep up with regulation. Many business models will have been based on collecting all sorts of data, and not always ethically. ‘Education is needed on cost versus benefit for both health professionals and patients, but choice and convenience will play a part in engaging us as citizens in managing our own health and thus helping the prevention agenda,’ adds Julie.

Negative unintended consequences are often cited as a downside of innovation. Julie says that ‘typically, decisions on using new technology are made in silos, when a far more holistic view is needed regarding implementation of this technology’.

Flexibility is key

With reference to the use of technology in the NHS Long Term Plan, Julie argues that ‘we can’t force everyone down the same route and need to offer choice. Different solutions – whether face-to-face, remote or digital support – will suit different people at different times. But a positive consequence of using a technology could be that GPs have more time with patients who need it, for example. However, this also means policy needs to adapt to these changing scenarios, giving GPs that extra time and reimbursing them accordingly. A holistic view and a system approach are needed’.


As the regulatory environment tightens, companies are starting to split into those making claims regarding health outcomes (which must be backed by evidence, as required by regulation) and those making no claims. Julie says this ‘is creating a chicken-and-egg situation for start-ups, who need the evidence to get off the ground – but where do they go to build that evidence, and what evidence is required?’ The new guidance from NICE can be a good starting point.

There is a need to work together to build trust and increase understanding and awareness of AI at all levels, argues Julie. ‘How can we be certain AI is working in a way that we are comfortable with when, due to the dynamic nature of AI, we may not be able to eliminate the risk entirely? The answer is that we may be able to manage risk to a standard we are comfortable with’.

The NHS Long Term Plan addresses some of this, but Julie cautions against expecting the NHS to take on everything. ‘Can we really expect the NHS to be responsible for everything in the consumer space? It is a difficult balancing act, but we need the early adopters, and the NHS should not stifle innovation’.

‘There are some good frameworks available now for assessing apps,’ says Julie, ‘but they don’t deal with more dynamic algorithms and black-box concerns – yet. Overall, however, we are heading in the right direction. Awareness and understanding of digital health and AI are increasing all the time, and this will lead to increased adoption’.

Top tips to build trusted innovations

To ensure innovations are ‘trustworthy’ and can be ‘trusted’, Julie offers the following top tips:

  1. Be transparent: about who you are as an organisation, what data you collect, what you do with it, and how you manage and look after it. Make it easy for users to contact you.
  2. Embrace regulations: make regulations your friend, don’t try to circumvent them. Regulations are becoming tighter all the time.
  3. Build your evidence base: you’ve got to start somewhere, and you should only make claims about medical outcomes that you can support with evidence. In a world where we are changing the behaviours of both users and professionals, the sounder your evidence, the more users and healthcare professionals will trust that they are in good hands.

About the author

Tina Woods is founder of Collider Health, a health innovation catalyst that works with organisations to think and do differently and transform health with meaningful impact. She is also the founder of ColliderSCIENCE, a social enterprise to inspire young people in science and engineering and equip them with the skills to create their future.
