I am a resident doctor in hospitals for my usual day job. I’ve taken a year out to do a leadership fellowship in AI within healthcare settings. I sincerely believe we are at the cusp of seeing the technology become increasingly integrated within healthcare.

There is a common refrain that AI has no place within healthcare, particularly in the UK, because our infrastructure is so poor. “We still write on paper notes!” That is true, and the current state of IT infrastructure within the NHS is abysmal. There is a lot of work to be done, but I firmly believe in approaching AI proactively.

https://www.forbes.com/sites/cindygordon/2022/10/31/ai-in-healthcare-is-making-our-world-healthier/

We must be aware that when we integrate these technologies within the NHS, we are attempting to bridge two cultural paradigms that are in many ways incompatible. The private companies that develop digital technologies typically operate with a “move fast and break things” mindset. The consequences of mistakes are usually minimal, so this approach makes sense in most other contexts. Healthcare, on the other hand, has an estimated 17-year gap between research and implementation, and a culture steeped in hierarchy and distrustful of change. Although this hampers innovation, it can be appropriate in a safety-critical field where mistakes can kill. Both perspectives are reasonable within their given context, and there are ways to bridge this incompatibility through innovation.

This is illustrated in The Innovator’s Dilemma by Clayton Christensen. Typically, companies develop sustaining technologies – innovations that gradually improve the performance of the technology or system currently in place. This, however, leaves them vulnerable to disruptive technologies – technologies that are typically of lower quality or performance than the original, but good enough. They start off by serving a niche market and then rapidly improve their product to match the needs of a larger market. We can draw a parallel to healthcare by emphasising that, first and foremost, the technologies we bring in must be safe. Only once a standard of “safe enough” (in a systems-safety approach) is achieved can we think about possible financial or time savings. This sounds obvious from the outset, but it is much less so when dealing with vested interests and cultural forces that inhibit this line of thinking.

Listening to stakeholders is critically important. To really understand the healthcare system you are working with, you must listen to healthcare staff and patients. What do they need to do their work more effectively? How do the current systems run? Will the technology you are introducing improve their efficiency, or add to the burdens they already face in a high-pressure job? That is the approach to take with sustaining innovation – and a lot of AI will be sustaining in nature. However, AI can also be disruptive. I do not believe the disruptive forms will necessarily come by design. They will be AI solutions designed for a different context and applied elsewhere – brought in because a savvy medical professional sees the potential.

I suppress an eye roll whenever I hear people say that AI will replace doctors or that we will be ruled by robot overlords. We have automated large portions of aviation, but we still have two pilots – and likely always will. These technologies can augment human performance rather than replace it, particularly in safety-critical fields. My bigger worry is that technology reinforces our worst habits. Our technology is merely a reflection of ourselves: we create it, and we unconsciously develop it in ways that highlight what we value and reinforce our biases and prejudices. Social media is an example of this – we have increased connectivity across vast distances, but we have also witnessed the promulgation of fake news, the polarisation of our electorate, and the fragmentation of our attention. I dread to think what technology that reinforces the worst of us could look like with the algorithmic powers of AI. The best way to understand how AI will play a role in the future is to really understand ourselves.

AI will rarely result in sudden catastrophic failures. Nassim Taleb’s book The Black Swan classifies events into “high probability, low impact” and “low probability, high impact” varieties. What I think we will mostly see is the high-probability, low-impact variety – but these effects are cumulative, and they may result in catastrophic failures that are harder to undo as they permeate cultural practices. Anyone who works in cultural settings knows how hard those can be to change.

There is great promise with AI in healthcare – likely by augmenting our human performance and improving efficiency, particularly with administrative tasks. However, we must approach this with our eyes wide open and create systems that maximise benefits and mitigate harm.

Further reading: MPS white paper on the AI off-switch https://www.thempsfoundation.org/news-reviews-research/article/2025/03/26/avoiding-the-ai–off-switch—make-ai-work-for-clinicians–to-unlock-potential-for-patients
