Artificial Intelligence (AI) is no longer knocking at the door of healthcare – it’s already in the room. From faster diagnoses to personalised treatment and streamlined administration, the potential of AI to revolutionise care is undeniable. However, as a doctor working at the intersection of product development, clinical risk, and strategy, I believe the real conversation isn’t whether AI has value. It’s whether we understand how to use it responsibly.
Because if we get that wrong, we don’t just risk inefficiency or a poor experience; we risk losing trust.
Human Judgement Will Always Matter
Clinical judgement isn’t just data-driven; it’s sensory, relational, and deeply human. When you sit in front of a doctor, you’re not just listing symptoms. You’re being seen. Your energy, body language, tone of voice and the look in your eyes are all clinical inputs too.
These are the subtleties that AI cannot yet grasp. And maybe it shouldn’t. Because part of what makes human healthcare so powerful is its ability to catch what can’t be coded.
AI, when used well, should be complementary, not a replacement. But this depends on something critical: trust. And trust is earned through responsible use – through validated, evidence-based implementation that holds AI to the same standards we apply to medicine itself.
When Should AI Lead, and When Should It Support?
This isn’t a binary question. The answer, like so much in medicine, is context-specific.
If an AI tool has been rigorously tested and proven to outperform traditional diagnostics in a particular area, it should lead. But if its accuracy is unclear, the tool is still under development, or it has not been shown to be superior, traditional investigations and management must take the lead.
We can’t treat AI like a mystical black box. It’s just another tool in our clinical toolkit. Like any test, its usefulness depends on how well it performs—and whether the system around it is ready to implement it responsibly.
Personalisation Can’t Exist Without Patient Participation
AI has the power to personalise care faster than ever. But personalisation without patient involvement isn’t personal; it’s transactional.
As clinicians, we still need to interpret the AI’s suggestions and communicate them clearly. Especially in communities where AI can feel foreign or even threatening, transparency is key. Patients deserve the right to understand how their care is being shaped.
And if they’re not comfortable? That’s their right, too. AI must exist within a framework of informed consent, cultural sensitivity, and choice.
Efficiency Must Make Space for Empathy
AI is already reducing administrative burden, helping automate documentation, appointment management, and triage. But what we do with that saved time matters.
If we use AI to free up time, we must reinvest it in the human moments that matter most: conversations, listening, trust-building. Efficiency is only valuable if it enhances the parts of medicine that machines can’t replicate.
Ethics Can’t Play Catch-Up
We’re moving faster than the regulations are. South Africa doesn’t yet have a comprehensive AI healthcare regulatory framework. And globally, most countries are still catching up.
This leaves us with an even greater responsibility: self-regulation grounded in professional ethics. The use of AI should not strip away our obligation to engage with empathy or respect patient dignity. If anything, it raises the stakes.
We must question not just what AI can do, but what it should do—and ensure that ethical complexity is part of the rollout, not an afterthought.
What’s Holding Us Back?
Often, it’s not the technology; it’s the infrastructure around it.
In some cases, medical aids don’t cover AI-enhanced diagnostics, making it harder to adopt even when tools outperform traditional ones. And if patients don’t understand how the tech works, or feel alienated by it, uptake slows down.
We’ve seen AI work well, especially in image-based diagnostics like CT scans and X-rays. But for other areas, such as mental health or chronic lifestyle-related diseases, there is still a long road to walk. The tech may be ready, but the system around it isn’t always.
The Road Ahead
I believe AI in healthcare is not just inevitable; it’s essential. We need it to address the growing pressures on our systems, to improve access, and to elevate standards of care.
But it will take time. It will take collaboration. And it will take a commitment to keeping patients and professionals at the centre of the system, not just the software.
As a doctor, a product leader, and a student of this evolving field, I’m excited about the future, but I’m also cautious. This isn’t just about adopting new tools. It’s about reimagining care – and that is something we must do responsibly.
- Dr Jessica Hamuy Blanco – known as Dr Jess – is a Product and Clinical Risk Executive at Dis-Chem.
