Will ChatGPT treat us in the future?
We’ve all been there: you wait in the doctor’s reception until your name is called. You’re shown into a sterile room humming with screens. You explain your symptoms. A few questions, a few more keystrokes—then a print-out and a polite farewell.
Not every doctor’s visit is like this, but it’s not uncommon to feel as if the doctor spent more time with the computer than with you.
Doctors and ethicists alike lament the decline in high-quality, person-centred care. Artificial intelligence entering healthcare raises the question: will AI help reverse this trend, or push doctors and patients even further apart?
Recent reports suggest that integrating AI into healthcare could save time and sharpen diagnostic precision. Already, it’s optimising clinical decision-making, logistics, administration, and drug development.
In 2025, the government empowered Pharmac and Medsafe to explore AI for automating regulatory tasks such as drafting assessment reports and analysing side-effect patterns, aiming to accelerate access to new medicines and deliver a 24/7 digital health service.
This sounds promising, but as with any ground-breaking technology, we must also be mindful of the risks.
Experts on AI in healthcare warn of the dangers of “datafication,” where the patient is reduced to their biodata and seen less as a person with unique experiences, emotions, and values.
The tendency to overemphasise technology and data while reducing the patient to their abnormalities is not new in medicine. Michel Foucault described it in the 20th century as the “medical gaze,” a trap that AI could intensify.
Loss of empathy is another risk. Research has documented a decline in empathy among medical students, even though numerous studies show that physician and nurse empathy improves pain management and speeds healing. Widespread reliance on AI could accustom doctors to efficiency-driven care, where patient narratives are sidelined in favour of standardised algorithmic processes.
The caring relationship between doctor and patient is the foundation of every successful treatment. It involves more than analysing data; it requires trust, compassion, and responsibility. AI is excellent at processing large datasets, but it cannot understand what the data really mean. This is why we always need a human in the loop to interpret them responsibly and holistically.
We should be clear about what AI should and shouldn’t do when it’s operating in the context of vulnerable human beings. Used for administrative tasks, it can bring efficiency to our health system without compromising care. But any application that outsources core human responsibilities—such as empathic patient communication or responsible decision-making—risks eroding the doctor-patient relationship, and ultimately, the quality of care.
The future of medicine depends not just on AI’s capabilities, but on how we decide to use them. If we deploy AI applications—like ChatGPT—to improve person-centred care and human connection, rather than replace them, we can have the best of both worlds: precision and efficiency alongside empathy and trust.
Patients want more than data processing. They want to be seen and treated with compassion and respect. That’s something no algorithm can do, and something we must never outsource.