The More AI Knows About Your Health, The More Human Connection Matters
Blog
January 8, 2026
Yesterday, OpenAI launched ChatGPT Health—a dedicated AI assistant that can access your medical records, lab results, fitness data, and health history. Over 230 million people already ask ChatGPT health questions every week. Now those conversations will be informed by your actual bloodwork, your prescription history, your family medical background.
This is remarkable technology. And it changes nothing about what patients truly need.
Let me explain.
The Information Was Never the Problem
I’ve spent 25 years as a neonatologist, delivering news that no parent should ever have to hear. I’ve watched colleagues struggle through disclosure conversations after medical errors. I’ve seen physicians armed with every data point, every scan, every metric—and still fail to connect with the person sitting across from them.
The problem has never been information. Patients can access their lab results through a portal. They can Google their diagnosis. They can—and now will—ask an AI to synthesize their entire medical history into a coherent narrative.
What they cannot get from any technology is the moment when their physician looks them in the eye, pauses, and says: “This must be overwhelming. Let me walk through this with you.”
AI Will Handle the Data. Physicians Must Own the Delivery.
Here’s what I believe ChatGPT Health will actually accomplish: it will raise the bar for human interaction.
When a patient walks into an appointment having already reviewed their cholesterol trends with AI, already asked about drug interactions, already researched their treatment options—they’re not coming for information. They’re coming for guidance. For perspective. For someone who can sit with them in their uncertainty and help them make meaning of what the data reveals.
This is the paradox of our moment: the better AI gets at explaining medicine, the more physicians must excel at practicing it humanly.
Patients will arrive more informed than ever. They’ll have questions AI couldn’t answer—not because the information didn’t exist, but because the questions weren’t really about information at all. They’ll want to know: What would you do if this were your mother? How worried should I actually be? What aren’t you telling me?
These questions require presence. They require trust built in moments. They require what I call compassionate communication—the skill of conveying not just facts, but care.
The Conversations AI Cannot Have
There are conversations no algorithm should ever attempt.
When a test comes back and the news is devastating. When an error has occurred and trust hangs in the balance. When a family must decide whether to continue treatment or let go. When someone’s world is collapsing and they need a human being—not an interface—to witness their pain.
These moments are where medicine either earns its sacred status or loses it entirely. And they cannot be outsourced to technology, no matter how sophisticated.
What concerns me is not that AI will replace physicians in these conversations. It’s that as routine communication migrates to technology, physicians will have fewer opportunities to practice the skills these critical moments demand. The muscle of human connection atrophies without use.
This is why communication training matters more now than ever—not less. Not because we’re competing with AI, but because the territory AI cannot enter is precisely where physicians must be exceptional.
The Physician of the Future
I envision a future where AI handles what AI does best: synthesizing data, identifying patterns, flagging concerns, answering the questions that have clear answers.
This frees physicians to do what only humans can do: sit with uncertainty, bear witness to suffering, guide patients through the valley between diagnosis and acceptance, and transform the worst moments of someone’s life into opportunities for trust and healing.
But this future requires us to be intentional. If we assume human connection will simply persist because it always has, we’re wrong. Physicians now in training have grown up with technology mediating most of their communication. The skills of presence, of reading a room, of knowing when to speak and when to simply be—these must be taught.
Medical schools spend hundreds of hours on biochemistry and perhaps twenty on communication. That ratio made sense when information was the physician’s primary value. It makes no sense in a world where a patient’s AI assistant knows their medical history better than their doctor.
It’s All in the Delivery
ChatGPT Health represents a genuine leap forward for patient empowerment. I welcome any technology that helps people understand their own bodies, prepare better questions for their physicians, and take ownership of their wellbeing.
But let’s be clear about what it cannot do.
It cannot hold someone’s hand. It cannot match its breathing to a grieving mother’s. It cannot make the split-second decision to stop talking and simply be present. It cannot carry the weight of thirty years of clinical experience into a room where a family is falling apart.
These are the moments that define healthcare. And they will always—always—be about human beings connecting with human beings.
The technology will handle the data. The question is whether we’ll develop physicians who can handle everything else.
That’s the work ahead. And it matters more now than ever.