Will AI Replace the Physician? It's Already Replacing the Intern
- Nate Swanson

The Invisible Intern: AI Enters Everyday Medical Practice
AI in medicine isn’t a distant storm on the horizon anymore – it’s already raining in hospital corridors. Walk into a modern clinic or hospital in 2025 and you’ll find artificial intelligence quietly embedded in the workflow. Digital scribes transcribe and draft patient notes from exam room conversations; electronic health records (EHRs) automatically suggest diagnoses, flag abnormal labs, and even pre-populate billing codes. In a recent survey, 57% of U.S. physicians reported using AI tools in their daily work, mostly for documentation assistance, billing, and some diagnostic support. This invisible coworker doesn’t sleep, doesn’t complain, and never forgets to check a detail. It’s marketed as an “assistant,” but in practice it’s starting to feel more like a junior colleague who handles the grunt work. In fact, AI’s role is evolving into something closer to a replacement for the lowest-rung physicians – the interns – who traditionally learned medicine by doing the very tasks AI is beginning to handle.
The unsettling truth is that AI is already taking over the entry-level cognitive labor that once defined early medical training – in effect, it is already replacing the most junior physicians. On paper, this might sound like progress: automating tedious tasks should mean less drudgery and more time for patient care. And indeed, physicians hope that AI can reduce paperwork and burnout, and early data shows AI scribes can cut documentation time by roughly 20–30%. But this efficiency carries a quiet cost: the way new doctors learn to think and make decisions is being fundamentally rewired. We’re outsourcing to machines the very experiences that used to train human physicians, and those machines never had to learn the hard way in the first place.
The Disappearing Doctor-in-Training: AI Replacing Physicians, Lite Edition
For decades, the internship year was a rite of passage in medicine – a boot camp of long hours, bleary-eyed note-writing, and learning by trial and error under supervision. Every note an intern wrote, every lab value they painstakingly checked, every differential diagnosis they crafted was part of an apprenticeship in clinical reasoning. The inefficiency was by design: by doing everything themselves (however messy and slow), interns gradually built their medical judgment.
Now, that apprenticeship model is quietly vanishing. In 2025, large hospital systems are deploying ambient AI scribes that can listen to a patient encounter and generate a polished SOAP note within minutes. By the time the physician logs into the chart, the history, exam findings, and even an initial assessment and plan are already drafted. The doctor’s role is reduced to quickly glancing at the output, making a few edits, and clicking sign. What used to take an intern an hour of cognitive effort – reviewing vital signs, reconciling conflicting lab results, distilling the patient’s narrative into a coherent plan – now happens almost automatically in the background.
On paper, that sounds like liberation from scut work. In practice, it means interns aren’t learning by writing and rewriting anymore – they’re learning by editing the machine’s work. Instead of formulating their own framework for a patient’s problem, they’re handed a framework by the AI and asked to approve it. The result is a superficial kind of efficiency that may actually impede mastery. A generation of young doctors that once learned by struggling through the process is now at risk of learning by proxy. They supervise automation more than they practice medicine. The muscle memory of clinical reasoning – honed by crafting notes and orders from scratch – is replaced with the chore of reviewing autopilot output.
Experienced physicians worry about what this means for the future of expertise. You cannot become a master chef by only tasting a finished dish and adding salt; you have to cook many meals yourself. Likewise, you can’t become a master clinician by only correcting AI-generated plans. Education through repetition is being quietly replaced by oversight of automation. It may produce physicians who are efficient at clicking “approve,” but not physicians who deeply understand why a given plan is the right one. In short, we’re trading the intern’s messy learning process for polished machine drafts – and potentially trading away true competence for efficiency.
When Decision Support Becomes Decision Making
AI first entered medicine as a safety net – a “second pair of eyes” to catch things doctors might miss. Clinical decision support systems and diagnostic algorithms were introduced as helpful assistants: they’d pop up suggestions (“Consider sepsis in this febrile patient”) or checklists (“Did you order a β-blocker for this heart failure patient?”) intended to augment physician decision-making. But as these tools have become more sophisticated and pervasive, they are subtly changing how clinicians think. Over time, suggestions can turn into defaults, and defaults can turn into the path of least resistance.
Today’s “assistive” systems are already shaping decisions by their mere presence. For example, many EHRs will highlight likely diagnoses based on the patient’s symptoms and lab results, or rank possible causes of a complaint by probability. If an AI system consistently flags a certain diagnosis at the top of the list, physicians (especially less experienced ones) may start accepting that suggestion uncritically. Alerts fire to warn about potential sepsis or kidney injury, order sets are pre-filled to follow clinical guidelines, and predictive models nudge providers toward standardized care pathways. It often feels easier – and is often incentivized – to click “Accept” on the AI’s recommendation than to take the time to second-guess it.
The line between a suggestion and a decision blurs quickly. What began as a gentle nudge from the machine can become the default action. We’re seeing a subtle shift: the AI’s opinion, initially just one voice in the room, starts to overshadow the human’s own reasoning. Physicians, especially trainees and busy clinicians, can become habituated to following the prompt. After all, the AI is fast, usually plausible, and seemingly evidence-based. But this habituation carries a risk. Doctors may start to trust the algorithm more than their own judgment, even in cases where the algorithm might be wrong.
When the system does err, however, it’s still the human who bears the responsibility. This is the paradox of AI-assisted medicine: it erodes autonomy even as it amplifies accountability. If an AI suggests a treatment and a doctor goes along with it, any harm that results is the doctor’s fault – not the software’s. Yet, was that decision truly the doctor’s, or was it effectively made by the algorithm? The American Medical Association has voiced concern on this exact point. In a 2025 policy report, the AMA warned that when AI tools aren’t transparent or “explainable,” a clinician’s training and expertise can be “removed from decision-making,” leaving them feeling compelled to act on an algorithm’s output without understanding it. In other words, we risk becoming accountable for decisions we didn’t fully make – a troubling scenario for both patient safety and physician morale.
To be clear, decision support tools can be valuable. They often do catch overlooked problems and improve standardization. But there’s a fine line between support and substitution. If doctors become passive operators of AI outputs – merely clicking through the choices the system has already made – the very role of the physician as a thoughtful decision-maker is diminished. Automation bias (the tendency to favor suggestions from an automated system) can creep in, especially if deviating from the AI’s recommendation requires extra effort or invites scrutiny. This dynamic raises an urgent question: Are we training the next generation of doctors to be pilots or just passengers? In healthcare’s high-stakes environment, being a passive passenger to a black-box algorithm is a dangerous place to be.
A New Math for Medical Economics
Beyond training and decision-making, AI is poised to shake up the economics of the medical profession – an area many prefer not to think about. Medicine has long been relatively protected from automation. The diagnostic reasoning and complex interpersonal skills of physicians were considered irreplaceably human, which helped justify the intensive training and high salaries doctors command. In economic terms, physicians have been both scarce and valuable because what they do is hard to scale. You can’t easily double a doctor’s patient load without running into limits of time, cognition, and safety.
AI threatens to change that math. If much of a physician’s routine cognitive work – documenting encounters, formulating basic diagnoses, writing orders – can be handled or at least heavily streamlined by software, then the traditional ratio of doctors to patients can shift. A single physician, augmented by AI and perhaps teamed with a cadre of advanced practice providers (nurse practitioners or physician associates), could conceivably oversee the care of far more patients than a physician working unaided. Healthcare executives and venture capital investors have noticed. They see an enticing possibility: one doctor supervising a squad of AI-assisted practitioners might replace several doctors working in the old model. In primary care, for instance, a physician might remotely guide multiple clinics staffed by nurse practitioners armed with AI diagnostic tools. In hospitals, one specialist could virtually consult on dozens of patients flagged by an AI as needing attention, instead of personally rounding on all of them.
The implications for physician staffing and salaries are unsettling. When certain labor can be automated or scaled up with technology, the demand for that labor shifts. Hospitals and corporate healthcare systems under financial pressure may not need as many physicians on the payroll if each remaining doctor can handle more output with AI’s help. In economic terms, as efficiency rises, the “scarcity value” of physicians could decline. This isn’t theoretical – we’re already seeing hints of it. Large health systems and private equity-backed medical groups are actively experimenting with models where fewer doctors oversee more of the work. The pressure to cut costs in healthcare is relentless, and labor (especially highly-paid labor like physicians) is often the biggest target.
Consider the fields likely to feel the earliest effects. Radiology and pathology – specialties centered on interpreting images and data – have been eyed for AI disruption for years. Advances in machine learning have produced algorithms that can detect tumors on scans or classify tissue samples with impressive (and ever-improving) accuracy. While most experts believe radiologists and pathologists won’t be outright replaced (the consensus is that human oversight remains crucial), these tools can significantly augment one specialist’s reach. If an AI can handle the first pass of reading 100 chest X-rays, one radiologist can validate the AI’s findings across all those images in the time it used to take to manually read a fraction of them. That boosts productivity – and might reduce the need to employ multiple radiologists for the same volume of work. Similar dynamics apply in fields like dermatology (AI analyzing skin lesion photos) and even primary care, where a large portion of diagnostic work is pattern recognition and protocol-driven decisions that AI can learn to emulate.
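To put rough numbers on that productivity claim, here is a deliberately simple back-of-the-envelope sketch. Every figure in it – minutes per read, the share of studies the AI flags – is an invented assumption for illustration, not data from any study, vendor, or radiology department.

```python
# Back-of-the-envelope only: all timings and fractions below are assumptions
# made up for illustration, not measurements from any radiology workflow.

STUDIES = 100                  # chest X-rays in the queue
UNAIDED_MIN_PER_STUDY = 3.0    # assumed minutes for a full unaided read
FLAGGED_FRACTION = 0.20        # assumed share of studies the AI flags as abnormal
FULL_READ_MIN = 3.0            # flagged studies still get a full human read
QUICK_CHECK_MIN = 0.5          # assumed minutes to verify an AI-negative study

unaided_total = STUDIES * UNAIDED_MIN_PER_STUDY

flagged = STUDIES * FLAGGED_FRACTION
unflagged = STUDIES - flagged
assisted_total = flagged * FULL_READ_MIN + unflagged * QUICK_CHECK_MIN

print(f"Unaided reading:  {unaided_total:.0f} minutes for {STUDIES} studies")
print(f"AI-assisted pass: {assisted_total:.0f} minutes for {STUDIES} studies")
print(f"Throughput multiple: {unaided_total / assisted_total:.1f}x")
```

Under these made-up assumptions, one radiologist covers roughly three times the volume. Change the assumptions – more flags, slower verification, mandatory double reads – and the multiple shrinks, but the direction of the economic pressure stays the same.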
None of this is to say that physicians will become obsolete. But economic pressures will force a reckoning. We may see a stratification where a smaller number of physicians are hired to do the high-level, complex tasks and supervise AI-driven routine care, while non-physician practitioners handle the front-line interactions. The value of physicians may increasingly lie in what machines cannot do – managing the toughest cases, providing the human touch, and navigating ambiguity – rather than in the routine tasks that once filled our days. This is already reflected in how some healthcare leaders talk about AI: not as a mere tool to help doctors, but as a way to “streamline” care delivery and cut costs. In blunt terms, if AI can do 70% of what a general practitioner does at a fraction of the cost, the healthcare system will find a way to leverage that. As an economist might put it, efficiency doesn’t just coexist with scarcity – it redefines it. The roles and numbers of physicians will adjust to the new definition of efficient care.
It’s worth noting that procedural and hands-on specialties (surgeons, anesthesiologists, emergency physicians managing acute crises, etc.) remain less directly threatened in the near term – a robot can’t (yet) perform a complex surgery or intubate a patient during a code blue without human help. But even these fields will be impacted indirectly by shifts in training, referrals, and resource allocation as AI penetrates diagnostics and outpatient care. The overall message for current and future doctors is stark: we have to prepare for a world where our cognitive skills are no longer a protected monopoly. The financial justification for our status will have to shift to the uniquely human aspects of doctoring, or we risk seeing our profession economically “downscaled” over time.
What AI Still Can’t Do in 2025
Amid all these rapid changes, it’s critical to highlight what AI cannot do – and may never be able to do – in the realm of medicine. For all its computational brilliance, current AI remains hopelessly bad at being human. It doesn’t truly understand emotion or context in the way a person does. It can simulate empathy in words, but it can’t actually feel empathy or grasp the nuance of individual human experiences. And it certainly can’t take moral responsibility. An algorithm never lies awake at night worried it missed something important about a patient’s condition; a conscientious doctor often does.
Medicine, at its core, is not just about crunching data – it’s a moral and humanistic profession. It’s about understanding what matters to each patient as a person, navigating uncertainties, and making judgments that often have profound ethical dimensions. An AI might deduce that a 90-year-old patient with multiple comorbidities has a statistically low chance of surviving a surgery, but only a human doctor (in conversation with the patient and family) can weigh that statistic against the patient’s own values and fears – for example, their willingness to take a risk for a chance to attend an important family event versus their desire for comfort. Such decisions are laden with values and emotions that no model can truly replicate.
We should remember that data in medicine is never complete or perfect. Every diagnosis is a probability, every treatment a gamble of risks vs. benefits. Patients often defy “textbook” presentations; they have preferences that don’t fit neatly into guidelines. The role of the physician is not just to process information but to translate information into meaning and action for a unique human being. That requires wisdom, compassion, and an ethical compass. AI, no matter how advanced, operates on patterns and probabilities – it lacks the commonsense understanding and lived experience that inform human judgment. For instance, AI might know that a certain drug interaction is dangerous on paper, but it doesn’t know how to gently convince a stubborn patient who doesn’t trust doctors, or how to detect the unspoken pain in a patient’s eyes during a difficult conversation. It doesn’t know what it feels like to tell a mother her child has cancer, and it can’t hold her hand or give her a hug.
Even in the realm of pure information, AI’s lack of true understanding can lead to critical errors. A striking case reported recently involved an individual asking a generative AI (ChatGPT) for health advice. The person inquired about alternatives to table salt for their diet. The AI, lacking contextual judgment, confidently recommended bromide as a substitute. Bromide is not a cooking spice – it’s a toxic substance if ingested chronically (historically used as a sedative, now known to cause severe neurological harm). The individual, trusting the AI’s authoritative tone, actually obtained bromide salts and became gravely ill with bromide poisoning, ending up hospitalized with hallucinations. No human doctor would ever have made that suggestion, because humans understand context – we know that just because bromide is chemically akin to salt doesn’t make it an edible seasoning. This story underscores that AI has no true comprehension; it can string together facts from its training data, but it doesn’t possess judgment or responsibility.
In sum, AI lacks the “human element” that is often the difference between a correct technical answer and the right thing to do for the patient. As one medical educator put it, the human aspects of care – empathy, compassion, critical thinking, ethical reasoning – remain “invaluable” for holistic patient care. These are the dimensions of medicine that make it an art as much as a science. They are what make patients trust and connect with their doctors. And these are precisely the qualities that future physicians must cultivate and protect, even as machines become ubiquitous in handling the scientific and technical aspects. In an era when machines dominate data processing, the human work of making meaning from that data becomes priceless. The most irreplaceable doctors will not be those who memorize the most guidelines or type the fastest notes – they will be those who interpret and care most wisely, bridging the gap between algorithms and lives.
Physicians Must Reclaim the Driver’s Seat
It’s not AI itself that poses the biggest threat to physicians – it’s how AI is designed and deployed in healthcare, and who is steering that process. Right now, the development and implementation of most medical AI systems are driven by large corporations and health system administrators. Their incentives often revolve around productivity, standardization, and cost-effectiveness. These aren’t nefarious goals – a more efficient, error-free health system benefits everyone in theory – but they are not the same as the core clinical goals of individualized patient care and physician development. If doctors remain passive, we could end up with systems that prioritize billing optimization and “throughput” over quality of care and training.
To avoid that fate, doctors and clinical leaders need to take back control of the narrative and the implementation of AI. We should be at the table – ideally leading the table – when decisions are made about what an AI tool is meant to do and how its success is measured. Here are a few critical steps and safeguards the medical community should insist on:
Transparency and Explainability: We must demand that AI tools used in patient care are not black boxes. What data was used to train the model? Does it have known biases or blind spots? How exactly is it arriving at a given recommendation? The American Medical Association recently adopted a policy pushing for “explainable AI” in clinical settings – meaning any algorithm’s output should come with an explanation that physicians can interpret and act upon. This is crucial. If we don’t know how an AI came to a conclusion, we can’t fully trust it or justify it to a patient. Transparency also means being honest about the AI’s limitations.
Independent Oversight and Validation: Hospitals and regulators should require independent validation of AI systems. It’s not enough for a vendor to claim their algorithm works – an unbiased third party (whether a regulatory body, a medical specialty society, or an academic group) needs to verify that the tool is safe and effective for the intended use. And if an AI system is making semi-automated decisions, we need clearly defined accountability. Who is responsible if the AI gets it wrong? Physicians should not be left holding the bag for a bad outcome caused by a flawed algorithm they had no hand in creating. That means we need malpractice and liability frameworks that appropriately apportion responsibility when AI is involved, something the AMA and others are also calling for.
Human Override and Autonomy: Clinicians must retain the right and ability to override AI recommendations. No algorithm should have final say over patient care against a doctor’s judgment. We’ve seen alarmingly rigid protocolization in some settings even before AI (for example, “sepsis protocols” that are essentially checklists); with AI, the risk is that these become even more locked-in. Physician judgment should never become a mere rubber stamp for an AI’s decision. Ensuring that is both a technical design issue (interfaces should always allow easy opting out or modification of AI suggestions) and a policy issue (no doctor should be penalized for choosing a different course when they have sound reasoning).
Physician-Led Governance: Every hospital or practice that deploys AI tools should have a physician-led committee or governance board to evaluate those tools, monitor their performance, and decide on their continued use. These boards should include not just tech-savvy doctors but also ethicists and patient representatives. The people who actually care for patients need to be the ones defining the criteria for AI in care: for example, setting thresholds for sensitivity vs. specificity of alerts to minimize false alarms, or deciding when an AI’s error rate is too high to tolerate. If deployment is left solely to IT departments or executives, clinical considerations will take a backseat to operational ones. Doctors must assert leadership from the inside, building the guardrails before outside regulators (who often lag years behind) swoop in with one-size-fits-all rules.
Education and Proficiency: Individually, every physician should seek to understand the basics of how AI works. This is now as essential to medical training as learning pharmacology or physiology. You don’t need to be a programmer, but you should know concepts like algorithmic bias, sensitivity vs. specificity, what a false positive is, and how machine learning models are evaluated (a brief worked example follows this list). You should be able to read a validation study of a new AI tool and spot red flags. In short, AI literacy should become a core medical skill. If we bury our heads in the sand, we’ll be illiterate captains trying to steer a very complex ship. Understanding AI’s strengths and weaknesses arms us to push back when the technology isn’t serving our patients’ best interests.
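To make those ideas concrete – sensitivity, specificity, false positives, and the alert thresholds mentioned above – here is a minimal, illustrative sketch in Python. Every number in it is invented (a hypothetical sepsis-alert score checked against hypothetical chart review); it is not drawn from any real product or dataset. The point is only to show what the metrics measure and how moving an alert threshold trades missed cases against false alarms.

```python
# Illustrative only: a made-up "sepsis alert" score evaluated against
# hypothetical chart-review labels. No real patient data or vendor tool.

# Each tuple: (model risk score from 0 to 1, did the patient truly have sepsis?)
hypothetical_cases = [
    (0.92, True), (0.81, True), (0.77, False), (0.65, True),
    (0.58, False), (0.41, False), (0.39, True), (0.22, False),
    (0.15, False), (0.08, False),
]

def evaluate(threshold: float):
    """Tally confusion-matrix cells when the alert fires at or above `threshold`."""
    tp = fp = fn = tn = 0
    for score, has_sepsis in hypothetical_cases:
        alert = score >= threshold
        if alert and has_sepsis:
            tp += 1      # true positive: alert fired and the patient was septic
        elif alert and not has_sepsis:
            fp += 1      # false positive: a false alarm
        elif not alert and has_sepsis:
            fn += 1      # false negative: a missed case
        else:
            tn += 1      # true negative: correctly silent
    sensitivity = tp / (tp + fn)   # of the real cases, how many were caught
    specificity = tn / (tn + fp)   # of the non-cases, how many were left alone
    return sensitivity, specificity, fp, fn

for threshold in (0.3, 0.5, 0.7):
    sens, spec, false_alarms, missed = evaluate(threshold)
    print(f"threshold {threshold:.1f}: sensitivity {sens:.0%}, "
          f"specificity {spec:.0%}, false alarms {false_alarms}, missed cases {missed}")
```

Lowering the threshold catches more true cases (higher sensitivity) at the cost of more false alarms (lower specificity); raising it does the reverse. That trade-off is exactly the kind of decision a physician-led governance board – not a vendor default – should own, and exactly the kind of thing to look for when reading a tool’s validation study.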
We cannot afford to wait for policymakers or tech companies to “do the right thing.” The history of healthcare technology has taught us that if doctors don’t actively engage, we often end up with tools that serve someone else’s agenda. Electronic health records, for example, were largely designed to meet billing and regulatory requirements, and only later did we realize how poorly they fit clinical workflow – turning physicians into glorified data entry clerks. We’re still dealing with the burnout from that. AI could repeat those mistakes at a much larger scale if we’re not vigilant. The difference now is that the consequences aren’t just annoying extra clicks or lost hours; they could be de-skilling of our profession and misaligned care at scale. The good news is that medical organizations are waking up to this. Many are calling for exactly these safeguards. The AMA’s recent moves on transparency and oversight are one example, and specialty societies are beginning to issue ethical guidelines for AI. But high-level policies mean little unless practicing doctors insist on them day to day. Ultimately, we – the clinicians – have to build the guardrails from within, because we’re the ones who will be held responsible when the system gets it wrong.
Medical Students at a Crossroads
No group feels the tremors of these changes more acutely than medical students and residents. These are people who have invested the early years of their adult lives – and often hundreds of thousands of dollars – in training for a career that is rapidly changing before their eyes. It’s not uncommon now to hear medical students whispering anxiously about whether the jobs they’re aiming for will exist in 5 or 10 years. They see stories of AI outperforming doctors on diagnostic tests, they observe interns on the wards spending more time editing AI-generated notes than writing their own, and they can’t help but wonder: Am I training for a role that’s becoming obsolete?
The financial anxiety is real. In the U.S., the cost of medical education has skyrocketed, often leaving new doctors with debt well into the six figures. Some students will easily owe $300,000–$500,000 by the end of training once accrued interest is factored in – to say nothing of the earning years lost to training. The prospect of taking on that burden only to emerge into a job market that might be “right-sized” by AI is deeply unsettling. It’s one thing to compete against other highly trained humans for a coveted residency or attending position; it’s another to feel you might be competing against an algorithm that doesn’t need sleep or a salary. This isn’t just abstract fear – surveys and forum posts reveal real trepidation among young doctors. In one physicians’ poll, nearly 3 out of 10 doctors described themselves as “seriously concerned” that AI could replace or profoundly diminish their role. Medical students, watching from the sidelines, often feel this even more intensely as they decide what specialty to pursue or whether to pursue medicine at all.
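To see how a principal in the low six figures turns into a balance at the top of that range, here is a deliberately rough sketch. The loan amounts, interest rate, residency length, and annual compounding are all simplifying assumptions for illustration – not figures from any survey or loan servicer.

```python
# Hypothetical illustration: assumed borrowing and rates, with annual
# compounding for simplicity. Not real loan data or real loan mechanics.

ANNUAL_BORROWED = 75_000   # assumed borrowing per year of medical school
SCHOOL_YEARS = 4
RESIDENCY_YEARS = 4        # assumed; little principal gets paid down in residency
RATE = 0.07                # assumed average annual interest rate

balance = 0.0
for _ in range(SCHOOL_YEARS):
    balance += ANNUAL_BORROWED   # new loan disbursed at the start of the year
    balance *= 1 + RATE          # interest accrues on the running balance

print(f"Balance at graduation:       ${balance:,.0f}")

for _ in range(RESIDENCY_YEARS):
    balance *= 1 + RATE          # interest keeps accruing on a resident's salary

print(f"Balance at end of residency: ${balance:,.0f}")
```

With these assumed numbers, $300,000 of principal becomes roughly $470,000 by the end of residency – which is the arithmetic behind the anxiety.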
The answer to these fears depends almost entirely on how medical education adapts. If medical schools and training programs pretended nothing is changing – continuing to drill students on rote memorization and formulaic clinical skills that AI might soon handle – they would indeed be churning out graduates prepared for a past that no longer exists. But fortunately, that isn’t what’s happening. Forward-thinking institutions are already integrating AI into the curriculum, not as a threat but as a tool to be mastered. In fact, as of 2024, 77% of medical schools in the U.S. and Canada report covering AI or machine learning in their curricula. This is a remarkable shift from just a few years ago. For example, some schools now offer electives like “AI Applications in Healthcare” or even dual-degree programs in medicine and data science. Students at these schools might learn how to use AI to help diagnose a simulated patient case, or discuss the ethics of an AI decision support system as part of their clinical rotations.
What’s driving this change in education is the recognition that tomorrow’s doctors must be fluent in working with AI, not competing against it. As one medical educator put it, “We must prepare students for technology that’s probably going to disrupt their clinical practice”. The goal is to teach the next generation how to evaluate, critique, and collaborate with algorithms. This might mean, for instance, teaching students how to interpret an AI-generated radiology report and cross-check it with clinical findings, or how to spot when an AI’s recommendation might be biased or inappropriate for a particular patient. It also means instilling an understanding of the technology’s limits – e.g., knowing that if an AI model was trained mostly on adult patients, its recommendations for a pediatric case might be unreliable.
The medical students who thrive in the coming decade will be those who understand both sides of medicine: the human and the computational. They’ll be comfortable using AI-based tools to enhance their care – whether that’s quickly synthesizing medical literature for an unusual case, or automating the mundane parts of documentation – but they’ll also have the wisdom to know when to step back and rely on human insight. They’ll learn to “stay in the loop,” checking the AI’s work, and also to bring the uniquely human touch where it’s needed. In essence, the physicians of the future will need to be bridges between two worlds: the world of data and algorithms, and the world of human healing. Medical education is beginning to embrace this by not just worrying about AI, but actively teaching it. The hope is that a doctor who understands AI will use AI as an effective extension of their skills, rather than being sidelined by it.
So to today’s medical students and residents, the message is cautiously optimistic: your career is not doomed, but it will be different. You’re not just learning how to treat patients; you must also learn how to manage, interpret, and supervise the machines that will treat patients alongside you. If you can do that, you won’t be obsolete – in fact, you’ll be indispensable, because you’ll ensure that technology is used wisely and humanely. However, if medical training fails to evolve – if you graduate without ever confronting an AI tool until your first day on the job – then the fears of being unprepared could become reality. The onus is on both educators and students to make sure that doesn’t happen. Encouragingly, the trend has begun in the right direction, with many schools revamping curricula and offering resources to get students up to speed on AI. The future of medicine belongs to those who are not just great at being doctors, but also adept at harnessing technology in service of being even better doctors.
Efficiency vs. Empathy: The Corporate Dilemma
One pattern we’ve seen over and over in healthcare is that every push for “efficiency” tends to come with a catch. The introduction of electronic health records was supposed to simplify and streamline care; instead, it often turned physicians into overworked data clerks, spending two hours typing for every hour with patients. The rollout of productivity metrics and RVU (relative value unit) targets was supposed to incentivize quality and value; instead it sometimes incentivized seeing patients in five-minute slots and checking boxes to satisfy an algorithm, to the detriment of real patient connection.
AI is poised to be the next great efficiency tool, and we must be eyes-open about whose efficiency we are talking about. When hospital executives and private equity investors tout AI-driven “streamlining of care,” they’re often envisioning cost savings and throughput gains – for example, automating billing, reducing staffing needs, standardizing treatments to what’s cheapest and quickest. These goals aren’t inherently evil. Healthcare does waste a lot of money on administrative inefficiency, and if AI can cut down a $100 billion paperwork problem, that’s great. But if profit and productivity become the primary design principles of medical AI, we risk sidelining the aspects of care that don’t fit neatly on a balance sheet – the empathy, the listening, the genuine human presence.
We have to ask: efficiency for whom, and to what end? From a patient’s perspective, efficient care means getting the right diagnosis and treatment with minimal hassle and delay – but it also means feeling heard and cared for. From a corporate perspective, efficient care often means using the cheapest mix of labor and tech to get the patient out the door in a minimal number of visits. There’s a tension here. If AI systems are implemented primarily to maximize revenue capture and reduce costs (and indeed a survey found many physicians believe companies are eyeing AI for exactly those reasons), then things like a 30-minute thoughtful conversation or a comforting hand on the shoulder start to look like “inefficiencies.” And if a doctor resists a new AI-driven protocol because they feel it’s not right for their patient, will they be seen as a noncompliant outlier slowing down the machine?
A cynical but not implausible scenario is one where AI becomes a tool of top-down control. Imagine automated systems that dictate exactly how you must treat a common condition, and if you deviate, the system flags you for “quality variances” which your employer then questions. Or a system that determines which patients get scheduled for follow-ups based on algorithmic risk scores, regardless of your clinical intuition about who actually needs more attention. In the worst case, AI could empower administrators to micromanage clinical care to an unprecedented degree, because they’ll claim “the data knows best.” Physicians might find their autonomy and professional judgment increasingly constrained by algorithm-driven workflows that prioritize consistency and cost-effectiveness. We already have hints of this in places where insurance companies use algorithms to approve or deny treatments, often with frustrating rigidity.
The only safeguard against an outcome where AI is used to squeeze humanity out of medicine is organized, principled physician leadership. This means medical professional societies, hospital medical staff, and yes, even individual practitioners speaking up. It means refusing to accept tools that measure success only in terms of patients processed per hour or dollars saved, without accounting for outcomes and satisfaction. It means insisting that quality in healthcare includes the quality of human interaction. For example, if an AI allows a doctor to see 20% more patients in a day, physicians should push to reinvest some of that efficiency into longer patient visits or better follow-ups, rather than simply cranking the assembly line faster.
We also need to be vigilant about the corporate narratives around AI. Tech companies will gladly sell AI as a solution to burnout – “Look, it will do your paperwork!” – but if the same tool is used by management to simply make you do more work in the same time, then it’s not really solving burnout; it’s potentially exacerbating it. There’s a fine line between AI helping doctors and AI replacing doctors, and that line often comes down to who controls it and why. As physicians, we should champion AI that truly helps us care better – that gives us back time or insight – and be wary of AI that seems aimed at making us interchangeable or subordinate.
Healthcare has always been a delicate balance between the art of healing and the business of medicine. AI could tip that balance dramatically. If we let the business side dominate unchecked, we might end up in a situation where the art is not just underappreciated, but actively squeezed out. The efficiency-driven model of AI in healthcare might achieve stunning metrics on paper, yet fail the people for whom the system exists – patients – as well as demoralize the clinicians who are reduced to cogs in a machine. It doesn’t have to be that way, but avoiding it will require constant advocacy and perhaps new models of physician unionization or collective voice specifically around tech implementation. We need to be able to say, as a community: we won’t trade away the soul of medicine for short-term efficiency. Any AI that helps us should serve to enhance our human capabilities, not suppress them.
Two Futures of Medicine
The future of medicine in the age of AI is not fixed; it’s being decided by the choices we make right now. Broadly, we can imagine two very different scenarios unfolding:
Best-Case Scenario – AI as Teammate: In this future, AI becomes an invisible teammate to the physician. It handles the mundane clerical tasks – writing notes, filling out forms, sifting through the EHR for trends – which frees doctors from the shackles of paperwork. It functions like an ultra-smart medical assistant: always there in the background double-checking for possible missed diagnoses, calculating risk scores, and catching medication errors, but never overstepping the physician’s judgment. In this world, doctors have more time for patients, not less. Efficiency gains are used to reduce physician overload and improve patient interaction. A doctor might walk into a room fully present and unhurried because the AI has pre-summarized the chart and written a draft note, leaving the doctor to focus on talking and listening. Healthcare becomes more personalized, because paradoxically, technology gives us the time to be more human. In this scenario, AI is like a stethoscope or an MRI machine – a powerful tool that extends our senses and capabilities but is ultimately wielded by the physician. We maintain full control over how decisions are made, using AI input as one factor. Patients benefit from faster, safer care, but also from doctors who are less burned out and more attentive. Trust in the medical system could even grow, as outcomes improve and physicians regain the space to show compassion. This scenario is essentially the augmented physician model: AI does the heavy lifting on data and routine process, doctors do what they do best – empathize, problem-solve, and guide patients through tough decisions. Importantly, in this future the physician’s role remains central and deeply meaningful; we use AI to amplify our humanity, not replace it.
Worst-Case Scenario – AI as Gatekeeper: In the darker vision of the future, AI becomes a kind of gatekeeper and taskmaster. Healthcare systems, driven by cost and efficiency, hire fewer doctors. Those doctors are expected to supervise fleets of algorithm-driven protocols. Face-to-face time with patients becomes minimal. Instead, patients interact with AI-driven symptom checkers and chatbots for most routine matters, and only the more complex or severe cases bubble up to a human doctor – who by then might be overwhelmed, overseeing hundreds of cases remotely. In this scenario, care is standardized to the algorithm. If the computer says a patient doesn’t meet criteria for an MRI, they don’t get one, even if a doctor might have spotted a subtle reason to do it. Clinicians are reduced to high-level overseers, spending their days clicking approval for AI-made decisions on dozens of patients in rapid succession. The art of medicine – the nuanced detective work, the comforting bedside conversation, the tailoring of treatment to individual lives – is largely lost. Patients become data points on a dashboard, valued mainly for throughput. And if something goes wrong, the doctor is still held responsible, which only increases stress and defensive behaviors. In this future, many aspiring doctors might decide not to go into medicine at all (why spend a decade training to be an algorithm supervisor?), exacerbating a vicious cycle of dehumanization and shortage. Medicine shifts from a calling centered on healing to an industry of managing workflows. The intangible rewards of the profession – the connections, the gratitude, the life-changing moments – wither away, making it a much less attractive career. Ultimately, patient care suffers because a system that looks efficient on paper is often less effective and less compassionate in reality.
Which future do we get? It depends entirely on what we – as a society and as a medical community – do now. If we stay passive and let tech and corporate forces steer the ship unchecked, we may drift toward the gatekeeper dystopia, not out of malice but out of inertia and profit logic. If, however, physicians, patients, and forward-thinking leaders actively shape how AI is used, we can tilt toward the teammate scenario. Policies, incentives, and culture will determine the outcome. For instance, do we incentivize using AI to enhance doctor-patient interaction, or just to increase volume? Do we measure the success of AI by patient health outcomes and satisfaction, or solely by cost savings and documentation completeness? Every hospital that implements an AI, every medical board that sets guidelines, every educator that trains new doctors – all these decisions are writing the future script of medicine.
It’s worth noting that the future likely won’t be black-or-white. Parts of healthcare could diverge. Maybe emergency rooms and ICUs will lean into AI-as-teammate to cope with the high stakes and the need for human judgment, while some outpatient sectors slide toward more automated, retail-style medicine. But on the whole, there will be a dominant paradigm. We should strive relentlessly for that best-case paradigm. That means engaging now: building systems that keep the doctor-patient relationship at the center, setting regulations that require human review of critical AI decisions, rewarding care quality over mere quantity, and ensuring doctors are trained to use AI thoughtfully. The next few years are pivotal. We’re not just along for the ride – we’re actively laying the tracks. As the saying (attributed to various thinkers) goes, the best way to predict the future is to create it. We have to create the future we want to see in medicine, rather than react to a future imposed on us.
Why the Human Touch Still Matters
Amid all this talk of technology, disruption, and futuristic scenarios, it’s important to end with a reminder: the heart of medicine still matters, and always will. People become doctors for deeply human reasons – a desire to heal, to help, to find meaning in alleviating suffering. Those motivations are not going to be programmed into an AI anytime soon. No matter how advanced our tools become, being a doctor will remain a role that carries moral weight and emotional labor that only a human can fully bear.
One might ask, if AI becomes so powerful in diagnosis and treatment, what are doctors ultimately for? The answer lies in everything beyond the algorithm. Doctors are there to care – in the fullest sense of the word. An AI might crunch the numbers and say, “This cancer has a 5% survival rate with Treatment A and 10% with Treatment B.” But it cannot sit with a patient and grapple with what those numbers mean for that person’s life, or help them choose which path aligns with their values. It can’t counsel a family through a loved one’s end-of-life decisions with compassion and spiritual support. It can’t celebrate with a patient who beats the odds, or mourn with one who doesn’t.
Being a physician has always been about more than applying medical knowledge. It’s about guiding humans through the most challenging moments of their lives. That might mean delivering bad news with empathy, motivating a patient to fight on when they feel like giving up, or sometimes just being there as a reassuring presence. AI, by contrast, has no sense of presence. It may simulate polite words or even a kind tone in a chatbot, but patients are astute – they know when a caring human is with them versus a cold interface. Trust in healthcare is built on these human qualities: trust that your doctor sees you as more than a case, that they genuinely care about your wellbeing, that they will go to bat for you when you need an advocate.
Technology will continue to evolve at breakneck speed. We’ll soon have AI models that can pass specialty board exams or that can summarize a 1,000-page medical textbook in a second. But meaning in medicine doesn’t come from information alone; it comes from connection and purpose. The meaning of being a doctor is something no machine can replicate, because it’s rooted in a conscious commitment to the human condition. Only a human doctor can take moral responsibility for a decision – when it’s 3 AM and you choose to try a risky surgery to save a patient’s life, that weight on your shoulders is something only a human can feel and carry. And it’s that sense of responsibility and care that often drives doctors to go above and beyond, to make that phone call to a specialist friend for a second opinion, or to stay past their shift because a patient was in crisis. An AI would not “feel” anything whether the patient lives or dies, as long as the data is logged. But a doctor does, and that feeling is fundamental to why medicine is not just a technical job but a vocation.
In the end, it’s very likely that AI will replace or heavily alter certain tasks – it may replace the rote work of interns, it may replace some diagnostic number-crunching of junior physicians, it may even handle routine follow-up communications. But AI cannot replace the Doctor who remembers why they became a doctor in the first place. The doctor who sees a frightened patient and instinctively offers comfort; the doctor who considers not just “Can we do this treatment?” but “Should we, given this person’s unique situation?”; the doctor who mentors the next generation by imparting not just knowledge but values. Those roles are irreplaceable.
So, rather than resist the future, our job is to shape it while holding tight to those irreplaceable human elements. Embrace the tools – learn them, use them – but always ask how they can serve the greater goals of healing and caring. We should be neither luddites nor mindless adopters, but thoughtful integrators. If we do this right, AI won’t destroy what makes medicine special. On the contrary, it can remove the distractions and drudgeries that have pulled us away from our patients, potentially reminding us of the very reasons we pursued medicine. Imagine getting back to a place where a doctor’s day is filled with conversations and compassion rather than clicks and billing codes – that’s a future worth striving for, and one that technology, used wisely, could help enable.
In conclusion, the arrival of AI in medicine is a powerful transformative force. It will change many aspects of how we practice, but it does not have to change why we practice. The core mission of medicine – to heal, to alleviate suffering, to accompany people in their times of need – remains as vital as ever and uniquely human. The stethoscope didn’t replace doctors; it amplified what we could hear. AI, if harnessed correctly, can amplify what we can do, while freeing us to be what we need to be for our patients. The key is that we, as physicians and future physicians, stay true to our calling even as we adapt to new tools.
AI may already be replacing the intern, but it cannot replace the heart of the doctor. That heart – our empathy, our ethical judgment, our human connection – is our greatest asset. By leading the integration of AI in a way that protects and elevates those human qualities, we ensure that the future of medicine is not one of machines versus doctors, but of machines and doctors working in concert to serve the timeless needs of patients. The future is unwritten, and it’s our collective responsibility to write it well.