Magical Realism
By Brian Kavanagh, MD, MPH, FASTRO
The small cohort of readers who (a) attended the 2017 ASTRO Annual Meeting Presidential Address, (b) stayed awake, and (c) paid more attention to the talk than to TikTok — which was the newest kid on the social media block at the time, so I can’t blame anyone for their curiosity about it — might have gleaned that my favorite literary genre is Magical Realism.
Well, here we are now, living in a modern-day Macondo where we can ask for an image of something that has never existed, maybe a unicorn standing in a linac vault, and within a few seconds it appears before us. Or, for a more useful example for a radiation oncologist: you want a letter of medical necessity seeking authorization for SBRT for a patient with oligoprogressive lung cancer, and before you can say “CURB Study,” there is an annotated document on the screen before you making a logical case and citing that paper, SABR-COMET, Gomez and so on.
If you play along when reading a work of Magical Realism, and stop worrying about the fact that not everything on the pages can actually happen in the known world, then you might be in for a lyrical journey into a faraway place, where human behaviors are surprising and mystical. But the problem with Magical Realism is that it can get creepy at times. For example, in “The Double” by José Saramago, two unrelated men who are absolute doppelgängers in every way cross paths and…spoiler alert…after they exchange places in their lives, things turn first cringeworthy and then tragically bad.
There are so many easy parallels to Artificial Intelligence in its protean forms today. There is the enchantment of seeing things happen wondrously: who doesn’t like the idea of software that reduces hours-long tedious tasks to a few mouse clicks? How about improved cancer imaging tools? Maybe refined treatment decision-making tools? At the same time, we have the fear of unintended consequences: disruption to relationships, elimination of organizational roles, unknown cascading impacts on scientific integrity and educational practices, and more.
I am grateful to the authors of this issue’s content, who were charged with bringing forward into sharp relief both the astonishing and also alarming features of the AI that has arrived into the field of radiation oncology. I also thank Luq Dad for his energy as guest editor, asking good questions and sustaining momentum along the way. For the record, in the end, I am in the glass-half-full camp. I don’t mind the occasional shocking surprise, and I think AI will ultimately get us all to a better place.
AI in Clinical Practice: Landing Quintets, Guarding Humanity

Cozied up on my couch in Lower Manhattan, I just finished watching the “Quad God,” Ilia Malinin, the final skater in the men’s long program at the 2026 Milan-Cortina Winter Olympics. I sit here astonished that he fell twice, finishing in eighth place. Moments like these remind us of the human element that makes this world we live in so special. The collapse of the “Quad God” is similar to the 10th inning of Game 6 of the 1986 World Series, with the score tied between the Boston Red Sox and New York Mets, when Mookie Wilson hit a grounder that went through Bill Buckner’s legs at first base, leading to a loss that fueled the narrative of the “Curse of the Bambino.” But in the realm of victorious plays, who can forget Christian Laettner’s buzzer beater against Kentucky at the Philadelphia Spectrum in 1992? It is that human element of surprise, of the improbable, that makes sports so riveting and connects so many of us.
Similarly, what I cherish most about being a physician is that human connection — the opportunity to care for patients during some of the most vulnerable and uncertain chapters of their lives. For years, I approached artificial intelligence (AI) in radiation oncology with sincere caution. My reservations reflected those of many of my colleagues: ethical uncertainty, medical-legal responsibility, the fear of hidden errors, concerns about trainee education, and discomfort with relying on systems whose internal logic we cannot always explain clearly, especially to our patients. Beneath these concerns was a deeper unease — that increasing reliance on algorithms might erode the core of our profession: the human relationship between doctor and patient.
I recall walking with colleagues during a recent ASTRO Annual Meeting. As they spoke enthusiastically about AI’s promise, I instinctively resisted. I worried that we were moving too quickly toward a future we did not fully understand. Over time, my perspective has shifted.
Not because AI has suddenly become perfect — it has not — but because I have increasingly witnessed how thoughtful, clinician-led integration of AI can meaningfully enhance our ability to deliver safe, equitable, and compassionate care. Across institutions, leaders are not waiting for AI to define the future of radiation oncology; they are actively shaping it. Over time, my skepticism has gradually given way to a sense of responsibility: if AI is here to stay, then we must steward it wisely, establishing guardrails, developing core competencies and ensuring that AI expands, rather than narrows, our clinical judgment and humanity.
During my studies at UVA’s Darden School of Business, a mentor introduced me to “Humility Is the New Smart” by Edward Hess and Katherine Ludwig. The authors emphasize that the key to success in this new era is not to become more like the machines but to excel at the best of what makes us human — precisely what brings patients and their families to our clinics: the human element of our care.
When thoughtfully deployed, AI might reduce administrative burden, strengthen patient education, enhance safety, support trial equity, and expand access to care globally. I would encourage colleagues across all practice settings not to sit on the sidelines, but to engage actively in the practical implementation of AI within their clinics, departments and hospitals. Meaningful leadership in this space does not require building algorithms or writing code. It requires clinical insight, accountability and a willingness to shape how these tools are integrated into real-world care. Radiation oncologists should be active voices in hospital-wide AI governance, implementation strategy and ethical oversight.
Radiation oncology is ready for its next chapter if we choose to lead it. In a world of increasingly powerful machines, it will always be the human element — the steadiness after a fall, resilience after failure, the compassion in uncertainty — that defines us. And that is worth guarding.
