AI in Radiation Oncology: The Postmodern Status Quo
By Carlos Cardenas, PhD, and Mark Waddle, MD, ASTRO AI Task Force Co-Chairs

AIGRT
Jim Lamb, PhD, Jenia Vinogradskiy, PhD, and Ming Yang, PhD, MBA, along with colleagues at UCLA and VCU, are working to improve the safety and efficiency of our current image guidance workflows. They have launched a startup called AcuityRT, with the mission of analyzing cone beam CT (CBCT) scans with AI-fueled error-detection algorithms.
“AcuityRT will help improve image review efficiency for clinicians,” says Dr. Lamb, adding that “a deep learning AI algorithm will help prioritize images for careful review by radiation oncologists using an ‘alignment score’ that represents the probability that an image shows a clinically significant deviation or error.”

The ELI5 version1 is this: software is trained to identify deviations in patient setup that warrant clinician attention and might call for an adjustment of some kind. In the example figure above, modified from Neylon et al.,2 coronal projections of a CBCT taken during a course of treatment are compared with the expected appearance based on the planning CT images. Deviations in the expected appearance of the tumor contour and normal tissues, e.g., the trachea, are quantified, and an alert is sent when there is a predicted difference in dose distribution above a specified threshold. An early prototype is rumored to have signaled the radiation oncologist
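For the technically curious, here is a minimal sketch of the triage step described above. The class, field names and threshold are illustrative assumptions, not AcuityRT's implementation; the alignment score is assumed to come from an upstream deep learning model comparing the CBCT with the planning CT.

```python
from dataclasses import dataclass

@dataclass
class FractionImage:
    patient_id: str
    fraction: int
    alignment_score: float  # model output: probability of a clinically significant deviation

def triage_for_review(images, threshold=0.8):
    """Split the day's CBCT reviews into a flagged queue and a routine queue.

    `threshold` is a hypothetical, clinic-chosen operating point, not a
    recommended value; images scoring above it are surfaced for prompt
    physician review, highest score first.
    """
    flagged = sorted((im for im in images if im.alignment_score >= threshold),
                     key=lambda im: im.alignment_score, reverse=True)
    routine = sorted((im for im in images if im.alignment_score < threshold),
                     key=lambda im: im.alignment_score, reverse=True)
    return flagged, routine

# Example: one of three fractions gets flagged for prompt physician review.
day = [FractionImage("pt001", 12, 0.91),
       FractionImage("pt002", 3, 0.12),
       FractionImage("pt003", 21, 0.47)]
flagged, routine = triage_for_review(day)
print([im.patient_id for im in flagged])  # ['pt001']
```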

References
1. "ELI5" is shorthand for "explain like I'm five." Readers who look at this footnote are not the only ones who want AI to be explained in a way that a kindergartener would understand.
2. Neylon J, Luximon DC, Ritter T, Lamb JM. Proof-of-concept study of artificial intelligence-assisted review of CBCT image guidance. Journal of Applied Clinical Medical Physics. 2023 Sep;24(9):e14016.
Artificial intelligence has become part of the fabric of modern medicine, often invisible until it suddenly isn't. In radiation oncology, that transition has been underway for years: many clinics already rely on AI-enabled tools to help contour anatomy, support planning decisions, automate repetitive steps, and flag exceptions that deserve a closer look. What's changed recently isn't simply that "AI is coming." It's that the next layer of AI, large language models (LLMs), has arrived across health care at remarkable speed, and it is beginning to interface with the workflows we use every day.
Seven years ago, ASTROnews published an issue focused on AI titled "Artificial Intelligence: The Future is Now." The content centered on technology already in place or imminently set for implementation. Presumably, then, in 2026 we find ourselves in a Tomorrowland of sorts,1 and the questions about AI become these: Where exactly are we right now? What is already in clinical use? And what is likely to arrive in the days after tomorrow?
An ASTRO AI Task Force has been charged with developing practical recommendations for responsible clinical use of AI tools, identifying training needs for radiation oncology professionals, and establishing shared principles for evaluation, commissioning, quality assurance and ongoing monitoring. The Task Force is also engaging stakeholders, including key vendors, through structured discussions and roundtables to align expectations around transparency and accountability. A forthcoming white paper will provide a deeper, more formal roadmap.
In the meantime, we offer here what we hope is an accessible preview: a practical snapshot for clinicians and practices at every scale, from academic centers and large networks to community groups, grounded in what matters most: quality, safety, accountability and the patient experience.
What AI is doing in radiation oncology today
If you practice radiation oncology in 2026, you are almost certainly using AI, whether or not you label it that way. Over the past decade, more than one thousand AI-enabled medical devices have been authorized by the FDA, most of them attributed to radiology. Around 70 of these devices relate directly to radiation oncology, the majority for auto-segmentation. It's no surprise, then, that one of the most visible (and broadly adopted) clinical applications is auto-segmentation for normal tissues and, in select workflows, target support. The practical value is obvious: faster first drafts, improved consistency, and more time spent on review and decision making rather than manual drawing. Auto-contouring's success has also taught the field an important lesson: AI output is not the endpoint but the beginning of human review. The safest and most effective implementations pair automation with clear review expectations and measurable quality checks.
Many clinics routinely use knowledge-based or model-driven planning support to generate achievable dose objectives, select optimization parameters, and standardize plan quality. These tools are often the "quiet workhorses" of AI adoption: not flashy, but impactful. They help reduce variability across planners and across time, especially in busy environments where experience levels vary.
As adaptive radiotherapy expands, the need to streamline workflows becomes more pressing. AI-enabled automation frequently appears at the "edges" of adaptation: supporting contour propagation, guiding re-optimization choices, or reducing friction in repetitive steps. Even when the core clinical decisions remain firmly human-led, automation can compress timelines and improve reproducibility.
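To make "measurable quality checks" concrete, here is a minimal sketch of the kind of automated sanity check a clinic might run on auto-segmentation output before it reaches a human reviewer. The structure names, volume ranges and rules are illustrative assumptions, not a validated standard.

```python
# Hypothetical volume ranges (cc) for a head-and-neck template; a real
# clinic would derive these from its own commissioning data.
EXPECTED_STRUCTURES = {
    "Parotid_L": (10.0, 60.0),
    "Parotid_R": (10.0, 60.0),
    "SpinalCord": (20.0, 120.0),
    "Brainstem": (15.0, 45.0),
}

def check_auto_contours(volumes_cc):
    """Return human-readable warnings for an auto-segmented structure set.

    `volumes_cc` maps structure name -> contoured volume in cc. Flags
    missing structures and volumes outside the expected range; the output
    is a worklist for the reviewing clinician, not a verdict.
    """
    warnings = []
    for name, (lo, hi) in EXPECTED_STRUCTURES.items():
        if name not in volumes_cc:
            warnings.append(f"{name}: missing from structure set")
        elif not lo <= volumes_cc[name] <= hi:
            warnings.append(f"{name}: volume {volumes_cc[name]:.1f} cc "
                            f"outside expected range {lo}-{hi} cc")
    return warnings

print(check_auto_contours({"Parotid_L": 28.4, "SpinalCord": 350.0,
                           "Brainstem": 30.1}))
# ['Parotid_R: missing from structure set',
#  'SpinalCord: volume 350.0 cc outside expected range 20.0-120.0 cc']
```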
Taken together, these tools form a recognizable pattern: AI is already used to draft, standardize and triage, while clinical teams remain responsible for decisions and accountability.
The broader health care wave: LLMs and ambient documentation
While the tools above are familiar within radiation oncology, the most disruptive AI trend of the past two years has largely arrived from outside our specialty: LLM-enabled systems for clinical decision making, documentation, summarization and workflow support.
Ambient documentation is becoming widespread, and at its core the idea is simple: these tools listen to the clinician–patient conversation and produce a first draft of a structured clinical note. They are being adopted rapidly across specialties because they reduce a major source of burnout: the hours of "after-hours documentation" that follow a full clinic day. Notably, health systems are often purchasing these tools even without direct reimbursement, because the value proposition is operational (physician retention, satisfaction and time reclaimed) rather than a new billing code. The value is clear, and company valuations reflect it: Abridge, a leading tool in this space, reached a staggering valuation of $5.3 billion in its most recent funding round.
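The basic pattern, transcript in, structured draft out, is simple enough to sketch. The `llm_complete` callable and the note template below are hypothetical placeholders, not any vendor's actual API:

```python
NOTE_TEMPLATE = """You are drafting a clinical note for physician review.
From the visit transcript below, write a draft with these sections:
Subjective, Objective, Assessment, Plan. Mark anything uncertain as
[CLINICIAN TO VERIFY] and do not invent findings absent from the transcript.

Transcript:
{transcript}
"""

def draft_note(transcript: str, llm_complete) -> str:
    """Turn an ambient-captured transcript into a draft note.

    `llm_complete` is a hypothetical stand-in for a vendor's LLM call:
    it takes a prompt string and returns generated text. The result is
    a draft only; the clinician edits and signs the final note.
    """
    return llm_complete(NOTE_TEMPLATE.format(transcript=transcript))

# Demonstration with a trivial stand-in model:
fake_llm = lambda prompt: "Subjective: ...\nObjective: ...\nAssessment: ...\nPlan: ..."
print(draft_note("Doctor: How is the throat pain? Patient: Better this week.", fake_llm))
```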
From a radiation oncology perspective, ambient documentation may not feel immediately “core” to our technical clinical workflows. But it is highly relevant to how we practice. It reduces friction in consults and follow-ups, improves consistency of documentation, and can standardize how key clinical decisions are recorded. Perhaps more importantly, it signals a shift: AI is increasingly becoming the interface layer of clinical work, helping clinicians capture information, retrieve context, and translate complex processes into usable documentation.
Beyond documentation, LLMs are also starting to influence clinical decision making and workflows that sit adjacent to the EHR, particularly in settings like tumor boards, peer review and guideline-driven care planning. In these environments, the challenge is often not a lack of knowledge, but the friction of assembling it: extracting the key facts from a patient’s record, orienting to the relevant evidence base, and preparing a concise synthesis for a multidisciplinary discussion.
Newer clinical AI tools are emerging to support clinical decision making. For example, OpenEvidence, which has been called "the fastest-growing application for physicians in history," has also recently achieved a multiple-billion-dollar valuation. The tool is built on an LLM trained on high-quality medical literature, with content partnerships with key medical publishers (JAMA, NEJM, NCCN and more) that help keep its outputs relevant and well sourced. It is estimated that more than 40% of physicians in the U.S. use it daily. Used appropriately, these systems can strengthen, not replace, expert judgment by making high-quality information easier to access in the moment it's needed.
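Tools in this category broadly follow a retrieval-grounded pattern: fetch relevant passages from a vetted corpus, then constrain the model to answer from them with citations. The sketch below is a generic illustration of that pattern under assumed helper functions (`corpus_search`, `llm_complete`), not OpenEvidence's actual architecture:

```python
def answer_with_citations(question, corpus_search, llm_complete, k=3):
    """Generic retrieval-grounded question answering.

    `corpus_search(question, k)` is a hypothetical search over a vetted
    literature index returning (citation, passage) pairs;
    `llm_complete` is a hypothetical LLM call. Constraining the prompt
    to retrieved passages, and requiring numbered citations, is what
    pushes the model toward sourced rather than invented answers.
    """
    passages = corpus_search(question, k)
    context = "\n\n".join(f"[{i + 1}] {cite}: {text}"
                          for i, (cite, text) in enumerate(passages))
    prompt = (
        "Answer the clinical question using ONLY the sources below, citing "
        "them by number. If the sources are insufficient, say so.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )
    return llm_complete(prompt)
```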
AI 1.0 vs. AI now: Why this feels different than before
Many clinicians remember an earlier "AI moment," when IBM's Watson was widely promoted as a system that could transform medicine. The gap between promise and reality created understandable skepticism. Watson-era systems were foundational for their time, but today's models are built on fundamentally different advances in machine learning, access to compute, and the ability to work with real-world, unstructured clinical text: capabilities that were simply not mature a decade ago.
A useful way to frame today's change is that LLMs are not primarily "clinical decision engines." Their near-term value is in language: summarizing, drafting, organizing and connecting information across messy real-world text. That's why documentation tools and workflow assistants have progressed quickly; they lean into what modern models do well. At the same time, LLMs can be confidently wrong, can hallucinate details, and can create risks if used without guardrails. The lesson is not that the past was "bad" and the present is "good." The lesson is that the most successful use cases match the tool to the task and build safety into the workflow.

What translates next: From tools to “agents” in the clinic
The next phase of AI in radiation oncology is likely to feel less like isolated features and more like workflow choreography. In other words, not "one AI tool for one task" but rather AI systems that coordinate multiple steps and present humans with a refined, higher-quality work product.
In the near term, expect to see more AI assistance that:
- Prepares structured clinical artifacts (e.g., draft consult notes, treatment summaries, letters and standardized order sets).
- Acts as a workflow assistant: think of agents that pull relevant prior imaging, highlight key timeline events, and generate checklists tailored to the patient and technique.
- Supports contour review, not replacing physician judgment, but surfacing likely issues (missing anatomy, unexpected discontinuities, unusual shapes) and prompting systematic review.
- Accelerates planning loops by adjusting optimization parameters within predefined bounds, running multiple cycles, and presenting a shortlist of candidate plans for human selection (a minimal sketch of such a loop follows this list).
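As promised in the last item, here is a minimal sketch of a bounded planning loop. The weight names, bounds and scoring are illustrative assumptions; `evaluate_plan` stands in for whatever interface a treatment planning system actually exposes:

```python
import random

def bounded_planning_loop(evaluate_plan, n_cycles=20, seed=0):
    """Explore optimization weights within predefined bounds and return a
    shortlist of candidate plans for human selection.

    `evaluate_plan(weights)` is a hypothetical callable that runs one
    optimization cycle and returns a plan-quality score (higher is better).
    """
    rng = random.Random(seed)
    bounds = {"ptv_coverage_weight": (80.0, 120.0),   # allowed ranges only
              "cord_sparing_weight": (10.0, 50.0)}
    candidates = []
    for _ in range(n_cycles):
        weights = {name: rng.uniform(lo, hi) for name, (lo, hi) in bounds.items()}
        candidates.append((evaluate_plan(weights), weights))
    candidates.sort(key=lambda c: c[0], reverse=True)
    return candidates[:3]  # a human selects from the shortlist

# Demonstration with a stand-in scorer:
shortlist = bounded_planning_loop(
    lambda w: w["ptv_coverage_weight"] - abs(w["cord_sparing_weight"] - 30.0))
for score, weights in shortlist:
    print(f"score={score:.1f}  {weights}")
```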
This is the “agent” concept in practical terms; think of it as an AI system that performs bounded work, documents what it did, and hands off to a human for a decision. If implemented responsibly, this could increase consistency and reduce turnaround time. If implemented carelessly, it could amplify automation bias, deskill teams, and create failure modes that are harder to detect.
The difference between “good” and “bad” is rarely the model itself. It is the workflow design: where the tool is allowed to act, what it is allowed to change, what it must document, and what a human must explicitly approve.
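One way to make those workflow-design questions concrete is in code. The sketch below shows a bounded-agent wrapper with a whitelist of allowed actions, an audit log, and an explicit human approval gate; the action names and structure are illustrative assumptions, not a reference implementation:

```python
from datetime import datetime, timezone

class BoundedAgent:
    """Wraps automated steps with the guardrails described above: a
    whitelist of allowed actions, an audit log of what was done, and an
    explicit human approval gate before anything is committed."""

    ALLOWED_ACTIONS = {"draft_note", "propose_contour_edit", "rerun_optimization"}

    def __init__(self):
        self.audit_log = []
        self.pending = []  # proposals awaiting human sign-off

    def act(self, action: str, payload: dict) -> dict:
        # Refuse anything outside the agent's predefined scope.
        if action not in self.ALLOWED_ACTIONS:
            raise PermissionError(f"'{action}' is outside this agent's scope")
        proposal = {"action": action, "payload": payload, "approved": False,
                    "time": datetime.now(timezone.utc).isoformat()}
        self.pending.append(proposal)
        self.audit_log.append(f"PROPOSED {action} at {proposal['time']}")
        return proposal  # nothing is committed without approval

    def approve(self, proposal: dict, approver: str) -> None:
        # The only path by which a proposal becomes actionable: explicit,
        # attributed human approval, recorded in the audit trail.
        proposal["approved"] = True
        self.pending.remove(proposal)
        self.audit_log.append(f"APPROVED {proposal['action']} by {approver}")
```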
The community practice reality: Access, staffing and vendor dependence
A central theme raised in our discussions is that the average radiation oncology practice is not a large academic center with dedicated informatics or AI-trained personnel. Many groups operate with lean staffing, heavy clinical volume, and limited capacity to build bespoke tools. That reality creates two competing truths:
- AI can help. It can standardize output, reduce variability, and free time for patient-facing care, especially where staffing is tight.
- AI can concentrate power. If the primary pathway for adoption is vendor-provided tools, then transparency, responsibility and post-market monitoring become even more important.
Small and mid-sized practices often do not have the bandwidth to independently validate every algorithmic claim or reverse-engineer complex models. That does not mean they should avoid AI; it means the field needs shared principles for evaluation, commissioning, QA and longitudinal monitoring, and vendors must meet higher expectations for clarity about training, limitations, updates and performance.
In that sense, the most important question for the next couple of years may not be “Who has the best model?” It may be: “How do we ensure safe, equitable adoption across the entire practice landscape?”

Workforce and trust: A delicate balance
It would be irresponsible to pretend that AI has no potential workforce implications. Tools that reduce time spent on contouring, planning iteration, documentation, or image review might meaningfully change how teams allocate labor. There isn't much sign of that yet, but the situation is very fluid, and sudden changes in demand for services can provoke anxiety among those worried that their positions might go the way of telephone switchboard operators.2
At the same time, there is a constructive way to frame what is happening. The most durable value of clinical teams is not in repetitive production steps; it is in expertise, judgment, communication and oversight. AI can reduce the time spent on the tedious parts of work, but it does not remove the need for clear responsibility for clinical decisions, commissioning and ongoing QA, exception handling and troubleshooting, communication with patients and families, and multidisciplinary coordination.
In other words, if AI “works,” it should shift the workforce toward higher-value tasks and not eliminate the need for humans. The risk is that organizations may pursue efficiency without investing in the guardrails that protect quality. That is precisely why clinical leadership must stay at the table, and why professional societies have a role in shaping responsible norms.
What comes next: Responsibility, standards and leadership
Radiation oncology has long been a technology-forward specialty, and in many ways we have been using "AI-like" tools for decades. The difference now is pace, scale and integration.
We are beyond the "future that was now" seven years ago, but we still have a lot of work to do. The next chapter will be defined by how well we design AI-enriched workflows that keep patients safe, keep humans accountable, and keep quality measurable.
ASTRO’s AI Task Force is focused on supporting that outcome: building guidance that is practical across practice settings, identifying training needs, and engaging stakeholders to align expectations around transparency and monitoring. The forthcoming white paper will provide more detail, but the message for now is simple: when adopted intentionally and with the right safety guardrails, AI can help us deliver more consistent, efficient and patient-centered care.
References
1. No, not the Disneyland attraction, but there was a Buzz Lightyear easter egg in the last issue's editor's notes.
2. On the positive side, the workers who most commonly held that position did find new occupations as the job faded into obsolescence. Feigenbaum J, Gross DP. Answering the Call of Automation: How the Labor Market Adjusted to Mechanizing Telephone Operation. Quarterly Journal of Economics. 2024;139(3):1879–1939.
