Spring Issue, Vol 29, No. 2

It’s hard to imagine that just a few years ago you could attend a medical conference and artificial intelligence (AI) would not be mentioned once. Today, it’s difficult to go 10 minutes without it being raised in some shape or form. And for good reason: AI holds considerable promise to streamline workflows, support clinical decision making and reduce administrative burden. These emerging benefits have fostered a cautious but open-minded clinical discourse. Amongst educators, however, there is a much more palpable tension about AI. This is because AI is starting to challenge the premise that all education stands upon — that it is inherently beneficial to improve a person’s ability to think and act intelligently.

Traditionally, intelligence was seen as a uniquely human point of difference, so it made sense to hone it, to enhance it and to cultivate it. In fact, education is considered of such fundamental importance that the UN Universal Declaration of Human Rights explicitly states that elementary schooling should be compulsory and free of charge for all1 — something that is not even stated for childhood health care.

“In a world full of these AI systems, why would medical education remain important?”

However, the emergence of increasingly capable and accessible AI is prompting renewed scrutiny of education’s value. Contemporary AI systems are trained on extensive digital archives of human knowledge and can generate structured responses comparable to those of a tertiary-educated professional. The medical specialties, radiation oncology included, are no exception, with recent publications demonstrating AI’s ability to pass medical licensing and specialist board certification exams.2-5

What we find ourselves faced with are systems that know more than us, think faster than us, and can compose answers at least as well as us.

In a world full of these AI systems, why would medical education remain important?

To answer this requires a deep dive into what AI can and cannot do. The first and most important thing to remember is that Artificial Intelligence models remain Artificial. They do not think in the way that a human would. Generative AI models produce language by predicting continuations of text based on prior examples they have seen. So one can characterize current LLMs as an advanced form of text prediction, albeit at a level of complexity so sophisticated that the outputs resemble reasoning, explanation and synthesis.
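The text-prediction idea can be made concrete with a deliberately simple sketch: a toy model that, given a word, returns the word that most often followed it in a tiny training corpus. Real LLMs use neural networks over vast datasets rather than bigram counts, and the corpus and function names here are purely illustrative — but the underlying principle of "predict the most likely continuation" is the same.

```python
from collections import Counter, defaultdict

# Toy illustration only (not how production LLMs work): predict the next
# word by counting which word most often followed the current one.
corpus = (
    "the patient presents with a rash . "
    "the patient presents with a cough . "
    "the patient reports a cough ."
).split()

# follows[w] tallies every word observed immediately after w.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation of `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict_next("patient"))  # "presents" (2 of 3 observed continuations)
print(predict_next("a"))        # "cough" (2 of 3 observed continuations)
```

Note that the model has no concept of what a "patient" or a "cough" is — it only reproduces statistical regularities in its training text, which is precisely the limitation the next paragraphs describe.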

Educational dimension            Current AI capability (human benchmark)
Factual recall                   Postgraduate
Textual synthesis                Postgraduate
Exam-style answer formulation    Postgraduate
Conceptual understanding         Upper secondary
Novel idea formation             Upper secondary
Ethical judgement                Adult professional
Idea transfer to new contexts    Adult professional


Humans, on the other hand, use information to form core conceptual ideas and then make logical inferences based on this. This conceptual understanding is something AI does not possess. Therefore, skills that are predicated upon a conceptual understanding — such as the ability to autonomously exercise moral judgement, the ability to truly innovate outside of known contexts, the ability to tell truth from untruth, and the ability to act with responsibility and agency — remain fundamentally human.

Mapping AI’s strengths and limitations (Figure 1) results in a clearer understanding of the clinical scenarios where AI excels, and the ones that constitute potential blind spots. It performs most reliably in common, protocol-driven scenarios that depend on pattern recognition and probabilistic inference. However, since AI has no conceptual or pathophysiological understanding, it is less reliable when confronted with rare, atypical or masquerading presentations — a clinical performance profile somewhat akin to that of a novice clinician. More astute practitioners are less likely to be distracted by “surface” clinical features and instead ground their decision making in the pathophysiological mechanisms of disease, which supports more accurate diagnostic reasoning and better-informed management choices, especially when presentations depart from textbook patterns.

In addition, AI lacks one further fundamental skill and hallmark of clinical competence: metacognition, the ability to reflect on one’s own thinking. A person without the insight to appreciate their own flaws and boundaries can be exceptionally dangerous in a professional environment. Collaborative care, multidisciplinary involvement and an openness to complementing one’s weaknesses with the strengths of one’s peers are core components of modern oncological care and remain outside the scope of current AI.

Understanding this, we can now return to our earlier question: In a world with AI systems, why would medical education remain important?

The answer is: because AI cannot provide ethical, accountable or responsible health care, and it lacks the conceptual and metacognitive capacities required for clinical excellence. Medical education therefore retains an essential role in nurturing learners toward these higher-order skills.

With this clarified, a secondary question becomes apparent: will medical practitioners of the future still need to memorize and operationalize the same immense quantity of medical knowledge as their predecessors, given they will easily be able to ask AI for this information?

To answer this, we need to consider the nature of knowledge and learning. Knowledge is constructed in a pyramidal fashion, grounded with a broad base and then progressively built upwards, layer by layer, to attain the “pinnacles” of specialist knowledge. The practice of radiation oncology requires multiple interdigitated layers: anatomy, molecular biology, physics and clinical medicine remain the conceptual foundations of our practice. If we remove, or even thin, these foundations, our metaphorical “knowledge tower” quickly becomes unstable. In effect, we risk becoming like the patient who researches their “red rash” online and jumps to the most improbable of conclusions because they do not have the requisite contextual knowledge to situate their findings. Hence, whether we like it or not, residents and junior staff will still have to memorize cardinal signs and anatomical boundaries for the foreseeable future.

Medical education is thus not in the midst of the existential crisis that many fear. But this does not mean AI will leave it untouched. Many of our teaching practices will need to be reconsidered in this new era. Broadly, these changes will fall into four major categories:

1. Integrating AI into teaching and learning practices

AI provides many new opportunities for learning in an individualized, tailored and responsive way. Already students have begun to use LLMs as AI tutors, asking questions and receiving feedback. Embracing this in a considered, guard-railed fashion will be vital. AI can also analyze individual student data at a speed and scale that would not be feasible for humans, thus enabling personalization of learning. Institutions will need to actively build these capabilities into their digital education infrastructure or risk being left behind in the quickly evolving educational environment.

2. Reconfiguration of educational assessments

There is a pressing need to reconfigure educational assessments to ensure they are accurately measuring learners’ skills. Currently, many assessment formats are vulnerable to being compromised by AI: essays, short-answer questions and multiple-choice foremost amongst these. Hence, the days of take-home assignments and open-book tests are numbered. Educational organizations will need to consider moving to a greater proportion of real-time assessments (proctored examinations, OSCEs, vivas), or workplace-based assessments, or both.

3. Maintaining learner motivation to learn “lower-order” skills

Educators will need to find new methods to motivate the learning of basic medical facts and diagnostic processes, given that these skills can now be so readily performed by AI. This is a modern-day variant of the “calculator problem” where a five-year-old questions the benefit of learning basic arithmetic when calculators are freely available. In both cases educators need to communicate the principle of pyramidal learning and explain how that learning is necessary to set the platform for future, more advanced processes.

“The core role of medical education in preparing tomorrow’s health care workforce remains unchanged. However, educators certainly have their work cut out to evolve educational practices in ways that stay current and relevant to modern clinical practice.”

4. Ensuring relevant AI skills are incorporated into educational syllabi

With AI becoming an embedded part of clinical practice, radiation oncologists of the future will need to become familiar with basic AI skills to be proficient in their everyday work. This will require a review of educational syllabi to ensure these requisite skills are incorporated appropriately, thereby keeping the educational program relevant to contemporary practice.

In summary, if education is considered purely as information transmission, AI appears highly educated. But if we view education more holistically, as a path that leads us toward the highest forms of innovative and responsible critical thinking, then AI is actually not educated at all. As such, the core role of medical education in preparing tomorrow’s health care workforce remains unchanged. However, educators certainly have their work cut out to evolve educational practices in ways that stay current and relevant to modern clinical practice in this increasingly AI-augmented world.

David Kok, MBBS, MEd, is a Radiation Oncologist at the Peter MacCallum Cancer Centre and Head of the Cancer Science Unit at the University of Melbourne. He is internationally recognized as a leader in medical education, having pioneered the integration of innovative digital technologies, including virtual reality, into undergraduate and postgraduate curricula.

References

  1. UN General Assembly. Universal declaration of human rights. UN Gen Assem. 1948;302(2):14-25.
  2. Mihalache A, Huang RS, Popovic MM, et al. ChatGPT-4: An assessment of an upgraded artificial intelligence chatbot in the United States Medical Licensing Examination. Med Teach. 2024 Mar 3;46(3):366-72.
  3. Ariyaratne S, Jenko N, Davies AM, et al. Could ChatGPT pass the UK radiology fellowship examinations? Acad Radiol. 2024;31(5):2178-82.
  4. Chow R, Zheng A, Gao C, et al. ChatGPT Versus DeepSeek: Assessing Artificial Intelligence Performance on Radiation Oncology Examination Questions. Adv Radiat Oncol. 2026;11(1):101929.
  5. Huang Y, Gomaa A, Semrau S, et al. Benchmarking ChatGPT-4 on a radiation oncology in-training exam and Red Journal Gray Zone cases: potentials and challenges for AI-assisted medical education and decision making in radiation oncology. Front Oncol. 2023;13:1265024.


ROECSG BrAIntrust Viewpoints

Katz

“AI offers an opportunity to re-engineer how we support competency development across the learning continuum,” said Leah Katz, MD, MPH, Associate Professor of Radiation Oncology at the University of Miami and Co-chair of the Radiation Oncology Education Collaborative Study Group (ROECSG) AI Education Working Group.1 “Adaptive contouring feedback, automated plan quality analytics, and case simulation can deliver high-frequency, individualized feedback at scale, allowing faculty to efficiently tailor teaching and training to each resident’s needs and learning styles.”

There are caveats, of course. “Across medicine, emerging evidence2 suggests that prolonged reliance on highly accurate decision-support can alter human vigilance and pattern recognition,” Dr. Katz noted. “The concern is not that AI replaces clinicians, but that premature automation may erode foundational cognitive skills needed when systems fail or conflict. Educational design should be staged: independent competence first, AI as a second reader next, and only later selective automation.”

Patel

Rohan Patel, MD, MPH, PGY-5 and Chief Resident at University Hospitals Seidman Cancer Center / Case Western Reserve University and Co-chair of the ROECSG AI Education Working Group, added, “As AI becomes a routine educational and clinical aid, assessment should shift away from rote memorization and toward conceptual understanding, reasoning under uncertainty, and the ability to critically interrogate or override AI output when needed. AI will make it easier than ever to retrieve facts, but certification should continue to test what cannot safely be outsourced.”

Dr. Patel continued, “I don’t necessarily feel that the oral boards or other exams should really look much different in the face of AI. However, the rise of AI does offer us an opportunity to broadly reevaluate what has been emphasized and tested for the resident population. This has been suggested by many for years, and perhaps now AI will force the issue a bit.”

References

  1. Artificial Intelligence (AI) Education Working Group. ROECSG. August 1, 2025. Accessed February 24, 2026. https://roecsg.org/artificial-intelligence-education-working-group/.
  2. Budzyń K, et al. Endoscopist deskilling risk after exposure to artificial intelligence in colonoscopy: a multicentre, observational study. Lancet Gastroenterol Hepatol. 2025 Oct;10(10):896-903.
Copyright © 2025 American Society for Radiation Oncology