Intelligentia artificialis: Does that mean eruditio or ignorantia for scientific publishing?

ASTROnews scored a Zoom trifecta when we caught up with the Editors-in-Chief of ASTRO’s three journals. We wanted their opinions about how AI has already (or might eventually) impact scientific publishing, for better and worse.
The Red Journal’s Sue Yom, MD, PhD, FASTRO, Robert Amdur, MD, of Practical Radiation Oncology, and Rachel Jimenez, MD, FASTRO, of Advances in Radiation Oncology, graciously shared their thoughtful insights. Together they bring an academic seriousness not often seen in these largely imponderous pages, hence the gratuitous use of Latin in the title of this piece to add gravitas as they tell us quo vadimus.
ASTROnews: From a 30,000-foot perspective, what do you think is good about AI that’s genuinely helpful to the mission of scientific publishing?

Sue Yom, MD, PhD, FASTRO: First, I think AI has the potential to actually improve the quality of scientific publication in general, because it can enable better writing, a clearer exposition of thought processes. More instructive, more informative, sharper. AI can do that for us. I say that as someone who reads journals because I love beautiful writing. I read the New England Journal not just for the articles, but because it’s a beautifully written journal, and it’s got such beautiful accompanying materials with each article. I see AI leveling the playing field, where journals that are not as well-resourced or funded could compete on that level.
The second thing is that scientific publication for a long time has been dominated by the Global North. One of the big debates right now is, how do we make the use of AI acceptable and safe within science, because this goes to an equity issue. For instance, at the Red Journal, when the grammar is not readable enough, we ask authors to use an editing service, and it’s quite expensive to do that. In the U.S., there are institutions that have such resources and institutions that don’t, and then there are countries that have even less than any of the institutions in the United States. So AI, in that sense, is a way that we could get to equity in access to publication.
The third way I think AI could be helpful is in evaluating science. There’s just such a huge amount of literature, and it’s so difficult to gauge impact. At the Red Journal, everything is about impact. How do we think a submitted manuscript is going to stand within the literature? And the amount of background literature has become so enormous that it’s actually very difficult and time-consuming for reviewers and editors to do that, to place the published literature against the submitted article. AI could help us do that much more quickly and efficiently, encapsulate results more quickly, get to the heart of the problem quickly.

Robert Amdur, MD: There are currently two positives that I see going on in medical publishing that AI is heavily involved with. One is the identification of plagiarism or duplicate publishing. Very powerful software programs have expanded the accuracy, scope and capacity to identify when an author is submitting a manuscript that, in its text, is very similar to another manuscript that has been published elsewhere. And, if the publishers are connected, and there are some channels of communication between publishers, AI is also useful at identifying when the same or a very similar manuscript is under review at another journal.
Number two is cleaning up the English language writing. And by this, I do not mean what Sue first said, which is that AI might offer more logical or better ideas. I do not see AI doing that, but many authors, not just those for whom English is a second language, are already running drafts of their manuscripts through grammar check-type AI programs that are not just identifying grammar errors, but are changing things and saying, this is a better way to say what you’re saying. Those are positive things, because we don’t want to see manuscripts that are poorly written, with weak grammar or syntax.

Rachel B. Jimenez, MD, FASTRO: I would say the advantage of AI for the researcher, the person who’s looking to publish, is that they can synthesize information far more quickly than they could before. Literature reviews, assessing what trials have been done in the space, and being able to summarize those things quickly and cohesively, is a distinct advantage for AI. And maybe even identifying the right journal for your manuscript. AI could tell you who the readership is and who’s downloading those articles, and who’s citing those articles, and perhaps what audience you want to get in front of.
I agree with Bob that we also get the benefit of identifying things that are plagiarized, overlapped or maybe even grossly incorrect in terms of the way that data has been presented or summarized.
I think another issue for the author is, what kinds of titles attract people to read your article, and should a title be reframed? And can AI make those suggestions about title changes? For the editor or publisher, how do you organize your journal? Are there things from a marketing perspective on the journal side where AI could be effectively utilized to bring in the target audience that you’re looking for: to identify who your readership is, where they’re located and how they’re going to find you?
AN: What do you see as the major downsides of AI for scientific publishing? The negatives and weaknesses?
RA: The negative that either is already happening without our being able to identify it, or is about to happen, is the truly corrupt presentation, meaning where AI creates the data, creates the figure. And it is false. It is not real data. I suspect it is going on to some degree currently. And if not, it’s just about to. We cannot tell if data is false. Okay, we see numbers, we see statistical tests, we see plots. We do not know if someone made those up or if those are real experimental data. And the only way to detect that is probably going to be with some sort of AI-based software program.
RBJ: Any tool that makes us more efficient also makes us lazy, and so when you have researchers and authors who could ask AI to write their paper, or create their figures, that’s where we get into muddy territory in terms of intellectual property, data fidelity and all of those things. We don’t really know how AI is going to synthesize data. If we ask it to look at a set of data, and create a figure, is it going to throw out outliers? Because there’s only a few of them, and they’re messing up the synthesized version of the data? Those are the imperfections of AI tools. They have their own biases. They’ve been trained to look at or focus on certain things, and those things are not necessarily the things that we, as scientists, have been trained to look at and reflect accurately when we summarize our data. We may not even realize that the data is, to Bob’s point, corrupt.
SY: I agree the potential for hallucination and false data generation is high right now. I do think that the AI companies are aware of this and are changing their approach to focus on specific communities to get better control over the issue, and to reduce the sycophantic nature of AI, which is a lot of times what led to those hallucinations. Those are real risks and require human oversight. That’s why Bob’s point is very well taken, that you still have to have that.
Another downside of AI is the inflationary effect. We have already seen the amount of scientific literature magnify out of control; it is probably 10 times what it was even several years ago. And that inflationary effect is clearly being enhanced even further by AI. We’re seeing at the Red Journal a huge uptick in, for instance, the number of letters to the editor and review articles that were clearly generated with AI assistance. We tend to be able to sniff those out, and we reject them if it appears that generative AI was involved. But I have no doubt that many journals are not catching them all, or we may not be catching all of them.

The duplication and repackaging is just increasing the overall volume of scientific literature, and that is not helpful, because there’s no originality or true advancement in that kind of activity. Bob is correct that the only way that we can counteract that is with more AI. So there will be AI bots in the future that will be built into journal software. Major publishers are buying AI companies, startups, and they will be involved in journal software in the future where we’ll be using good AI to detect bad AI.
The other way that we combat bad AI is through creating a culture of disclosure, not of shaming and secrecy. A culture of shaming will mean that people hide stuff. The only way to create a trusting culture of science again is transparency. And that means transparency about AI use, and that’s why I’m a huge proponent of creating better policies.
AN: What other important considerations are there? What other pros or cons do you see related to AI-infused scientific media or modern digital publishing writ large?
RBJ: There are a number of different AI tools that are publicly available, and they have all been put through their paces to look at ethical challenges, where people feed them philosophical scenarios and ask them to determine what to do in that particular circumstance. Every tool has a slightly different answer, and some are more ethical than others. We’ve been talking about AI as a monolithic entity, but I think a lot depends on which AI tool you’re using and how it’s designed, how it’s programmed, how it’s trained, what its intended use is, because one tool’s ethical approach to research might be very different than another’s.
I think that academic publishing is going to look very different in a very short period of time, and none of us can predict exactly how that might look, but if I were to utilize AI as an academic physician, I might ask it to curate for me my own personalized journal, pulling papers that are relevant to my research and to my clinical care, collating them, so that I have my monthly “Journal of Dr. Rachel Jimenez” that shows me all the things that are most interesting to me. So instead of subscribing to JCO and to the Red Journal and to the Green Journal and to Lancet, I would subscribe to nothing, and I would allow the AI to “hand select” for me the papers that I’m most interested in reading every month.
RA: There is another kind of corruption, apart from what I already mentioned, that, I’m sure, is already happening, but if it’s not, it’s one step away. Many journals are moving quickly to incorporate other things besides the historic standard elements that determine the value of an article or a journal. Historically, it was simply citations in other peer-reviewed journals. Then people added downloads. Maybe only a few people referenced Brian’s paper, but look, a thousand people around the world downloaded it!1 So that’s now a metric. Sue, Rachel, and I currently get a report from Elsevier once a year which shows the citations and, in a separate report, the downloads.
There are many other journals where the publisher now uses metrics other than the Impact Factor, which is a commercial product that’s calculated based almost exclusively on how many times your paper was included in the bibliography of another published paper. You can easily write a program that crawls the web and either downloads papers to phony accounts or makes negative comments about Brian’s paper. Poor quality! Don’t read it!2 And so that’s another part of the corruption factor.
SY: An obvious concern about AI is lack of trust. And lack of trust in scientific journals, for me, is a really overriding concern. I think that is the most serious issue facing science going forward, because there’s nothing that binds the scientific community except for trust. And if you start to lose trust in your peers, in your reviewers, in your editors, in your journals, then the scientific community will collapse. We need to find ways within an AI universe to instill trust, to maintain trust. And that requires human oversight, evidence of authenticity, human relationships and communication, as well as fostering disclosure and transparency, rather than an environment of shame and secrecy, which is the breeding ground for the lack of trust that inspires theft and dishonesty amongst investigators.
Those are reasons why I think the move toward AI has to be pretty careful and pretty slow in scientific journals, and why I think there’s so much conservatism that you see on the editor and reviewer side, because we have to be really, really careful that people really believe in this system, and they believe in our essential accountability within this system. And if it seems like we’ve moved too quickly and abandoned those positions, not to be catastrophic about it, but that’s the end of the journal. That’s the end of science in general within radiation oncology.3
But the last thing I just want to throw in is that I actually think that AI is going to be transformative for scientific publishing, and I’m really excited about it, personally. I could see journals themselves become AI objects, where you go in and play with the data, within a controlled space, right? The whole notion of how we publish science, where science is a static activity that the user cannot interact with, that the user only receives, that’s going to go away. With AI, everything becomes Play-Doh. You’re going to be able to turn data around and look at it upside down, or in a different format, and generate new hypotheses. I think that’s totally okay, as long as it’s controlled, and it’s in a safe space, and it’s done in an ethical manner.
I learn from AI, and I integrate it into my own writing practice. Of course, I can’t do that at the journal, but I do use secure AI at my institution, and I feel like I become a better writer through using AI. I absorb it just as much as it’s absorbing me.
References
1. Dr. Amdur is embellishing for effect. Dr. Kavanagh is not smart enough to design a bot to do this, nor does he have 1,000 friends who would help him out like that.
2. Here Dr. Amdur is closer to reality with his example, since many of Dr. Kavanagh’s papers generate this visceral response.
3. Under ASTRO President Neha Vapiwala’s leadership, the 2026 Annual Meeting theme will be centered on the challenges of communicating science to the public while maintaining integrity and trust. It will be a great program. To whet everyone’s appetite, more on this topic will be forthcoming in the summer issue of ASTROnews.
