Developing Deep Learning-based Auto-segmentation of Organs at Risk

Presenting author: Jihye Koo, MS, Moffitt Cancer Center

By Diane Kean, ASTRO Communications

From the radiation therapy planning session to the initial delivery of radiation, it can take approximately two weeks for a head and neck cancer patient to begin treatment. While this time frame is needed for manual delineation of targets and organs at risk (OARs), radiation planning, quality assurance and chart checks, it can be an unbearable wait for a patient. Many efforts have been made to speed up the contouring process through computer-assisted automated organ contouring, but quality issues have prevented routine use in the clinic.

Researchers at Moffitt Cancer Center in Tampa, Florida, leveraged a homogeneously delineated head and neck OAR set to train an auto-contouring model whose performance is comparable to that of a physician. In the current qualitative and quantitative evaluation, the model showed superior performance compared to other commercially available models.

Jihye Koo, MS, presented the results during Plenary Session II on Friday. Using data from 864 previously treated head and neck cancer patients, Koo and colleagues developed a deep learning-based 3-D auto-segmentation algorithm for normal tissue. The models were trained with a Dice loss function and the Adam optimizer, using 150-500 patients per OAR. The OARs were delineated by an experienced physician and treated as gold-standard data. A subset of 75 cases was withheld from training and used for validation. The researchers generated new OAR sets with three different deep learning models and compared them to the gold data: A) the prototype model trained with the gold data; B) a commercial software package trained with the gold data; and C) the same commercial software with a model trained at another institution. Agreement between the gold data and the auto-segmented structures was evaluated with the Dice similarity coefficient, a voxel-penalty metric that penalizes each missing or extra voxel, and the Hausdorff distance. An ANOVA test, which indicates whether there is a statistical difference between groups, with post hoc pairwise analysis was performed to assess differences in the metrics. The auto-segmented contours were also qualitatively evaluated by the physician on a scale of 1 to 5.
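To make the two geometric agreement metrics concrete, the sketch below computes the Dice similarity coefficient and the symmetric Hausdorff distance for a toy pair of contours. This is an illustration only, not the study's code: the function names and the tiny 2-D point-set representation are assumptions for demonstration, whereas the actual evaluation operated on 3-D contour volumes (and the Dice loss used in training is simply one minus a soft version of the Dice coefficient).

```python
def dice_coefficient(a, b):
    """Dice similarity coefficient between two binary masks,
    represented here as sets of voxel coordinates.
    2 * |A ∩ B| / (|A| + |B|); 1.0 means perfect overlap."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0
    return 2 * len(a & b) / (len(a) + len(b))


def hausdorff_distance(a, b):
    """Symmetric Hausdorff distance between two point sets:
    the largest distance from any point in one set to the
    nearest point in the other, taken in both directions."""
    def directed(p, q):
        return max(
            min(sum((x - y) ** 2 for x, y in zip(pp, qq)) ** 0.5 for qq in q)
            for pp in p
        )
    return max(directed(a, b), directed(b, a))


# Toy 2-D example: gold-standard contour vs. auto-segmented contour
gold = [(0, 0), (0, 1), (1, 0), (1, 1)]
auto = [(0, 0), (0, 1), (1, 0), (2, 2)]
print(dice_coefficient(gold, auto))   # 0.75 (3 shared voxels out of 4 + 4)
print(hausdorff_distance(gold, auto)) # ~1.414: (2, 2) is sqrt(2) from (1, 1)
```

Higher Dice (closer to 1) and lower Hausdorff distance both indicate better agreement with the physician's gold-standard contour; the study compared these metrics across the three models with ANOVA and post hoc pairwise tests.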

The evaluations showed that the prototype algorithm (A) performed better than the commercial algorithm (B, C), even when the commercial model was trained on data from the same institution. Auto-segmentation results can also differ significantly when the same algorithm is trained on data from different institutions. Overall, the results agreed with the hypothesis: both the deep learning algorithm and the quality of the training data were key factors, and their combination produced superior performance compared to the existing models.

“These findings can significantly reduce the time it takes until a patient receives radiation treatment,” said Ms. Koo. “It can also reduce the variation among physicians. The patient can be treated with a treatment plan generated based on a uniformly high quality contour, which is expected to help to decrease the potential risks of radiation treatment.”

Ms. Koo and colleagues recognize that there may be skeptical views about auto-contouring programs based on experiences with currently available programs. “The performance of our prototype model has risen to a level comparable to that of manual contouring,” said Ms. Koo. “Of course, we do not recommend relying completely on auto-segmentation, but we expect it to help physicians save time and give more peace of mind while contouring.”

The prototype model is being continuously improved to generate detailed, precise contours. As a next step in the project, the research team is currently developing a dose prediction tool that uses the auto-contouring model.

Abstract 11 - Development of a Deep Learning-Based Auto-Segmentation of Organs at Risk for Head and Neck Radiotherapy Planning was presented on February 25, 2022, during the Plenary II session.

Published February 26, 2022


