Artificial Intelligence in Oncology - Supporting scientific research
University of Groningen, University Medical Center Groningen
With the increased survival of cancer patients due to successful treatment with (chemo)radiation, the risk of toxicity and related side effects has also increased. Prediction or early detection of toxicity is of utmost importance, as it allows the treatment to be adapted to avoid toxicity-related problems. In addition, early identification of patients with a lower expected tumor response makes it possible to choose a more aggressive treatment regimen for those patients.
The aim of this project is to facilitate personalized adaptive radiotherapy in order to improve treatment outcome. We will investigate the determination of treatment efficacy (tumor response) and the detection and prediction of side-effect severity (toxicity) of radiation therapy for head-and-neck cancer, using deep learning techniques to identify predictive image features. Ultimately, the goal is personalized adaptive radiotherapy in which the treatment plan is adjusted on the go, based on both the daily anatomical changes and the models' predictions of toxicity and tumor response.
In head and neck cancer, four subprojects were carried out on automatic segmentation and two on outcome prediction. In the first manuscript (published), deep learning models were developed and tested on external data. In the second manuscript (published), we proposed a deep learning (DL) based method for generating probability maps that aim to capture the model's uncertainty in the segmentation task.
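The exact procedure of the published method is not reproduced here; as a minimal sketch, assuming a PyTorch segmentation network that contains dropout layers and using Monte Carlo dropout as the source of stochasticity, a voxel-wise probability map could be obtained by averaging the softmax outputs of repeated stochastic forward passes (the function name and sample count are illustrative):

    import torch

    def mc_dropout_probability_map(model, image, n_samples=20):
        """Average softmax outputs over stochastic forward passes (illustrative sketch)."""
        model.eval()
        # Re-enable dropout layers only, so batch normalization keeps its running statistics.
        for module in model.modules():
            if isinstance(module, (torch.nn.Dropout, torch.nn.Dropout2d, torch.nn.Dropout3d)):
                module.train()
        samples = []
        with torch.no_grad():
            for _ in range(n_samples):
                logits = model(image)                     # (batch, classes, D, H, W)
                samples.append(torch.softmax(logits, dim=1))
        # The mean softmax over the passes serves as a voxel-wise probability map.
        return torch.stack(samples).mean(dim=0)

Spread in the sampled predictions then reflects where the model is uncertain, which is the property exploited in the adaptive segmentation and outcome prediction work described below.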
We showed that the tumor probability maps can be used by radiation oncologists for adaptive tumor segmentation (third manuscript, under review) and, together with CT and PET images, as input data for the prediction of tumor-related endpoints (fourth manuscript, published). We applied the methodology outlined in the second manuscript to segment both primary tumors and pathologic lymph nodes. This approach showed excellent performance on both internal and external test sets. Moreover, the uncertainty quantified from the probability maps showed a robust negative correlation with the metrics used to evaluate segmentation accuracy, as elaborated in the fifth manuscript, currently under internal review. To enhance the outcome prediction models, we incorporated spatial information from the pathologic lymph nodes into the CT and PET data, along with the manually contoured primary tumor. This integration led to a measurable improvement in predictive performance for certain tumor-related endpoints.
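The uncertainty measure used in the fifth manuscript is not specified here; purely as an assumed example, one could summarize a probability map by the mean voxel-wise binary entropy over the predicted tumor region and correlate it with per-case Dice scores (a negative Spearman coefficient would mirror the relationship described above; all names below are hypothetical):

    import numpy as np
    from scipy.stats import spearmanr

    def mean_entropy(prob_map, threshold=0.5, eps=1e-8):
        """Mean binary entropy over voxels predicted as tumor (prob >= threshold)."""
        p = np.clip(prob_map, eps, 1.0 - eps)
        entropy = -(p * np.log(p) + (1.0 - p) * np.log(1.0 - p))
        mask = prob_map >= threshold
        return float(entropy[mask].mean()) if mask.any() else 0.0

    def dice(pred, gt):
        """Dice similarity coefficient between two binary masks."""
        pred, gt = pred.astype(bool), gt.astype(bool)
        denom = pred.sum() + gt.sum()
        return 2.0 * np.logical_and(pred, gt).sum() / denom if denom else 1.0

    def uncertainty_vs_accuracy(prob_maps, gt_masks, threshold=0.5):
        """Per-case correlation between an uncertainty summary and segmentation accuracy."""
        u = [mean_entropy(p, threshold) for p in prob_maps]
        d = [dice(p >= threshold, g) for p, g in zip(prob_maps, gt_masks)]
        rho, pval = spearmanr(u, d)
        return rho, pval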
In lung cancer, two subprojects were conducted on automatic segmentation. The first project verified the ability of the deep learning network to detect and segment the tumor in previously unseen breathing phases of the 4DCT, yielding a more accurate contour; this first manuscript is currently under internal review. The second project combines PET and 4DCT imaging for more accurate segmentation of late-stage tumors, comparing deep learning algorithms and deformable registration against the ground truth. This project is still in the experimentation phase, and the manuscript is expected to be submitted by the end of the year. This progress sets the stage for subsequent tumor endpoint prediction projects in 2025, using the same comprehensive database.
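As an illustrative sketch of such a method comparison, not the project's actual pipeline, candidate contours from the deep learning model and from deformable registration could each be scored against the ground-truth mask with Dice and Hausdorff distance using SimpleITK (binary label images on the same voxel grid are assumed; file names are placeholders):

    import SimpleITK as sitk

    def compare_to_ground_truth(gt_mask, candidate_masks):
        """candidate_masks: dict mapping method name -> binary sitk.Image label mask."""
        results = {}
        for name, mask in candidate_masks.items():
            overlap = sitk.LabelOverlapMeasuresImageFilter()
            overlap.Execute(gt_mask, mask)
            hausdorff = sitk.HausdorffDistanceImageFilter()
            hausdorff.Execute(gt_mask, mask)
            results[name] = {
                "dice": overlap.GetDiceCoefficient(),
                "hausdorff_mm": hausdorff.GetHausdorffDistance(),
            }
        return results

    # Example usage with placeholder file names:
    # gt = sitk.ReadImage("gtv_ground_truth.nii.gz")
    # dl = sitk.ReadImage("gtv_deep_learning.nii.gz")
    # dvf = sitk.ReadImage("gtv_deformable_registration.nii.gz")
    # print(compare_to_ground_truth(gt, {"deep_learning": dl, "deformable": dvf}))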
In head and neck cancer, three subprojects were carried out on automatic segmentation and one on outcome prediction. In the first manuscript (published), deep learning models were developed and tested on external data. In the second manuscript, which is under review (second round), we proposed a DL-based method for generating probability maps that aim to capture the model's uncertainty in the segmentation task. We showed that the tumor probability maps can be used by radiation oncologists for adaptive tumor segmentation (third manuscript written and under internal revision) and, together with CT and PET images, as input data for the prediction of tumor-related endpoints (fourth manuscript written and under internal revision).
In lung cancer, tumor delineation faces additional challenges, such as breathing motion. In today's clinical practice, imaging methods based on two phases of the breathing cycle are used to estimate the tumor location throughout the breathing motion. Deep learning algorithms, however, can be trained to use the information from every breathing phase and provide an accurate tumor movement trajectory. The first results of 4D delineation on already collected local data are very promising and can be used in the near future to detect the tumor and delineate it more accurately throughout the breathing cycle.
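How the per-phase delineations are turned into a motion trajectory is not detailed here; a minimal sketch, assuming one binary segmentation mask per 4DCT breathing phase on a shared voxel grid, would track the centroid of the predicted tumor across the phases and report its peak-to-peak displacement (function names are hypothetical):

    import numpy as np
    from scipy.ndimage import center_of_mass

    def tumor_trajectory(phase_masks, voxel_spacing_mm):
        """phase_masks: list of 3D binary arrays, one per breathing phase.
        Returns an (n_phases, 3) array of centroid positions in millimetres."""
        spacing = np.asarray(voxel_spacing_mm, dtype=float)
        centroids = [np.asarray(center_of_mass(mask)) * spacing for mask in phase_masks]
        return np.stack(centroids)

    def peak_to_peak_motion(trajectory):
        """Largest centroid displacement along each axis over the breathing cycle."""
        return trajectory.max(axis=0) - trajectory.min(axis=0)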
Tumor delineation is an important step in treatment delivery, but it is a tedious task that depends strongly on the radiotherapist performing it. Therefore, the first part of this project focused on automatic tumor delineation on PET/CT images of head and neck cancer patients using artificial intelligence. The first research paper was accepted and will soon be published in Lecture Notes in Computer Science (LNCS) Challenges (2022). The second manuscript has been written and has yet to be submitted to a medical imaging journal. In the future, the focus will be on the clinical implementation of the method developed for the second paper, and data will be collected for the prediction of tumor-related endpoints.
During the first months of the project, a detailed plan was made in which several subprojects were defined, each with its own specific complexity and risk. Next, the datasets required for the subprojects were defined and data collection was started. At this moment, the first artificial intelligence models have been implemented and training on the already collected data has started. The goal of these models is to automatically recognize and delineate head and neck tumors on the medical images. The first results obtained are very promising.