
Artificial intelligence as an independent reader of risk-dominant lung nodules: influence of CT reconstruction parameters.

Mao Y, Heuvelmans MA, van Tuinen M, Yu D, Yi J, Oudkerk M, Ye Z, de Bock GH, Dorrius MD

PubMed · Aug 29, 2025
To assess the impact of reconstruction parameters on AI's performance in detecting and classifying risk-dominant nodules in baseline low-dose CT (LDCT) screening of a Chinese general population. Baseline LDCT scans from 300 consecutive participants in the Netherlands and China Big-3 (NELCIN-B3) trial were included. AI analyzed each scan reconstructed with four settings: 1 mm/0.7 mm thickness/interval with medium-soft and hard kernels (D45f/1 mm, B80f/1 mm) and 2 mm/1 mm with soft and medium-soft kernels (B30f/2 mm, D45f/2 mm). The consensus reading of two radiologists served as the reference standard. At scan level, inter-reader agreement between AI and the reference standard, sensitivity, and specificity in determining the presence of a risk-dominant nodule were evaluated. For reference-standard risk-dominant nodules, the nodule detection rate and the agreement in nodule type classification between AI and the reference standard were assessed. AI-D45f/1 mm demonstrated significantly higher sensitivity than AI-B80f/1 mm in determining the presence of a risk-dominant nodule per scan (77.5% vs. 31.5%, p < 0.0001). For reference-standard risk-dominant nodules (111/300, 37.0%), kernel variation (AI-D45f/1 mm vs. AI-B80f/1 mm) did not significantly affect AI's nodule detection rate (87.4% vs. 82.0%, p = 0.26) but substantially influenced the agreement in nodule type classification between AI and the reference standard (87.7% [50/57] vs. 17.7% [11/62], p < 0.0001). Changing the thickness/interval (AI-D45f/1 mm vs. AI-D45f/2 mm) had no substantial influence on any aspect of AI's performance (p > 0.05). Variations in reconstruction kernel significantly affected AI's performance in risk-dominant nodule type classification, but not nodule detection. Ensuring consistency with radiologist-preferred kernels significantly improved agreement in nodule type classification and may help integrate AI more smoothly into clinical workflows. Question: Patient management in lung cancer screening depends on the risk-dominant nodule, yet no prior studies have assessed the impact of reconstruction parameters on AI performance for these nodules. Findings: The difference between reconstruction kernels (AI-D45f/1 mm vs. AI-B80f/1 mm, or AI-B30f/2 mm vs. AI-D45f/2 mm) significantly affected AI's performance in risk-dominant nodule type classification, but not nodule detection. Clinical relevance: Using a reconstruction kernel for AI consistent with the radiologists' choice is likely to improve the overall performance of AI-based CAD systems as an independent reader and to support greater clinical acceptance and integration of AI tools into routine practice.
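
As a rough illustration of the metrics reported in this study (scan-level sensitivity and specificity, inter-reader agreement, and nodule-type classification agreement), a minimal sketch follows; the label lists are hypothetical stand-ins, not study data.

```python
# Minimal sketch of the scan- and nodule-level metrics described above.
# The label arrays below are hypothetical; the study's data are not reproduced here.
from sklearn.metrics import confusion_matrix, cohen_kappa_score

# Scan level: 1 = risk-dominant nodule present, 0 = absent
reference = [1, 0, 1, 1, 0, 0, 1, 0]          # consensus of two radiologists
ai_call   = [1, 0, 0, 1, 0, 1, 1, 0]          # AI reading of the same scans

tn, fp, fn, tp = confusion_matrix(reference, ai_call).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
kappa = cohen_kappa_score(reference, ai_call)  # inter-reader agreement

# Nodule level: agreement in nodule-type classification for detected nodules
ref_type = ["solid", "part-solid", "non-solid", "solid"]
ai_type  = ["solid", "part-solid", "solid",     "solid"]
type_agreement = sum(r == a for r, a in zip(ref_type, ai_type)) / len(ref_type)

print(f"sensitivity={sensitivity:.3f} specificity={specificity:.3f} "
      f"kappa={kappa:.3f} type agreement={type_agreement:.3f}")
```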

Fusion model integrating multi-sequence MRI radiomics and habitat imaging for predicting pathological complete response in breast cancer treated with neoadjuvant therapy.

Xu S, Ying Y, Hu Q, Li X, Li Y, Xiong H, Chen Y, Ye Q, Li X, Liu Y, Ai T, Du Y

PubMed · Aug 29, 2025
This study aimed to develop a predictive model integrating multi-sequence MRI radiomics, deep learning features, and habitat imaging to forecast pathological complete response (pCR) in breast cancer patients undergoing neoadjuvant therapy (NAT). A retrospective analysis included 203 breast cancer patients treated with NAT from May 2018 to January 2023. Patients were divided into training (n = 162) and test (n = 41) sets. Radiomics features were extracted from intratumoral and peritumoral regions in multi-sequence MRI (T2WI, DWI, and DCE-MRI) datasets. Habitat imaging was employed to analyze tumor subregions, characterizing heterogeneity within the tumor. We constructed and validated machine learning models, including a fusion model integrating all features, and evaluated them using receiver operating characteristic (ROC) and precision-recall (PR) curves, decision curve analysis (DCA), and confusion matrices. Shapley Additive Explanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME) analyses were performed for model interpretability. The fusion model achieved superior predictive performance compared to single-region models, with an AUC of 0.913 (95% CI: 0.770-1.000) in the test set. PR curve analysis showed improved precision-recall balance, while DCA indicated higher clinical benefit. Confusion matrix analysis confirmed the model's classification accuracy. SHAP revealed DCE_LLL_DependenceUniformity as the most critical feature for predicting pCR and PC72 for non-pCR. LIME provided patient-specific insights into feature contributions. Integrating multi-dimensional MRI features with habitat imaging enhances pCR prediction in breast cancer. The fusion model offers a robust, non-invasive tool for guiding individualized treatment strategies while providing transparent interpretability through SHAP and LIME analyses.
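
A minimal sketch of how such a feature-level fusion model might be trained and evaluated, with optional SHAP-based inspection; the feature blocks, classifier choice, and labels are hypothetical placeholders rather than the authors' pipeline.

```python
# Minimal sketch of a feature-level fusion classifier with ROC/PR evaluation.
# The feature matrices are placeholders for the radiomics, deep-learning,
# and habitat feature blocks described in the abstract.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, average_precision_score

rng = np.random.default_rng(0)
n = 203
radiomics = rng.normal(size=(n, 30))   # intratumoral + peritumoral features
deep_feat = rng.normal(size=(n, 20))   # deep-learning features
habitat   = rng.normal(size=(n, 10))   # habitat (subregion) features
y = rng.integers(0, 2, size=n)         # 1 = pCR, 0 = non-pCR

X = np.hstack([radiomics, deep_feat, habitat])   # simple feature-level fusion
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
prob = clf.predict_proba(X_te)[:, 1]
print("ROC AUC:", roc_auc_score(y_te, prob))
print("PR AUC :", average_precision_score(y_te, prob))

# SHAP-style interpretability (requires the optional `shap` package)
# import shap
# explainer = shap.TreeExplainer(clf)
# shap_values = explainer.shap_values(X_te)
```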

A Multi-Stage Fine-Tuning and Ensembling Strategy for Pancreatic Tumor Segmentation in Diagnostic and Therapeutic MRI

Omer Faruk Durugol, Maximilian Rokuss, Yannick Kirchhoff, Klaus H. Maier-Hein

arXiv preprint · Aug 29, 2025
Automated segmentation of Pancreatic Ductal Adenocarcinoma (PDAC) from MRI is critical for clinical workflows but is hindered by poor tumor-tissue contrast and a scarcity of annotated data. This paper details our submission to the PANTHER challenge, addressing both diagnostic T1-weighted (Task 1) and therapeutic T2-weighted (Task 2) segmentation. Our approach is built upon the nnU-Net framework and leverages a deep, multi-stage cascaded pre-training strategy, starting from a general anatomical foundation model and sequentially fine-tuning on CT pancreatic lesion datasets and the target MRI modalities. Through extensive five-fold cross-validation, we systematically evaluated data augmentation schemes and training schedules. Our analysis revealed a critical trade-off, where aggressive data augmentation produced the highest volumetric accuracy, while default augmentations yielded superior boundary precision (achieving a state-of-the-art MASD of 5.46 mm and HD95 of 17.33 mm for Task 1). For our final submission, we exploited this finding by constructing custom, heterogeneous ensembles of specialist models, essentially creating a mix of experts. This metric-aware ensembling strategy proved highly effective, achieving a top cross-validation Tumor Dice score of 0.661 for Task 1 and 0.523 for Task 2. Our work presents a robust methodology for developing specialized, high-performance models in the context of limited data and complex medical imaging tasks (Team MIC-DKFZ).
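
As a rough sketch of the ensembling step described above, the snippet below averages per-voxel tumor probability maps from several specialist models and thresholds the result; the probability maps, weights, and threshold are illustrative placeholders, not the nnU-Net ensembling code used for the challenge.

```python
# Minimal sketch of ensembling per-voxel probability maps from specialist models.
# `prob_maps` is a hypothetical stand-in for softmax outputs of different folds/configs.
import numpy as np

def ensemble_segmentation(prob_maps, weights=None, threshold=0.5):
    """Weighted average of tumor probability maps, then binarized."""
    prob_maps = np.stack(prob_maps, axis=0)            # (n_models, D, H, W)
    if weights is None:
        weights = np.ones(len(prob_maps)) / len(prob_maps)
    weights = np.asarray(weights, dtype=float)[:, None, None, None]
    fused = (weights * prob_maps).sum(axis=0)
    return (fused >= threshold).astype(np.uint8)

# Example: combine a boundary-precise specialist with a volume-accurate specialist
maps = [np.random.rand(16, 64, 64) for _ in range(2)]   # dummy probability volumes
mask = ensemble_segmentation(maps, weights=[0.5, 0.5])
print(mask.shape, mask.dtype)
```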

Masked Autoencoder Pretraining and BiXLSTM ResNet Architecture for PET/CT Tumor Segmentation

Moona Mazher, Steven A Niederer, Abdul Qayyum

arXiv preprint · Aug 29, 2025
The accurate segmentation of lesions in whole-body PET/CT imaging is essential for tumor characterization, treatment planning, and response assessment, yet current manual workflows are labor-intensive and prone to inter-observer variability. Automated deep learning methods have shown promise but often remain limited by modality specificity, isolated time points, or insufficient integration of expert knowledge. To address these challenges, we present a two-stage lesion segmentation framework developed for the fourth AutoPET Challenge. In the first stage, a Masked Autoencoder (MAE) is employed for self-supervised pretraining on unlabeled PET/CT and longitudinal CT scans, enabling the extraction of robust modality-specific representations without manual annotations. In the second stage, the pretrained encoder is fine-tuned with a bidirectional XLSTM architecture augmented with ResNet blocks and a convolutional decoder. By jointly leveraging anatomical (CT) and functional (PET) information as complementary input channels, the model achieves improved temporal and spatial feature integration. Evaluation on the AutoPET Task 1 dataset demonstrates that self-supervised pretraining significantly enhances segmentation accuracy, achieving a Dice score of 0.582 compared to 0.543 without pretraining. These findings highlight the potential of combining self-supervised learning with multimodal fusion for robust and generalizable PET/CT lesion segmentation. Code will be available at https://github.com/RespectKnowledge/AutoPet_2025_BxLSTM_UNET_Segmentation
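
For reference, the Dice score reported above (0.582 vs. 0.543) measures voxel overlap between predicted and reference lesion masks; a minimal sketch of the formula on dummy binary masks follows (the arrays are illustrative, not the challenge's evaluation code).

```python
# Minimal sketch of the Dice coefficient used to report lesion-segmentation overlap.
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice = 2*|A ∩ B| / (|A| + |B|) for binary masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

pred = np.zeros((8, 32, 32), dtype=bool); pred[2:5, 10:20, 10:20] = True
gt   = np.zeros((8, 32, 32), dtype=bool); gt[2:6, 12:20, 10:18] = True
print(f"Dice = {dice_score(pred, gt):.3f}")
```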

Radiomics and deep learning methods for predicting the growth of subsolid nodules based on CT images.

Chen J, Yan W, Shi Y, Pan X, Yu R, Wang D, Zhang X, Wang L, Liu K

PubMed · Aug 29, 2025
The growth of subsolid nodules (SSNs) is a strong predictor of lung adenocarcinoma. However, the heterogeneity in the biological behavior of SSNs poses significant challenges for clinical management. This study aimed to evaluate the clinical utility of deep learning and radiomics approaches in predicting SSN growth based on computed tomography (CT) images. A total of 353 patients with 387 SSNs were enrolled in this retrospective study. All cases were divided into growth (n = 195) and non-growth (n = 192) groups and were randomly assigned to the training (n = 247), validation (n = 62), and test sets (n = 78) in a ratio of 3:1:1. We obtained 1454 radiomics features from each volumetric region of interest (VOI). The Pearson correlation coefficient and least absolute shrinkage and selection operator (LASSO) methods were used for radiomics signature determination. A ResNet18 architecture was used to construct the deep-learning model. The 2 models were combined via a ResNet-based fusion network to construct an ensemble model. Receiver operating characteristic curves were plotted, the area under the curve (AUC) was calculated, and decision curve analysis (DCA) was performed to determine the clinical performance of the 3 models. The combined model (AUC = 0.926, 95% CI: 0.869-0.977) outperformed the radiomics (AUC = 0.894, 95% CI: 0.808-0.957) and deep-learning models (AUC = 0.802, 95% CI: 0.695-0.899) in the test set. The DeLong test showed a statistically significant difference between the combined model and the deep-learning model (P = .012), and DCA supported the combined model's clinical value. This study demonstrates that integrating radiomics with deep learning offers promising potential for the preoperative prediction of SSN growth.
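
A minimal sketch of the feature-selection step described above (a Pearson-correlation prefilter followed by LASSO); the feature matrix, labels, and correlation threshold are hypothetical placeholders, not the study's data or exact settings.

```python
# Minimal sketch of the Pearson-prefilter + LASSO feature-selection step.
# `X` and `y` are placeholders for the 1454 radiomics features and the
# growth / non-growth labels.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(247, 1454))     # training-set radiomics features
y = rng.integers(0, 2, size=247)     # 1 = growth, 0 = non-growth

# 1) Drop one of each highly correlated feature pair (assumed threshold |r| > 0.9)
corr = np.corrcoef(X, rowvar=False)
keep = np.ones(X.shape[1], dtype=bool)
for i in range(X.shape[1]):
    if not keep[i]:
        continue
    redundant = np.where(np.abs(corr[i, i + 1:]) > 0.9)[0] + i + 1
    keep[redundant] = False
X_filtered = X[:, keep]

# 2) LASSO keeps features with non-zero coefficients
X_std = StandardScaler().fit_transform(X_filtered)
lasso = LassoCV(cv=5, random_state=0).fit(X_std, y)
selected = np.flatnonzero(lasso.coef_)
print(f"{keep.sum()} features after correlation filter, "
      f"{selected.size} after LASSO")
```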

A network-assisted joint image and motion estimation approach for robust 3D MRI motion correction across severity levels.

Nghiem B, Wu Z, Kashyap S, Kasper L, Uludağ K

PubMed · Aug 29, 2025
The purpose of this work was to develop and evaluate a novel method that leverages neural networks and physical modeling for 3D motion correction at different levels of corruption. The novel method ("UNet+JE") combines an existing neural network ("UNet_mag") with a physics-informed algorithm for jointly estimating motion parameters and the motion-compensated image ("JE"). UNet_mag and UNet+JE were trained separately on two training datasets with different distributions of motion-corruption severity and compared to JE as a benchmark. All five resulting methods were tested on T1w 3D MPRAGE scans of healthy participants with simulated (n = 40) and in vivo (n = 10) motion corruption ranging from mild to severe. UNet+JE provided better motion correction than UNet_mag (p < 10^-2 for all metrics on both simulated and in vivo data) under both training datasets. UNet_mag exhibited residual image artifacts and blurring, as well as greater susceptibility to data distribution shifts than UNet+JE. UNet+JE and JE did not differ significantly in image correction quality (p > 0.05 for all metrics), even under strong distribution shifts for UNet+JE. However, UNet+JE reduced runtimes by median factors of 2.00 to 3.80 in the simulation study and 4.05 in the in vivo study. UNet+JE benefited from the robustness of joint estimation and the fast image improvement provided by the neural network, enabling high-quality 3D image correction under a wide range of motion corruption within shorter runtimes.
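
To illustrate the joint-estimation idea behind JE, the toy sketch below alternates between estimating per-shot motion (here, a simple 1D circular shift) and re-estimating the motion-compensated signal; it is a conceptual analogy under strong simplifying assumptions, not the authors' 3D MRI implementation.

```python
# Toy sketch of joint estimation: alternate between estimating motion parameters
# and the motion-compensated signal. "Motion" here is an unknown circular shift of
# a 1D signal per shot; the real method estimates 3D rigid-body motion inside the
# MRI forward model, which is far more involved.
import numpy as np

rng = np.random.default_rng(0)
x = np.arange(128)
true_signal = np.exp(-0.5 * ((x - 64) / 6.0) ** 2)          # a Gaussian bump
true_shifts = rng.integers(-10, 11, size=6)
shots = [np.roll(true_signal, s) + 0.02 * rng.normal(size=128) for s in true_shifts]

image = np.mean(shots, axis=0)                # motion-corrupted initial estimate
shifts = np.zeros(len(shots), dtype=int)

for _ in range(10):                           # alternate the two estimation steps
    # motion step: best circular shift aligning each shot to the current image
    for i, shot in enumerate(shots):
        scores = [np.dot(np.roll(shot, -k), image) for k in range(-15, 16)]
        shifts[i] = int(np.argmax(scores)) - 15
    # image step: average of the motion-compensated shots
    image = np.mean([np.roll(shot, -s) for shot, s in zip(shots, shifts)], axis=0)

# Shifts are identifiable only up to a global offset; compare relative to shot 0.
print("estimated (relative):", shifts - shifts[0])
print("true      (relative):", true_shifts - true_shifts[0])
```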

Deep Learning Radiomics Model Based on Computed Tomography Image for Predicting the Classification of Osteoporotic Vertebral Fractures: Algorithm Development and Validation.

Liu J, Zhang L, Yuan Y, Tang J, Liu Y, Xia L, Zhang J

PubMed · Aug 29, 2025
Osteoporotic vertebral fractures (OVFs) are common in older adults and often lead to disability if not properly diagnosed and classified. With the increased use of computed tomography (CT) imaging and the development of radiomics and deep learning technologies, there is potential to improve the classification accuracy of OVFs. This study aims to evaluate the efficacy of a deep learning radiomics model, derived from CT imaging, in accurately classifying OVFs. The study analyzed 981 patients (aged 50-95 years; 687 women, 294 men), involving 1098 vertebrae, from 3 medical centers who underwent both CT and magnetic resonance imaging examinations. The Assessment System of Thoracolumbar Osteoporotic Fractures (ASTLOF) classified OVFs into Classes 0, 1, and 2. The data were categorized into 4 cohorts: training (n=750), internal validation (n=187), external validation (n=110), and prospective validation (n=51). Deep transfer learning used the ResNet-50 architecture, pretrained on RadImageNet and ImageNet, to extract imaging features. Deep transfer learning-based features were combined with radiomics features and refined using Least Absolute Shrinkage and Selection Operator (LASSO) regression. The performance of 8 machine learning classifiers for OVF classification was assessed using receiver operating characteristic metrics and the "One-vs-Rest" approach. Performance comparisons between RadImageNet- and ImageNet-based models were performed using the DeLong test. Shapley Additive Explanations (SHAP) analysis was used to interpret feature importance and the predictive rationale of the optimal fusion model. Feature selection and fusion yielded 33 and 54 fused features for the RadImageNet- and ImageNet-based models, respectively, following pretraining on the training set. The best-performing machine learning algorithms for these 2 deep learning radiomics models were the multilayer perceptron and Light Gradient Boosting Machine (LightGBM). The macro-average area under the curve (AUC) values for the fused models based on RadImageNet and ImageNet were 0.934 and 0.996, respectively, with the DeLong test showing no statistically significant difference (P=2.34). The RadImageNet-based model significantly surpassed the ImageNet-based model across internal, external, and prospective validation sets, with macro-average AUCs of 0.837 versus 0.648, 0.773 versus 0.633, and 0.852 versus 0.648, respectively (P<.05). Using the binary "One-vs-Rest" approach, the RadImageNet-based fused model achieved superior predictive performance for Class 2 (AUC=0.907, 95% CI 0.805-0.999), with Classes 0 and 1 following (AUC/accuracy=0.829/0.803 and 0.794/0.768, respectively). SHAP analysis provided a visualization of feature importance in the RadImageNet-based fused model, highlighting the top 3 most influential features (cluster shade, mean, and large area low gray level emphasis) and their respective impacts on predictions. The RadImageNet-based fused model using CT imaging data exhibited superior predictive performance compared to the ImageNet-based model, demonstrating significant utility in OVF classification and aiding clinical decision-making for treatment planning. Among the 3 classes, the model performed best in identifying Class 2, followed by Class 0 and Class 1.
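
A minimal sketch of the "One-vs-Rest" macro-average AUC used to summarize the 3-class performance above; the labels and predicted probabilities are hypothetical placeholders, not model outputs from the study.

```python
# Minimal sketch of the "One-vs-Rest" macro-average AUC for a 3-class problem.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 3, size=110)            # ASTLOF Class 0 / 1 / 2
y_prob = rng.dirichlet(np.ones(3), size=110)     # predicted class probabilities

# Macro-average AUC: one binary ROC per class (that class vs. the rest), averaged
macro_auc = roc_auc_score(y_true, y_prob, multi_class="ovr", average="macro")

# Equivalent per-class view
for c in range(3):
    auc_c = roc_auc_score((y_true == c).astype(int), y_prob[:, c])
    print(f"Class {c} vs rest: AUC = {auc_c:.3f}")
print(f"macro-average AUC = {macro_auc:.3f}")
```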

Temporal Flow Matching for Learning Spatio-Temporal Trajectories in 4D Longitudinal Medical Imaging

Nico Albert Disch, Yannick Kirchhoff, Robin Peretzke, Maximilian Rokuss, Saikat Roy, Constantin Ulrich, David Zimmerer, Klaus Maier-Hein

arXiv preprint · Aug 29, 2025
Understanding temporal dynamics in medical imaging is crucial for applications such as disease progression modeling, treatment planning, and anatomical development tracking. However, most deep learning methods either consider only single temporal contexts or focus on tasks like classification or regression, limiting their ability to make fine-grained spatial predictions. While some approaches have been explored, they are often limited to single timepoints or specific diseases, or have other technical restrictions. To address this fundamental gap, we introduce Temporal Flow Matching (TFM), a unified generative trajectory method that (i) aims to learn the underlying temporal distribution, (ii) by design can fall back to a nearest-image predictor, i.e., predicting the last context image (LCI), as a special case, and (iii) supports 3D volumes, multiple prior scans, and irregular sampling. Extensive benchmarks on three public longitudinal datasets show that TFM consistently surpasses spatio-temporal methods from natural imaging, establishing a new state-of-the-art and robust baseline for 4D medical image prediction.
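
A minimal sketch of a flow-matching training step in this longitudinal-imaging setting: a network is trained to predict the velocity of a linear path carrying a prior scan toward the follow-up scan. The tiny 2D convolutional network and random tensors are illustrative stand-ins, not the TFM architecture.

```python
# Minimal sketch of a flow-matching training step for image-to-image trajectories.
import torch
import torch.nn as nn

class VelocityNet(nn.Module):
    def __init__(self, channels=1):
        super().__init__()
        # input: current state x_t plus the time t broadcast as an extra channel
        self.net = nn.Sequential(
            nn.Conv2d(channels + 1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1),
        )

    def forward(self, x_t, t):
        t_map = t.view(-1, 1, 1, 1).expand(-1, 1, *x_t.shape[2:])
        return self.net(torch.cat([x_t, t_map], dim=1))

model = VelocityNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

x0 = torch.randn(4, 1, 64, 64)        # prior (context) scan
x1 = torch.randn(4, 1, 64, 64)        # follow-up scan to predict

t = torch.rand(4)                                                   # one time per pair
x_t = (1 - t.view(-1, 1, 1, 1)) * x0 + t.view(-1, 1, 1, 1) * x1     # linear path
target_velocity = x1 - x0              # velocity of the linear interpolation
loss = ((model(x_t, t) - target_velocity) ** 2).mean()

opt.zero_grad(); loss.backward(); opt.step()
print(f"flow-matching loss: {loss.item():.4f}")
```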

PET/CT radiomics for non-invasive prediction of immunotherapy efficacy in cervical cancer.

Du T, Li C, Grzegozek M, Huang X, Rahaman M, Wang X, Sun H

PubMed · Aug 28, 2025
Purpose: The prediction of immunotherapy efficacy in cervical cancer patients remains a critical clinical challenge. This study aims to develop and validate a deep learning-based automatic tumor segmentation method on PET/CT images, extract texture features from the tumor regions in cervical cancer patients, and investigate their correlation with PD-L1 expression. Furthermore, a predictive model for immunotherapy efficacy will be constructed. Methods: We retrospectively collected data from 283 pathologically confirmed cervical cancer patients who underwent ¹⁸F-FDG PET/CT examinations, divided into three subsets. Subset-I (n = 97) was used to develop a deep learning-based segmentation model using Attention-UNet and region-growing methods on co-registered PET/CT images. Subset-II (n = 101) was used to explore correlations between radiomic features and PD-L1 expression. Subset-III (n = 85) was used to construct and validate a radiomic model for predicting immunotherapy response. Results: Using Subset-I, a segmentation model was developed; it achieved optimal performance at the 94th epoch with an IoU of 0.746 in the validation set, and manual evaluation confirmed accurate tumor localization. Sixteen features demonstrated excellent reproducibility (ICC > 0.75). Using Subset-II, PD-L1-correlated features were extracted and identified; 183 features showed significant correlations with PD-L1 expression (P < 0.05). Using these features in Subset-III, a predictive model for immunotherapy efficacy was constructed and evaluated; the SVM-based radiomic model achieved the best predictive performance with an AUC of 0.935. Conclusion: We validated, respectively in Subset-I, Subset-II, and Subset-III, that deep learning models incorporating medical prior knowledge can accurately and automatically segment cervical cancer lesions, that texture features extracted from ¹⁸F-FDG PET/CT are significantly associated with PD-L1 expression, and that predictive models based on these features can effectively predict the efficacy of PD-L1 immunotherapy. This approach offers a non-invasive, efficient, and cost-effective tool for guiding individualized immunotherapy in cervical cancer patients and may help reduce patient burden and accelerate treatment planning.
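
A minimal sketch of the Subset-III step described above: a standardized SVM radiomic classifier evaluated with a cross-validated AUC. The feature matrix and response labels are hypothetical placeholders, and the pipeline is an illustration rather than the authors' implementation.

```python
# Minimal sketch of an SVM-based radiomic response classifier with cross-validated AUC.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_predict, StratifiedKFold
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(85, 183))       # selected PET/CT texture features per patient
y = rng.integers(0, 2, size=85)      # 1 = responder, 0 = non-responder

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
prob = cross_val_predict(clf, X, y, cv=cv, method="predict_proba")[:, 1]
print(f"cross-validated AUC = {roc_auc_score(y, prob):.3f}")
```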

Macrotrabecular-massive subtype in hepatocellular carcinoma based on contrast-enhanced CT: deep learning outperforms machine learning.

Jia L, Li Z, Huang G, Jiang H, Xu H, Zhao J, Li J, Lei J

PubMed · Aug 28, 2025
To develop a CT-based deep learning model for predicting the macrotrabecular-massive (MTM) subtype of hepatocellular carcinoma (HCC) and to compare its diagnostic performance with machine learning models. We retrospectively collected contrast-enhanced CT data from patients diagnosed with HCC via histopathological examination between January 2019 and August 2023. These patients were recruited from two medical centers. All analyses were performed using two-dimensional regions of interest. We developed a novel deep learning network based on ResNet-50, named ResNet-ViT Contrastive Learning (RVCL). The RVCL model was compared against baseline deep learning models and machine learning models. Additionally, we developed a multimodal prediction model by integrating deep learning models with clinical parameters. Model performance was evaluated using the area under the receiver operating characteristic curve (AUC). A total of 368 patients (mean age, 56 ± 10 years; 285 [77%] male) from two institutions were retrospectively enrolled. Our RVCL model demonstrated superior diagnostic performance in predicting MTM (AUC = 0.93) on the external test set compared to the five baseline deep learning models (AUCs ranging from 0.46 to 0.72, all p < 0.05) and the three machine learning models (AUCs ranging from 0.49 to 0.60, all p < 0.05). However, integrating the clinical biomarker alpha-fetoprotein (AFP) into the RVCL model did not significantly improve diagnostic performance (internal test set: AUC 0.99 vs 0.95 [p = 0.08]; external test set: AUC 0.98 vs 0.93 [p = 0.05]). The deep learning model based on contrast-enhanced CT can accurately predict the MTM subtype in HCC patients, offering a practical tool for clinical decision-making. The RVCL model introduces a transformative approach to the non-invasive diagnosis of the MTM subtype of HCC by harmonizing convolutional neural networks and vision transformers within a unified architecture. The RVCL model can accurately predict the MTM subtype. Deep learning outperforms machine learning for predicting the MTM subtype. RVCL boosts accuracy and guides personalized therapy.
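
A minimal sketch of a contrastive objective coupling a CNN branch and a transformer branch, loosely in the spirit of the ResNet-ViT contrastive design described above; the toy encoders, loss, and inputs are illustrative assumptions, not the RVCL implementation.

```python
# Minimal sketch of a contrastive loss between CNN and transformer embeddings of
# the same scan. Encoders and inputs are tiny illustrative stand-ins.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyCNN(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                                  nn.AdaptiveAvgPool2d(1))
        self.proj = nn.Linear(16, dim)

    def forward(self, x):
        return self.proj(self.conv(x).flatten(1))

class TinyViT(nn.Module):
    def __init__(self, dim=64, patch=16):
        super().__init__()
        self.embed = nn.Conv2d(1, dim, kernel_size=patch, stride=patch)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=1)

    def forward(self, x):
        tokens = self.embed(x).flatten(2).transpose(1, 2)     # (B, N, dim)
        return self.encoder(tokens).mean(dim=1)               # pooled embedding

def info_nce(a, b, temperature=0.1):
    """Contrastive loss: embeddings of the same scan from both branches attract."""
    a, b = F.normalize(a, dim=1), F.normalize(b, dim=1)
    logits = a @ b.t() / temperature
    targets = torch.arange(a.size(0))
    return F.cross_entropy(logits, targets)

x = torch.randn(8, 1, 64, 64)          # dummy contrast-enhanced CT patches
loss = info_nce(TinyCNN()(x), TinyViT()(x))
print(f"contrastive loss: {loss.item():.4f}")
```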