Page 21 of 141 (1403 results)

CT-Based Radiomics Models with External Validation for Prediction of Recurrence and Disease-Specific Mortality After Radical Surgery of Colorectal Liver Metastases.

Marzi S, Vidiri A, Ianiro A, Parrino C, Ruggiero S, Trobiani C, Teodoli L, Vallati G, Trillò G, Ciolina M, Sperati F, Scarinci A, Virdis M, Busset MDD, Stecca T, Massani M, Morana G, Grazi GL

PubMed · Sep 10 2025
To build computed tomography (CT)-based radiomics models, with independent external validation, to predict recurrence and disease-specific mortality in patients with colorectal liver metastases (CRLM) who underwent liver resection. 113 patients were included in this retrospective study: the internal training cohort comprised 66 patients, and the external validation cohort 47. All patients underwent a CT study before surgery. Up to five visible metastases, the whole liver volume, and the surrounding disease-free liver parenchyma were separately delineated on the portal venous phase of CT. Both radiomic features and baseline clinical parameters were considered in model building, using different families of machine learning (ML) algorithms. The Support Vector Machine and Naive Bayes ML classifiers provided the best predictive performance. A relevant role of second-order and higher-order texture features emerged from the largest lesion and the residual liver parenchyma. The prediction models for recurrence showed good accuracy, ranging from 70% to 78% and from 66% to 70% in the training and validation sets, respectively. Models for predicting disease-related mortality performed worse, with accuracies ranging from 67% to 73% and from 60% to 64% in the training and validation sets, respectively. CT-based radiomics, alone or in combination with baseline clinical data, allowed the prediction of recurrence and disease-specific mortality in patients with CRLM, with fair to good accuracy after validation in an external cohort. Further investigations with a larger patient population for training and validation are needed to corroborate our analyses.
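The abstract names Support Vector Machine and Naive Bayes as the best-performing classifiers. As a minimal sketch of that classification step, here is a pure-Python Gaussian Naive Bayes over two invented radiomic features (the data, feature names, and labels are illustrative, not from the study):

```python
import math

def fit_gnb(X, y):
    """Fit Gaussian Naive Bayes: per-class feature means/variances and priors."""
    model = {}
    for c in set(y):
        rows = [x for x, label in zip(X, y) if label == c]
        means = [sum(col) / len(col) for col in zip(*rows)]
        varis = [sum((v - m) ** 2 for v in col) / len(col) + 1e-9
                 for col, m in zip(zip(*rows), means)]
        model[c] = (means, varis, len(rows) / len(X))
    return model

def predict_gnb(model, x):
    """Return the class with the highest log posterior."""
    def log_post(c):
        means, varis, prior = model[c]
        ll = sum(-0.5 * math.log(2 * math.pi * v) - (xi - m) ** 2 / (2 * v)
                 for xi, m, v in zip(x, means, varis))
        return math.log(prior) + ll
    return max(model, key=log_post)

# Toy data: two hypothetical radiomic features (e.g. a GLCM and a GLRLM texture)
X = [[0.2, 1.1], [0.3, 1.0], [0.25, 1.2], [0.9, 2.0], [1.0, 2.2], [0.95, 1.9]]
y = [0, 0, 0, 1, 1, 1]  # 0 = no recurrence, 1 = recurrence (invented labels)
model = fit_gnb(X, y)
print(predict_gnb(model, [0.28, 1.05]))  # a point near the class-0 cluster
```

In practice the study pipeline would sit on top of a radiomics extractor and include feature selection; this only illustrates the final classifier.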

Clinical evaluation of motion robust reconstruction using deep learning in lung CT.

Kuwajima S, Oura D

PubMed · Sep 10 2025
In lung CT imaging, motion artifacts caused by cardiac motion and respiration are common. Recently, CLEAR Motion, a deep learning-based reconstruction method that applies motion correction technology, has been developed. This study aims to quantitatively evaluate the clinical usefulness of CLEAR Motion. A total of 129 lung CT examinations were analyzed, and the heart rate, height, weight, and body mass index (BMI) of all patients were obtained from medical records. Images with and without CLEAR Motion were reconstructed, and quantitative evaluation was performed using the variance of Laplacian (VL) and PSNR. The difference in VL (DVL) between the two reconstruction methods was used to evaluate in which part of the lung field (upper, middle, or lower) CLEAR Motion is most effective. To evaluate the effect of motion correction based on patient characteristics, the correlations of BMI and heart rate with DVL were determined. Visual assessment of motion artifacts was performed using paired comparisons by 9 radiological technologists. With the exception of one case, VL was higher with CLEAR Motion. In almost all cases (110 of 129), DVL was largest in the lower lung field. BMI showed a positive correlation with DVL (r = 0.55, p < 0.05), while no differences in DVL were observed based on heart rate. The average PSNR was 35.8 ± 0.92 dB. Visual assessments indicated that CLEAR Motion was preferred in most cases, with an average preference score of 0.96 (p < 0.05). CLEAR Motion thus yields lung CT images with fewer motion artifacts.
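The study's image-quality metric, variance of Laplacian (VL), is simple to compute: apply a discrete Laplacian and take the variance of the response; sharp edges produce high VL, blur produces low VL. A minimal sketch with a 4-neighbor Laplacian on toy images (a hard edge vs. a smooth ramp; all values invented):

```python
def variance_of_laplacian(img):
    """Sharpness proxy: variance of a 4-neighbor discrete Laplacian over the
    interior pixels. Higher values indicate sharper edges (less motion blur)."""
    h, w = len(img), len(img[0])
    lap = []
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            lap.append(img[i-1][j] + img[i+1][j] + img[i][j-1] + img[i][j+1]
                       - 4 * img[i][j])
    mean = sum(lap) / len(lap)
    return sum((v - mean) ** 2 for v in lap) / len(lap)

sharp  = [[0, 0, 9, 9, 9]] * 5   # abrupt edge (crisp anatomy)
blurry = [[0, 2, 4, 6, 8]] * 5   # the same transition smeared into a ramp
print(variance_of_laplacian(sharp) > variance_of_laplacian(blurry))  # True
```

A linear ramp has zero Laplacian everywhere, so the blurred image's VL collapses to 0, which is exactly why VL discriminates motion-degraded from motion-corrected reconstructions.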

Multispectral CT Denoising via Simulation-Trained Deep Learning: Experimental Results at the ESRF BM18

Peter Gänz, Steffen Kieß, Guangpu Yang, Jajnabalkya Guhathakurta, Tanja Pienkny, Charls Clark, Paul Tafforeau, Andreas Balles, Astrid Hölzing, Simon Zabler, Sven Simon

arXiv preprint · Sep 10 2025
Multispectral computed tomography (CT) enables advanced material characterization by acquiring energy-resolved projection data. However, since the incoming X-ray flux is distributed across multiple narrow energy bins, the photon count per bin is greatly reduced compared to standard energy-integrated imaging. This inevitably introduces substantial noise, which can either force infeasibly long acquisition times or degrade image quality with strong noise artifacts. To address this challenge, we present a dedicated neural network-based denoising approach tailored for multispectral CT projections acquired at the BM18 beamline of the ESRF. The method exploits redundancies across angular, spatial, and spectral domains through specialized sub-networks combined via stacked generalization and an attention mechanism. Non-local similarities in the angular-spatial domain are leveraged alongside correlations between adjacent energy bands in the spectral domain, enabling robust noise suppression while preserving fine structural details. Training was performed exclusively on simulated data replicating the physical and noise characteristics of the BM18 setup, with validation conducted on CT scans of custom-designed phantoms containing both high-Z and low-Z materials. The denoised projections and reconstructions demonstrate substantial improvements in image quality compared to classical denoising methods and baseline CNN models. Quantitative evaluations confirm that the proposed method achieves superior performance across a broad spectral range, generalizing effectively to real-world experimental data while significantly reducing noise without compromising structural fidelity.
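The paper combines its sub-networks via stacked generalization and an attention mechanism. A minimal sketch of one plausible fusion step, softmax-attention weighting of per-pixel expert outputs (the expert outputs and relevance scores are invented; the actual BM18 network is far more elaborate):

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_fuse(expert_outputs, scores):
    """Blend the per-pixel outputs of several denoising sub-networks using
    softmax attention weights derived from learned relevance scores."""
    w = softmax(scores)
    n = len(expert_outputs[0])
    return [sum(w[k] * expert_outputs[k][i] for k in range(len(w)))
            for i in range(n)]

# Three hypothetical sub-networks (angular, spatial, spectral) denoising 4 pixels
outputs = [[0.9, 1.1, 1.0, 1.2],
           [1.0, 1.0, 1.0, 1.0],
           [1.1, 0.9, 1.0, 0.8]]
fused = attention_fuse(outputs, scores=[2.0, 0.5, 0.5])  # favors the first expert
print([round(v, 3) for v in fused])
```

The fused estimate lands between the experts but is pulled toward the high-scoring one, which is the behavior an attention-weighted stacking layer is trained to produce.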

Calibration and Uncertainty for multiRater Volume Assessment in multiorgan Segmentation (CURVAS) challenge results.

Riera-Marín M, O K S, Rodríguez-Comas J, May MS, Pan Z, Zhou X, Liang X, Erick FX, Prenner A, Hémon C, Boussot V, Dillenseger JL, Nunes JC, Qayyum A, Mazher M, Niederer SA, Kushibar K, Martín-Isla C, Radeva P, Lekadir K, Barfoot T, Garcia Peraza Herrera LC, Glocker B, Vercauteren T, Gago L, Englemann J, Kleiss JM, Aubanell A, Antolin A, García-López J, González Ballester MA, Galdrán A

PubMed · Sep 10 2025
Deep learning (DL) has become the dominant approach for medical image segmentation, yet ensuring the reliability and clinical applicability of these models requires addressing key challenges such as annotation variability, calibration, and uncertainty estimation. To this end, we created the Calibration and Uncertainty for multiRater Volume Assessment in multiorgan Segmentation (CURVAS) challenge, which highlights the critical role of multiple annotators in establishing a more comprehensive ground truth, emphasizing that segmentation is inherently subjective and that leveraging inter-annotator variability is essential for robust model evaluation. Seven teams participated in the challenge, submitting a variety of DL models evaluated using metrics such as the Dice Similarity Coefficient (DSC), Expected Calibration Error (ECE), and Continuous Ranked Probability Score (CRPS). By incorporating consensus and dissensus ground truth, we assess how DL models handle uncertainty and whether their confidence estimates align with true segmentation performance. Our findings reinforce the importance of well-calibrated models, as better calibration is strongly correlated with the quality of the results. Furthermore, we demonstrate that segmentation models trained on diverse datasets and enriched with pre-trained knowledge exhibit greater robustness, particularly in cases deviating from standard anatomical structures. Notably, the best-performing models achieved high DSC and well-calibrated uncertainty estimates. This work underscores the need for multi-annotator ground truth, thorough calibration assessments, and uncertainty-aware evaluations to develop trustworthy and clinically reliable DL-based medical image segmentation models.
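Expected Calibration Error (ECE), one of the challenge metrics, bins predictions by confidence and averages the gap between per-bin confidence and per-bin accuracy, weighted by bin size. A minimal sketch (the toy forecasts are invented):

```python
def expected_calibration_error(probs, labels, n_bins=10):
    """ECE: bin predictions by confidence, then average |accuracy - confidence|
    over the bins, weighted by the fraction of samples in each bin."""
    bins = [[] for _ in range(n_bins)]
    for p, y in zip(probs, labels):
        idx = min(int(p * n_bins), n_bins - 1)  # clamp p == 1.0 into the last bin
        bins[idx].append((p, y))
    ece = 0.0
    for b in bins:
        if not b:
            continue
        conf = sum(p for p, _ in b) / len(b)
        acc = sum(y for _, y in b) / len(b)
        ece += len(b) / len(probs) * abs(acc - conf)
    return ece

# Perfectly calibrated toy forecasts: of the 0.8s, 80% are positives, etc.
probs  = [0.8, 0.8, 0.8, 0.8, 0.8, 0.2, 0.2, 0.2, 0.2, 0.2]
labels = [1,   1,   1,   1,   0,   0,   0,   0,   0,   1  ]
print(round(expected_calibration_error(probs, labels), 3))  # 0.0
```

A model that always predicts 0.9 on all-negative cases would instead score an ECE of 0.9, which is the kind of overconfidence the challenge penalizes.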

Deep-Learning System for Automatic Measurement of the Femorotibial Rotational Angle on Lower-Extremity Computed Tomography.

Lee SW, Lee GP, Yoon I, Kim YJ, Kim KG

PubMed · Sep 10 2025
To develop and validate a deep-learning-based algorithm for automatically identifying anatomical landmarks and calculating femoral and tibial version angles (FTT angles) on lower-extremity CT scans. In this IRB-approved, retrospective study, lower-extremity CT scans from 270 adult patients (median age, 69 years; female-to-male ratio, 235:35) were analyzed. CT data were preprocessed using contrast-limited adaptive histogram equalization and RGB superposition to enhance tissue boundary distinction. The Attention U-Net model was trained against gold-standard manual labels and landmark annotations, enabling it to segment bones, detect landmarks, construct reference lines, and automatically measure femoral version and tibial torsion angles. The model's performance was validated against manual segmentations by a musculoskeletal radiologist using a test dataset. The segmentation model demonstrated 92.16%±0.02 sensitivity, 99.96%±<0.01 specificity, and an HD95 of 2.14±2.39, with a Dice similarity coefficient (DSC) of 93.12%±0.01. Automatic measurements of femoral and tibial torsion angles showed good correlation with radiologists' measurements, with correlation coefficients of 0.64 for femoral and 0.54 for tibial angles (p < 0.05). Automated segmentation significantly reduced the measurement time per leg compared to manual methods (57.5 ± 8.3 s vs. 79.6 ± 15.9 s, p < 0.05). We developed a method to automate the measurement of femorotibial rotation on continuous axial CT scans of patients with osteoarthritis (OA) using a deep-learning approach. This method has the potential to expedite the analysis of patient data in busy clinical settings.
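Version and torsion angles of this kind are angles between landmark-defined reference lines on axial slices. A minimal sketch of that geometry (the landmark coordinates are invented, and the paper's actual reference lines may differ):

```python
import math

def line_angle(p1, p2):
    """Orientation in degrees of the line through two landmarks on an axial slice."""
    return math.degrees(math.atan2(p2[1] - p1[1], p2[0] - p1[0]))

def torsion_angle(prox_a, prox_b, dist_a, dist_b):
    """Angle between a proximal reference line (e.g. a femoral neck axis) and a
    distal one (e.g. a posterior condylar line), wrapped to (-180, 180]."""
    angle = line_angle(prox_a, prox_b) - line_angle(dist_a, dist_b)
    return (angle + 180) % 360 - 180

neck     = ((0.0, 0.0), (10.0, 2.679))  # ~15 degrees from horizontal
condyles = ((0.0, 0.0), (10.0, 0.0))    # 0 degrees
print(round(torsion_angle(*neck, *condyles), 1))  # 15.0
```

The wrap step matters when the two lines straddle the ±180° boundary; without it a 170° vs. -170° pair would read as 340° instead of -20°.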

A 3D multi-task network for the automatic segmentation of CT images featuring hip osteoarthritis.

Wang H, Zhang X, Li S, Zheng X, Zhang Y, Xie Q, Jin Z

PubMed · Sep 10 2025
Total hip arthroplasty (THA) is the standard surgical treatment for end-stage hip osteoarthritis, and its success depends on precise preoperative planning, which in turn relies on accurate three-dimensional segmentation and reconstruction of the periarticular bone of the hip joint. However, patients with hip osteoarthritis often exhibit pathological changes such as joint space narrowing, femoroacetabular impingement, osteophyte formation, and joint deformity. These changes present significant challenges for traditional manual or semi-automatic segmentation methods. To address them, this study proposed a novel 3D UNet-based multi-task network to achieve rapid and accurate segmentation and reconstruction of the periarticular bone in hip osteoarthritis patients. The bone segmentation main network incorporated a Transformer module in the encoder to effectively capture spatial anatomical features, while a boundary-optimization branch was designed to address segmentation challenges at the acetabular-femoral interface. The two branches were jointly optimized through a multi-task loss function, with an oversampling strategy introduced to enhance the network's feature learning capability for complex structures. The experimental results showed that the proposed method achieved excellent performance on the test set with hip osteoarthritis. The average Dice coefficient was 96.09% (96.98% for femur, 95.20% for hip), with an overall precision of 96.66% and recall of 97.32%. In terms of boundary matching metrics, the average surface distance (ASD) and the 95% Hausdorff distance (HD95) were 0.40 mm and 1.78 mm, respectively. These metrics show that the proposed automatic segmentation network achieved high accuracy in segmenting the periarticular bone of the hip joint, generating reliable 2D masks and 3D models, and thereby demonstrating significant potential for supporting THA surgical planning.
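The headline metric here, the Dice coefficient, measures overlap between a predicted and a ground-truth mask: twice the intersection over the sum of the two mask sizes. A minimal sketch on flattened toy masks (values invented):

```python
def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks given as flat
    0/1 lists: 2*|A∩B| / (|A| + |B|). Two empty masks count as perfect overlap."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / total if total else 1.0

pred  = [1, 1, 1, 0, 0, 0, 1, 0]  # model's femur mask, flattened (toy)
truth = [1, 1, 0, 0, 0, 0, 1, 1]  # radiologist's mask (toy)
print(round(dice_coefficient(pred, truth), 3))  # 0.75
```

Real evaluation pipelines compute the same quantity over full 3D volumes; flattening the voxels does not change the score.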

Integrating radiomics and dosiomics with lung biologically equivalent dose for predicting symptomatic radiation pneumonitis after lung SBRT: A dual-center study.

Jiao Y, Wen Y, Li S, Gao H, Chen D, Sun L, Lin G, Ren Y

PubMed · Sep 10 2025
This study focused on developing and validating a composite model that integrates radiomic and dosiomic features based on a lung biologically equivalent dose segmentation approach to predict symptomatic radiation pneumonitis (SRP) following lung SBRT. A dual-center cohort of 182 lung cancer patients treated with SBRT was divided into training, validation, and external testing sets. Radiomic and dosiomic features were extracted from two distinct regions of interest (ROIs) in the planning computed tomography (CT) images and 3D dose distribution maps, encompassing both the entire lung and biologically equivalent dose (BED) regions. Feature selection involved correlation filters and LASSO regularization. Five machine learning algorithms generated three individual models (dose-volume histogram [DVH], radiomic [R], dosiomic [D]) and three combined models (R + DVH, R + D, R + D + DVH). Performance was evaluated via ROC analysis, calibration, and decision curve analysis. Among the clinical and dosimetric factors, V<sub>BED70</sub> (α/β = 3 Gy) of the lung was recognized as an independent risk factor for SRP. BED-based radiomic and dosiomic models outperformed whole-lung models (AUCs: 0.806 vs. 0.674 and 0.821 vs. 0.647, respectively). The R + D + DVH trio model achieved the highest predictive accuracy (AUC: 0.889, 95% CI: 0.701-0.956), with robust calibration and clinical utility. The R + D + DVH trio model based on the lung biologically equivalent dose segmentation approach outperforms other models in predicting SRP across various SBRT fractionation schemes.
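The BED segmentation hinges on the standard linear-quadratic formula BED = n·d·(1 + d/(α/β)). A minimal sketch computing BED for an SBRT scheme and a V<sub>BED70</sub>-style volume fraction (the fractionation scheme and per-voxel doses are invented, not taken from the study):

```python
def bed(n_fractions, dose_per_fraction, alpha_beta):
    """Biologically effective dose: BED = n*d*(1 + d/(alpha/beta)), all in Gy."""
    return n_fractions * dose_per_fraction * (1 + dose_per_fraction / alpha_beta)

def v_bed(voxel_doses_per_fraction, n_fractions, alpha_beta, threshold):
    """Fraction of lung voxels whose BED exceeds a threshold (e.g. V_BED70)."""
    over = sum(1 for d in voxel_doses_per_fraction
               if bed(n_fractions, d, alpha_beta) > threshold)
    return over / len(voxel_doses_per_fraction)

# A 3 x 18 Gy SBRT scheme; alpha/beta = 3 Gy for lung, as in the paper's V_BED70
print(round(bed(3, 18.0, 3.0), 1))  # 378.0
doses = [0.5, 2.0, 6.0, 8.0, 10.0]  # invented per-fraction voxel doses (Gy)
print(v_bed(doses, 3, 3.0, 70.0))   # 0.4
```

Thresholding the per-voxel BED map like this is what carves out the BED-based ROI from which the study's radiomic and dosiomic features were extracted.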

A multidimensional deep ensemble learning model predicts pathological response and outcomes in esophageal squamous cell carcinoma treated with neoadjuvant chemoradiotherapy from pretreatment CT imaging: A multicenter study.

Liu Y, Su Y, Peng J, Zhang W, Zhao F, Li Y, Song X, Ma Z, Zhang W, Ji J, Chen Y, Men Y, Ye F, Men K, Qin J, Liu W, Wang X, Bi N, Xue L, Yu W, Wang Q, Zhou M, Hui Z

PubMed · Sep 10 2025
Neoadjuvant chemoradiotherapy (nCRT) followed by esophagectomy remains standard for locally advanced esophageal squamous cell carcinoma (ESCC). However, accurately predicting pathological complete response (pCR) and treatment outcomes remains challenging. This study aimed to develop and validate a multidimensional deep ensemble learning model (DELRN) using pretreatment CT imaging to predict pCR and stratify prognostic risk in ESCC patients undergoing nCRT. In this multicenter, retrospective cohort study, 485 ESCC patients were enrolled from four hospitals (May 2009-August 2023, December 2017-September 2021, May 2014-September 2019, and March 2013-July 2019). Patients were divided into a discovery cohort (n = 194), an internal cohort (n = 49), and three external validation cohorts (n = 242). A multidimensional deep ensemble learning model (DELRN) integrating radiomics and 3D convolutional neural networks was developed based on pretreatment CT images to predict pCR and clinical outcomes. The model's performance was evaluated by discrimination, calibration, and clinical utility. Kaplan-Meier analysis assessed overall survival (OS) and disease-free survival (DFS) at two follow-up centers. The DELRN model demonstrated robust predictive performance for pCR across the discovery, internal, and external validation cohorts, with area under the curve (AUC) values of 0.943 (95 % CI: 0.912-0.973), 0.796 (95 % CI: 0.661-0.930), 0.767 (95 % CI: 0.646-0.887), 0.829 (95 % CI: 0.715-0.942), and 0.782 (95 % CI: 0.664-0.900), respectively, surpassing single-domain radiomics or deep learning models. DELRN effectively stratified patients into high-risk and low-risk groups for OS (log-rank P = 0.018 and 0.0053) and DFS (log-rank P = 0.00042 and 0.035). Multivariate analysis confirmed DELRN as an independent prognostic factor for OS and DFS. 
The DELRN model demonstrated promising clinical potential as an effective, non-invasive tool for predicting nCRT response and treatment outcome in ESCC patients, enabling personalized treatment strategies and improving clinical decision-making with future prospective multicenter validation.
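The AUC values reported above can be read as Mann-Whitney statistics: the probability that a randomly chosen pCR patient receives a higher model score than a randomly chosen non-pCR patient. A minimal sketch (the scores are invented):

```python
def auc(scores_pos, scores_neg):
    """AUC as the normalized Mann-Whitney U statistic: P(score_pos > score_neg),
    with ties counted as 0.5."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

pcr    = [0.9, 0.8, 0.7, 0.6]   # model scores for pCR patients (invented)
no_pcr = [0.5, 0.4, 0.7, 0.2]   # model scores for non-pCR patients (invented)
print(auc(pcr, no_pcr))
```

An AUC of 0.943 on the discovery cohort therefore means that for roughly 94 of every 100 pCR/non-pCR pairs, the pCR patient scores higher.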

An Interpretable Deep Learning Framework for Preoperative Classification of Lung Adenocarcinoma on CT Scans: Advancing Surgical Decision Support.

Shi Q, Liao Y, Li J, Huang H

PubMed · Sep 10 2025
Lung adenocarcinoma remains a leading cause of cancer-related mortality, and the diagnostic performance of computed tomography (CT) is limited when dependent solely on human interpretation. This study aimed to develop and evaluate an interpretable deep learning framework using an attention-enhanced Squeeze-and-Excitation Residual Network (SE-ResNet) to improve automated classification of lung adenocarcinoma from thoracic CT images. Furthermore, Gradient-weighted Class Activation Mapping (Grad-CAM) was applied to enhance model interpretability and assist in the visual localization of tumor regions. A total of 3800 chest CT axial slices were collected from 380 subjects (190 patients with lung adenocarcinoma and 190 controls, with 10 slices extracted from each case). This dataset was used to train and evaluate the baseline ResNet50 model as well as the proposed SE-ResNet50 model. Performance was compared using accuracy, Area Under the Curve (AUC), precision, recall, and F1-score. Grad-CAM visualizations were generated to assess the alignment between the model's attention and radiologically confirmed tumor locations. The SE-ResNet model achieved a classification accuracy of 94% and an AUC of 0.941, significantly outperforming the baseline ResNet50, which had an 85% accuracy and an AUC of 0.854. Grad-CAM heatmaps produced from the SE-ResNet demonstrated superior localization of tumor-relevant regions, confirming the enhanced focus provided by the attention mechanism. The proposed SE-ResNet framework delivers high accuracy and interpretability in classifying lung adenocarcinoma from CT images. It shows considerable potential as a decision-support aid for radiologists and, with further validation, could serve as a valuable clinical tool.
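The "SE" in SE-ResNet refers to squeeze-and-excitation channel attention: each feature channel is globally average-pooled, the pooled vector passes through a small two-layer gating network, and each channel is rescaled by its sigmoid gate. A minimal sketch with tiny invented weights and feature maps (real SE blocks operate on hundreds of channels with learned weights):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def se_block(feature_maps, w1, w2):
    """Squeeze-and-excitation: global-average-pool each channel, pass the pooled
    vector through two FC layers (ReLU then sigmoid), and rescale each channel
    by its learned gate."""
    # Squeeze: per-channel global average pooling
    z = [sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))
         for ch in feature_maps]
    # Excitation: FC -> ReLU -> FC -> sigmoid
    hidden = [max(0.0, sum(wi * zi for wi, zi in zip(w_row, z))) for w_row in w1]
    gates = [sigmoid(sum(wi * hi for wi, hi in zip(w_row, hidden)))
             for w_row in w2]
    # Recalibrate: scale every value in each channel by that channel's gate
    return [[[g * v for v in row] for row in ch]
            for g, ch in zip(gates, feature_maps)]

# Two 2x2 channels; all weights are invented for illustration
fmaps = [[[1.0, 2.0], [3.0, 4.0]],
         [[0.0, 0.0], [0.0, 1.0]]]
w1 = [[0.5, 0.5]]        # squeeze 2 channels down to 1 hidden unit
w2 = [[1.0], [-1.0]]     # expand back to one gate per channel
out = se_block(fmaps, w1, w2)
print(round(out[0][0][0], 3))
```

The gates let the network amplify channels that respond to tumor-relevant texture and suppress the rest, which is the "enhanced focus" the abstract credits for the accuracy gain.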

YOLOv12 Algorithm-Aided Detection and Classification of Lateral Malleolar Avulsion Fracture and Subfibular Ossicle Based on CT Images: A Multicenter Study.

Liu J, Sun P, Yuan Y, Chen Z, Tian K, Gao Q, Li X, Xia L, Zhang J, Xu N

PubMed · Sep 9 2025
Lateral malleolar avulsion fracture (LMAF) and subfibular ossicle (SFO) are distinct entities that both present as small bone fragments near the lateral malleolus on imaging, yet require different treatment strategies. Clinical and radiological differentiation is challenging, which can impede timely and precise management. Magnetic resonance imaging (MRI) is the diagnostic gold standard for differentiating LMAF from SFO, whereas differentiation on computed tomography (CT) alone is challenging in routine practice. Deep convolutional neural networks (DCNNs) have shown promise in musculoskeletal imaging diagnostics, but robust, multicenter evidence in this specific context is lacking. We evaluated several state-of-the-art DCNNs, including the latest YOLOv12 algorithm, for detecting and classifying LMAF and SFO on CT images, using MRI-based diagnoses as the gold standard, and compared model performance with radiologists reading CT alone. In this retrospective study, 1,918 patients (LMAF: 1,253; SFO: 665) were enrolled from two hospitals in China between 2014 and 2024. MRI served as the gold standard and was independently interpreted by two senior musculoskeletal radiologists. Only CT images were used for model training, validation, and testing; CT images were manually annotated with bounding boxes. The cohort was randomly split into a training set (n=1,092), an internal validation set (n=476), and an external test set (n=350). Four deep learning models (Faster R-CNN, SSD, RetinaNet, and YOLOv12) were trained and evaluated using identical procedures. Model performance was assessed using mean average precision at IoU=0.5 (mAP50), area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, and specificity. The external test set was also independently interpreted by two musculoskeletal radiologists with 7 and 15 years of experience, with results compared to the best-performing model.
Saliency maps were generated using Shapley values to enhance interpretability. Among the evaluated models, YOLOv12 achieved the highest detection and classification performance, with a mAP50 of 92.1% and an AUC of 0.983 on the external test set, significantly outperforming Faster R-CNN (mAP50: 63.7%, AUC: 0.79), SSD (mAP50: 63.0%, AUC: 0.63), and RetinaNet (mAP50: 67.0%, AUC: 0.73) (all P < .05). When using CT alone, radiologists performed at a moderate level (accuracy: 75.6%/69.1%; sensitivity: 75.0%/65.2%; specificity: 76.0%/71.1%), whereas YOLOv12 approached MRI-based reference performance (accuracy: 92.0%; sensitivity: 86.7%; specificity: 82.2%). Saliency maps corresponded well with expert-identified regions. While MRI read by senior radiologists remains the gold standard for distinguishing LMAF from SFO, CT-based differentiation is difficult for radiologists. A CT-only DCNN (YOLOv12) achieved substantially higher performance than radiologists reading CT alone and approached the MRI-based reference standard, highlighting its potential to augment CT-based decision-making where MRI is limited or unavailable.
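The mAP50 metric counts a detection as a true positive when its predicted box overlaps the ground-truth box with an intersection-over-union (IoU) of at least 0.5. A minimal sketch of that IoU test (box coordinates are invented):

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes given as
    (x1, y1, x2, y2). At IoU >= 0.5 a detection counts toward mAP50."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

pred = (10, 10, 30, 30)   # predicted fragment box (invented coordinates)
gt   = (15, 15, 35, 35)   # ground-truth annotation
print(round(iou(pred, gt), 3), iou(pred, gt) >= 0.5)
```

mAP50 then sweeps the detector's confidence threshold, computes precision and recall from these IoU-gated matches, and averages precision over the recall range.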
