Page 47 of 6156144 results

Antonopoulos G, More S, Eickhoff SB, Raimondo F, Patil KR

PubMed · Oct 12, 2025
Predictive modeling using structural magnetic resonance imaging (MRI) data is a prominent approach to studying brain aging. Machine learning frameworks have been employed to improve predictions and to explore healthy and disease-related accelerated aging. High-dimensional MRI data pose challenges for building generalizable and interpretable models and for data privacy. Common practice is to resample or average voxels within predefined parcels, which reduces anatomical specificity and biological interpretability; such naive fusion by averaging can lose information and reduce accuracy. We present a conceptually novel two-level stacking ensemble (SE) approach. The first level comprises regional models that predict an individual's age from voxel-wise information; a second-level model fuses these regional predictions into a final estimate. Eight data-fusion scenarios were explored using gray matter volume (GMV) estimates from four large datasets. Performance, measured by mean absolute error (MAE), R<sup>2</sup>, correlation, and prediction bias, showed that the SE outperformed region-wise averages. The best performance was obtained when first-level regional predictions were computed out-of-sample at the application site, with second-level models trained on independent, site-specific data (MAE = 4.75 vs. a baseline regional-mean-GMV MAE of 5.68). Performance improved as more datasets were used for training. First-level predictions showed an improved and more robust aging signal, providing new biological insights and enhanced data privacy. Overall, the SE improves accuracy over the baseline while preserving or enhancing data privacy. Finally, we demonstrate the utility of the SE model on a clinical cohort, showing accelerated aging in cognitively impaired and Alzheimer's disease patients.
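The two-level idea — regional models whose out-of-sample predictions are fused by a second-level learner — can be sketched in a few lines. Below is a minimal, hypothetical illustration with synthetic data and closed-form ridge regression standing in for the paper's actual regional and fusion models (the paper trains the second level on independent site-specific data; here, for brevity, it is fit on the pooled out-of-sample predictions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in: 3 "regions" of voxel-wise GMV features, one age target.
n, d = 200, 30
X_regions = [rng.normal(size=(n, d)) for _ in range(3)]
age = 60 + 5 * X_regions[0][:, 0] + 3 * X_regions[1][:, 1] + rng.normal(scale=2, size=n)

def ridge_fit(X, y, lam=1.0):
    # Closed-form ridge regression with intercept: (X'X + lam*I)^-1 X'y
    Xb = np.c_[X, np.ones(len(X))]
    A = Xb.T @ Xb + lam * np.eye(Xb.shape[1])
    return np.linalg.solve(A, Xb.T @ y)

def ridge_predict(w, X):
    return np.c_[X, np.ones(len(X))] @ w

# First level: one regional model per region; 2-fold out-of-sample predictions
half = n // 2
oos = np.zeros((n, len(X_regions)))
for j, X in enumerate(X_regions):
    w_a = ridge_fit(X[:half], age[:half])
    w_b = ridge_fit(X[half:], age[half:])
    oos[half:, j] = ridge_predict(w_a, X[half:])
    oos[:half, j] = ridge_predict(w_b, X[:half])

# Second level: fuse regional age predictions into a final estimate
w2 = ridge_fit(oos, age)
final = ridge_predict(w2, oos)
mae = np.mean(np.abs(final - age))
print(round(mae, 2))
```

Note that only the three regional age predictions (not voxel data) reach the second level, which is the mechanism behind the privacy argument in the abstract.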

Clemence Mottez, Louisa Fay, Maya Varma, Sophie Ostmeier, Curtis Langlotz

arXiv preprint · Oct 12, 2025
Deep learning models have shown promise in improving diagnostic accuracy from chest X-rays, but they also risk perpetuating healthcare disparities when performance varies across demographic groups. In this work, we present a comprehensive bias detection and mitigation framework targeting sex, age, and race-based disparities when performing diagnostic tasks with chest X-rays. We extend a recent CNN-XGBoost pipeline to support multi-label classification and evaluate its performance across four medical conditions. We show that replacing the final layer of the CNN with an eXtreme Gradient Boosting classifier improves subgroup fairness while maintaining or improving the overall predictive performance. To validate its generalizability, we apply the method to different backbones, namely DenseNet-121 and ResNet-50, and achieve similarly strong performance and fairness outcomes, confirming its model-agnostic design. We further compare this lightweight adapter training method with traditional full-model training bias mitigation techniques, including adversarial training, reweighting, data augmentation, and active learning, and find that our approach offers competitive or superior bias reduction at a fraction of the computational cost. Finally, we show that combining eXtreme Gradient Boosting retraining with active learning yields the largest reduction in bias across all demographic subgroups, both in and out of distribution on the CheXpert and MIMIC datasets, establishing a practical and effective path toward equitable deep learning deployment in clinical radiology.
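The core pattern — freeze the learned image embeddings and retrain only a boosted-tree head — can be illustrated with a toy stand-in. Here random vectors play the role of CNN penultimate-layer features, and a small hand-rolled AdaBoost over decision stumps stands in for XGBoost (the actual pipeline uses a trained DenseNet-121 or ResNet-50 backbone and an eXtreme Gradient Boosting classifier; for multi-label classification one such head is trained per finding):

```python
import numpy as np

rng = np.random.default_rng(1)

def stump_fit(X, y, w):
    # Best threshold stump (error, feature, cutoff, polarity) under weights w
    best = (1.0, 0, 0.0, 1)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for pol in (1, -1):
                pred = np.where(pol * (X[:, j] - t) > 0, 1, -1)
                err = w[pred != y].sum()
                if err < best[0]:
                    best = (err, j, t, pol)
    return best

def adaboost(X, y, rounds=20):
    # y in {-1, +1}; returns a list of weighted stumps
    w = np.full(len(y), 1.0 / len(y))
    model = []
    for _ in range(rounds):
        err, j, t, pol = stump_fit(X, y, w)
        err = max(err, 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        pred = np.where(pol * (X[:, j] - t) > 0, 1, -1)
        w *= np.exp(-alpha * y * pred)
        w /= w.sum()
        model.append((alpha, j, t, pol))
    return model

def predict(model, X):
    score = sum(a * np.where(p * (X[:, j] - t) > 0, 1, -1)
                for a, j, t, p in model)
    return np.sign(score)

# "Frozen backbone": embeddings are fixed stand-ins; only the head is trained.
n, d = 300, 8
X = rng.normal(size=(n, d))
y = np.where(X[:, 0] + 0.5 * X[:, 1] > 0, 1, -1)  # one hypothetical finding
model = adaboost(X[:200], y[:200])
acc = (predict(model, X[200:]) == y[200:]).mean()
print(round(acc, 3))
```

Because only the small head is retrained, per-experiment cost stays low — the property the abstract contrasts with full-model bias-mitigation training.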

Junhao Dong, Dejia Liu, Ruiqi Ding, Zongxing Chen, Yingjie Huang, Zhu Meng, Jianbo Zhao, Zhicheng Zhao, Fei Su

arXiv preprint · Oct 12, 2025
Transjugular intrahepatic portosystemic shunt (TIPS) is an established procedure for portal hypertension, but provides variable survival outcomes and frequent overt hepatic encephalopathy (OHE), indicating the necessity of accurate preoperative prognostic modeling. Current studies typically build machine learning models from preoperative CT images or clinical characteristics, but face three key challenges: (1) labor-intensive region-of-interest (ROI) annotation, (2) poor reliability and generalizability of unimodal methods, and (3) incomplete assessment from single-endpoint prediction. Moreover, the lack of publicly accessible datasets constrains research in this field. Therefore, we present MultiTIPS, the first public multi-center dataset for TIPS prognosis, and propose a novel multimodal prognostic framework based on it. The framework comprises three core modules: (1) dual-option segmentation, which integrates semi-supervised and foundation model-based pipelines to achieve robust ROI segmentation with limited annotations and facilitate subsequent feature extraction; (2) multimodal interaction, where three techniques, multi-grained radiomics attention (MGRA), progressive orthogonal disentanglement (POD), and clinically guided prognostic enhancement (CGPE), are introduced to enable cross-modal feature interaction and complementary representation integration, thus improving model accuracy and robustness; and (3) multi-task prediction, where a staged training strategy is used to perform stable optimization of survival, portal pressure gradient (PPG), and OHE prediction for comprehensive prognostic assessment. Extensive experiments on MultiTIPS demonstrate the superiority of the proposed method over state-of-the-art approaches, along with strong cross-domain generalization and interpretability, indicating its promise for clinical application. The dataset and code are available.
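As one hypothetical reading of the orthogonal-disentanglement idea, redundant cross-modal information can be removed by projecting one modality's features off the other's column space; the residual is then complementary by construction. A minimal numpy sketch with synthetic features (not the paper's POD module, whose exact formulation is not given in this abstract):

```python
import numpy as np

rng = np.random.default_rng(2)

# Imaging features, and clinical features that partly duplicate them.
n, d = 100, 8
img = rng.normal(size=(n, d))
clin = 0.7 * img + 0.3 * rng.normal(size=(n, d))

# Least-squares projection of clin onto the column space of img,
# then keep only the residual (the non-redundant component).
coef, *_ = np.linalg.lstsq(img, clin, rcond=None)
clin_orth = clin - img @ coef

# The residual is numerically orthogonal to every imaging feature.
cross = img.T @ clin_orth
print(np.abs(cross).max())
```

Fusing `img` with `clin_orth` instead of `clin` means each modality contributes non-overlapping information, which is the intuition behind disentangled multimodal representations.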

Theo Di Piazza, Carole Lazarus, Olivier Nempont, Loic Boussel

arXiv preprint · Oct 12, 2025
With the growing volume of CT examinations, there is an increasing demand for automated tools such as organ segmentation, abnormality detection, and report generation to support radiologists in managing their clinical workload. Multi-label classification of 3D chest CT scans remains a critical yet challenging problem due to the complex spatial relationships inherent in volumetric data and the wide variability of abnormalities. Existing methods based on 3D convolutional neural networks struggle to capture long-range dependencies, while Vision Transformers often require extensive pre-training on large-scale, domain-specific datasets to perform competitively. In this work, we propose a 2.5D alternative by introducing a new graph-based framework that represents 3D CT volumes as structured graphs, where axial slice triplets serve as nodes processed through spectral graph convolution, enabling the model to reason over inter-slice dependencies while maintaining complexity compatible with clinical deployment. Our method, trained and evaluated on three datasets from independent institutions, achieves strong cross-dataset generalization and shows competitive performance compared to state-of-the-art visual encoders. We further conduct comprehensive ablation studies to evaluate the impact of various aggregation strategies, edge-weighting schemes, and graph connectivity patterns. Additionally, we demonstrate the broader applicability of our approach through transfer experiments on automated radiology report generation and abdominal CT data. This work extends our previous contribution presented at the MICCAI 2025 EMERGE Workshop.
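The graph construction can be sketched directly: slice triplets become nodes of a path graph, and one symmetric-normalized propagation step (GCN-style, Â = D^(-1/2)(A + I)D^(-1/2)) mixes information between neighboring triplets. The sketch below uses random embeddings in place of a real 2D slice encoder, and a plain path-graph connectivity as an assumed simplification of the paper's connectivity patterns:

```python
import numpy as np

rng = np.random.default_rng(3)

# A CT volume as a path graph of axial slice triplets; each node holds a
# feature vector (stand-in for a 2D-encoder embedding of 3 adjacent slices).
num_nodes, d = 10, 32
X = rng.normal(size=(num_nodes, d))

# Adjacency: consecutive triplets are connected; add self-loops (GCN style).
A = np.zeros((num_nodes, num_nodes))
for i in range(num_nodes - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
A_hat = A + np.eye(num_nodes)

# Symmetric normalization D^-1/2 (A+I) D^-1/2, then one propagation step.
deg = A_hat.sum(axis=1)
D_inv_sqrt = np.diag(deg ** -0.5)
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt

W = 0.1 * rng.normal(size=(d, 16))     # untrained stand-in weight matrix
H = np.maximum(A_norm @ X @ W, 0.0)    # ReLU(Â X W)
print(H.shape)
```

Stacking such propagation steps lets information travel across many slices while each layer only touches a sparse neighborhood — the complexity argument the abstract makes against full 3D convolution.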

Wang H, Kong JF, Wen L, Wang XJ, Zhang WT, Wang ZQ, Zeng L, Huang YT, Yang SH, Li M, Chen TW, Liu J, Wang GX

PubMed · Oct 12, 2025
To develop and test machine learning (ML) models using computed tomography angiography to accurately identify the intracranial aneurysm (IA) responsible for subarachnoid hemorrhage (SAH) in patients with multiple saccular IAs, and to determine whether these models outperform traditional predictive markers. Two hundred seven SAH patients with 460 IAs from four hospitals were included from May 2018 to December 2023 and randomly divided into training (80%) and internal validation (20%) sets. Additionally, an external validation set comprising 65 patients with 147 IAs from four other hospitals was used. The predictive models were developed using ML methods that integrated the morphological features of IAs (e.g., size and shape) to identify the responsible IA. These models were then compared with traditional predictive markers that rely on hemorrhage patterns and the maximum IA size. The areas under the curves (AUCs) for the hemorrhage patterns and the maximum IA size were 0.496-0.505, 0.502-0.523, and 0.488-0.498 in the training, internal validation, and external validation sets, respectively. Among the 13 ML models, the best-performing were the Gaussian process, logistic regression, and quadratic discriminant analysis models, with AUCs of 0.912 [95% confidence interval (CI): 0.881-0.943], 0.894 (95% CI: 0.861-0.928), and 0.890 (95% CI: 0.756-0.924), respectively, in the training set; 0.869 (95% CI: 0.798-0.941), 0.872 (95% CI: 0.802-0.942), and 0.853 (95% CI: 0.778-0.929) in the internal validation set; and 0.898 (95% CI: 0.848-0.947), 0.892 (95% CI: 0.840-0.943), and 0.897 (95% CI: 0.847-0.947) in the external validation set. DeLong tests revealed no significant differences among these models, but all of them outperformed the traditional predictive markers (P < 0.001). ML models that integrate multiple morphological features can accurately predict the IA responsible for SAH in patients with multiple IAs, outperforming traditional predictive markers and thereby facilitating prompt and effective treatment.
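The headline metric here, AUC, reduces to the Mann-Whitney statistic: the probability that a randomly chosen responsible IA receives a higher model score than a randomly chosen non-responsible one. A self-contained sketch with a synthetic morphological risk score (not the paper's data):

```python
import numpy as np

def auc(y_true, scores):
    # Mann-Whitney formulation of the area under the ROC curve
    y_true, scores = np.asarray(y_true), np.asarray(scores)
    pos, neg = scores[y_true == 1], scores[y_true == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

rng = np.random.default_rng(4)
# Synthetic scores: responsible IAs tend to score higher (larger, more irregular)
y = np.r_[np.ones(40), np.zeros(60)]
score = np.r_[rng.normal(1.0, 1, 40), rng.normal(0.0, 1, 60)]
print(round(auc(y, score), 3))
```

An AUC near 0.5 — as the abstract reports for hemorrhage patterns and maximum IA size — means the marker ranks responsible and non-responsible aneurysms no better than chance.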

Tang K, She R, Chen G, Xie Z, Li T, Chen D, Huang W, Feng Q, Zhao Y, Liu Y

PubMed · Oct 12, 2025
To develop and validate a multimodal deep learning (DL) model that integrates preoperative contrast-enhanced computed tomography (CECT) and postoperative whole-slide images (WSIs) to predict microsatellite instability (MSI) status in colorectal cancer (CRC). This retrospective, multicenter study enrolled 305 CRC patients with paired CECT and WSIs. Patients from Centers I and II were allocated to the training (n = 169) and internal validation (n = 85) sets, while those from Center III formed the external test set (n = 51). Pathology-based DL (PathDL) and venous-phase CECT (VPDL) models were constructed using EfficientNet-b0 and ResNet 101 architectures, respectively. A fusion model (F-VP-PathDL, fusion of venous-phase CT and pathology with deep learning) was developed using an adaptive residual network to integrate features from both modalities. Model performance was evaluated using area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, specificity, and F1 score. The F-VP-PathDL model achieved strong performance on the internal validation set, with an AUC of 0.883 (95% CI: 0.732-0.967). On the external test set, the model achieved an AUC of 0.905 (95% CI: 0.831-0.945), outperforming single-modality and alternative fusion models (PathDL: 0.794; VPDL: 0.858; APDL: 0.802; F-AVPDL: 0.813). The model also demonstrated robust accuracy (84.2%, 95% CI: 69.1%-92.8%), sensitivity (80.3%, 95% CI: 28.4%-98.7%), specificity (83.7%, 95% CI: 68.8%-93.9%), and F1 score (0.837, 95% CI: 0.326-0.999) on the external test set. The F-VP-PathDL model demonstrates robust generalizability across centers and offers a clinically scalable tool for MSI prediction in CRC, supporting patient stratification and informing immunotherapy decisions.
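One plausible reading of the adaptive residual fusion is a gated residual branch: CT features form the backbone stream, and pathology features are added through a learned projection scaled by a gate conditioned on both modalities. The sketch below uses random, untrained weights purely to show the wiring; the actual F-VP-PathDL adapter architecture is not specified in this abstract:

```python
import numpy as np

rng = np.random.default_rng(5)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Stand-in feature vectors for one patient (one per modality).
d = 64
ct_feat = rng.normal(size=d)      # venous-phase CECT embedding
path_feat = rng.normal(size=d)    # whole-slide image embedding

W_proj = 0.05 * rng.normal(size=(d, d))   # projection for the residual branch
w_gate = 0.05 * rng.normal(size=2 * d)    # gate conditioned on both modalities

gate = sigmoid(np.concatenate([ct_feat, path_feat]) @ w_gate)
fused = ct_feat + gate * (W_proj @ path_feat)  # gated residual fusion
print(fused.shape, round(float(gate), 3))
```

The residual form guarantees the fused representation degrades gracefully toward the CT-only stream when the gate closes, one common rationale for residual multimodal adapters.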

Shen E, Zhou Q, Li C, Wang H, Yuan J, Ge Y, Chen Y, Zhao K, Zhang W, Zhao D, Jin Z

PubMed · Oct 11, 2025
Three-dimensional (3D) ultrasound imaging offers a larger field of view and enables volumetric measurements. Among the versatile methods, free-hand 3D ultrasound imaging utilizing deep learning networks for spatial coordinate prediction exhibits advantages in terms of simplified device configuration and user-friendliness. However, this imaging method is restricted to predicting the relative spatial transformation between two consecutive 2D ultrasound images, resulting in substantial cumulative errors. When imaging large organs, cumulative errors can severely distort the 3D images. In this study, we proposed a labeling strategy based on the ultrasound image coordinate system, enhancing the network prediction accuracy. Meanwhile, pre-planning the scanning trajectory and using it to guide the network prediction significantly reduced cumulative error. Spinal 3D ultrasound imaging was performed on both healthy volunteers and scoliosis patients. Comparison of reconstruction results across different methods demonstrated that the proposed method improved the prediction accuracy by approximately 40% and reduced the cumulative error by nearly 80%. This method shows promise for application in various deep learning networks and different tissues and is expected to facilitate the broader clinical adoption of 3D ultrasound imaging.
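Why chaining relative transforms drifts, and how a pre-planned trajectory can rein it in, is easy to demonstrate in one dimension. The sketch below assumes 1 mm inter-frame probe motion, Gaussian per-step prediction error, and a simple blend with the planned path as a hypothetical stand-in for the paper's trajectory-guidance scheme:

```python
import numpy as np

rng = np.random.default_rng(6)

# Probe moves 1 mm per frame along one axis; the network predicts each
# relative step with small error, and errors accumulate when chained.
steps = 200
true_rel = np.full(steps, 1.0)                    # ground-truth motion (mm)
pred_rel = true_rel + rng.normal(0, 0.05, steps)  # per-step prediction noise

true_pos = np.cumsum(true_rel)
chained = np.cumsum(pred_rel)                     # naive chaining of transforms
drift = np.abs(chained - true_pos)                # cumulative error

# Trajectory guidance (simplified): pull the running estimate halfway back
# toward the pre-planned straight-line trajectory at every frame.
planned = true_pos                                # assume the planned path is known
guided = 0.5 * chained + 0.5 * planned
guided_err = np.abs(guided - true_pos)

print(round(drift[-1], 2), round(guided_err[-1], 2))
```

With naive chaining the error standard deviation grows like the square root of the number of frames; any anchor to an external trajectory caps that growth, which is the mechanism behind the reported ~80% cumulative-error reduction.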

Rim D, Pham W, Fatouleh R, Hennessy A, Schlaich M, Henderson LA, Macefield VG

PubMed · Oct 11, 2025
Hypertension is characterised by both enlarged perivascular spaces (ePVS) and chronically elevated resting sympathetic outflow. ePVS are associated with heart rate variability, suggesting links to autonomic outflow; however, heart rate variability offers limited information on sympathetic nerve activity. Here, we assessed whether ePVS are associated with muscle sympathetic nerve activity (MSNA) in 25 hypertensive patients and 50 healthy normotensive adults. T1-weighted MRI anatomical brain images were analysed for ePVS using a deep learning-based segmentation algorithm (nnU-Net). Spontaneous bursts of MSNA were recorded from the right common peroneal nerve via a tungsten microelectrode immediately before the MRI scan in a supine position. A backward regression analysis was conducted to test the relationship between ePVS and MSNA. Significant associations were found between MSNA and ePVS in the white matter (β = 1.02, p = 0.007), basal ganglia (β = 0.43, p = 0.001), and hippocampus (β = 0.24, p = 0.010) in healthy normotensive adults. Similar associations were observed in individuals with hypertension. Notably, the association between MSNA and the midbrain ePVS cluster was only observed in the hypertensive group (β = 0.41, p = 0.005). ePVS were associated with MSNA in both normotensive adults and hypertensive patients. These findings warrant further research into the causal relationship between MSNA and ePVS and highlight the potential for ePVS as a neuroimaging biomarker for sympathetic nerve activity.
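Backward regression can be sketched as iterative elimination: drop the predictor whose removal least inflates the residual sum of squares, and stop when every removal hurts the fit beyond a tolerance. The SSE-ratio criterion below is a simplification of the usual p-value-based rule, and the data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(7)

def sse(X, y):
    # Residual sum of squares of an OLS fit with intercept
    Xb = np.c_[X, np.ones(len(X))]
    beta, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    r = y - Xb @ beta
    return float(r @ r)

def backward_select(X, y, tol=1.05):
    """Drop predictors while removal inflates SSE by less than ratio tol."""
    keep = list(range(X.shape[1]))
    while len(keep) > 1:
        base = sse(X[:, keep], y)
        ratios = [(sse(X[:, [k for k in keep if k != j]], y) / base, j)
                  for j in keep]
        best_ratio, worst = min(ratios)
        if best_ratio < tol:
            keep.remove(worst)   # this predictor adds little; drop it
        else:
            break
    return keep

# Synthetic outcome depends on predictors 0 and 1; predictors 2-4 are noise.
n = 300
X = rng.normal(size=(n, 5))
y = 1.0 * X[:, 0] + 0.8 * X[:, 1] + rng.normal(0, 0.5, n)
print(sorted(backward_select(X, y)))
```

The procedure retains only the predictors (here, the ePVS-region analogues) whose removal genuinely degrades the fit, which is how region-specific β estimates like those above survive elimination.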

Raith S, Pankert T, Jaganathan S, Pankert K, Lee H, Peters F, Hölzle F, Modabber A

PubMed · Oct 11, 2025
Mandibular reconstruction following continuity resection due to tumor ablation or osteonecrosis remains a significant challenge in maxillofacial surgery. Virtual surgical planning (VSP) relies on accurate segmentation of the mandible, yet existing AI models typically include teeth, making them unsuitable for planning the dimensions of autologous transplants aimed at reconstructing edentulous mandibles optimized for dental implant insertion. This study investigates the feasibility of using deep learning-based segmentation to generate anatomically valid, toothless mandibles from dentate CT scans, ensuring geometric accuracy for reconstructive planning. A two-stage convolutional neural network (CNN) approach was employed to segment mandibles from computed tomography (CT) data. The dataset (n = 246) included dentate, partially dentate, and edentulous mandibles. Ground truth segmentations were manually modified to create Class III (moderate alveolar atrophy) and Class V (severe atrophy) models, representing different degrees of post-extraction bone resorption. The AI models were trained on the original (O), Class III (Cl. III), and Class V (Cl. V) datasets, and performance was evaluated using Dice similarity coefficients (DSC), average surface distance, and automatically detected anatomical curvatures. AI-generated segmentations demonstrated high anatomical accuracy across all models, with mean DSCs exceeding 0.94. Accuracy was highest in edentulous mandibles (DSC 0.96 ± 0.014) and slightly lower in fully dentate cases, particularly for Class V modifications (DSC 0.936 ± 0.030). The caudolateral curve remained consistent, confirming that baseline mandibular geometry was preserved despite alveolar ridge modifications. This study confirms that AI-driven segmentation can generate anatomically valid edentulous mandibles from dentate CT scans with high accuracy. The innovation of the work is the precise adaptation of alveolar ridge geometry, making it a valuable tool for patient-specific virtual surgical planning in mandibular reconstruction.
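The Dice similarity coefficient used to score these segmentations is simply twice the overlap divided by the total size of the two masks. A toy example with 2D binary masks standing in for 3D mandible segmentations:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom else 1.0

# Toy reference mask and a prediction that misses a small strip of it.
ref = np.zeros((10, 10), dtype=int)
ref[2:8, 2:8] = 1            # 36 voxels
pred = np.zeros((10, 10), dtype=int)
pred[2:8, 2:7] = 1           # 30 voxels, all inside ref
print(round(dice(ref, pred), 3))
```

A DSC above 0.94, as reported here, means the predicted and reference masks disagree on only a few percent of their combined volume.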

Shinkawa H, Ueda D, Kurimoto S, Kaibori M, Ueno M, Yasuda S, Ikoma H, Aihara T, Nakai T, Kinoshita M, Kosaka H, Hayami S, Matsuo Y, Morimura R, Nakajima T, Nobori C, Ishizawa T

PubMed · Oct 11, 2025
No reports have described deep-learning (DL) models using computed tomography (CT) as an imaging biomarker for predicting postoperative long-term outcomes in patients with hepatocellular carcinoma (HCC). This study aimed to validate DL models for individualized prognostication after HCC resection using CT as an imaging biomarker. This study included 1733 patients undergoing hepatic resection for solitary HCC. Participants were classified into training, validation, and test datasets. DL predictive models were developed using clinical variables and CT imaging to predict recurrence within 2 and 5 years and overall survival (OS) of > 5 and > 10 years postoperatively. The Youden index was utilized to identify cutoff values. Permutation importance was used to calculate the importance of each explanatory variable. DL predictive models for recurrence within 2 and 5 years and OS of > 5 and > 10 years postoperatively were evaluated in the test datasets, with areas under the curve of 0.70, 0.70, 0.80, and 0.80, respectively. Permutation importance demonstrated that CT imaging analysis had the highest importance value. The postoperative recurrence rates within 2 and 5 years were 52.6% versus 18.5% (p < 0.001) and 78.9% versus 46.7% (p < 0.001), and the overall mortality rates within 5 and 10 years postoperatively were 45.1% versus 9.2% (p < 0.001) and 87.1% versus 43.2% (p < 0.001), in the high-risk versus low-risk groups, respectively. Our DL models using CT as an imaging biomarker are useful for individualized prognostication and may help optimize treatment planning for patients with HCC.
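The Youden index used here picks the score cutoff maximizing J = sensitivity + specificity − 1. A minimal sketch with toy recurrence-risk scores (synthetic, not the study's data):

```python
import numpy as np

def youden_cutoff(y_true, scores):
    """Return the cutoff maximizing Youden's J = sensitivity + specificity - 1."""
    y_true, scores = np.asarray(y_true), np.asarray(scores)
    best_j, best_c = -1.0, None
    for c in np.unique(scores):
        pred = scores >= c
        sens = (pred & (y_true == 1)).sum() / (y_true == 1).sum()
        spec = (~pred & (y_true == 0)).sum() / (y_true == 0).sum()
        j = sens + spec - 1.0
        if j > best_j:
            best_j, best_c = j, c
    return best_c, best_j

# Toy risk scores: recurrence cases (y=1) tend to score higher.
y = np.array([0, 0, 0, 0, 1, 1, 1, 0, 1])
s = np.array([0.1, 0.2, 0.3, 0.4, 0.6, 0.7, 0.8, 0.55, 0.9])
cut, j = youden_cutoff(y, s)
print(cut, round(j, 2))
```

Patients above such a cutoff form the "high-risk" group in analyses like the recurrence and mortality comparisons reported above.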