Page 81 of 3993982 results

Deep Learning-Based Prediction of Microvascular Invasion and Survival Outcomes in Hepatocellular Carcinoma Using Dual-phase CT Imaging of Tumors and Lesser Omental Adipose: A Multicenter Study.

Miao S, Sun M, Li X, Wang M, Jiang Y, Liu Z, Wang Q, Ding X, Wang R

pubmed logopapers · Jul 23 2025
Accurate preoperative prediction of microvascular invasion (MVI) in hepatocellular carcinoma (HCC) remains challenging. Current imaging biomarkers show limited predictive performance. To develop a deep learning model based on preoperative multiphase CT images of tumors and lesser omental adipose tissue (LOAT) for predicting MVI status and to analyze associated survival outcomes. This retrospective study included pathologically confirmed HCC patients from two medical centers between 2016 and 2023. A dual-branch feature fusion model based on ResNet18 was constructed, which extracted fused features from dual-phase CT images of both tumors and LOAT. The model's performance was evaluated on both internal and external test sets. Logistic regression was used to identify independent predictors of MVI. Based on MVI status, patients in the training, internal test, and external test cohorts were stratified into high- and low-risk groups, and overall survival differences were analyzed. The model incorporating LOAT features outperformed the tumor-only modality, achieving an AUC of 0.889 (95% CI: [0.882, 0.962], P=0.004) in the internal test set and 0.826 (95% CI: [0.793, 0.872], P=0.006) in the external test set. Both results surpassed the independent diagnoses of three radiologists (average AUC=0.772). Multivariate logistic regression confirmed that maximum tumor diameter and LOAT area were independent predictors of MVI. Further Cox regression analysis showed that MVI-positive patients had significantly increased mortality risks in both the internal test set (Hazard Ratio [HR]=2.246, 95% CI: [1.088, 4.637], P=0.029) and external test set (HR=3.797, 95% CI: [1.262, 11.422], P=0.018). This study is the first to use a deep learning framework integrating LOAT and tumor imaging features, improving preoperative MVI risk stratification accuracy. 
The independent prognostic value of LOAT was validated in multicenter cohorts, highlighting its potential to guide personalized surgical planning.
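The AUC comparisons above (0.889 and 0.826 vs. the radiologists' average of 0.772) rest on the rank interpretation of AUC. As a minimal sketch with made-up scores (not data from the study), AUC can be computed directly as the fraction of correctly ordered positive/negative pairs, the Mann-Whitney identity:

```python
# Minimal sketch: ROC AUC from predicted scores via the rank-sum
# (Mann-Whitney U) identity. Scores and labels below are illustrative.

def roc_auc(scores, labels):
    """AUC = P(score_pos > score_neg); ties count 0.5."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.4, 0.3, 0.2]
labels = [1,   1,   0,   1,   0,    0,   1,   0]
print(roc_auc(scores, labels))  # 0.75
```

An AUC of 0.889, as reported for the internal test set, would mean 88.9% of MVI-positive/MVI-negative patient pairs are ranked correctly by the model.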

To Compare the Application Value of Different Deep Learning Models Based on CT in Predicting Visceral Pleural Invasion of Non-small Cell Lung Cancer: A Retrospective, Multicenter Study.

Zhu X, Yang Y, Yan C, Xie Z, Shi H, Ji H, He L, Yang T, Wang J

pubmed logopapers · Jul 23 2025
Visceral pleural invasion (VPI) indicates poor prognosis in non-small cell lung cancer (NSCLC) and upgrades the T classification from T1 to T2. This study aimed to develop and validate deep learning models for accurate prediction of VPI in patients with NSCLC and to compare the performance of two-dimensional (2D), three-dimensional (3D), and hybrid 3D models. This retrospective study included consecutive patients with pathologically confirmed lung tumors between June 2017 and September 2022. The clinical data and preoperative imaging features of these patients were investigated, and their relationships with VPI were statistically compared. Elastic fiber staining results served as the gold standard for the diagnosis of VPI. The data of non-VPI and VPI patients were randomly divided into training and validation cohorts at ratios of 8:2 and 6:4, respectively. The EfficientNet-B0_2D model and the Double-head Res2Net/_F6/_F24 models were constructed, optimized, and verified using two convolutional neural network architectures, EfficientNet-B0 and Res2Net, by extracting features from the original CT images and combining specific clinical-CT features. The receiver operating characteristic curve, the area under the curve (AUC), and the confusion matrix were used to assess the diagnostic efficiency of the models, and the DeLong test was used to compare performance between models. A total of 1931 patients with NSCLC were evaluated. By univariate analysis, 20 clinical-CT features were identified as risk predictors of VPI. Among the EfficientNet-B0_2D, Double-head Res2Net, Res2Net_F6, and Res2Net_F24 combined models, the Double-head Res2Net_F6 model achieved the highest AUC of 0.941, followed by Double-head Res2Net (AUC=0.879), Double-head Res2Net_F24 (AUC=0.876), and EfficientNet-B0_2D (AUC=0.785). The three 3D-based models showed comparable predictive performance in the validation cohort and all outperformed the 2D model (EfficientNet-B0_2D, all P<0.05). Predicting VPI in NSCLC with deep learning models is feasible, and the Double-head Res2Net_F6 model, fused with six clinical-CT features, showed the greatest diagnostic efficacy.
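The 8:2 (and 6:4) splits described above are performed separately on the non-VPI and VPI groups, which amounts to a stratified split that preserves the class ratio in both cohorts. A minimal sketch with hypothetical patient IDs and cohort sizes (not the study's data):

```python
# Illustrative stratified train/validation split: each class is shuffled
# and split independently so the VPI ratio is preserved in both cohorts.
import random

def stratified_split(ids_by_class, train_frac=0.8, seed=42):
    rng = random.Random(seed)
    train, val = [], []
    for label, ids in ids_by_class.items():
        ids = list(ids)
        rng.shuffle(ids)
        cut = int(round(train_frac * len(ids)))
        train += [(i, label) for i in ids[:cut]]
        val += [(i, label) for i in ids[cut:]]
    return train, val

# Hypothetical cohort: 100 VPI-positive, 300 VPI-negative patients.
cohort = {"VPI": range(100), "non-VPI": range(100, 400)}
train, val = stratified_split(cohort)
print(len(train), len(val))  # 320 80
```

Both cohorts keep the 1:3 VPI ratio, so validation metrics are not skewed by an accidental class imbalance in the split.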

CAP-Net: Carotid Artery Plaque Segmentation System Based on Computed Tomography Angiography.

Luo X, Hu B, Zhou S, Wu Q, Geng C, Zhao L, Li Y, Di R, Pu J, Geng D, Yang L

pubmed logopapers · Jul 23 2025
Diagnosis of carotid plaques from head and neck CT angiography (CTA) scans is typically time-consuming and labor-intensive, so prior studies in this area are limited and their results unsatisfactory. The objective of this study is to develop a deep-learning-based model for detection and segmentation of carotid plaques using CTA images. CTA images from 1061 patients (765 male; 296 female) with 4048 carotid plaques were included and split into a 75% training-validation set and a 25% independent test set. We built a workflow involving three modified deep learning networks: a plain U-Net for coarse artery segmentation, an Attention U-Net for fine artery segmentation, and a dual-channel-input ConvNeXt-based U-Net for plaque segmentation, followed by post-processing to refine predictions and eliminate false positives. The models were trained on the training-validation set using five-fold cross-validation and further evaluated on the independent test set using comprehensive metrics for segmentation and plaque detection. The proposed workflow was evaluated in the independent test set (261 patients with 902 carotid plaques) and achieved a mean dice similarity coefficient (DSC) of 0.91±0.04 in artery segmentation, and 0.75±0.14/0.67±0.15 in plaque segmentation per artery/patient. The model detected 95.5% (861/902) of plaques, including 96.6% (423/438), 95.3% (307/322), and 92.3% (131/142) of calcified, mixed, and soft plaques, with less than one (0.63±0.93) false positive plaque per patient on average. This study developed CAP-Net, a deep-learning-based system for automatic detection and segmentation of carotid plaques on CTA, which yielded promising results in identifying and delineating plaques.
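The Dice similarity coefficient (DSC) reported above (0.91 for arteries, 0.75/0.67 for plaques) measures overlap between predicted and ground-truth masks: 2|A∩B| / (|A| + |B|). A minimal sketch over toy flattened binary masks, not real CTA volumes:

```python
# Minimal Dice similarity coefficient over flattened binary masks
# (1 = foreground voxel). Toy masks only, for illustration.

def dice(pred, truth):
    inter = sum(p and t for p, t in zip(pred, truth))
    size = sum(pred) + sum(truth)
    return 2.0 * inter / size if size else 1.0  # both empty -> perfect

pred  = [1, 1, 1, 0, 0, 0]
truth = [1, 1, 0, 1, 0, 0]
print(dice(pred, truth))  # 2 overlapping voxels out of 3+3 -> 0.666...
```

A DSC of 0.91, as reported for artery segmentation, therefore means the predicted and reference masks share most of their voxels; the lower plaque DSCs reflect how much harder small, irregular plaques are to delineate.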

Benchmarking of Deep Learning Methods for Generic MRI Multi-Organ Abdominal Segmentation

Deepa Krishnaswamy, Cosmin Ciausu, Steve Pieper, Ron Kikinis, Benjamin Billot, Andrey Fedorov

arxiv logopreprint · Jul 23 2025
Recent advances in deep learning have led to robust automated tools for segmentation of abdominal computed tomography (CT). Meanwhile, segmentation of magnetic resonance imaging (MRI) is substantially more challenging due to the inherent signal variability and the increased effort required for annotating training datasets. Hence, existing approaches are trained on limited sets of MRI sequences, which might limit their generalizability. To characterize the landscape of MRI abdominal segmentation tools, we present here a comprehensive benchmarking of the three state-of-the-art and open-source models: MRSegmentator, MRISegmentator-Abdomen, and TotalSegmentator MRI. Since these models are trained using labor-intensive manual annotation cycles, we also introduce and evaluate ABDSynth, a SynthSeg-based model purely trained on widely available CT segmentations (no real images). More generally, we assess accuracy and generalizability by leveraging three public datasets (not seen by any of the evaluated methods during their training), which span all major manufacturers, five MRI sequences, as well as a variety of subject conditions, voxel resolutions, and fields-of-view. Our results reveal that MRSegmentator achieves the best performance and is most generalizable. In contrast, ABDSynth yields slightly less accurate results, but its relaxed requirements in training data make it an alternative when the annotation budget is limited. The evaluation code and datasets are given for future benchmarking at https://github.com/deepakri201/AbdoBench, along with inference code and weights for ABDSynth.

VGS-ATD: Robust Distributed Learning for Multi-Label Medical Image Classification Under Heterogeneous and Imbalanced Conditions

Zehui Zhao, Laith Alzubaidi, Haider A. Alwzwazy, Jinglan Zhang, Yuantong Gu

arxiv logopreprint · Jul 23 2025
In recent years, advanced deep learning architectures have shown strong performance in medical imaging tasks. However, the traditional centralized learning paradigm poses serious privacy risks, as all data are collected and trained on a single server. To mitigate this, decentralized approaches such as federated learning and swarm learning have emerged, allowing model training on local nodes while sharing only model weights. While these methods enhance privacy, they struggle with heterogeneous and imbalanced data and suffer from inefficiencies due to frequent communication and weight aggregation. More critically, the dynamic and complex nature of clinical environments demands scalable AI systems capable of continuously learning from diverse modalities and multi-label data. Yet both centralized and decentralized models are prone to catastrophic forgetting during system expansion, often requiring full retraining to incorporate new data. To address these limitations, we propose VGS-ATD, a novel distributed learning framework. In validation experiments spanning 30 datasets and 80 independent labels across distributed nodes, VGS-ATD achieved an overall accuracy of 92.7%, outperforming centralized learning (84.9%) and swarm learning (72.99%), while federated learning failed under these conditions due to its high computational resource requirements. VGS-ATD also demonstrated strong scalability, with only a 1% drop in accuracy on existing nodes after expansion, compared with a 20% drop for centralized learning, highlighting its resilience to catastrophic forgetting. Additionally, it reduced computational costs by up to 50% relative to both centralized and swarm learning, confirming its superior efficiency and scalability.

Developing deep learning-based cerebral ventricle auto-segmentation system and clinical application for the evaluation of ventriculomegaly.

Nam SM, Hwang JH, Kim JM, Lee DI, Kim YH, Park SJ, Park CK, Dho YS, Kim MS

pubmed logopapers · Jul 23 2025
Current methods for evaluating ventriculomegaly, particularly Evans' Index (EI), fail to accurately assess three-dimensional ventricular changes. We developed and validated an automated multi-class segmentation system for precise volumetric assessment, simultaneously segmenting five anatomical classes (ventricles, parenchyma, skull, skin, and hemorrhage) to support future augmented reality (AR)-guided external ventricular drainage (EVD) systems. Using the nnUNet architecture, we trained our model on 288 brain CT scans with diverse pathological conditions and validated it using internal (n=10), external (n=43), and public (n=192) datasets. Clinical validation involved 227 patients who underwent CSF drainage procedures. We compared automated volumetric measurements against traditional EI measurements and actual CSF drainage volumes in surgical cases. The model achieved exceptional performance with a mean Dice similarity coefficient of 93.0% across all five classes, demonstrating consistent performance across institutional and public datasets, with particularly robust ventricle segmentation (92.5%). Clinical validation revealed EI was the strongest single predictor of ventricular volume (adjusted R<sup>2</sup> = 0.430, p < 0.001), though influenced by age, sex, and diagnosis type. Most significantly, in EVD cases, automated volume differences showed remarkable correlation with actual CSF drainage amounts (β = 0.956, adjusted R<sup>2</sup> = 0.936, p < 0.001), validating the system's accuracy in measuring real CSF volume changes. Our comprehensive multi-class segmentation system offers a superior alternative to traditional measurements with potential for non-invasive CSF dynamics monitoring and AR-guided EVD placement.
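The volumetric assessment described above reduces, per anatomical class, to counting labeled voxels and multiplying by the voxel volume from the scan spacing. A minimal sketch with illustrative label values and spacing (not the study's data):

```python
# Minimal sketch: volume of one segmentation class in mL from a flattened
# label map and voxel spacing in mm. Values are illustrative.

def label_volume_ml(labels, target, spacing_mm):
    """Volume of one class in mL (1 mL = 1000 mm^3)."""
    voxel_mm3 = spacing_mm[0] * spacing_mm[1] * spacing_mm[2]
    n = sum(1 for v in labels if v == target)
    return n * voxel_mm3 / 1000.0

# 40,000 ventricle-labeled voxels at 0.5 x 0.5 x 1.0 mm spacing -> 10 mL
seg = [1] * 40_000 + [0] * 5_000
print(label_volume_ml(seg, 1, (0.5, 0.5, 1.0)))  # 10.0
```

Comparing such a volume before and after CSF drainage yields the automated volume difference that the study correlates with the actual drainage amount.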

BrainCNN: Automated Brain Tumor Grading from Magnetic Resonance Images Using a Convolutional Neural Network-Based Customized Model.

Yang J, Siddique MA, Ullah H, Gilanie G, Por LY, Alshathri S, El-Shafai W, Aldossary H, Gadekallu TR

pubmed logopapers · Jul 23 2025
Brain tumors pose a significant risk to human life, making accurate grading essential for effective treatment planning and improved survival rates. Magnetic Resonance Imaging (MRI) plays a crucial role in this process. The objective of this study was to develop an automated brain tumor grading system utilizing deep learning techniques. A dataset comprising 293 MRI scans from patients was obtained from the Department of Radiology at Bahawal Victoria Hospital in Bahawalpur, Pakistan. The proposed approach integrates a specialized Convolutional Neural Network (CNN) with pre-trained models to classify brain tumors into low-grade (LGT) and high-grade (HGT) categories with high accuracy. To assess the model's robustness, experiments were conducted using various inputs: (1) raw MRI slices, (2) MRI segments containing only the tumor area, (3) feature-extracted slices derived from the original images through the proposed CNN architecture, and (4) feature-extracted slices from tumor-area-only segmented images using the proposed CNN. The MRI slices and the features extracted from them were classified using machine learning models, including Support Vector Machine (SVM) and CNN architectures based on transfer learning, such as MobileNet, Inception V3, and ResNet-50. Additionally, a custom model was specifically developed for this research. The proposed model achieved an impressive peak accuracy of 99.45%, with classification accuracies of 99.56% for low-grade tumors and 99.49% for high-grade tumors, surpassing traditional methods. These results not only enhance the accuracy of brain tumor grading but also improve computational efficiency by reducing processing time and the number of iterations required.

Kissing Spine and Other Imaging Predictors of Postoperative Cement Displacement Following Percutaneous Kyphoplasty: A Machine Learning Approach.

Zhao Y, Bo L, Qian L, Chen X, Wang Y, Cui L, Xin Y, Liu L

pubmed logopapers · Jul 23 2025
To investigate the risk factors associated with postoperative cement displacement following percutaneous kyphoplasty (PKP) in patients with osteoporotic vertebral compression fractures (OVCF) and to develop predictive models for clinical risk assessment. This retrospective study included 198 patients with OVCF who underwent PKP. Imaging and clinical variables were collected. Multiple machine learning models, including logistic regression, L1- and L2-regularized logistic regression, support vector machine (SVM), decision tree, gradient boosting, and random forest, were developed to predict cement displacement. L1- and L2-regularized logistic regression models identified four key risk factors: kissing spine (L1: 1.11; L2: 0.91), incomplete anterior cortex (L1: -1.60; L2: -1.62), low vertebral body CT value (L1: -2.38; L2: -1.71), and large Cobb change (L1: 0.89; L2: 0.87). The support vector machine (SVM) model achieved the best performance (accuracy: 0.983, precision: 0.875, recall: 1.000, F1-score: 0.933, specificity: 0.981, AUC: 0.997). Other models, including logistic regression, decision tree, gradient boosting, and random forest, also showed high performance but were slightly inferior to SVM. Key predictors of cement displacement were identified, and machine learning models were developed for risk assessment. These findings can assist clinicians in identifying high-risk patients, optimizing treatment strategies, and improving patient outcomes.
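The L1- and L2-regularized coefficients reported above are on the log-odds scale; exponentiating a coefficient gives an odds ratio, the usual clinical reading of such models. A small sketch using the L1 coefficients quoted in the abstract (the direction of each effect depends on how the predictor was coded in the original study, so this is illustrative only):

```python
# Convert logistic regression coefficients (log-odds scale) to odds ratios.
# Coefficient values are the L1 estimates quoted in the abstract; predictor
# coding is not given there, so interpretation of sign is illustrative.
import math

l1_coeffs = {
    "kissing spine": 1.11,
    "incomplete anterior cortex": -1.60,
    "low vertebral body CT value": -2.38,
    "large Cobb change": 0.89,
}
odds_ratios = {k: math.exp(v) for k, v in l1_coeffs.items()}
for name, orr in odds_ratios.items():
    print(f"{name}: OR = {orr:.2f}")
```

A coefficient of 1.11 corresponds to an odds ratio of about 3, i.e. roughly tripled odds of cement displacement per unit change in that predictor, under the model's coding.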

Interpretable Deep Learning Approaches for Reliable GI Image Classification: A Study with the HyperKvasir Dataset

Wahid, S. B., Rothy, Z. T., News, R. K., Rieyan, S. A.

medrxiv logopreprint · Jul 23 2025
Deep learning has emerged as a promising tool for automating gastrointestinal (GI) disease diagnosis. However, multi-class GI disease classification remains underexplored. This study addresses this gap by presenting a framework that uses advanced models like InceptionNetV3 and ResNet50, combined with boosting algorithms (XGB, LGBM), to classify lower GI abnormalities. InceptionNetV3 with XGB achieved the best recall of 0.81 and an F1 score of 0.90. To assist clinicians in understanding model decisions, the Grad-CAM technique, a form of explainable AI, was employed to highlight the critical regions influencing predictions, fostering trust in these systems. This approach significantly improves both the accuracy and reliability of GI disease diagnosis.