Concurrent AI assistance with LI-RADS classification for contrast enhanced MRI of focal hepatic nodules: a multi-reader, multi-case study.

Qin X, Huang L, Wei Y, Li H, Wu Y, Zhong J, Jian M, Zhang J, Zheng Z, Xu Y, Yan C

PubMed | Sep 16, 2025
The Liver Imaging Reporting and Data System (LI-RADS) assessment is subject to inter-reader variability. The present study aimed to evaluate the impact of an artificial intelligence (AI) system on the accuracy and inter-reader agreement of LI-RADS classification based on contrast-enhanced magnetic resonance imaging among radiologists with varying experience levels. This single-center, multi-reader, multi-case retrospective study included 120 patients with 200 focal liver lesions who underwent abdominal contrast-enhanced magnetic resonance imaging examinations between June 2023 and May 2024. Five radiologists with different experience levels independently assessed LI-RADS classification and imaging features with and without AI assistance. The reference standard was established by consensus between two expert radiologists. Accuracy was used to measure the performance of the AI system and the radiologists. The kappa statistic or intraclass correlation coefficient was used to estimate inter-reader agreement. The LI-RADS categories were distributed as follows: 33.5% LR-3 (67/200), 29.0% LR-4 (58/200), 33.5% LR-5 (67/200), and 4.0% LR-M (8/200). The AI system significantly improved the overall accuracy of LI-RADS classification from 69.9% to 80.1% (p < 0.001), with the most notable improvement among junior radiologists, from 65.7% to 79.7% (p < 0.001). Inter-reader agreement for LI-RADS classification was significantly higher with AI assistance than without (weighted Cohen's kappa, 0.812 vs. 0.655; p < 0.001). The AI system also enhanced the accuracy and inter-reader agreement for imaging features, including non-rim arterial phase hyperenhancement, non-peripheral washout, and restricted diffusion. Additionally, inter-reader agreement for lesion size measurements improved, with the intraclass correlation coefficient increasing from 0.857 to 0.951 (p < 0.001). The AI system significantly increases the accuracy and inter-reader agreement of LI-RADS 3/4/5/M classification, particularly benefiting junior radiologists.
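
A minimal sketch of the two agreement statistics this abstract reports (accuracy and weighted Cohen's kappa), using scikit-learn; the LI-RADS ratings below are hypothetical placeholders, not study data:

```python
# Reader-agreement metrics of the kind reported above, on toy ratings.
import numpy as np
from sklearn.metrics import cohen_kappa_score, accuracy_score

# Hypothetical ordinal LI-RADS categories for 10 lesions
# (3=LR-3, 4=LR-4, 5=LR-5, 6=LR-M)
reference   = np.array([3, 4, 5, 5, 3, 4, 5, 6, 3, 4])
reader_noai = np.array([3, 5, 5, 4, 3, 4, 4, 6, 4, 4])
reader_ai   = np.array([3, 4, 5, 5, 3, 4, 5, 6, 3, 5])

for name, ratings in [("without AI", reader_noai), ("with AI", reader_ai)]:
    acc = accuracy_score(reference, ratings)
    # Linearly weighted kappa penalizes large category disagreements more
    kappa = cohen_kappa_score(reference, ratings, weights="linear")
    print(f"{name}: accuracy={acc:.3f}, weighted kappa={kappa:.3f}")
```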

3DViT-GAT: A Unified Atlas-Based 3D Vision Transformer and Graph Learning Framework for Major Depressive Disorder Detection Using Structural MRI Data

Nojod M. Alotaibi, Areej M. Alhothali, Manar S. Ali

arXiv preprint | Sep 15, 2025
Major depressive disorder (MDD) is a prevalent mental health condition that negatively impacts both individual well-being and global public health. Automated detection of MDD using structural magnetic resonance imaging (sMRI) and deep learning (DL) methods holds increasing promise for improving diagnostic accuracy and enabling early intervention. Most existing methods employ either voxel-level features or handcrafted regional representations built from predefined brain atlases, limiting their ability to capture complex brain patterns. This paper develops a unified pipeline that utilizes Vision Transformers (ViTs) to extract 3D region embeddings from sMRI data and a Graph Neural Network (GNN) for classification. We explore two strategies for defining regions: (1) an atlas-based approach using predefined structural and functional brain atlases, and (2) a cube-based method in which ViTs are trained directly on uniformly extracted 3D patches. Cosine similarity graphs are then generated to model interregional relationships and guide GNN-based classification. Extensive experiments were conducted on the REST-meta-MDD dataset to demonstrate the effectiveness of our model. With stratified 10-fold cross-validation, the best model obtained 78.98% accuracy, 76.54% sensitivity, 81.58% specificity, 81.58% precision, and a 78.98% F1-score. Atlas-based models consistently outperformed the cube-based approach, highlighting the importance of domain-specific anatomical priors for MDD detection.
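
A minimal sketch of the graph-construction step the abstract describes: region embeddings are linked by cosine similarity, and a simple graph layer propagates features over the resulting graph. The atlas size, embedding dimension, similarity threshold, and the plain graph-convolution layer (standing in for the paper's GNN) are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

n_regions, dim = 116, 256           # hypothetical atlas size and embedding dim
emb = torch.randn(n_regions, dim)   # stand-in for ViT region embeddings

# Cosine-similarity adjacency, thresholded to keep only strong edges
sim = F.cosine_similarity(emb.unsqueeze(1), emb.unsqueeze(0), dim=-1)
adj = (sim > 0.5).float()
adj = adj / adj.sum(dim=1, keepdim=True).clamp(min=1)  # row-normalize

# One graph-convolution step followed by a binary readout (MDD vs. control)
w = torch.nn.Linear(dim, 64)
readout = torch.nn.Linear(64, 2)
h = F.relu(w(adj @ emb))            # propagate features over the graph
logits = readout(h.mean(dim=0))     # mean-pool regions into one graph label
print(logits.shape)                 # torch.Size([2])
```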

Deep learning based multi-shot breast diffusion MRI: Improving imaging quality and reduced distortion.

Chien N, Cho YH, Wang MY, Tsai LW, Yeh CY, Li CW, Lan P, Wang X, Liu KL, Chang YC

PubMed | Sep 15, 2025
To investigate the imaging performance of deep-learning reconstruction for multiplexed sensitivity encoding (MUSE DL) compared to single-shot diffusion-weighted imaging (SS-DWI) in the breast. In this prospective, institutional review board-approved study, both single-shot (SS-DWI) and multi-shot MUSE DWI were performed on patients. MUSE DWI was processed using deep-learning reconstruction (MUSE DL). Quantitative analysis included calculating apparent diffusion coefficients (ADCs) and signal-to-noise ratio (SNR) within fibroglandular tissue (FGT), adjacent pectoralis muscle, and breast tumors. The Hausdorff distance (HD) was used as a distortion index to compare breast contours between T2-weighted anatomical images, SS-DWI, and MUSE images. Subjective visual qualitative analysis was performed using a Likert scale. Quantitative analyses were assessed using Friedman's rank-based analysis with Bonferroni correction. Sixty-one female participants (mean age 49.07 years ± 11.0 [standard deviation]; age range 23-75 years) with 65 breast lesions were included in this study. All data were acquired using a 3 T MRI scanner. MUSE DL yielded significant improvement in image quality compared with non-DL MUSE in both 2-shot and 4-shot settings (FGT SNR enhancement: 2-shot DL 207.8% [125.5-309.3], 4-shot DL 175.1% [102.2-223.5]). No significant difference was observed in the ADC between MUSE, MUSE DL, and SS-DWI in either benign (P = 0.154) or malignant tumors (P = 0.167). There was significantly less distortion in the 2- and 4-shot MUSE DL images (HD 3.11 mm, 2.58 mm) than in the SS-DWI images (4.15 mm, P < 0.001). MUSE DL enhances SNR, minimizes image distortion, and preserves lesion diagnostic accuracy and ADC values.
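
A minimal sketch of the two quantitative measures used in this study, ROI-based SNR and the Hausdorff distance between contours, using NumPy and SciPy; the contours and ROI below are synthetic placeholders:

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def roi_snr(roi: np.ndarray) -> float:
    """Simple SNR estimate: mean signal over standard deviation in the ROI."""
    return float(roi.mean() / roi.std())

def hausdorff(contour_a: np.ndarray, contour_b: np.ndarray) -> float:
    """Symmetric Hausdorff distance between two (N, 2) point sets."""
    return max(directed_hausdorff(contour_a, contour_b)[0],
               directed_hausdorff(contour_b, contour_a)[0])

rng = np.random.default_rng(0)
t2_contour  = rng.uniform(0, 100, size=(200, 2))       # reference T2w contour
dwi_contour = t2_contour + rng.normal(0, 2, (200, 2))  # distorted DWI contour
print(f"SNR: {roi_snr(rng.normal(100, 10, (64, 64))):.1f}")
print(f"Hausdorff distance: {hausdorff(t2_contour, dwi_contour):.2f} mm")
```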

Multi-scale based Network and Adaptive EfficientnetB7 with ASPP: Analysis of Novel Brain Tumor Segmentation and Classification.

Kulkarni SV, Poornapushpakala S

PubMed | Sep 15, 2025
Medical imaging has undergone significant advancements with the integration of deep learning techniques, leading to enhanced accuracy in image analysis. These methods autonomously extract relevant features from medical images, thereby improving the detection and classification of various diseases. Among imaging modalities, Magnetic Resonance Imaging (MRI) is particularly valuable due to its high contrast resolution, which enables the differentiation of soft tissues, making it indispensable in the diagnosis of brain disorders. Accurate classification of brain tumors is crucial for diagnosing many neurological conditions. However, conventional classification techniques are often limited by high computational complexity and suboptimal accuracy. Motivated by these issues, an innovative model for segmenting and classifying brain tumors is proposed in this work. The research aims to develop a robust and efficient deep learning framework that can assist clinicians in making precise and early diagnoses, ultimately leading to more effective treatment planning. The proposed methodology begins with the acquisition of MRI images from standardized medical imaging databases. The abnormal regions are then segmented using the Multiscale Bilateral Awareness Network (MBANet), which incorporates multi-scale operations to enhance feature representation and image quality. The segmented images are subsequently processed by a novel classification architecture, termed Region Vision Transformer-based Adaptive EfficientNetB7 with Atrous Spatial Pyramid Pooling (RVAEB7-ASPP). To optimize the performance of the classification model, hyperparameters are fine-tuned using the Modified Random Parameter-based Hippopotamus Optimization Algorithm (MRP-HOA). The model's effectiveness is verified through a comprehensive experimental evaluation using various performance metrics and comparison with current state-of-the-art methods. The proposed MRP-HOA-RVAEB7-ASPP model achieves a classification accuracy of 98.2%, significantly outperforming conventional approaches in brain tumor classification tasks. MBANet effectively performs brain tumor segmentation, while the RVAEB7-ASPP model provides reliable classification. The integration of advanced segmentation, adaptive feature extraction, and optimal parameter tuning enhances the reliability, accuracy, and robustness of the model. This framework provides a more effective and trustworthy solution for the early detection and clinical assessment of brain tumors, leading to improved patient outcomes through timely intervention.
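
A minimal sketch of an Atrous Spatial Pyramid Pooling (ASPP) block of the kind the RVAEB7-ASPP classifier attaches to its EfficientNetB7 backbone, in PyTorch; the dilation rates and projection width are common defaults assumed here, not the paper's configuration:

```python
import torch
import torch.nn as nn

class ASPP(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, rates=(1, 6, 12, 18)):
        super().__init__()
        # Parallel dilated convolutions sample context at multiple scales
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r) for r in rates
        ])
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = torch.cat([b(x) for b in self.branches], dim=1)
        return self.project(feats)

aspp = ASPP(in_ch=2560, out_ch=256)   # 2560 = EfficientNetB7 top feature dim
x = torch.randn(1, 2560, 8, 8)        # hypothetical backbone feature map
print(aspp(x).shape)                  # torch.Size([1, 256, 8, 8])
```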

Fully automatic bile duct segmentation in magnetic resonance cholangiopancreatography for biliary surgery planning using deep learning.

Tao H, Wang J, Guo K, Luo W, Zeng X, Lu M, Lin J, Li B, Qian Y, Yang J

PubMed | Sep 15, 2025
To automatically and accurately perform three-dimensional reconstruction of dilated and non-dilated bile ducts based on magnetic resonance cholangiopancreatography (MRCP) data, assisting in the formulation of optimal surgical plans and guiding precise bile duct surgery. A total of 249 consecutive patients who underwent standardized 3D-MRCP scans were randomly divided into a training cohort (n = 208) and a testing cohort (n = 41). Ground truth segmentation was manually delineated by two hepatobiliary surgeons or radiologists following industry certification procedures and reviewed by two expert-level physicians for biliary surgery planning. The deep learning semantic segmentation model was constructed using the nnU-Net framework. Model performance was assessed by comparing model predictions with ground truth segmentation as well as real surgical scenarios. The generalization of the model was tested on a dataset of 10 3D-MRCP scans from other centers, with ground truth segmentation of biliary structures. The evaluation on the 41 internal and 10 external test cases yielded mean Dice Similarity Coefficient (DSC) values of 0.9403 and 0.9070, respectively. The correlation coefficient between the 3D model based on automatic segmentation predictions and the ground truth results exceeded 0.95. The 95% limits of agreement (LoA) ranged from -4.456 to 4.781 for biliary tract length and from -3.404 to 3.650 ml for biliary tract volume. Furthermore, intraoperative indocyanine green (ICG) fluorescence imaging and operative findings confirmed that the model accurately reconstructs biliary landmarks. By leveraging a deep learning framework, an AI model can be trained to perform automatic and accurate 3D reconstructions of non-dilated bile ducts, thereby providing guidance for the preoperative planning of complex biliary surgeries.
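
A minimal sketch of the evaluation statistics reported above, the Dice Similarity Coefficient and Bland-Altman 95% limits of agreement, in NumPy; all inputs are synthetic placeholders:

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """DSC = 2|A∩B| / (|A|+|B|) for binary masks."""
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

def limits_of_agreement(a: np.ndarray, b: np.ndarray) -> tuple[float, float]:
    """Bland-Altman 95% LoA: mean difference ± 1.96 SD of the differences."""
    d = a - b
    return d.mean() - 1.96 * d.std(ddof=1), d.mean() + 1.96 * d.std(ddof=1)

rng = np.random.default_rng(0)
truth = rng.random((64, 64, 64)) > 0.7          # toy binary duct mask
pred = truth.copy()
pred[0] ^= rng.random((64, 64)) > 0.95          # perturb one slice
print(f"DSC: {dice(pred, truth):.4f}")

vol_pred = rng.normal(30, 5, 41)                # hypothetical volumes (ml)
vol_true = vol_pred + rng.normal(0, 1.8, 41)
print("95% LoA (ml): %.3f to %.3f" % limits_of_agreement(vol_pred, vol_true))
```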

Enriched text-guided variational multimodal knowledge distillation network (VMD) for automated diagnosis of plaque vulnerability in 3D carotid artery MRI

Bo Cao, Fan Yu, Mengmeng Feng, SenHao Zhang, Xin Meng, Yue Zhang, Zhen Qian, Jie Lu

arXiv preprint | Sep 15, 2025
Multimodal learning has attracted much attention in recent years due to its ability to effectively utilize data features from a variety of different modalities. Diagnosing the vulnerability of atherosclerotic plaques directly from carotid 3D MRI images is relatively challenging for both radiologists and conventional 3D vision networks. In clinical practice, radiologists assess patient conditions using a multimodal approach that incorporates various imaging modalities and domain-specific expertise, paving the way for the creation of multimodal diagnostic networks. In this paper, we develop an effective strategy to leverage radiologists' domain knowledge to automate the diagnosis of carotid plaque vulnerability through variational inference and multimodal knowledge distillation (VMD). This method excels at harnessing cross-modality prior knowledge from the limited image annotations and radiology reports available in the training data, thereby enhancing the diagnostic network's accuracy on unannotated 3D MRI images. We conducted in-depth experiments on an in-house dataset and verified the effectiveness of the proposed VMD strategy.
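
A minimal sketch of the knowledge-distillation idea at the core of this method: a multimodal teacher that sees reports and images guides an image-only student through a soft-label KL term. This is plain Hinton-style distillation under assumed temperature and weighting; the paper's variational formulation is not reproduced here:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Weighted sum of hard-label cross-entropy and soft-label KL divergence."""
    hard = F.cross_entropy(student_logits, labels)
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                    F.softmax(teacher_logits / T, dim=-1),
                    reduction="batchmean") * T * T
    return alpha * hard + (1 - alpha) * soft

student = torch.randn(4, 2, requires_grad=True)   # image-only plaque logits
teacher = torch.randn(4, 2)                       # multimodal teacher logits
labels = torch.tensor([0, 1, 1, 0])               # vulnerable vs. stable
print(distillation_loss(student, teacher, labels))
```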

Advancing Alzheimer's Disease Diagnosis Using VGG19 and XGBoost: A Neuroimaging-Based Method.

Boudi A, He J, Abd El Kader I, Liu X, Mouhafid M

PubMed | Sep 15, 2025
Alzheimer's disease (AD) is a progressive neurodegenerative disorder that currently affects over 55 million individuals worldwide. Conventional diagnostic approaches often rely on subjective clinical assessments and isolated biomarkers, limiting their accuracy and early-stage effectiveness. With the rising global burden of AD, there is an urgent need for objective, automated tools that enhance diagnostic precision using neuroimaging data. This study proposes a novel diagnostic framework combining a fine-tuned VGG19 deep convolutional neural network with an eXtreme Gradient Boosting (XGBoost) classifier. The model was trained and validated on the OASIS MRI dataset (Dataset 2), which was manually balanced to ensure equitable class representation across the four AD stages. The VGG19 model was pre-trained on ImageNet and fine-tuned by unfreezing its last ten layers. Data augmentation strategies, including random rotation and zoom, were applied to improve generalization. Extracted features were classified using XGBoost, incorporating class weighting, early stopping, and adaptive learning. Model performance was evaluated using accuracy, precision, recall, F1-score, and ROC-AUC. The proposed VGG19-XGBoost model achieved a test accuracy of 99.6%, with an average precision of 1.00, a recall of 0.99, and an F1-score of 0.99 on the balanced OASIS dataset. ROC curves indicated high separability across AD stages, confirming strong discriminatory power and robustness in classification. The integration of deep feature extraction with ensemble learning demonstrated substantial improvement over conventional single-model approaches. The hybrid model effectively mitigated issues of class imbalance and overfitting, offering stable performance across all dementia stages. These findings suggest the method's practical viability for clinical decision support in early AD diagnosis. This study presents a high-performing, automated diagnostic tool for Alzheimer's disease based on neuroimaging. The VGG19-XGBoost hybrid architecture demonstrates exceptional accuracy and robustness, underscoring its potential for real-world applications. Future work will focus on integrating multimodal data and validating the model on larger and more diverse populations to enhance clinical utility and generalizability.
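
A minimal sketch of the hybrid pipeline described above: a pretrained VGG19 trunk extracts features and XGBoost classifies them. Input shapes, pooling, and hyperparameters are illustrative assumptions, not the study's settings:

```python
import numpy as np
import torch
from torchvision.models import vgg19, VGG19_Weights
from xgboost import XGBClassifier

# Pretrained VGG19 convolutional trunk, global-average-pooled to one vector
backbone = vgg19(weights=VGG19_Weights.IMAGENET1K_V1).features.eval()

@torch.no_grad()
def extract(images: torch.Tensor) -> np.ndarray:
    feats = backbone(images)                 # (N, 512, H', W')
    return feats.mean(dim=(2, 3)).numpy()    # (N, 512)

# Hypothetical stand-ins for preprocessed MRI slices and 4 dementia stages
X = extract(torch.randn(32, 3, 224, 224))
y = np.random.randint(0, 4, size=32)

clf = XGBClassifier(n_estimators=100, learning_rate=0.1, eval_metric="mlogloss")
clf.fit(X, y)
print(clf.predict(X[:5]))
```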

DinoAtten3D: Slice-Level Attention Aggregation of DinoV2 for 3D Brain MRI Anomaly Classification

Fazle Rafsani, Jay Shah, Catherine D. Chong, Todd J. Schwedt, Teresa Wu

arXiv preprint | Sep 15, 2025
Anomaly detection and classification in medical imaging are critical for early diagnosis but remain challenging due to limited annotated data, class imbalance, and the high cost of expert labeling. Emerging vision foundation models such as DINOv2, pretrained on extensive, unlabeled datasets, offer generalized representations that can potentially alleviate these limitations. In this study, we propose an attention-based global aggregation framework tailored specifically for 3D medical image anomaly classification. Leveraging the self-supervised DINOv2 model as a pretrained feature extractor, our method processes individual 2D axial slices of brain MRIs, assigning adaptive slice-level importance weights through a soft attention mechanism. To further address data scarcity, we employ a composite loss function combining supervised contrastive learning with class-variance regularization, enhancing inter-class separability and intra-class consistency. We validate our framework on the ADNI dataset and an institutional multi-class headache cohort, demonstrating strong anomaly classification performance despite limited data availability and significant class imbalance. Our results highlight the efficacy of utilizing pretrained 2D foundation models combined with attention-based slice aggregation for robust volumetric anomaly detection in medical imaging. Our implementation is publicly available at https://github.com/Rafsani/DinoAtten3D.git.
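
A minimal sketch of the attention-based slice aggregation this abstract describes: per-slice 2D features (e.g., from DINOv2) are pooled into one volume-level vector via learned soft attention weights. The feature dimension and slice count are assumptions:

```python
import torch
import torch.nn as nn

class SliceAttentionPool(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Linear(dim, 1)   # one scalar importance per slice

    def forward(self, slice_feats: torch.Tensor) -> torch.Tensor:
        # slice_feats: (n_slices, dim) features for one 3D volume
        weights = torch.softmax(self.score(slice_feats), dim=0)  # (n_slices, 1)
        return (weights * slice_feats).sum(dim=0)                # (dim,)

dim = 768                                 # DINOv2 ViT-B/14 embedding size
slices = torch.randn(155, dim)            # hypothetical per-slice embeddings
pooled = SliceAttentionPool(dim)(slices)
print(pooled.shape)                       # torch.Size([768])
```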

Trade-Off Analysis of Classical Machine Learning and Deep Learning Models for Robust Brain Tumor Detection: Benchmark Study.

Tian Y

PubMed | Sep 15, 2025
Medical image analysis plays a critical role in brain tumor detection, but training deep learning models often requires large, labeled datasets, which can be time-consuming and costly to assemble. This study presents a comparative analysis of machine learning and deep learning models for brain tumor classification, focusing on whether deep learning models are necessary for small medical datasets and whether self-supervised learning can reduce annotation costs. The primary goal is to evaluate trade-offs between traditional machine learning and deep learning, including self-supervised models, on small medical image datasets. The secondary goal is to assess model robustness, transferability, and generalization through evaluation on unseen data both within and across domains. Four models were compared: (1) a support vector machine (SVM) with histogram of oriented gradients (HOG) features, (2) a convolutional neural network based on ResNet18, (3) a transformer-based model using a vision transformer (ViT-B/16), and (4) a self-supervised approach using Simple Contrastive Learning of Visual Representations (SimCLR). These models were selected to represent diverse paradigms: SVM+HOG represents traditional feature engineering with low computational cost, ResNet18 serves as a well-established convolutional neural network with strong baseline performance, ViT-B/16 leverages self-attention to capture long-range spatial features, and SimCLR enables learning from unlabeled data, potentially reducing annotation costs. The primary dataset consisted of 2870 brain magnetic resonance images across 4 classes: glioma, meningioma, pituitary, and nontumor. All models were trained under consistent settings, including data augmentation, early stopping, and 3 independent runs with different random seeds to account for performance variability. Performance metrics included accuracy, precision, recall, F1-score, and convergence. To assess robustness and generalization capability, evaluation was performed on unseen test data from both the primary dataset and a cross-domain dataset. No retraining or test-time augmentation was applied to the external data, thereby reflecting realistic deployment conditions. The models demonstrated consistently strong performance in both within-domain and cross-domain evaluations. The results revealed distinct trade-offs: ResNet18 achieved the highest validation accuracy (mean 99.77%, SD 0.00%) and the lowest validation loss, along with a weighted test accuracy of 99% within-domain and 95% cross-domain. SimCLR reached a mean validation accuracy of 97.29% (SD 0.86%) and achieved up to 97% weighted test accuracy within-domain and 91% cross-domain, despite requiring a 2-stage training process of contrastive pretraining followed by linear evaluation. ViT-B/16 reached a mean validation accuracy of 97.36% (SD 0.11%), with a weighted test accuracy of 98% within-domain and 93% cross-domain. SVM+HOG maintained a competitive validation accuracy of 96.51%, with 97% within-domain test accuracy, though its accuracy dropped to 80% cross-domain. The study reveals meaningful trade-offs between model complexity, annotation requirements, and deployment feasibility, which are critical factors for selecting models in real-world medical imaging applications.
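
A minimal sketch of the lowest-cost baseline in this comparison, HOG features feeding a support vector machine, using scikit-image and scikit-learn; image sizes and HOG parameters are illustrative defaults, not the study's configuration:

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
images = rng.random((40, 128, 128))      # stand-ins for grayscale MRI slices
labels = np.tile(np.arange(4), 10)       # 4 balanced classes, 10 images each

# Histogram-of-oriented-gradients descriptor per image
X = np.array([hog(img, orientations=9, pixels_per_cell=(16, 16),
                  cells_per_block=(2, 2)) for img in images])

svm = SVC(kernel="rbf", C=1.0)
scores = cross_val_score(svm, X, labels, cv=5)
print(f"CV accuracy: {scores.mean():.3f} ± {scores.std():.3f}")
```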

Prediction of Cardiovascular Events Using Fully Automated Global Longitudinal and Circumferential Strain in Patients Undergoing Stress CMR.

Afana AS, Garot J, Duhamel S, Hovasse T, Champagne S, Unterseeh T, Garot P, Akodad M, Chitiboi T, Sharma P, Jacob A, Gonçalves T, Florence J, Unger A, Sanguineti F, Militaru S, Pezel T, Toupin S

PubMed | Sep 15, 2025
Stress perfusion cardiovascular magnetic resonance (CMR) is widely used to detect myocardial ischemia, mostly through visual assessment. Recent studies suggest that strain imaging at rest and during stress can also help in prognostic stratification. However, the additional prognostic value of combining both rest and stress strain imaging has not been fully established. This study examined the incremental benefit of combining these strain measures with traditional risk prognosticators and CMR findings to predict major adverse clinical events (MACE) in a cohort of consecutive patients referred for stress CMR. This retrospective, single-center observational study included all consecutive patients with known or suspected coronary artery disease referred for stress CMR between 2016 and 2018. Fully automated machine learning was used to obtain global longitudinal strain at rest (rest-GLS) and global circumferential strain at stress (stress-GCS). The primary outcome was MACE, including cardiovascular death or hospitalization for heart failure. Cox models were used to assess the incremental prognostic value of combining these strain features with traditional prognosticators. Of 2778 patients (age 65±12 years, 68% male), 96% had feasible, fully automated rest-GLS and stress-GCS measurements. After a median follow-up of 5.2 (4.8-5.5) years, 316 (11.1%) patients experienced MACE. After adjustment for traditional prognosticators, both rest-GLS (hazard ratio, 1.09 [95% CI, 1.05-1.13]; P<0.001) and stress-GCS (hazard ratio, 1.08 [95% CI, 1.03-1.12]; P<0.001) were independently associated with MACE. The best cutoffs for MACE prediction were >-10% for rest-GLS and stress-GCS, with a C-index improvement of 0.02, a continuous net reclassification improvement of 15.6%, and an integrated discrimination index of 2.2% (all P<0.001). The combination of rest-GLS and stress-GCS with a cutoff of >-10% provided incremental prognostic value over traditional prognosticators, including CMR parameters, for predicting MACE in patients undergoing stress CMR.
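
A minimal sketch of the survival analysis described above: a Cox proportional-hazards model relating strain features to event-free time, here via the lifelines package (an assumption; the paper does not name its software). The data frame is synthetic:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "rest_gls": rng.normal(-15, 4, n),      # % (less negative = worse)
    "stress_gcs": rng.normal(-18, 4, n),    # %
    "age": rng.normal(65, 12, n),
    "time_years": rng.exponential(5, n),    # follow-up time
    "mace": rng.integers(0, 2, n),          # event indicator
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time_years", event_col="mace")
cph.print_summary()                          # hazard ratios and 95% CIs
print(f"C-index: {cph.concordance_index_:.3f}")
```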