Page 134 of 162 (1612 results)

UniBrain: Universal Brain MRI diagnosis with hierarchical knowledge-enhanced pre-training.

Lei J, Dai L, Jiang H, Wu C, Zhang X, Zhang Y, Yao J, Xie W, Zhang Y, Li Y, Zhang Y, Wang Y

pubmed logopapers · Jun 1 2025
Magnetic Resonance Imaging (MRI) has become a pivotal tool in diagnosing brain diseases, with a wide array of computer-aided artificial intelligence methods being proposed to enhance diagnostic accuracy. However, early studies were often limited by small-scale datasets and a narrow range of disease types, which posed challenges in model generalization. This study presents UniBrain, a hierarchical knowledge-enhanced pre-training framework designed for universal brain MRI diagnosis. UniBrain leverages a large-scale dataset comprising 24,770 imaging-report pairs from routine diagnostics for pre-training. Unlike previous approaches that either focused solely on visual representation learning or used brute-force alignment between vision and language, the framework introduces a hierarchical alignment mechanism. This mechanism extracts structured knowledge from free-text clinical reports at multiple granularities, enabling vision-language alignment at both the sequence and case levels, thereby significantly improving feature learning efficiency. A coupled vision-language perception module is further employed for text-guided multi-label classification, which facilitates zero-shot evaluation and fine-tuning of downstream tasks without modifying the model architecture. UniBrain is validated on both in-domain and out-of-domain datasets, consistently surpassing existing state-of-the-art diagnostic models and demonstrating performance on par with radiologists in specific disease categories. It shows strong generalization capabilities across diverse tasks, highlighting its potential for broad clinical application. The code is available at https://github.com/ljy19970415/UniBrain.
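The abstract does not spell out the hierarchical alignment objective, but frameworks of this kind typically build on a contrastive image-report loss. A minimal NumPy sketch of a symmetric InfoNCE objective over paired image and report embeddings (all names here are illustrative, not from the paper):

```python
import numpy as np

def info_nce(image_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss over matched image/report embeddings.

    Rows of image_emb and text_emb are assumed to be paired (the i-th image
    matches the i-th report). Embeddings are L2-normalized before the dot
    product, so logits are scaled cosine similarities.
    """
    img = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    txt = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature          # (N, N) similarity matrix
    labels = np.arange(len(logits))             # matched pair sits on the diagonal

    def xent(l):
        l = l - l.max(axis=1, keepdims=True)    # numerical stability
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    # cross-entropy in both directions (image -> report and report -> image)
    return 0.5 * (xent(logits) + xent(logits.T))
```

With perfectly matched embeddings the loss approaches zero; shuffled pairs drive it up, which is what pushes matched image-report pairs together during pre-training.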

MCNEL: A multi-scale convolutional network and ensemble learning for Alzheimer's disease diagnosis.

Yan F, Peng L, Dong F, Hirota K

pubmed logopapers · Jun 1 2025
Alzheimer's disease (AD) significantly threatens community well-being and healthcare resource allocation due to its high incidence and mortality. Early detection and intervention are therefore crucial for reducing AD-related fatalities. However, existing deep learning-based approaches often struggle to capture the complex structural features of magnetic resonance imaging (MRI) data effectively, and common multi-scale feature fusion techniques, such as direct summation and concatenation, often introduce redundant noise that can degrade model performance. These challenges highlight the need for more advanced methods of feature extraction and fusion to improve diagnostic accuracy. This study proposes a multi-scale convolutional network and ensemble learning (MCNEL) framework for early and accurate AD diagnosis. The framework adopts enhanced versions of the EfficientNet-B0 and MobileNetV2 models, which are integrated with the DenseNet121 model to create a hybrid feature extraction tool capable of extracting features from multi-view slices. Additionally, a SimAM-based feature fusion method is developed to synthesize key feature information derived from multi-scale images. To ensure classification accuracy in distinguishing AD from multiple stages of cognitive impairment, the study designs an ensemble classifier that combines multiple base classifiers with a self-adaptive weight adjustment strategy. Extensive experiments on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset validate the effectiveness of this solution, which achieves average accuracies of 96.67% on ADNI-1 and 96.20% on ADNI-2. The results indicate that MCNEL outperforms recent comparable algorithms across various evaluation metrics, demonstrating superior performance and robustness in AD diagnosis. This study markedly enhances diagnostic capabilities for AD, allowing patients to receive timely treatments that can slow disease progression and improve quality of life.
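The abstract does not give the exact self-adaptive weighting rule, but a common baseline that such ensembles generalize is accuracy-weighted soft voting. A minimal sketch under that assumption (function and variable names are illustrative):

```python
import numpy as np

def weighted_soft_vote(prob_list, val_accuracies):
    """Combine per-classifier probability matrices with accuracy-derived weights.

    prob_list: list of (n_samples, n_classes) probability arrays, one per
    base classifier. val_accuracies: each classifier's validation accuracy;
    a more accurate classifier gets proportionally more weight (a simple
    stand-in for the paper's self-adaptive strategy, which the abstract
    does not specify).
    """
    w = np.asarray(val_accuracies, dtype=float)
    w = w / w.sum()                               # normalize weights to sum to 1
    fused = sum(wi * p for wi, p in zip(w, prob_list))
    return fused.argmax(axis=1)                   # final class per sample
```

With weights (0.9, 0.1), a confident vote from the weaker classifier cannot overturn the stronger one, which is the point of tying weights to validation performance.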

DCE-MRI based deep learning analysis of intratumoral subregion for predicting Ki-67 expression level in breast cancer.

Ding Z, Zhang C, Xia C, Yao Q, Wei Y, Zhang X, Zhao N, Wang X, Shi S

pubmed logopapers · Jun 1 2025
To evaluate whether deep learning (DL) analysis of intratumoral subregions based on dynamic contrast-enhanced MRI (DCE-MRI) can help predict Ki-67 expression level in breast cancer. A total of 290 breast cancer patients from two hospitals were retrospectively collected. A k-means clustering algorithm was used to delineate tumor subregions. DL features of the whole tumor and of each subregion were extracted from DCE-MRI images with a pre-trained 3D ResNet18 model. A logistic regression model was constructed after dimension reduction. Model performance was assessed using the area under the curve (AUC), and clinical value was demonstrated through decision curve analysis (DCA). The k-means clustering method divided each tumor into two subregions (habitat 1 and habitat 2) based on voxel values. Both the habitat 1 model (validation set: AUC = 0.771, 95% CI: 0.642-0.900; external test set: AUC = 0.794, 95% CI: 0.696-0.891) and the habitat 2 model (AUC = 0.734, 95% CI: 0.605-0.862 and AUC = 0.756, 95% CI: 0.646-0.866) showed better predictive capability for Ki-67 expression level than the whole-tumor model (AUC = 0.686, 95% CI: 0.550-0.823 and AUC = 0.680, 95% CI: 0.555-0.804). The combined model based on the two subregions further enhanced predictive capability (AUC = 0.808, 95% CI: 0.696-0.921 and AUC = 0.842, 95% CI: 0.758-0.926) and demonstrated higher clinical value than the other models in DCA. The deep learning model derived from tumor subregions outperformed the whole-tumor model for predicting Ki-67 expression level in breast cancer patients, and integrating the two subregions further improved predictive performance.
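In its simplest form, the habitat step reduces to running k-means with k = 2 on the voxel values inside the tumor mask. A minimal sketch of Lloyd's k-means on 1-D intensities (the paper's exact initialization and feature set are not given in the abstract):

```python
import numpy as np

def kmeans_habitats(voxels, k=2, iters=50, seed=0):
    """Partition tumor voxel intensities into k habitats with Lloyd's k-means.

    voxels: 1-D array of voxel values inside the tumor mask. Returns a
    habitat label per voxel and the cluster centers. A sketch of the
    habitat-clustering idea, not the paper's exact pipeline.
    """
    rng = np.random.default_rng(seed)
    v = np.asarray(voxels, dtype=float)
    centers = rng.choice(v, size=k, replace=False)       # random initial centers
    for _ in range(iters):
        # assign each voxel to its nearest center
        labels = np.abs(v[:, None] - centers[None, :]).argmin(axis=1)
        # recompute centers; keep old center if a cluster goes empty
        new = np.array([v[labels == j].mean() if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers
```

On clearly bimodal intensities the two habitats recover the two modes regardless of which voxels seed the centers.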

Exploring the significance of the frontal lobe for diagnosis of schizophrenia using explainable artificial intelligence and group level analysis.

Varaprasad SA, Goel T

pubmed logopapers · Jun 1 2025
Schizophrenia (SZ) is a complex mental disorder characterized by a profound disruption in cognition and emotion, often resulting in a distorted perception of reality. Magnetic resonance imaging (MRI) is an essential tool for diagnosing SZ, as it helps reveal the organization of the brain. Functional MRI (fMRI) is a specialized imaging technique that measures and maps brain activity by detecting changes in blood flow and oxygenation. This paper applies an explainable deep learning approach, corroborated by group-level analysis of both structural MRI (sMRI) and fMRI data, to identify brain regions significant to SZ. The study found that Grad-CAM heat maps clearly highlight the frontal lobe when classifying SZ versus controls (CN), with 97.33% accuracy. Group-difference analysis of the sMRI data reveals intense voxel activity in the right superior frontal gyrus of the frontal lobe in SZ patients, and the group difference between SZ and CN during n-back tasks in the fMRI data indicates significant voxel activation in the frontal cortex. These findings suggest that the frontal lobe plays a crucial role in the diagnosis of SZ, aiding clinicians in planning treatment.
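Grad-CAM, used here to localize the frontal lobe, weights each convolutional feature map by its spatially averaged gradient and keeps the positive part of the weighted sum. A minimal NumPy sketch of that computation (generic Grad-CAM, not the paper's exact pipeline):

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM heatmap from one conv layer's activations and gradients.

    activations, gradients: (C, H, W) arrays for a single input (the chosen
    layer's feature maps and d(class score)/d(activations)). Channel weights
    are the spatially averaged gradients; the map is the ReLU of the
    weighted sum, rescaled to [0, 1] for display.
    """
    weights = gradients.mean(axis=(1, 2))               # global-average-pool grads
    cam = np.maximum((weights[:, None, None] * activations).sum(axis=0), 0.0)
    if cam.max() > 0:
        cam = cam / cam.max()                           # normalize for overlay
    return cam
```

In practice the activations and gradients come from forward/backward hooks on the network, and the resulting map is upsampled onto the input MRI slice.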

Semantic segmentation for individual thigh skeletal muscles of athletes on magnetic resonance images.

Kasahara J, Ozaki H, Matsubayashi T, Takahashi H, Nakayama R

pubmed logopapers · Jun 1 2025
The skeletal muscles that athletes should train vary depending on their discipline and position. Therefore, assessing the cross-sectional area of individual skeletal muscles is important when developing training strategies. To measure skeletal muscle cross-sectional area, each muscle is manually segmented on magnetic resonance (MR) images, a task that is time-consuming, requires significant effort, and can suffer from interobserver variability. The purpose of this study was to develop an automated computerized method for semantic segmentation of individual thigh skeletal muscles from MR images of athletes. The database consisted of 697 thigh images from 697 elite athletes, randomly divided into training (70%), validation (10%), and test (20%) datasets. A label image was generated for each image by manually annotating 15 object classes: 12 different skeletal muscles, fat, bones, and vessels and nerves. Using the validation dataset, DeepLab v3+ was chosen from three semantic segmentation models as the base model for segmenting individual thigh skeletal muscles, and its feature extractor was optimized to ResNet50. The mean Jaccard and Dice indices for the proposed method were 0.853 and 0.916, respectively, significantly higher than those of conventional DeepLab v3+ (Jaccard index: 0.810, p < .001; Dice index: 0.887, p < .001). The proposed method achieved a mean area error of 3.12% across the 15 object classes, making it useful for assessing skeletal muscle cross-sectional area from MR images.
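The Jaccard and Dice indices reported above are overlap ratios between the predicted and manual masks. A minimal sketch for one binary class (per-class results would then be averaged over the 15 object classes):

```python
import numpy as np

def dice_jaccard(pred, truth):
    """Dice and Jaccard indices for one binary class mask.

    pred, truth: boolean arrays of the same shape (predicted vs manual
    label). Dice = 2|A∩B| / (|A| + |B|); Jaccard = |A∩B| / |A∪B|.
    """
    pred = np.asarray(pred, bool)
    truth = np.asarray(truth, bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    dice = 2 * inter / (pred.sum() + truth.sum())
    jaccard = inter / union
    return dice, jaccard
```

The two indices are monotonically related (Dice = 2J / (1 + J)), which is why papers often report both from the same masks.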

Standardized pancreatic MRI-T1 measurement methods: comparison between manual measurement and a semi-automated pipeline with automatic quality control.

Triay Bagur A, Arya Z, Waddell T, Pansini M, Fernandes C, Counter D, Jackson E, Thomaides-Brears HB, Robson MD, Bulte DP, Banerjee R, Aljabar P, Brady M

pubmed logopapers · Jun 1 2025
Scanner-referenced T1 (srT1) is a method for measuring pancreas T1 relaxation time. The purpose of this multi-centre study is twofold: (1) to evaluate the repeatability of manual ROI-based analysis of srT1, and (2) to validate a semi-automated measurement method with an automatic quality control (QC) module that identifies likely discrepancies between automated and manual measurements. Pancreatic MRI scans from a scan-rescan cohort (46 subjects) were used to evaluate the repeatability of manual analysis. Seven hundred and eight scans from a longitudinal multi-centre study of 466 subjects were divided into training, internal validation (IV), and external validation (EV) cohorts. A semi-automated method for measuring srT1 using machine learning is proposed and compared against manual analysis on the validation cohorts, with and without automated QC. Inter-operator agreement between the manual ROI-based method and the semi-automated method had low bias (3.8 ms, or 0.5%) and limits of agreement of [-36.6, 44.1] ms. There was good agreement between the two methods without automated QC (IV: 3.2 [-47.1, 53.5] ms; EV: -0.5 [-35.2, 34.2] ms). After QC, agreement improved on the IV set, was unchanged on the EV set, and in both cases fell within the inter-operator bounds (IV: -0.04 [-33.4, 33.3] ms; EV: -1.9 [-37.6, 33.7] ms). The semi-automated method also improved scan-rescan agreement versus manual analysis (manual: 8.2 [-49.7, 66] ms; automated: 6.7 [-46.7, 60.1] ms). Semi-automated characterization of standardized pancreatic T1 from MRI therefore has the potential to decrease analysis time while maintaining accuracy and improving scan-rescan agreement. We provide intra-operator, inter-operator, and scan-rescan agreement values for manual measurement of srT1, a standardized biomarker of pancreatic fibro-inflammation. The semi-automated method agrees well with manual measurements, with consistency comparable to inter-operator performance, while reducing human effort, and adding automated QC further improves agreement between manual and automated measurements.
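The bracketed figures such as 3.8 [-36.6, 44.1] ms follow the Bland-Altman convention: the mean difference (bias) with 95% limits of agreement at bias ± 1.96 SD of the differences. A minimal sketch of that computation:

```python
import numpy as np

def bland_altman(a, b):
    """Bias and 95% limits of agreement between two measurement methods.

    a, b: paired measurements (e.g., manual vs semi-automated srT1 in ms).
    Returns (bias, lower, upper), with limits at bias +/- 1.96 * SD of the
    paired differences.
    """
    d = np.asarray(a, float) - np.asarray(b, float)
    bias = d.mean()
    sd = d.std(ddof=1)                       # sample SD of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

Narrower limits after QC mean the automated and manual measurements disagree less, which is exactly what the QC module is meant to achieve.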

Optimized attention-enhanced U-Net for autism detection and region localization in MRI.

K VRP, Bindu CH, Rama Devi K

pubmed logopapers · Jun 1 2025
Autism spectrum disorder (ASD) is a neurodevelopmental condition that affects a child's cognitive and social skills, often diagnosed only after symptoms appear around age 2. Leveraging MRI for early ASD detection can improve intervention outcomes. This study proposes a framework for autism detection and region localization using an optimized deep learning approach with attention mechanisms. The pipeline includes MRI image collection, pre-processing (bias field correction, histogram equalization, artifact removal, and non-local mean filtering), and autism classification with a Symmetric Structured MobileNet with Attention Mechanism (SSM-AM). Enhanced by Refreshing Awareness-aided Election-Based Optimization (RA-EBO), SSM-AM achieves robust classification. Abnormality region localization utilizes a Multiscale Dilated Attention-based Adaptive U-Net (MDA-AUnet) further optimized by RA-EBO. Experimental results demonstrate that our proposed model outperforms existing methods, achieving an accuracy of 97.29%, sensitivity of 97.27%, specificity of 97.36%, and precision of 98.98%, significantly improving classification and localization performance. These results highlight the potential of our approach for early ASD diagnosis and targeted interventions. The datasets utilized for this work are publicly available at https://fcon_1000.projects.nitrc.org/indi/abide/.
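The four reported figures derive directly from the test-set confusion matrix. A minimal sketch of how accuracy, sensitivity, specificity, and precision are computed from the counts:

```python
def classification_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity, specificity, and precision from a confusion matrix.

    tp/fp/tn/fn: true positives, false positives, true negatives, and false
    negatives for the positive (ASD) class on the test set.
    """
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)      # recall on the positive (ASD) class
    specificity = tn / (tn + fp)      # recall on the negative class
    precision = tp / (tp + fp)
    return accuracy, sensitivity, specificity, precision
```

Reporting all four together, as the paper does, guards against the case where high accuracy hides a poor balance between the two classes.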

The impact of training image quality with a novel protocol on artificial intelligence-based LGE-MRI image segmentation for potential atrial fibrillation management.

Berezhnoy AK, Kalinin AS, Parshin DA, Selivanov AS, Demin AG, Zubov AG, Shaidullina RS, Aitova AA, Slotvitsky MM, Kalemberg AA, Kirillova VS, Syrovnev VA, Agladze KI, Tsvelaya VA

pubmed logopapers · Jun 1 2025
Atrial fibrillation (AF) is the most common cardiac arrhythmia, affecting up to 2% of the population. Catheter ablation is a promising treatment for AF, particularly for paroxysmal AF patients, but it often has high recurrence rates. Developing in silico models of patients' atria during the ablation procedure using cardiac MRI data may help reduce these rates. This study aims to develop an effective automated deep learning-based segmentation pipeline by compiling a specialized dataset and employing standardized labeling protocols to improve segmentation accuracy and efficiency. In doing so, we aim to achieve the highest possible accuracy and generalization ability while minimizing the burden on clinicians involved in manual data segmentation. We collected LGE-MRI data from VMRC and the cDEMRIS database. Two specialists manually labeled the data using standardized protocols to reduce subjective errors. Neural network (nnU-Net and smpU-Net++) performance was evaluated using statistical tests, including sensitivity and specificity analysis. A new database of LGE-MRI images, based on manual segmentation, was created (VMRC). Our approach with consistent labeling protocols achieved a Dice coefficient of 92.4% ± 0.8% for the cavity and 64.5% ± 1.9% for LA walls. Using the pre-trained RIFE model, we attained a Dice score of approximately 89.1% ± 1.6% for atrial LGE-MRI imputation, outperforming classical methods. Sensitivity and specificity values demonstrated substantial enhancement in the performance of neural networks trained with the new protocol. Standardized labeling and RIFE applications significantly improved machine learning tool efficiency for constructing 3D LA models. This novel approach supports integrating state-of-the-art machine learning methods into broader in silico pipelines for predicting ablation outcomes in AF patients.

Brain tumor segmentation with deep learning: Current approaches and future perspectives.

Verma A, Yadav AK

pubmed logopapers · Jun 1 2025
Accurate brain tumor segmentation from MRI images is critical in medicine, as it directly impacts the efficacy of diagnostic and treatment plans. Accurate segmentation of the tumor region can be challenging, especially when noise and abnormalities are present. This research provides a systematic review of automatic brain tumor segmentation techniques, with a specific focus on the design of network architectures. The review categorizes existing methods into unsupervised and supervised learning techniques, and, within supervised techniques, into machine learning and deep learning approaches. Deep learning techniques are thoroughly reviewed, with particular focus on CNN-based, U-Net-based, transfer learning-based, transformer-based, and hybrid transformer-based methods. The survey encompasses a broad spectrum of automatic segmentation methodologies, from traditional machine learning approaches to advanced deep learning frameworks, and provides an in-depth comparison of performance metrics, model efficiency, and robustness across multiple datasets, particularly the BraTS dataset. The study further examines multi-modal MRI imaging and its influence on segmentation accuracy, addressing domain adaptation, class imbalance, and generalization challenges. The analysis highlights current challenges in computer-aided diagnostic (CAD) systems, examining how different models and imaging sequences affect performance. Recent advancements in deep learning, especially the widespread use of U-Net architectures, have significantly enhanced medical image segmentation. This review critically evaluates these developments, focusing on the iterative improvements in U-Net models that have driven progress in brain tumor segmentation, and explores techniques for improving U-Net performance in medical applications, with attention to its potential for improving diagnostic and treatment planning procedures. The efficiency of these automated segmentation approaches is rigorously evaluated on the BraTS benchmark dataset, part of the annual MICCAI Multimodal Brain Tumor Segmentation (BraTS) Challenge. This evaluation provides insight into the current state of the art and identifies key areas for future research and development.

Evaluating the prognostic significance of artificial intelligence-delineated gross tumor volume and prostate volume measurements for prostate radiotherapy.

Adleman J, McLaughlin PY, Tsui JMG, Buzurovic I, Harris T, Hudson J, Urribarri J, Cail DW, Nguyen PL, Orio PF, Lee LK, King MT

pubmed logopapers · Jun 1 2025
Artificial intelligence (AI) may extract prognostic information from MRI for localized prostate cancer. We evaluate whether AI-derived prostate and gross tumor volume (GTV) are associated with toxicity and oncologic outcomes after radiotherapy. We conducted a retrospective study of patients who underwent radiotherapy between 2010 and 2017. We trained an AI segmentation algorithm to contour the prostate and GTV in patients treated with external-beam RT, and applied the algorithm to those treated with brachytherapy. AI prostate and GTV volumes were calculated from the segmentation results. We evaluated whether AI GTV volume was associated with biochemical failure (BF) and metastasis, and whether AI prostate volume was associated with acute and late grade 2+ genitourinary toxicity and with International Prostate Symptom Score (IPSS) resolution, separately for the monotherapy and combination sets. We identified 187 patients who received brachytherapy (monotherapy, N = 154; combination therapy, N = 33). AI GTV volume was associated with BF (hazard ratio (HR): 1.28 [1.14, 1.44]; p < 0.001) and metastasis (HR: 1.34 [1.18, 1.53]; p < 0.001). For the monotherapy subset, AI prostate volume was associated with both acute (adjusted odds ratio: 1.16 [1.07, 1.25]; p < 0.001) and late grade 2+ genitourinary toxicity (adjusted HR: 1.04 [1.01, 1.07]; p = 0.01), but not with IPSS resolution (0.99 [0.97, 1.00]; p = 0.13). For the combination therapy subset, AI prostate volume was not associated with either acute (p = 0.72) or late (p = 0.75) grade 2+ genitourinary toxicity, but was associated with IPSS resolution (0.96 [0.93, 0.99]; p = 0.01). AI-derived prostate and GTV volumes may be prognostic for toxicity and oncologic outcomes after RT. Such information may aid treatment decision-making, given differences in outcomes among RT modalities.
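In a Cox model, a per-unit hazard ratio such as the 1.28 reported for AI GTV volume compounds multiplicatively over the covariate difference. A minimal sketch of that arithmetic (the volume unit itself is not stated in the abstract):

```python
def compound_hazard_ratio(hr_per_unit, delta):
    """Hazard ratio implied by a covariate difference of `delta` units.

    In a Cox proportional-hazards model the log-hazard is linear in the
    covariate, so a per-unit HR compounds multiplicatively:
    HR(delta) = hr_per_unit ** delta. For example, with a per-unit HR of
    1.28 for biochemical failure, a 3-unit larger GTV implies about 2.10.
    """
    return hr_per_unit ** delta
```

This is why even a modest per-unit HR can translate into a substantial risk difference between patients with small and large tumor volumes.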
