
MCA-GAN: A lightweight Multi-scale Context-Aware Generative Adversarial Network for MRI reconstruction.

Hou B, Du H

PubMed · Aug 6, 2025
Magnetic Resonance Imaging (MRI) is widely utilized in medical imaging due to its high resolution and non-invasive nature. However, the prolonged acquisition time significantly limits its clinical applicability. Although traditional compressed sensing (CS) techniques can accelerate MRI acquisition, they often lead to degraded reconstruction quality under high undersampling rates. Deep learning-based methods, including CNN- and GAN-based approaches, have improved reconstruction performance, yet are limited by their local receptive fields, making it challenging to effectively capture long-range dependencies. Moreover, these models typically exhibit high computational complexity, which hinders their efficient deployment in practical scenarios. To address these challenges, we propose a lightweight Multi-scale Context-Aware Generative Adversarial Network (MCA-GAN), which enhances MRI reconstruction through dual-domain generators that collaboratively optimize both k-space and image-domain representations. MCA-GAN integrates several lightweight modules, including Depthwise Separable Local Attention (DWLA) for efficient local feature extraction, Adaptive Group Rearrangement Block (AGRB) for dynamic inter-group feature optimization, Multi-Scale Spatial Context Modulation Bridge (MSCMB) for multi-scale feature fusion in skip connections, and Channel-Spatial Multi-Scale Self-Attention (CSMS) for improved global context modeling. Extensive experiments conducted on the IXI, MICCAI 2013, and MRNet knee datasets demonstrate that MCA-GAN consistently outperforms existing methods in terms of PSNR and SSIM. Compared to SepGAN, the latest lightweight model, MCA-GAN achieves a 27.3% reduction in parameter size and a 19.6% reduction in computational complexity, while attaining the shortest reconstruction time among all compared methods. Furthermore, MCA-GAN exhibits robust performance across various undersampling masks and acceleration rates. Cross-dataset generalization experiments further confirm its ability to maintain competitive reconstruction quality, underscoring its strong generalization potential. Overall, MCA-GAN improves MRI reconstruction quality while significantly reducing computational cost through a lightweight architecture and multi-scale feature fusion, offering an efficient and accurate solution for accelerated MRI.
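
As an illustration of the kind of lightweight operation that modules such as DWLA build on (not the authors' exact implementation), a minimal PyTorch sketch of a depthwise separable convolution block follows; the class name and tensor shapes are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise convolution followed by a pointwise (1x1) convolution --
    the standard factorization behind many lightweight attention/conv modules."""
    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

x = torch.randn(1, 32, 64, 64)            # (batch, channels, H, W)
y = DepthwiseSeparableConv(32, 64)(x)     # -> (1, 64, 64, 64)
```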

Advanced Multi-Architecture Deep Learning Framework for BIRADS-Based Mammographic Image Retrieval: Comprehensive Performance Analysis with Super-Ensemble Optimization

MD Shaikh Rahman, Feiroz Humayara, Syed Maudud E Rabbi, Muhammad Mahbubur Rashid

arXiv preprint · Aug 6, 2025
Content-based mammographic image retrieval systems require exact BIRADS categorical matching across five distinct classes, presenting significantly greater complexity than binary classification tasks commonly addressed in literature. Current medical image retrieval studies suffer from methodological limitations including inadequate sample sizes, improper data splitting, and insufficient statistical validation that hinder clinical translation. We developed a comprehensive evaluation framework systematically comparing CNN architectures (DenseNet121, ResNet50, VGG16) with advanced training strategies including sophisticated fine-tuning, metric learning, and super-ensemble optimization. Our evaluation employed rigorous stratified data splitting (50%/20%/30% train/validation/test), 602 test queries, and systematic validation using bootstrap confidence intervals with 1,000 samples. Advanced fine-tuning with differential learning rates achieved substantial improvements: DenseNet121 (34.79% precision@10, 19.64% improvement) and ResNet50 (34.54%, 19.58% improvement). Super-ensemble optimization combining complementary architectures achieved 36.33% precision@10 (95% CI: [34.78%, 37.88%]), representing 24.93% improvement over baseline and providing 3.6 relevant cases per query. Statistical analysis revealed significant performance differences between optimization strategies (p<0.001) with large effect sizes (Cohen's d>0.8), while maintaining practical search efficiency (2.8 milliseconds). Performance significantly exceeds realistic expectations for 5-class medical retrieval tasks, where literature suggests 20-25% precision@10 represents achievable performance for exact BIRADS matching. Our framework establishes new performance benchmarks while providing evidence-based architecture selection guidelines for clinical deployment in diagnostic support and quality assurance applications.
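
Bootstrap confidence intervals of the kind reported above can be reproduced in principle with a short percentile-bootstrap routine; the sketch below assumes one precision@10 value per test query and is illustrative, not the authors' code.

```python
import numpy as np

def bootstrap_ci(per_query_precision, n_boot=1000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for mean precision@10 over test queries
    (e.g. 602 queries, 1,000 bootstrap resamples)."""
    rng = np.random.default_rng(seed)
    vals = np.asarray(per_query_precision, dtype=float)
    means = np.array([
        rng.choice(vals, size=vals.size, replace=True).mean()
        for _ in range(n_boot)
    ])
    lo, hi = np.percentile(means, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return vals.mean(), (lo, hi)

# Example with synthetic per-query precision@10 values.
mean_p10, ci = bootstrap_ci(np.random.default_rng(1).binomial(10, 0.36, 602) / 10)
print(mean_p10, ci)
```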

Deep learning-based radiomics does not improve residual cancer burden prediction post-chemotherapy in LIMA breast MRI trial.

Janse MHA, Janssen LM, Wolters-van der Ben EJM, Moman MR, Viergever MA, van Diest PJ, Gilhuijs KGA

PubMed · Aug 6, 2025
This study aimed to evaluate the potential additional value of deep radiomics for assessing residual cancer burden (RCB) in locally advanced breast cancer, after neoadjuvant chemotherapy (NAC) but before surgery, compared to standard predictors: tumor volume and subtype. This retrospective study used a 105-patient single-institution training set and a 41-patient external test set from three institutions in the LIMA trial. DCE-MRI was performed before and after NAC, and RCB was determined post-surgery. Three networks (nnU-Net, Attention U-net and vector-quantized encoder-decoder) were trained for tumor segmentation. For each network, deep features were extracted from the bottleneck layer and used to train random forest regression models to predict RCB score. Models were compared to (1) a model trained on tumor volume and (2) a model combining tumor volume and subtype. The potential complementary performance of combining deep radiomics with a clinical-radiological model was assessed. From the predicted RCB score, three metrics were calculated: area under the curve (AUC) for categories RCB-0/RCB-I versus RCB-II/III, pathological complete response (pCR) versus non-pCR, and Spearman's correlation. Deep radiomics models had an AUC of 0.68-0.74 for pCR and 0.68-0.79 for RCB, while the volume-only model had an AUC of 0.74 and 0.70 for pCR and RCB, respectively. Spearman's correlation varied from 0.45-0.51 (deep radiomics) to 0.53 (combined model). No statistically significant difference between models was observed. Segmentation network-derived deep radiomics contain similar information to tumor volume and subtype for inferring pCR and RCB after NAC, but do not complement standard clinical predictors in the LIMA trial. Question: It is unknown whether, and which, deep radiomics approach is most suitable to extract relevant features for assessing neoadjuvant chemotherapy response on breast MRI. Findings: Radiomic features extracted from deep-learning networks yield results similar to tumor volume and subtype when predicting neoadjuvant chemotherapy response in the LIMA study; however, they do not provide complementary information. Clinical relevance: For predicting response to neoadjuvant chemotherapy in breast cancer patients, tumor volume on MRI and subtype remain important predictors of treatment outcome; deep radiomics might be an alternative when determining tumor volume and/or subtype is not feasible.
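
A minimal sketch of the regression step described above, predicting RCB from bottleneck features with a random forest and scoring it with AUC and Spearman's correlation, might look as follows; the synthetic arrays and the 1.36 class cut-off are illustrative stand-ins, not data or thresholds from the trial.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import roc_auc_score
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
# Stand-ins for bottleneck features (one row per patient) and continuous RCB scores.
X_train, X_test = rng.normal(size=(105, 64)), rng.normal(size=(41, 64))
y_train, y_test = rng.uniform(0, 5, 105), rng.uniform(0, 5, 41)

rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_train, y_train)
pred = rf.predict(X_test)

# AUC for RCB-0/I vs RCB-II/III (cut-off shown only for illustration) and Spearman's rho.
auc = roc_auc_score((y_test > 1.36).astype(int), pred)
rho, _ = spearmanr(pred, y_test)
print(auc, rho)
```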

Predictive Modeling of Osteonecrosis of the Femoral Head Progression Using MobileNetV3_Large and Long Short-Term Memory Network: Novel Approach.

Kong G, Zhang Q, Liu D, Pan J, Liu K

PubMed · Aug 6, 2025
The assessment of osteonecrosis of the femoral head (ONFH) often presents challenges in accuracy and efficiency. Traditional methods rely on imaging studies and clinical judgment, prompting the need for advanced approaches. This study aims to use deep learning algorithms to enhance disease assessment and prediction in ONFH, optimizing treatment strategies. The primary objective of this research is to analyze pathological images of ONFH using advanced deep learning algorithms to evaluate treatment response, vascular reconstruction, and disease progression. By identifying the most effective algorithm, this study seeks to equip clinicians with precise tools for disease assessment and prediction. Magnetic resonance imaging (MRI) data from 30 patients diagnosed with ONFH were collected, totaling 1200 slices, which included 675 slices with lesions and 225 normal slices. The dataset was divided into training (630 slices), validation (135 slices), and test (135 slices) sets. A total of 10 deep learning algorithms were tested for training and optimization, and MobileNetV3_Large was identified as the optimal model for subsequent analyses. This model was applied for quantifying vascular reconstruction, evaluating treatment responses, and assessing lesion progression. In addition, a long short-term memory (LSTM) model was integrated for the dynamic prediction of time-series data. The MobileNetV3_Large model demonstrated an accuracy of 96.5% (95% CI 95.1%-97.8%) and a recall of 94.8% (95% CI 93.2%-96.4%) in ONFH diagnosis, significantly outperforming DenseNet201 (87.3%; P<.05). Quantitative evaluation of treatment responses showed that vascularized bone grafting resulted in an average increase of 12.4 mm in vascular length (95% CI 11.2-13.6 mm; P<.01) and an increase of 2.7 in branch count (95% CI 2.3-3.1; P<.01) among the 30 patients. The model achieved an AUC of 0.92 (95% CI 0.90-0.94) for predicting lesion progression, outperforming traditional methods like ResNet50 (AUC=0.85; P<.01). Predictions were consistent with clinical observations in 92.5% of cases (24/26). The application of deep learning algorithms in examining treatment response, vascular reconstruction, and disease progression in ONFH presents notable advantages. This study offers clinicians a precise tool for disease assessment and highlights the significance of using advanced technological solutions in health care practice.
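
The described combination of a per-slice MobileNetV3_Large encoder with an LSTM over the slice/time axis could be wired up roughly as below; the layer sizes, two-class head, and three-channel input are assumptions for illustration, not the authors' configuration.

```python
import torch
import torch.nn as nn
from torchvision.models import mobilenet_v3_large

class SliceSequenceModel(nn.Module):
    """CNN features per MRI slice, then an LSTM over the slice/time dimension."""
    def __init__(self, hidden=256, n_classes=2):
        super().__init__()
        backbone = mobilenet_v3_large(weights=None)
        self.encoder = backbone.features               # 960 output feature maps
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.lstm = nn.LSTM(input_size=960, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                               # x: (batch, time, 3, H, W)
        b, t = x.shape[:2]
        feats = self.pool(self.encoder(x.flatten(0, 1))).flatten(1)  # (b*t, 960)
        out, _ = self.lstm(feats.view(b, t, -1))
        return self.head(out[:, -1])                    # prediction from the last step

logits = SliceSequenceModel()(torch.randn(2, 4, 3, 224, 224))  # -> (2, 2)
```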

Dynamic neural network modulation associated with rumination in major depressive disorder: a prospective observational comparative analysis of cognitive behavioral therapy and pharmacotherapy.

Katayama N, Shinagawa K, Hirano J, Kobayashi Y, Nakagawa A, Umeda S, Kamiya K, Tajima M, Amano M, Nogami W, Ihara S, Noda S, Terasawa Y, Kikuchi T, Mimura M, Uchida H

PubMed · Aug 6, 2025
Cognitive behavioral therapy (CBT) and pharmacotherapy are primary treatments for major depressive disorder (MDD). However, their differential effects on the neural networks associated with rumination, or repetitive negative thinking, remain poorly understood. This study included 135 participants, whose rumination severity was measured using the rumination response scale (RRS) and whose resting brain activity was measured using functional magnetic resonance imaging (fMRI) at baseline and after 16 weeks. MDD patients received either standard CBT based on Beck's manual (n = 28) or pharmacotherapy (n = 32). Using a hidden Markov model, we observed that MDD patients exhibited increased activity in the default mode network (DMN) and decreased occupancies in the sensorimotor and central executive networks (CEN). The DMN occurrence rate correlated positively with rumination severity. CBT, while not specifically designed to target rumination, reduced DMN occurrence rate and facilitated transitions toward a CEN-dominant brain state as part of broader therapeutic effects. Pharmacotherapy shifted DMN activity to the posterior region of the brain. These findings suggest that CBT and pharmacotherapy modulate brain network dynamics related to rumination through distinct therapeutic pathways.
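
A hidden Markov model over network time courses, and the fractional occupancy of each recurring brain state, can be computed along these lines; this is a sketch using hmmlearn on synthetic data, and the number of states and networks are assumptions rather than the study's settings.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)
# Stand-in for concatenated resting-state network time courses (timepoints x networks).
X = rng.normal(size=(2000, 7))

hmm = GaussianHMM(n_components=6, covariance_type="full", random_state=0).fit(X)
states = hmm.predict(X)

# Fractional occupancy: proportion of timepoints spent in each recurring brain state.
occupancy = np.bincount(states, minlength=hmm.n_components) / states.size
print(occupancy)

# Per-subject occupancy of the DMN-dominant state could then be correlated with
# rumination (RRS) scores, e.g. via a rank correlation across participants.
```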

BrainSignsNET: A Deep Learning Model for 3D Anatomical Landmark Detection in the Human Brain Imaging

Shirzadeh Barough, S., Ventura, C., Bilgel, M., Albert, M., Miller, M. I., Moghekar, A.

medRxiv preprint · Aug 5, 2025
Accurate detection of anatomical landmarks in brain Magnetic Resonance Imaging (MRI) scans is essential for reliable spatial normalization, image alignment, and quantitative neuroimaging analyses. In this study, we introduce BrainSignsNET, a deep learning framework designed for robust three-dimensional (3D) landmark detection. Our approach leverages a multi-task 3D convolutional neural network that integrates an attention decoder branch with a multi-class decoder branch to generate precise 3D heatmaps, from which landmark coordinates are extracted. The model was trained and internally validated on T1-weighted Magnetization-Prepared Rapid Gradient-Echo (MPRAGE) scans from the Alzheimer's Disease Neuroimaging Initiative (ADNI), the Baltimore Longitudinal Study of Aging (BLSA), and the Biomarkers of Cognitive Decline in Adults at Risk for AD (BIOCARD) datasets, and externally validated on a clinical dataset from the Johns Hopkins Hydrocephalus Clinic. The study encompassed 14,472 scans from 6,299 participants, representing a diverse demographic profile with a significant proportion of older adult participants, particularly those over 70 years of age. Extensive preprocessing and data augmentation strategies, including traditional MRI corrections and tailored 3D transformations, ensured data consistency and improved model generalizability. On internal validation, BrainSignsNET achieved an overall mean Euclidean distance of 2.32 ± 0.41 mm, and in the external validation dataset 94.8% of landmarks were localized within their anatomically defined 3D volumes. This accurate anatomical landmark detection on brain MRI scans should benefit many imaging tasks, including registration, alignment, and quantitative analyses.
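
Extracting a landmark coordinate from a predicted 3D heatmap typically reduces to an argmax (or a centre-of-mass) over the volume; a minimal sketch, not the authors' implementation, follows.

```python
import numpy as np

def heatmap_to_coordinate(heatmap):
    """Return the (x, y, z) voxel coordinate of the hottest location in a 3D heatmap.
    A centre-of-mass over a thresholded heatmap is a common, smoother alternative."""
    idx = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    return np.array(idx)

heatmap = np.zeros((96, 96, 96))
heatmap[40, 52, 33] = 1.0
print(heatmap_to_coordinate(heatmap))   # -> [40 52 33]
```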

ERDES: A Benchmark Video Dataset for Retinal Detachment and Macular Status Classification in Ocular Ultrasound

Pouyan Navard, Yasemin Ozkut, Srikar Adhikari, Elaine Situ-LaCasse, Josie Acuña, Adrienne Yarnish, Alper Yilmaz

arXiv preprint · Aug 5, 2025
Retinal detachment (RD) is a vision-threatening condition that requires timely intervention to preserve vision. Macular involvement -- whether the macula is still intact (macula-intact) or detached (macula-detached) -- is the key determinant of visual outcomes and treatment urgency. Point-of-care ultrasound (POCUS) offers a fast, non-invasive, cost-effective, and accessible imaging modality widely used in diverse clinical settings to detect RD. However, ultrasound image interpretation is limited by a lack of expertise among healthcare providers, especially in resource-limited settings. Deep learning offers the potential to automate ultrasound-based assessment of RD. However, there are no ML ultrasound algorithms currently available for clinical use to detect RD and no prior research has been done on assessing macular status using ultrasound in RD cases -- an essential distinction for surgical prioritization. Moreover, no public dataset currently supports macular-based RD classification using ultrasound video clips. We introduce Eye Retinal DEtachment ultraSound, ERDES, the first open-access dataset of ocular ultrasound clips labeled for (i) presence of retinal detachment and (ii) macula-intact versus macula-detached status. The dataset is intended to facilitate the development and evaluation of machine learning models for detecting retinal detachment. We also provide baseline benchmarks using multiple spatiotemporal convolutional neural network (CNN) architectures. All clips, labels, and training code are publicly available at https://osupcvlab.github.io/ERDES/.
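
A clip-level spatiotemporal CNN baseline of the kind benchmarked here can be set up in a few lines; the specific backbone (r3d_18), the two-class head, and the input size below are illustrative assumptions, not necessarily the configuration used for the ERDES baselines.

```python
import torch
import torch.nn as nn
from torchvision.models.video import r3d_18

# A 3D-ResNet video classifier adapted to two classes
# (e.g. macula-intact vs macula-detached).
model = r3d_18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 2)

clip = torch.randn(1, 3, 16, 112, 112)   # (batch, channels, frames, H, W)
logits = model(clip)                      # -> (1, 2)
```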

Automated vertebral bone quality score measurement on lumbar MRI using deep learning: Development and validation of an AI algorithm.

Jayasuriya NM, Feng E, Nathani KR, Delawan M, Katsos K, Bhagra O, Freedman BA, Bydon M

PubMed · Aug 5, 2025
Bone health is a critical determinant of spine surgery outcomes, yet many patients undergo procedures without adequate preoperative assessment due to limitations in current bone quality assessment methods. This study aimed to develop and validate an artificial intelligence-based algorithm that predicts Vertebral Bone Quality (VBQ) scores from routine MRI scans, enabling improved preoperative identification of patients at risk for poor surgical outcomes. This study utilized 257 lumbar spine T1-weighted MRI scans from the SPIDER challenge dataset. VBQ scores were calculated through a three-step process: selecting the mid-sagittal slice, measuring vertebral body signal intensity from L1-L4, and normalizing by cerebrospinal fluid signal intensity. A YOLOv8 model was developed to automate region of interest placement and VBQ score calculation. The system was validated against manual annotations from 47 lumbar spine surgery patients, with performance evaluated using precision, recall, mean average precision, intraclass correlation coefficient, Pearson correlation, RMSE, and mean error. The YOLOv8 model demonstrated high accuracy in vertebral body detection (precision: 0.9429, recall: 0.9076, mAP@0.5: 0.9403, mAP@[0.5:0.95]: 0.8288). Strong interrater reliability was observed with ICC values of 0.95 (human-human), 0.88 and 0.93 (human-AI). Pearson correlations for VBQ scores between human and AI measurements were 0.86 and 0.9, with RMSE values of 0.58 and 0.42 respectively. The AI-based algorithm accurately predicts VBQ scores from routine lumbar MRIs. This approach has potential to enhance early identification and intervention for patients with poor bone health, leading to improved surgical outcomes. Further external validation is recommended to ensure generalizability and clinical applicability.
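
The VBQ computation itself, once the vertebral-body and CSF regions of interest have been placed, reduces to a simple ratio; a sketch with hypothetical signal intensities:

```python
import numpy as np

def vbq_score(vertebral_si_l1_l4, csf_signal):
    """VBQ on T1-weighted MRI: mean L1-L4 vertebral-body signal intensity
    normalized by the cerebrospinal-fluid signal intensity (as described above)."""
    return float(np.mean(vertebral_si_l1_l4) / csf_signal)

# Hypothetical ROI signal intensities from the mid-sagittal slice.
print(vbq_score([310.0, 325.0, 340.0, 335.0], csf_signal=128.0))   # ~2.56
```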

Innovative machine learning approach for liver fibrosis and disease severity evaluation in MAFLD patients using MRI fat content analysis.

Hou M, Zhu Y, Zhou H, Zhou S, Zhang J, Zhang Y, Liu X

PubMed · Aug 5, 2025
This study employed machine learning models to quantitatively analyze liver fat content from MRI images for the evaluation of liver fibrosis and disease severity in patients with metabolic dysfunction-associated fatty liver disease (MAFLD). A total of 26 confirmed MAFLD cases, along with MRI image sequences obtained from public repositories, were included to perform a comprehensive assessment. Radiomics features, such as contrast, correlation, homogeneity, energy, and entropy, were extracted and used to construct a random forest classification model with optimized hyperparameters. The model achieved outstanding performance, with an accuracy of 96.8%, sensitivity of 95.7%, specificity of 97.8%, and an F1-score of 96.8%, demonstrating its strong capability in accurately evaluating the degree of liver fibrosis and overall disease severity in MAFLD patients. The integration of machine learning with MRI-based analysis offers a promising approach to enhancing clinical decision-making and guiding treatment strategies, underscoring the potential of advanced technologies to improve diagnostic precision and disease management in MAFLD.
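
The listed texture features can be computed from a gray-level co-occurrence matrix (GLCM) of a liver ROI, for example with scikit-image, before being fed to a random forest classifier; the sketch below uses a random ROI as a stand-in and is not the authors' pipeline.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(roi_uint8):
    """Contrast, correlation, homogeneity, energy, and entropy of an 8-bit ROI,
    derived from a normalized gray-level co-occurrence matrix."""
    glcm = graycomatrix(roi_uint8, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    feats = {p: float(graycoprops(glcm, p)[0, 0])
             for p in ("contrast", "correlation", "homogeneity", "energy")}
    p = glcm[:, :, 0, 0]
    feats["entropy"] = float(-np.sum(p[p > 0] * np.log2(p[p > 0])))
    return feats

roi = (np.random.default_rng(0).random((64, 64)) * 255).astype(np.uint8)
print(glcm_features(roi))
```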

Integration of Spatiotemporal Dynamics and Structural Connectivity for Automated Epileptogenic Zone Localization in Temporal Lobe Epilepsy.

Xiao L, Zheng Q, Li S, Wei Y, Si W, Pan Y

PubMed · Aug 5, 2025
Accurate localization of the epileptogenic zone (EZ) is essential for surgical success in temporal lobe epilepsy. While stereoelectroencephalography (SEEG) and structural magnetic resonance imaging (MRI) provide complementary insights, existing unimodal methods fail to fully capture epileptogenic brain activity, and multimodal fusion remains challenging due to data complexity and surgeon-dependent interpretations. To address these issues, we proposed a novel multimodal framework to improve EZ localization by integrating SEEG-derived electrophysiology with structural connectivity in temporal lobe epilepsy. By retrospectively analyzing SEEG, post-implant Computed Tomography (CT), and MRI (T1 & Diffusion Tensor Imaging (DTI)) data from 15 patients, we reconstructed SEEG electrode positions and obtained fused SEEG and structural connectivity features. We then proposed a spatiotemporal co-attention deep neural network (ST-CANet) to identify the fusion features, categorizing electrodes into seizure onset zone (SOZ), propagation zone (PZ), and non-involved zone (NIZ). Anatomical EZ boundaries were delineated by fusing the electrode position and classification information on a brain atlas. The proposed method was evaluated based on the identification and localization performance for the three epilepsy-related zones. The experiment results demonstrate that our method achieves 98.08% average accuracy, outperforms other identification methods, and improves localization with Dice similarity coefficients (DSC) of 95.65% (SOZ), 92.13% (PZ), and 99.61% (NIZ), aligning with clinically validated surgical resection areas. This multimodal fusion strategy based on electrophysiological and structural connectivity information promises to assist neurosurgeons in accurately localizing the EZ and may find broader applications in preoperative planning for epilepsy surgeries.
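
The Dice similarity coefficient used above to quantify overlap between predicted zones and clinically validated areas is straightforward to compute on binary volumes; a minimal sketch:

```python
import numpy as np

def dice(pred_mask, true_mask):
    """Dice similarity coefficient between two binary volumes
    (e.g. a predicted zone vs a clinically validated resection mask)."""
    pred, true = pred_mask.astype(bool), true_mask.astype(bool)
    intersection = np.logical_and(pred, true).sum()
    denom = pred.sum() + true.sum()
    return 2.0 * intersection / denom if denom else 1.0

a = np.zeros((10, 10, 10), dtype=bool); a[2:6] = True
b = np.zeros((10, 10, 10), dtype=bool); b[3:7] = True
print(round(dice(a, b), 3))   # 0.75
```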
