
A Scalable Distributed Framework for Multimodal GigaVoxel Image Registration

Rohit Jena, Vedant Zope, Pratik Chaudhari, James C. Gee

arXiv preprint · Sep 29, 2025
In this work, we propose FFDP, a set of IO-aware non-GEMM fused kernels supplemented with a distributed framework for image registration at unprecedented scales. Image registration is an inverse problem fundamental to biomedical and life sciences, but algorithms have not scaled in tandem with image acquisition capabilities. Our framework complements existing model parallelism techniques proposed for large-scale transformer training by optimizing non-GEMM bottlenecks and enabling convolution-aware tensor sharding. We demonstrate unprecedented capabilities by performing multimodal registration of a 100-micron ex-vivo human brain MRI volume at native resolution, an inverse problem more than 570x larger than a standard clinical datum, in about a minute using only 8 A6000 GPUs. FFDP accelerates existing state-of-the-art optimization and deep learning registration pipelines by up to 6-7x while reducing peak memory consumption by 20-59%. Comparative analysis on a 250-micron dataset shows that FFDP can fit up to 64x larger problems than existing SOTA on a single GPU, and highlights both the performance and efficiency gains of FFDP compared to SOTA image registration methods.
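The "convolution-aware tensor sharding" described above amounts to splitting a volume across devices while keeping enough overlap (a halo) at shard boundaries that convolutions near the cut remain valid without per-voxel communication. A minimal single-process NumPy sketch of that halo logic, under illustrative shard counts and kernel sizes; it is not the authors' fused kernels or distributed runtime:

```python
import numpy as np

def shard_with_halo(volume, n_shards, kernel_size, axis=0):
    """Split `volume` along `axis` into overlapping blocks.

    Each block carries a halo of kernel_size // 2 voxels on both sides so a
    convolution with that kernel is valid up to the shard boundary.
    """
    halo = kernel_size // 2
    length = volume.shape[axis]
    edges = np.linspace(0, length, n_shards + 1, dtype=int)
    shards = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        start, stop = max(lo - halo, 0), min(hi + halo, length)
        sl = [slice(None)] * volume.ndim
        sl[axis] = slice(start, stop)
        # Record where the shard's "interior" (non-halo) region sits.
        shards.append((lo - start, hi - start, volume[tuple(sl)]))
    return shards

# Toy example: a 64^3 volume split into 4 shards for a 3x3x3 kernel.
vol = np.random.rand(64, 64, 64).astype(np.float32)
blocks = shard_with_halo(vol, n_shards=4, kernel_size=3)
print([b.shape for _, _, b in blocks])
```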

Novel multi-task learning for Alzheimer's stage classification using hippocampal MRI segmentation, feature fusion, and nomogram modeling.

Hu W, Du Q, Wei L, Wang D, Zhang G

PubMed · Sep 29, 2025
To develop and validate a comprehensive and interpretable framework for multi-class classification of Alzheimer's disease (AD) progression stages based on hippocampal MRI, integrating radiomic, deep, and clinical features. This retrospective multi-center study included 2956 patients across four AD stages (Non-Demented, Very Mild Demented, Mild Demented, Moderate Demented). T1-weighted MRI scans were processed through a standardized pipeline involving hippocampal segmentation with four models (U-Net, nnU-Net, Swin-UNet, MedT). Radiomic features (n = 215) were extracted using the SERA platform, and deep features (n = 256) were learned using an LSTM network with attention applied to hippocampal slices. Fused features were harmonized with ComBat and filtered by ICC (≥ 0.75), followed by LASSO-based feature selection. Classification was performed using five machine learning models: Logistic Regression (LR), Support Vector Machine (SVM), Random Forest (RF), Multilayer Perceptron (MLP), and eXtreme Gradient Boosting (XGBoost). Model interpretability was addressed using SHAP, and a nomogram and decision curve analysis (DCA) were developed. Additionally, an end-to-end 3D CNN-LSTM model and two transformer-based benchmarks (Vision Transformer, Swin Transformer) were trained for comparative evaluation. MedT achieved the best hippocampal segmentation (external Dice = 92.03%). Fused features yielded the highest classification performance with XGBoost (external accuracy = 92.8%, AUC = 94.2%). SHAP identified MMSE, hippocampal volume, and APOE ε4 as top contributors. The nomogram accurately predicted early-stage AD, with clinical utility confirmed by DCA. The end-to-end model performed acceptably (AUC = 84.0%) but lagged behind the fused pipeline. Statistical tests confirmed significant performance advantages for feature fusion and MedT-based segmentation. This study demonstrates that integrating radiomics, deep learning, and clinical data from hippocampal MRI enables accurate and interpretable classification of AD stages. The proposed framework is robust, generalizable, and clinically actionable, representing a scalable solution for AD diagnostics.
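A rough sketch of the selection-plus-classification stage described above (LASSO-style selection followed by a boosted classifier), using scikit-learn on a synthetic feature matrix; the ComBat harmonization and ICC filtering steps are omitted, and GradientBoostingClassifier stands in for the XGBoost implementation the authors used:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

# Synthetic stand-in for the fused radiomic (215) + deep (256) feature matrix
# over four AD stages.
X, y = make_classification(n_samples=600, n_features=471, n_informative=40,
                           n_classes=4, n_clusters_per_class=1, random_state=0)

pipe = Pipeline([
    # L1-penalized logistic regression plays the role of LASSO-based selection.
    ("select", SelectFromModel(
        LogisticRegression(penalty="l1", solver="saga", C=0.1, max_iter=5000))),
    # Gradient boosting as a stand-in for XGBoost.
    ("clf", GradientBoostingClassifier(random_state=0)),
])

scores = cross_val_score(pipe, X, y, cv=5, scoring="accuracy")
print(f"5-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```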

Automated deep U-Net model for ischemic stroke lesion segmentation in the sub-acute phase.

E R, Bevi AR

PubMed · Sep 29, 2025
Manual segmentation of sub-acute ischemic stroke lesions in fluid-attenuated inversion recovery magnetic resonance imaging (FLAIR MRI) is time-consuming and subject to inter-observer variability, limiting clinical workflow efficiency. To develop and validate an automated deep learning framework for accurate segmentation of sub-acute ischemic stroke lesions in FLAIR MRI using rigorous validation methodology. We propose a novel multi-path residual U-Net (U-shaped network) architecture with six parallel pathways per block (depths of 0-5 convolutional layers) and 2.34 million trainable parameters. Hyperparameters were systematically optimized using 5-fold cross-validation across 60 configurations. We addressed intensity inhomogeneity using N4 bias field correction and employed strict patient-level data partitioning (18 training, 5 validation, 5 test patients) to prevent data leakage. Statistical analysis utilized bias-corrected bootstrap confidence intervals and Bonferroni correction for multiple comparisons. Our model achieved a validation Dice similarity coefficient (DSC) of 0.85 ± 0.12 (95% CI: 0.79-0.91), a sensitivity of 0.82 ± 0.15, a specificity of 0.95 ± 0.04, and a Hausdorff distance of 14.1 ± 5.8 mm. Test set performance remained consistent (DSC: 0.89 ± 0.07), confirming generalizability. Computational efficiency was demonstrated with a 45 ms inference time per slice. The architecture demonstrated statistically significant improvements over DRANet (p = 0.003), 2D CNN (p = 0.001), and Attention U-Net (p = 0.001), while achieving performance comparable to CSNet (p = 0.68). The proposed framework demonstrates robust performance for automated stroke lesion segmentation with rigorous statistical validation. However, multi-site validation across diverse clinical environments remains essential before clinical implementation.
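The headline metrics above (Dice similarity coefficient with bias-corrected bootstrap intervals) are standard; a minimal NumPy sketch of per-volume DSC and a plain percentile bootstrap over patients, which approximates but does not reproduce the study's bias-corrected variant:

```python
import numpy as np

def dice(pred, truth, eps=1e-8):
    """Dice similarity coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return (2.0 * inter + eps) / (pred.sum() + truth.sum() + eps)

def bootstrap_ci(values, n_boot=10000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the mean of per-patient scores."""
    rng = np.random.default_rng(seed)
    values = np.asarray(values, dtype=float)
    means = [rng.choice(values, size=len(values), replace=True).mean()
             for _ in range(n_boot)]
    return np.quantile(means, [alpha / 2, 1 - alpha / 2])

# Illustrative per-patient DSC values standing in for the 5 test patients.
scores = [0.91, 0.88, 0.93, 0.84, 0.89]
lo, hi = bootstrap_ci(scores)
print(f"mean DSC = {np.mean(scores):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```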

MMRQA: Signal-Enhanced Multimodal Large Language Models for MRI Quality Assessment

Fankai Jia, Daisong Gan, Zhe Zhang, Zhaochi Wen, Chenchen Dan, Dong Liang, Haifeng Wang

arXiv preprint · Sep 29, 2025
Magnetic resonance imaging (MRI) quality assessment is crucial for clinical decision-making, yet remains challenging due to data scarcity and protocol variability. Traditional approaches face fundamental trade-offs: signal-based methods like MRIQC provide quantitative metrics but lack semantic understanding, while deep learning approaches achieve high accuracy but sacrifice interpretability. To address these limitations, we introduce the Multimodal MRI Quality Assessment (MMRQA) framework, pioneering the integration of multimodal large language models (MLLMs) with acquisition-aware signal processing. MMRQA combines three key innovations: robust metric extraction via MRQy augmented with simulated artifacts, structured transformation of metrics into question-answer pairs using Qwen, and parameter-efficient fusion through Low-Rank Adaptation (LoRA) of LLaVA-OneVision. Evaluated on MR-ART, FastMRI, and MyConnectome benchmarks, MMRQA achieves state-of-the-art performance with strong zero-shot generalization, as validated by comprehensive ablation studies. By bridging quantitative analysis with semantic reasoning, our framework generates clinically interpretable outputs that enhance quality control in dynamic medical settings.
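Low-Rank Adaptation, as used here to fine-tune LLaVA-OneVision, adds a trainable low-rank update to each frozen weight matrix so that only rank x (d_in + d_out) parameters are learned per layer. A minimal PyTorch sketch of the mechanism; the rank, scaling, and layer size are illustrative and this is not the PEFT library or the authors' configuration:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update (alpha/r) * B @ A."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # keep the pretrained weights frozen
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(4096, 4096), rank=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(f"trainable params: {trainable}")      # 65,536 vs ~16.8M frozen
```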

Classification of anterior cruciate ligament tears in knee magnetic resonance images using pre-trained model and custom model.

Thangaperumal S, Murugan PR, Hossen J, Wong WK, Ng PK

PubMed · Sep 29, 2025
An anterior cruciate ligament (ACL) tear is a prevalent knee injury among athletes, and older people with osteoporosis are at increased risk for it. For early detection and treatment, precise and rapid identification of ACL tears is essential. A fully automated system that can identify ACL tears is necessary to aid healthcare providers in determining the nature of injuries detected on Magnetic Resonance Imaging (MRI) scans. Two convolutional neural network (CNN) models, a pre-trained model and the CustomNet model, are trained and tested using 581 MRI scans of the knee. Feature extraction is done with the pre-trained ResNet-18 model, while the ISOMAP algorithm is used in the CustomNet model. Linear and nonlinear dimensionality reduction techniques are employed to extract the required features from the images. For the ResNet-18 model, the accuracy rate ranges between 86% and 92% for various data partitions; after performing PCA, the classification rate improves to between 92% and 96.2%. The CustomNet model's accuracy rate ranges from 40-70%, 70-90%, 60-70%, and 50-70% for different hyperparameter combinations. Five-fold cross-validation is implemented in CustomNet, achieving an overall accuracy of 85.6%. These two models demonstrate superior efficiency and accuracy in classifying normal and ACL-torn knee MR images.
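A small scikit-learn sketch of the dimensionality-reduction contrast mentioned above, comparing linear PCA against nonlinear Isomap on a synthetic matrix standing in for the extracted knee-MRI features; component counts, neighbor counts, and the SVM classifier are illustrative choices, not the paper's settings:

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for 512-d features from 581 knee MRI scans.
X, y = make_classification(n_samples=581, n_features=512, n_informative=30,
                           random_state=0)

for name, reducer in [("PCA", PCA(n_components=50)),
                      ("Isomap", Isomap(n_components=50, n_neighbors=10))]:
    Z = reducer.fit_transform(X)              # reduce features before classifying
    acc = cross_val_score(SVC(), Z, y, cv=5).mean()
    print(f"{name:7s} -> 5-fold accuracy {acc:.3f}")
```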

Convolutional neural network models of structural MRI for discriminating categories of cognitive impairment: a systematic review and meta-analysis.

Dong X, Li Y, Hao J, Zhou P, Yang C, Ai Y, He M, Zhang W, Hu H

PubMed · Sep 29, 2025
Alzheimer's disease (AD) and mild cognitive impairment (MCI) pose significant challenges to public health and underscore the need for accurate and early diagnostic tools. Structural magnetic resonance imaging (sMRI) combined with advanced analytical techniques such as convolutional neural networks (CNNs) offers a promising avenue for the diagnosis of these conditions. This systematic review and meta-analysis aimed to evaluate the diagnostic performance of CNN algorithms applied to sMRI data in differentiating between AD, MCI, and normal cognition (NC). Following the PRISMA-DTA guidelines, a comprehensive literature search was carried out in the PubMed and Web of Science databases for studies published between 2018 and 2024. Studies were included if they employed CNNs for the diagnostic classification of sMRI data from participants with AD, MCI, or NC. The methodological quality of the included studies was assessed using the QUADAS-2 and METRICS tools. Data extraction and statistical analysis were performed to calculate pooled diagnostic accuracy metrics. A total of 21 studies comprising 16,139 participants were included in the analysis. The pooled sensitivity and specificity of CNN algorithms for differentiating AD from NC were 0.92 and 0.91, respectively. For distinguishing MCI from NC, the pooled sensitivity and specificity were 0.74 and 0.79, respectively. The algorithms also showed a moderate ability to differentiate AD from MCI, with a pooled sensitivity and specificity of 0.73 and 0.79, respectively. For the classification of progressive MCI (pMCI) versus stable MCI (sMCI), the pooled sensitivity and specificity were 0.69 and 0.81, respectively. Heterogeneity across studies was significant, as indicated by meta-regression results. CNN algorithms demonstrated promising diagnostic performance in differentiating AD, MCI, and NC using sMRI data. The highest accuracy was observed in distinguishing AD from NC and the lowest in distinguishing pMCI from sMCI. These findings suggest that CNN-based radiomics has the potential to serve as a valuable tool in the diagnostic armamentarium for neurodegenerative diseases. However, the heterogeneity among studies indicates a need for further methodological refinement and validation. This systematic review was registered in PROSPERO (Registration ID: CRD42022295408).
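Pooled sensitivity and specificity in reviews like this are usually estimated with a bivariate random-effects model; the simplified sketch below pools study-level proportions by inverse-variance weighting on the logit scale, which conveys the arithmetic but not the full model, and the counts are invented for illustration:

```python
import numpy as np

def pooled_proportion(events, totals):
    """Fixed-effect inverse-variance pooling on the logit scale,
    with a 0.5 continuity correction; a simplification of the bivariate
    random-effects models used in diagnostic test accuracy meta-analyses."""
    events = np.asarray(events, float) + 0.5
    totals = np.asarray(totals, float) + 1.0
    p = events / totals
    logit = np.log(p / (1 - p))
    weights = 1.0 / (1.0 / events + 1.0 / (totals - events))
    pooled_logit = np.sum(weights * logit) / np.sum(weights)
    return 1.0 / (1.0 + np.exp(-pooled_logit))

# Illustrative per-study counts: true positives / diseased participants.
tp = [90, 120, 45]
diseased = [100, 130, 50]
print(f"pooled sensitivity = {pooled_proportion(tp, diseased):.3f}")
```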

An efficient deep learning network for brain stroke detection using salp shuffled shepherded optimization.

Xue X, Viswapriya SE, Rajeswari D, Homod RZ, Khalaf OI

PubMed · Sep 29, 2025
Brain strokes (BS) are potentially life-threatening cerebrovascular conditions and the second-leading contributor to mortality. They include hemorrhagic and ischemic strokes, which vary greatly in size, shape, and location, posing significant challenges for automated identification. Brain MRI with diffusion-weighted imaging (DWI) reveals fluid-balance changes very early. Due to their higher sensitivity, MRI scans are more accurate than computed tomography (CT) scans. This work proposes Salp Shuffled Shepherded EfficientNet (S3ET-NET), a new deep learning model for detecting brain stroke from brain MRI. The MRI images are pre-processed with a Gaussian bilateral (GB) filter to reduce noise distortion in the input images. The GhostNet model derives suitable features from the pre-processed images, and optimal features are then selected by applying the Salp Shuffled Shepherded Optimization (S3O) algorithm. The EfficientNet model is utilized to classify brain stroke cases as normal, ischemic stroke (IS), or hemorrhagic stroke (HS). According to the results, the proposed S3ET-NET attains a 99.41% reliability rate. Compared with LinkNet, MobileNet, and GoogleNet, the proposed GhostNet improves detection accuracy by 1.16%, 1.94%, and 3.14%, respectively. The suggested EfficientNet outperforms ResNet50, zNet-mRMR-NB, and DNN in accuracy, improving by 3.20%, 5.22%, and 4.21%, respectively.
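The first stage of the pipeline above is edge-preserving denoising of the MRI input; a minimal sketch using scikit-image's bilateral filter as a stand-in for the Gaussian bilateral filter described in the abstract, on a synthetic slice with illustrative parameters:

```python
import numpy as np
from skimage.restoration import denoise_bilateral

# Synthetic noisy "slice" standing in for a DWI brain MRI image.
rng = np.random.default_rng(0)
clean = np.zeros((128, 128))
clean[32:96, 32:96] = 1.0                       # a bright square "lesion"
noisy = np.clip(clean + rng.normal(scale=0.2, size=clean.shape), 0.0, 1.0)

# Bilateral filtering: spatial Gaussian weights modulated by intensity
# similarity, so edges are preserved while flat regions are smoothed.
denoised = denoise_bilateral(noisy, sigma_color=0.2, sigma_spatial=3)
print(f"residual std before: {np.std(noisy - clean):.3f}, "
      f"after: {np.std(denoised - clean):.3f}")
```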

Clinical and MRI markers for acute vs chronic temporomandibular disorders using a machine learning and deep neural networks.

Lee YH, Jeon S, Kim DH, Auh QS, Lee JH, Noh YK

PubMed · Sep 29, 2025
Exploring the transition from acute to chronic temporomandibular disorders (TMD) remains challenging due to the multifactorial nature of the disease. This study aims to identify clinical, behavioral, and imaging-based predictors that contribute to symptom chronicity in patients with TMD. We enrolled 239 patients with TMD (161 women, 78 men; mean age 35.60 ± 17.93 years), classified as acute (<6 months) or chronic (≥6 months) based on symptom duration. TMD was diagnosed according to the Diagnostic Criteria for TMD (DC/TMD Axis I). Clinical data, sleep-related variables, and temporomandibular joint magnetic resonance imaging (MRI) were collected. MRI assessments included anterior disc displacement (ADD), joint space narrowing, osteoarthritis, and effusion using 3 T T2-weighted and proton density scans. Predictors were evaluated using logistic regression and deep neural networks (DNN), and their performance was compared. Chronic TMD is observed in 51.05% of patients. Compared to acute cases, chronic TMD is more frequently associated with TMJ noise (70.5%), bruxism (31.1%), and higher pain intensity (VAS: 4.82 ± 2.47). Chronic patients also have shorter sleep duration and higher STOP-Bang scores, indicating a greater risk of obstructive sleep apnea. MRI findings reveal an increased prevalence of ADD (86.9%), TMJ osteoarthritis (82.0%), and joint space narrowing (88.5%) in chronic TMD. Logistic regression achieves an AUROC of 0.7550 (95% CI: 0.6550-0.8550), identifying TMJ noise, bruxism, VAS, sleep disturbance, STOP-Bang ≥ 5, ADD, and joint space narrowing as significant predictors. The DNN model improves accuracy to 79.49% compared with 75.50%, though the difference is not statistically significant (p = 0.3067). Behavioral and TMJ-related structural factors are key predictors of chronic TMD and may aid early identification. Timely recognition may support personalized strategies and improve outcomes.
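A compact sketch of the kind of comparison reported above: a logistic regression over tabular clinical and MRI predictors, with an AUROC and a percentile-bootstrap confidence interval; the data are synthetic and the study's DNN is not reproduced:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for 239 patients with a handful of clinical/MRI predictors.
X, y = make_classification(n_samples=239, n_features=7, n_informative=5,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y,
                                          random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
prob = clf.predict_proba(X_te)[:, 1]
auroc = roc_auc_score(y_te, prob)

# Percentile bootstrap over the test set for a 95% CI on the AUROC.
rng = np.random.default_rng(0)
boots = []
for _ in range(2000):
    idx = rng.integers(0, len(y_te), len(y_te))
    if len(np.unique(y_te[idx])) < 2:
        continue                                # need both classes for AUROC
    boots.append(roc_auc_score(y_te[idx], prob[idx]))
lo, hi = np.quantile(boots, [0.025, 0.975])
print(f"AUROC = {auroc:.3f} (95% CI {lo:.3f}-{hi:.3f})")
```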

Simulating Post-Neoadjuvant Chemotherapy Breast Cancer MRI via Diffusion Model with Prompt Tuning

Jonghun Kim, Hyunjin Park

arXiv preprint · Sep 29, 2025
Neoadjuvant chemotherapy (NAC) is a common therapy option before the main surgery for breast cancer. Response to NAC is monitored using follow-up dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). Accurate prediction of NAC response helps with treatment planning. Here, we adopt maximum intensity projection images from DCE-MRI to generate post-treatment images (i.e., 3 or 12 weeks after NAC) from pre-treatment images, leveraging the emerging diffusion model. We introduce prompt tuning to account for the known clinical factors affecting response to NAC. Our model performed better than other generative models on image quality metrics. Our model was also better than other models at generating images that reflected changes in tumor size according to pathologic complete response (pCR) status. An ablation study confirmed the design choices of our method. Our study has the potential to help with precision medicine.
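Prompt tuning in this setting keeps the diffusion backbone frozen and learns a small set of continuous prompt embeddings that are concatenated with the clinical-condition tokens. A minimal PyTorch sketch of that conditioning step only; the diffusion model, the actual clinical factors, and all names and sizes below are assumptions for illustration:

```python
import torch
import torch.nn as nn

class PromptTunedCondition(nn.Module):
    """Learnable prompt tokens prepended to frozen condition embeddings."""
    def __init__(self, embed_dim=256, n_prompts=8, n_factor_codes=16):
        super().__init__()
        self.prompts = nn.Parameter(torch.randn(n_prompts, embed_dim) * 0.02)
        self.factor_embed = nn.Embedding(n_factor_codes, embed_dim)
        self.factor_embed.weight.requires_grad = False   # frozen lookup table

    def forward(self, factor_ids):
        # factor_ids: (batch, n_factors) integer codes for clinical factors.
        cond = self.factor_embed(factor_ids)                # (B, n_factors, D)
        prompts = self.prompts.expand(cond.size(0), -1, -1) # (B, n_prompts, D)
        return torch.cat([prompts, cond], dim=1)            # conditioning tokens

cond = PromptTunedCondition()
tokens = cond(torch.randint(0, 16, (2, 4)))
print(tokens.shape)                                         # (2, 12, 256)
```

Only the prompt parameters would receive gradients during fine-tuning, which is what makes the approach attractive when the paired pre/post-NAC dataset is small.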

Diagnostic accuracy of a machine learning model using radiomics features from breast synthetic MRI.

Matsuda T, Matsuda M, Haque H, Fuchibe S, Matsumoto M, Shiraishi Y, Nobe Y, Kuwabara K, Toshimori W, Okada K, Kawaguchi N, Kurata M, Kamei Y, Kitazawa R, Kido T

PubMed · Sep 29, 2025
In breast magnetic resonance imaging (MRI), the differentiation between benign and malignant breast masses relies on the Breast Imaging Reporting and Data System Magnetic Resonance Imaging (BI-RADS-MRI) lexicon. While BI-RADS-MRI classification demonstrates high sensitivity, specificities vary. This study aimed to evaluate the feasibility of machine learning models utilizing radiomics features derived from synthetic MRI to distinguish benign from malignant breast masses. Patients who underwent breast MRI, including a multi-dynamic multi-echo (MDME) sequence using 3.0 T MRI, and had histopathologically diagnosed enhanced breast mass lesions were retrospectively included. Clinical features, lesion shape features, texture features, and textural evaluation metrics were extracted. Machine learning models were trained and evaluated, and an ensemble model integrating BI-RADS and the machine learning model was also assessed. A total of 199 lesions (48 benign, 151 malignant) in 199 patients were included in the cross-validation dataset, while 43 lesions (15 benign, 28 malignant) in 40 new patients were included in the test dataset. For the test dataset, the sensitivity, specificity, accuracy, and area under the curve (AUC) of the receiver operating characteristic for BI-RADS were 100%, 33.3%, 76.7%, and 0.667, respectively. The logistic regression model yielded 64.3% sensitivity, 80.0% specificity, 69.8% accuracy, and an AUC of 0.707. The ensemble model achieved 82.1% sensitivity, 86.7% specificity, 83.7% accuracy, and an AUC of 0.883. The AUC of the ensemble model was significantly larger than that of both BI-RADS and the machine learning model. The ensemble model integrating BI-RADS and machine learning improved lesion classification. The online version contains supplementary material available at 10.1186/s12880-025-01930-8.
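One straightforward way to realize the ensemble described above is to stack the BI-RADS category (treated as an ordinal feature) together with the radiomics model's predicted probability in a second-stage logistic regression; the sketch below does exactly that on synthetic data, since the abstract does not specify the authors' ensembling scheme:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict, train_test_split

# Synthetic stand-in: radiomics features plus a crude BI-RADS category (3-5).
X, y = make_classification(n_samples=242, n_features=30, random_state=0)
rng = np.random.default_rng(0)
birads = np.clip(3 + y + rng.integers(-1, 2, size=len(y)), 3, 5)

X_tr, X_te, y_tr, y_te, b_tr, b_te = train_test_split(
    X, y, birads, test_size=0.3, stratify=y, random_state=0)

# Stage 1: radiomics classifier; out-of-fold probabilities avoid leakage.
rf = RandomForestClassifier(n_estimators=300, random_state=0)
p_tr = cross_val_predict(rf, X_tr, y_tr, cv=5, method="predict_proba")[:, 1]
rf.fit(X_tr, y_tr)
p_te = rf.predict_proba(X_te)[:, 1]

# Stage 2: logistic regression over [model probability, BI-RADS category].
stack = LogisticRegression().fit(np.column_stack([p_tr, b_tr]), y_tr)
p_ens = stack.predict_proba(np.column_stack([p_te, b_te]))[:, 1]
print(f"radiomics AUC {roc_auc_score(y_te, p_te):.3f}  "
      f"ensemble AUC {roc_auc_score(y_te, p_ens):.3f}")
```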