
Advanced MRI-based Alzheimer's diagnosis through ensemble learning techniques.

Sriram S, Nivethitha V, Arun Kaarthic TP, Archita S, Murugan T

PubMed | Sep 30 2025
Alzheimer's disease is a neurodegenerative condition that causes memory loss and behavioral changes and makes everyday tasks difficult to carry out. Early detection is vital for effective treatment. MRI-based detection of Alzheimer's has advanced through machine learning and deep learning models, which use neural networks to analyze brain MRI scans automatically and identify key indicators of the disease. In this study, we trained a CNN on MRI data to diagnose and categorize the four stages of Alzheimer's disease, and combined it with pretrained models in an ensemble, which offers significant advantages in identifying imaging patterns of this neurodegenerative condition over a single CNN trained in isolation. We evaluated ResNet50, InceptionResNetV2, and a CNN trained specifically for this study, and found that combining the models led to highly accurate results. Individually, the trained CNN reached 90.76% accuracy, InceptionResNetV2 86.84%, and ResNet50 90.27%; combining all three models in an ensemble raised accuracy to 94.27%.
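
The abstract does not state how the three networks' outputs were combined; a common choice is soft voting, i.e. averaging the per-class probabilities, as in this minimal sketch (the model outputs below are random stand-ins, not the study's data):

```python
import numpy as np

def soft_vote(prob_sets: list[np.ndarray]) -> np.ndarray:
    """Average per-model class probabilities and return predicted stages.

    prob_sets: list of arrays, each of shape (n_samples, n_classes),
    e.g. outputs of the custom CNN, InceptionResNetV2, and ResNet50.
    """
    mean_probs = np.mean(np.stack(prob_sets, axis=0), axis=0)
    return mean_probs.argmax(axis=1)

# Example with random stand-ins for the three models' softmax outputs
# over the four Alzheimer's stages:
rng = np.random.default_rng(0)
cnn_p, irn_p, rn50_p = (rng.dirichlet(np.ones(4), size=8) for _ in range(3))
stages = soft_vote([cnn_p, irn_p, rn50_p])  # indices into the 4 stages
print(stages)
```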

Transformer Classification of Breast Lesions: The BreastDCEDL_AMBL Benchmark Dataset and 0.92 AUC Baseline

Naomi Fridman, Anat Goldstein

arXiv preprint | Sep 30 2025
Breast magnetic resonance imaging is a critical tool for cancer detection and treatment planning, but its clinical utility is hindered by poor specificity, leading to high false-positive rates and unnecessary biopsies. This study introduces a transformer-based framework for automated classification of breast lesions in dynamic contrast-enhanced MRI, addressing the challenge of distinguishing benign from malignant findings. We implemented a SegFormer architecture that achieved an AUC of 0.92 for lesion-level classification, with 100% sensitivity and 67% specificity at the patient level, potentially eliminating one-third of unnecessary biopsies without missing malignancies. The model quantifies malignant pixel distribution via semantic segmentation, producing interpretable spatial predictions that support clinical decision-making. To establish reproducible benchmarks, we curated BreastDCEDL_AMBL by transforming The Cancer Imaging Archive's AMBL collection into a standardized deep learning dataset with 88 patients and 133 annotated lesions (89 benign, 44 malignant). This resource addresses a key infrastructure gap, as existing public datasets lack benign lesion annotations, limiting benign-malignant classification research. Training incorporated an expanded cohort of over 1,200 patients through integration with BreastDCEDL datasets, validating transfer learning approaches despite primary tumor-only annotations. Public release of the dataset, models, and evaluation protocols provides the first standardized benchmark for DCE-MRI lesion classification, enabling methodological advancement toward clinical deployment.
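
As a sketch of how "quantifying malignant pixel distribution" can drive lesion- and patient-level decisions, the snippet below scores each annotated lesion by its malignant-pixel fraction. The class encoding, the 0.5 threshold, and the any-positive-lesion rule are illustrative assumptions, not the paper's published settings:

```python
import numpy as np

def lesion_malignancy_scores(seg_map: np.ndarray, lesion_masks: list) -> list:
    """Score each lesion by its malignant-pixel fraction.

    seg_map: semantic segmentation with assumed labels
             0 = background, 1 = benign, 2 = malignant.
    lesion_masks: list of boolean arrays, one per annotated lesion.
    """
    scores = []
    for mask in lesion_masks:
        lesion_pixels = seg_map[mask]
        foreground = lesion_pixels[lesion_pixels > 0]
        frac = (foreground == 2).mean() if foreground.size else 0.0
        scores.append(float(frac))
    return scores

def patient_positive(scores: list, threshold: float = 0.5) -> bool:
    # Assumed rule: patient is flagged if any lesion crosses the threshold.
    return any(s >= threshold for s in scores)
```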

A Multimodal LLM Approach for Visual Question Answering on Multiparametric 3D Brain MRI

Arvind Murari Vepa, Yannan Yu, Jingru Gan, Anthony Cuturrufo, Weikai Li, Wei Wang, Fabien Scalzo, Yizhou Sun

arXiv preprint | Sep 30 2025
We introduce mpLLM, a prompt-conditioned hierarchical mixture-of-experts (MoE) architecture for visual question answering over multi-parametric 3D brain MRI (mpMRI). mpLLM routes across modality-level and token-level projection experts to fuse multiple interrelated 3D modalities, enabling efficient training without image-report pretraining. To address limited image-text paired supervision, mpLLM integrates a synthetic visual question answering (VQA) protocol that generates medically relevant VQA from segmentation annotations, and we collaborate with medical experts for clinical validation. mpLLM outperforms strong medical VLM baselines by 5.3% on average across multiple mpMRI datasets. Our study features three main contributions: (1) the first clinically validated VQA dataset for 3D brain mpMRI, (2) a novel multimodal LLM that handles multiple interrelated 3D modalities, and (3) strong empirical results that demonstrate the medical utility of our methodology. Ablations highlight the importance of modality-level and token-level experts and prompt-conditioned routing. We have included our source code in the supplementary materials and will release our dataset upon publication.
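
A minimal sketch of what prompt-conditioned routing over projection experts might look like, assuming a softmax router over linear experts and weighted-sum fusion; the paper's actual modality- and token-level hierarchy is richer than this:

```python
import torch
import torch.nn as nn

class PromptRoutedExperts(nn.Module):
    """Illustrative prompt-conditioned MoE layer (not mpLLM's exact design)."""

    def __init__(self, dim: int, n_experts: int):
        super().__init__()
        self.experts = nn.ModuleList(nn.Linear(dim, dim) for _ in range(n_experts))
        self.router = nn.Linear(dim, n_experts)  # routing conditioned on the prompt

    def forward(self, tokens: torch.Tensor, prompt_emb: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, seq, dim); prompt_emb: (batch, dim)
        weights = torch.softmax(self.router(prompt_emb), dim=-1)          # (B, E)
        outs = torch.stack([e(tokens) for e in self.experts], dim=-1)     # (B, S, D, E)
        return (outs * weights[:, None, None, :]).sum(dim=-1)             # (B, S, D)

x = torch.randn(2, 16, 64)  # e.g. tokens from one 3D modality
p = torch.randn(2, 64)      # question/prompt embedding
fused = PromptRoutedExperts(64, n_experts=4)(x, p)
```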

Early diagnosis of knee osteoarthritis severity using vision transformer.

Panwar P, Chaurasia S, Gangrade J, Bilandi A

PubMed | Sep 30 2025
Knee osteoarthritis (K-OA) is a progressive joint condition with global prevalence that worsens over time and affects a significant portion of the population. It arises as joints slowly wear out: the cartilage cushioning the joint erodes, the bones rub together, and the result is stiffness, discomfort, and restricted movement. People with osteoarthritis find it hard to do simple things such as walking, standing, or climbing stairs, and the ongoing pain and limitation can also lead to sadness or anxiety. Knee osteoarthritis exerts a sustained impact on both the economy and society. Typically, radiologists assess knee health from MRI or X-ray images and assign KL grades. MRI excels at visualizing soft tissues such as cartilage, menisci, and ligaments, directly revealing the cartilage degeneration and joint inflammation crucial for osteoarthritis (OA) diagnosis. In contrast, X-rays primarily show bone and can only infer cartilage loss through joint space narrowing, a late indicator of OA. This makes MRI superior for detecting early changes and subtle lesions often missed by X-rays. However, manual diagnosis of knee osteoarthritis is laborious and time-consuming. In response, deep learning methodologies such as the vision transformer (ViT) have been implemented to enhance efficiency and streamline clinical workflows. This research leverages a ViT for knee osteoarthritis KL grading, achieving an accuracy of 88%, and shows that a simple transfer learning technique with this model outperforms more intricate architectures.
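
The "simple transfer learning technique" the abstract credits maps onto a few lines in any modern framework. This sketch assumes a torchvision ViT-B/16 backbone, a frozen backbone, and a fresh 5-class head for KL grades 0-4; none of these specifics is confirmed by the abstract:

```python
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained ViT and freeze the backbone.
model = models.vit_b_16(weights=models.ViT_B_16_Weights.IMAGENET1K_V1)
for p in model.parameters():
    p.requires_grad = False

# Replace the classification head with a trainable 5-way layer
# (KL grades 0-4); only this layer is updated during fine-tuning.
model.heads.head = nn.Linear(model.heads.head.in_features, 5)
```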

Multi-modal Liver Segmentation and Fibrosis Staging Using Real-world MRI Images

Yang Zhou, Kunhao Yuan, Ye Wei, Jishizhan Chen

arXiv preprint | Sep 30 2025
Liver fibrosis represents the accumulation of excessive extracellular matrix caused by sustained hepatic injury. It disrupts normal lobular architecture and function, increasing the risk of cirrhosis and liver failure. Precise staging of fibrosis for early diagnosis and intervention is often invasive, which carries risks and complications. To address this challenge, recent advances in artificial intelligence-based liver segmentation and fibrosis staging offer a non-invasive alternative. Accordingly, the CARE 2025 Challenge called for automated methods to quantify and analyse liver fibrosis in real-world scenarios, using multi-centre, multi-modal, and multi-phase MRI data. The challenge comprised tasks of precise liver segmentation (LiSeg) and fibrosis staging (LiFS). In this study, we developed an automated pipeline for both tasks across all the provided MRI modalities. The pipeline integrates pseudo-labelling based on multi-modal co-registration, liver segmentation using deep neural networks, and liver fibrosis staging based on shape, textural, appearance, and directional (STAD) features derived from segmentation masks and MRI images. Using solely the released data with limited annotations, our pipeline demonstrated excellent generalisability across all MRI modalities, achieving top-tier performance across all competition subtasks. This approach provides a rapid and reproducible framework for quantitative MRI-based liver fibrosis assessment, supporting early diagnosis and clinical decision-making. Code is available at https://github.com/YangForever/care2025_liver_biodreamer.
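
The staging step reduces to extracting mask-derived features and fitting a classifier. The snippet below sketches that pattern with a few generic shape and appearance statistics; the actual STAD feature definitions and staging model are in the authors' released code, and the names here are stand-ins:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def simple_features(image: np.ndarray, mask: np.ndarray,
                    spacing=(1.0, 1.0, 1.0)) -> np.ndarray:
    """A handful of generic mask-derived statistics (not the STAD set)."""
    voxels = image[mask > 0]
    volume_ml = mask.sum() * np.prod(spacing) / 1000.0        # shape: liver volume
    return np.array([volume_ml,
                     voxels.mean(), voxels.std(),             # appearance stats
                     np.percentile(voxels, 10),
                     np.percentile(voxels, 90)])

# With X as one feature row per patient and y as fibrosis stage labels,
# a generic stager could then be fit as:
# stager = RandomForestClassifier(n_estimators=200).fit(X, y)
```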

Novel multi-task learning for Alzheimer's stage classification using hippocampal MRI segmentation, feature fusion, and nomogram modeling.

Hu W, Du Q, Wei L, Wang D, Zhang G

PubMed | Sep 29 2025
To develop and validate a comprehensive and interpretable framework for multi-class classification of Alzheimer's disease (AD) progression stages based on hippocampal MRI, integrating radiomic, deep, and clinical features. This retrospective multi-center study included 2956 patients across four AD stages (Non-Demented, Very Mild Demented, Mild Demented, Moderate Demented). T1-weighted MRI scans were processed through a standardized pipeline involving hippocampal segmentation using four models (U-Net, nnU-Net, Swin-UNet, MedT). Radiomic features (n = 215) were extracted using the SERA platform, and deep features (n = 256) were learned using an LSTM network with attention applied to hippocampal slices. Fused features were harmonized with ComBat and filtered by ICC (≥ 0.75), followed by LASSO-based feature selection. Classification was performed using five machine learning models, including Logistic Regression (LR), Support Vector Machine (SVM), Random Forest (RF), Multilayer Perceptron (MLP), and eXtreme Gradient Boosting (XGBoost). Model interpretability was addressed using SHAP, and a nomogram and decision curve analysis (DCA) were developed. Additionally, an end-to-end 3D CNN-LSTM model and two transformer-based benchmarks (Vision Transformer, Swin Transformer) were trained for comparative evaluation. MedT achieved the best hippocampal segmentation (Dice = 92.03% external). Fused features yielded the highest classification performance with XGBoost (external accuracy = 92.8%, AUC = 94.2%). SHAP identified MMSE, hippocampal volume, and APOE ε4 as top contributors. The nomogram accurately predicted early-stage AD with clinical utility confirmed by DCA. The end-to-end model performed acceptably (AUC = 84.0%) but lagged behind the fused pipeline. Statistical tests confirmed significant performance advantages for feature fusion and MedT-based segmentation. This study demonstrates that integrating radiomics, deep learning, and clinical data from hippocampal MRI enables accurate and interpretable classification of AD stages. The proposed framework is robust, generalizable, and clinically actionable, representing a scalable solution for AD diagnostics.
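The ICC-then-LASSO filtering described here is straightforward to reproduce in outline. This sketch assumes precomputed per-feature ICC values and uses an L1-penalised multinomial logistic regression as the LASSO-style selector; the actual SERA/ComBat pipeline and thresholds are the authors':

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

def select_features(X: np.ndarray, y: np.ndarray, icc: np.ndarray,
                    icc_thresh: float = 0.75, C: float = 0.1) -> np.ndarray:
    """Keep features with ICC >= threshold, then L1-select among them."""
    keep = icc >= icc_thresh                         # reliability filter
    Xk = StandardScaler().fit_transform(X[:, keep])
    lasso = LogisticRegression(penalty="l1", solver="saga", C=C,
                               max_iter=5000).fit(Xk, y)
    nonzero = np.any(lasso.coef_ != 0, axis=0)       # used by any class
    return np.flatnonzero(keep)[nonzero]             # indices into original X
```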

Automated deep U-Net model for ischemic stroke lesion segmentation in the sub-acute phase.

E R, Bevi AR

PubMed | Sep 29 2025
Manual segmentation of sub-acute ischemic stroke lesions in fluid-attenuated inversion recovery magnetic resonance imaging (FLAIR MRI) is time-consuming and subject to inter-observer variability, limiting clinical workflow efficiency. To develop and validate an automated deep learning framework for accurate segmentation of sub-acute ischemic stroke lesions in FLAIR MRI using rigorous validation methodology. We propose a novel multi-path residual U-Net (U-shaped network) architecture with six parallel pathways per block (depths 0-5 convolutional layers) and 2.34 million trainable parameters. Hyperparameters were systematically optimized using 5-fold cross-validation across 60 configurations. We addressed intensity inhomogeneity using N4 bias field correction and employed strict patient-level data partitioning (18 training, 5 validation, 5 test patients) to prevent data leakage. Statistical analysis utilized bias-corrected bootstrap confidence intervals and Bonferroni correction for multiple comparisons. Our model achieved a validation Dice similarity coefficient (DSC) of 0.85 ± 0.12 (95% CI: 0.79-0.91), a sensitivity of 0.82 ± 0.15, a specificity of 0.95 ± 0.04, and a Hausdorff distance of 14.1 ± 5.8 mm. Test set performance remained consistent (DSC: 0.89 ± 0.07), confirming generalizability. Computational efficiency was demonstrated with 45 ms inference time per slice. The architecture demonstrated statistically significant improvements over DRANet (p = 0.003), 2D CNN (p = 0.001), and Attention U-Net (p = 0.001), while achieving competitive performance comparable to CSNet (p = 0.68). The proposed framework demonstrates robust performance for automated stroke lesion segmentation with rigorous statistical validation. However, multi-site validation across diverse clinical environments remains essential before clinical implementation.
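
For reference, the headline metric and its interval estimate can be computed as below. Note the paper reports a bias-corrected bootstrap; this sketch uses the plain percentile variant for brevity:

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    return 2.0 * np.logical_and(pred, truth).sum() / (pred.sum() + truth.sum() + eps)

def bootstrap_ci(scores, n_boot: int = 10000, alpha: float = 0.05, seed: int = 0):
    """Percentile bootstrap CI over per-patient scores (simplified variant)."""
    scores = np.asarray(scores)
    rng = np.random.default_rng(seed)
    boots = [rng.choice(scores, size=len(scores), replace=True).mean()
             for _ in range(n_boot)]
    return np.percentile(boots, [100 * alpha / 2, 100 * (1 - alpha / 2)])
```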

Clinical and MRI markers for acute vs chronic temporomandibular disorders using machine learning and deep neural networks.

Lee YH, Jeon S, Kim DH, Auh QS, Lee JH, Noh YK

PubMed | Sep 29 2025
Exploring the transition from acute to chronic temporomandibular disorders (TMD) remains challenging due to the multifactorial nature of the disease. This study aims to identify clinical, behavioral, and imaging-based predictors that contribute to symptom chronicity in patients with TMD. We enrolled 239 patients with TMD (161 women, 78 men; mean age 35.60 ± 17.93 years), classified as acute (<6 months) or chronic (≥6 months) based on symptom duration. TMD was diagnosed according to the Diagnostic Criteria for TMD (DC/TMD Axis I). Clinical data, sleep-related variables, and temporomandibular joint magnetic resonance imaging (MRI) were collected. MRI assessments included anterior disc displacement (ADD), joint space narrowing, osteoarthritis, and effusion using 3 T T2-weighted and proton density scans. Predictors were evaluated using logistic regression and deep neural networks (DNN), and performance was compared. Chronic TMD is observed in 51.05% of patients. Compared to acute cases, chronic TMD is more frequently associated with TMJ noise (70.5%), bruxism (31.1%), and higher pain intensity (VAS: 4.82 ± 2.47). They also have shorter sleep and higher STOP-Bang scores, indicating greater risk of obstructive sleep apnea. MRI findings reveal increased prevalence of ADD (86.9%), TMJ-OA (82.0%), and joint space narrowing (88.5%) in chronic TMD. Logistic regression achieves an AUROC of 0.7550 (95% CI: 0.6550-0.8550), identifying TMJ noise, bruxism, VAS, sleep disturbance, STOP-Bang ≥ 5, ADD, and joint space narrowing as significant predictors. The DNN model improves accuracy to 79.49% compared to 75.50%, though the difference is not statistically significant (p = 0.3067). Behavioral and TMJ-related structural factors are key predictors of chronic TMD and may aid early identification. Timely recognition may support personalized strategies and improve outcomes.
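
The logistic-regression arm of the comparison is easy to reconstruct in outline. The arrays below are random placeholders shaped like the study (239 patients, seven named predictors), not its data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Placeholder feature matrix: 239 patients x 7 predictors
# (TMJ noise, bruxism, VAS, sleep disturbance, STOP-Bang >= 5,
#  ADD, joint space narrowing).
X = np.random.default_rng(1).random((239, 7))
y = np.random.default_rng(2).integers(0, 2, 239)  # 0 = acute, 1 = chronic

clf = LogisticRegression(max_iter=1000).fit(X, y)
print("AUROC:", roc_auc_score(y, clf.predict_proba(X)[:, 1]))
```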

An efficient deep learning network for brain stroke detection using salp shuffled shepherded optimization.

Xue X, Viswapriya SE, Rajeswari D, Homod RZ, Khalaf OI

PubMed | Sep 29 2025
Brain strokes (BS) are potentially life-threatening cerebrovascular conditions and the second highest contributor to mortality. They include hemorrhagic and ischemic strokes, which vary greatly in size, shape, and location, posing significant challenges for automated identification. Diffusion-weighted imaging (DWI) in brain MRI reveals fluid-balance changes very early, and owing to this higher sensitivity, MRI scans are more accurate than computed tomography (CT) scans. This work proposes Salp Shuffled Shepherded EfficientNet (S3ET-NET), a new deep learning model for detecting brain stroke from brain MRI. The MRI images are pre-processed with a Gaussian bilateral (GB) filter to reduce noise distortion, and a GhostNet model derives suitable features from the pre-processed images. Optimal features are then selected by the Salp Shuffled Shepherded Optimization (S3O) algorithm, and an EfficientNet model classifies cases as normal, ischemic stroke (IS), or hemorrhagic stroke (HS). The proposed S3ET-NET attains an accuracy of 99.41%. Compared with LinkNet, MobileNet, and GoogleNet, the proposed GhostNet improves detection accuracy by 1.16%, 1.94%, and 3.14%, respectively, and the EfficientNet outperforms ResNet50, zNet-mRMR-NB, and DNN by 3.20%, 5.22%, and 4.21%, respectively.
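
Of the pipeline stages, the pre-processing step is the most standard; a sketch using OpenCV's bilateral filter follows, with typical default parameters rather than the paper's (unreported here) settings:

```python
import cv2
import numpy as np

def preprocess_slice(mri_slice: np.ndarray) -> np.ndarray:
    """Normalize an MRI slice to 8-bit and apply edge-preserving smoothing."""
    img = cv2.normalize(mri_slice, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    # Bilateral filtering reduces noise while preserving lesion boundaries;
    # d=9 and the sigma values are common defaults, not the study's settings.
    return cv2.bilateralFilter(img, d=9, sigmaColor=75, sigmaSpace=75)
```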

A deep learning algorithm for automatic 3D segmentation and quantification of hamstrings musculotendon injury from MRI.

Riem L, DuCharme O, Coggins A, Kenney A, Cousins M, Feng X, Hein R, Buford M, Lee K, Opar D, Heiderscheit B, Blemker SS

PubMed | Sep 29 2025
In high-velocity sports, hamstring strain injuries are common causes of missed play and have high rates of reinjury. Evaluating the severity and location of a hamstring strain injury, currently graded by a clinician using a semiqualitative muscle injury classification score (e.g., the British Athletics Muscle Injury Classification, BAMIC) to describe edema presence and location, aids in guiding athlete recovery. In this study, automated artificial intelligence (AI) models were developed and deployed to automatically segment edema and hamstring muscle and tendon structures using T2-weighted and T1-weighted magnetic resonance images (MRI), respectively. MR scans were collected from collegiate football athletes at time of hamstring injury and return to sport. Volume, length, and cross-sectional area (CSA) measurements were performed on all structures and subregions (i.e., free tendon and aponeurosis). The edema and hamstring muscle/tendon AI models compared favorably with ground-truth segmentations. AI volumetric output correlated with ground truth for edema (R = 0.97), hamstring muscle (R ≥ 0.99), and hamstring tendon (R ≥ 0.42) structures. Edema volume and the percentage of muscle impacted by edema significantly increased with clinical BAMIC grade (p < 0.05). Taken together, these results demonstrate a promising new approach to AI-based quantification of edema that reflects differing levels of injury severity and supports clinical validity.
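
The volumetric outputs the study correlates against ground truth come down to voxel counting on co-registered masks; a minimal sketch, assuming masks as NumPy arrays and voxel spacing taken from the MRI header:

```python
import numpy as np

def mask_volume_ml(mask: np.ndarray, spacing_mm: tuple) -> float:
    """Volume of a binary mask in mL, given voxel spacing in mm."""
    return float(mask.astype(bool).sum() * np.prod(spacing_mm) / 1000.0)

def pct_muscle_with_edema(muscle_mask: np.ndarray, edema_mask: np.ndarray) -> float:
    """Percentage of muscle voxels overlapped by the edema mask."""
    muscle = muscle_mask.astype(bool)
    return 100.0 * np.logical_and(muscle, edema_mask.astype(bool)).sum() / muscle.sum()
```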