
BrainSignsNET: A Deep Learning Model for 3D Anatomical Landmark Detection in Human Brain Imaging

Shirzadeh Barough, S., Ventura, C., Bilgel, M., Albert, M., Miller, M. I., Moghekar, A.

medRxiv preprint · Aug 5, 2025
Accurate detection of anatomical landmarks in brain Magnetic Resonance Imaging (MRI) scans is essential for reliable spatial normalization, image alignment, and quantitative neuroimaging analyses. In this study, we introduce BrainSignsNET, a deep learning framework designed for robust three-dimensional (3D) landmark detection. Our approach leverages a multi-task 3D convolutional neural network that integrates an attention decoder branch with a multi-class decoder branch to generate precise 3D heatmaps, from which landmark coordinates are extracted. The model was trained and internally validated on T1-weighted Magnetization-Prepared Rapid Gradient-Echo (MPRAGE) scans from the Alzheimer's Disease Neuroimaging Initiative (ADNI), the Baltimore Longitudinal Study of Aging (BLSA), and the Biomarkers of Cognitive Decline in Adults at Risk for AD (BIOCARD) datasets, and externally validated on a clinical dataset from the Johns Hopkins Hydrocephalus Clinic. The study encompassed 14,472 scans from 6,299 participants, representing a diverse demographic profile with a significant proportion of older adults, particularly those over 70 years of age. Extensive preprocessing and data augmentation strategies, including traditional MRI corrections and tailored 3D transformations, ensured data consistency and improved model generalizability. On internal validation, BrainSignsNET achieved an overall mean Euclidean distance of 2.32 ± 0.41 mm, and in the external validation dataset 94.8% of landmarks were localized within their anatomically defined 3D volumes. This improvement in anatomical landmark detection on brain MRI scans should benefit many imaging tasks, including registration, alignment, and quantitative analyses.
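The decoding step described above, extracting landmark coordinates from predicted 3D heatmaps, reduces to a peak-finding operation. A minimal sketch, assuming one heatmap per landmark and simple argmax decoding (the authors' exact scheme may differ); the function names are illustrative:

```python
import numpy as np

def heatmaps_to_coords(heatmaps: np.ndarray) -> np.ndarray:
    """Convert per-landmark 3D heatmaps of shape (L, D, H, W) into (L, 3) voxel coordinates."""
    coords = np.zeros((heatmaps.shape[0], 3), dtype=np.int64)
    for i, hm in enumerate(heatmaps):
        # Take the heatmap peak as the landmark location (z, y, x).
        coords[i] = np.unravel_index(np.argmax(hm), hm.shape)
    return coords

def mean_euclidean_error(pred_mm: np.ndarray, true_mm: np.ndarray) -> float:
    """Mean Euclidean distance (in mm) between predicted and reference landmarks."""
    return float(np.linalg.norm(pred_mm - true_mm, axis=1).mean())
```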

Multi-Center 3D CNN for Parkinson's disease diagnosis and prognosis using clinical and T1-weighted MRI data.

Basaia S, Sarasso E, Sciancalepore F, Balestrino R, Musicco S, Pisano S, Stankovic I, Tomic A, Micco R, Tessitore A, Salvi M, Meiburger KM, Kostic VS, Molinari F, Agosta F, Filippi M

PubMed · Aug 5, 2025
Parkinson's disease (PD) presents challenges in early diagnosis and progression prediction. Recent advancements in machine learning, particularly convolutional neural networks (CNNs), show promise in enhancing diagnostic accuracy and prognostic capabilities using neuroimaging data. The aims of this study were: (i) to develop a 3D-CNN based on MRI to distinguish controls and PD patients and (ii) to employ the CNN to predict the progression of PD. Three cohorts were selected: 86 mild and 62 moderate-to-severe PD patients plus 60 controls; 14 mild-PD patients and 14 controls from the Parkinson's Progression Markers Initiative database; and 38 de novo mild-PD patients and 38 controls. All participants underwent MRI scans and clinical evaluation at baseline and over 2 years. PD subjects were classified into two clusters of differing progression using k-means clustering based on baseline and follow-up UPDRS-III scores. A 3D-CNN was built and tested on PD patients and controls, with binary classifications: controls vs moderate-to-severe PD, controls vs mild PD, and the two clusters of PD progression. The effect of transfer learning was also tested. The CNN effectively differentiated moderate-to-severe PD from controls (74% accuracy) using MRI data alone. Transfer learning significantly improved performance in distinguishing mild PD from controls (64% accuracy). For predicting disease progression, the model achieved over 70% accuracy by combining MRI and clinical data. Brain regions most influential in the CNN's decisions were visualized. The CNN, integrating multimodal data and transfer learning, provides encouraging results toward early-stage classification and progression monitoring in PD. Its explainability through activation maps offers potential for clinical application in early diagnosis and personalized monitoring.
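The k-means step that defines the two progression clusters can be illustrated directly; the UPDRS-III scores below are invented placeholders, not PPMI data:

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical UPDRS-III scores: columns are baseline and 2-year follow-up.
updrs = np.array([[18., 25.], [22., 40.], [15., 17.],
                  [30., 52.], [12., 14.], [25., 33.]])

# Two clusters approximating slower vs. faster motor progression.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(updrs)
print(labels)  # cluster assignment per patient
```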

Scaling Artificial Intelligence for Prostate Cancer Detection on MRI towards Population-Based Screening and Primary Diagnosis in a Global, Multiethnic Population (Study Protocol)

Anindo Saha, Joeran S. Bosma, Jasper J. Twilt, Alexander B. C. D. Ng, Aqua Asif, Kirti Magudia, Peder Larson, Qinglin Xie, Xiaodong Zhang, Chi Pham Minh, Samuel N. Gitau, Ivo G. Schoots, Martijn F. Boomsma, Renato Cuocolo, Nikolaos Papanikolaou, Daniele Regge, Derya Yakar, Mattijs Elschot, Jeroen Veltman, Baris Turkbey, Nancy A. Obuchowski, Jurgen J. Fütterer, Anwar R. Padhani, Hashim U. Ahmed, Tobias Nordström, Martin Eklund, Veeru Kasivisvanathan, Maarten de Rooij, Henkjan Huisman

arXiv preprint · Aug 4, 2025
In this intercontinental, confirmatory study, we include a retrospective cohort of 22,481 MRI examinations (21,288 patients; 46 cities in 22 countries) to train and externally validate the PI-CAI-2B model, i.e., an efficient, next-generation iteration of the state-of-the-art AI system developed for detecting Gleason grade group ≥2 prostate cancer on MRI during the PI-CAI study. Of these examinations, 20,471 cases (19,278 patients; 26 cities in 14 countries) from two EU Horizon projects (ProCAncer-I, COMFORT) and 12 independent centers based in Europe, North America, Asia and Africa are used for training and internal testing. Additionally, 2010 cases (2010 patients; 20 external cities in 12 countries) from population-based screening (STHLM3-MRI, IP1-PROSTAGRAM trials) and primary diagnostic settings (PRIME trial) based in Europe, North and South America, Asia and Australia are used for external testing. The primary endpoint is the proportion of AI-based assessments in agreement with the standard-of-care diagnoses (i.e., clinical assessments made by expert uropathologists on histopathology, if available, or by at least two expert urogenital radiologists in consensus, with access to patient history and peer consultation) in the detection of Gleason grade group ≥2 prostate cancer within the external testing cohorts. Our statistical analysis plan is prespecified with a hypothesis of diagnostic interchangeability with the standard of care at the PI-RADS ≥3 (primary diagnosis) or ≥4 (screening) cut-off, considering an absolute margin of 0.05 and reader estimates derived from the PI-CAI observer study (62 radiologists reading 400 cases). Secondary measures comprise the area under the receiver operating characteristic curve (AUROC) of the AI system stratified by imaging quality, patient age, and patient ethnicity to identify underlying biases (if any).
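The primary endpoint, agreement with the standard of care tested against a 0.05 absolute margin, follows a familiar proportion-with-confidence-interval pattern. A hedged sketch with invented numbers; the prespecified analysis plan is more involved than this:

```python
import numpy as np
from statsmodels.stats.proportion import proportion_confint

# Hypothetical binary agreement flags: 1 if the AI assessment matched the
# standard-of-care diagnosis at the prespecified PI-RADS cut-off.
agree = np.random.default_rng(0).binomial(1, 0.88, size=2010)

p_hat = agree.mean()
lo, hi = proportion_confint(agree.sum(), len(agree), alpha=0.05, method="wilson")

reader_agreement = 0.85  # placeholder reader estimate, not a value from the study
margin = 0.05
print(f"AI agreement {p_hat:.3f} (95% CI {lo:.3f}-{hi:.3f})")
print("Non-inferior" if lo > reader_agreement - margin else "Inconclusive")
```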

Open-radiomics: a collection of standardized datasets and a technical protocol for reproducible radiomics machine learning pipelines.

Namdar K, Wagner MW, Ertl-Wagner BB, Khalvati F

PubMed · Aug 4, 2025
As an important branch of machine learning pipelines in medical imaging, radiomics faces two major challenges, namely reproducibility and accessibility. In this work, we introduce open-radiomics, a set of radiomics datasets along with a comprehensive radiomics pipeline based on our proposed technical protocol, to investigate the effects of radiomics feature extraction on the reproducibility of the results. We curated large-scale radiomics datasets based on three open-source datasets: BraTS 2020 for high-grade glioma (HGG) versus low-grade glioma (LGG) classification and survival analysis, BraTS 2023 for O6-methylguanine-DNA methyltransferase (MGMT) classification, and non-small cell lung cancer (NSCLC) survival analysis from The Cancer Imaging Archive (TCIA). We used the BraTS 2020 open-source Magnetic Resonance Imaging (MRI) dataset to demonstrate how our proposed technical protocol can be utilized in radiomics-based studies. The cohort includes 369 adult patients with brain tumors (76 LGG and 293 HGG). Using the PyRadiomics library for LGG vs. HGG classification, we created 288 radiomics datasets: the combinations of 4 MRI sequences, 3 binWidths, 6 image normalization methods, and 4 tumor subregions. We used Random Forest classifiers, and for each radiomics dataset, we repeated the training-validation-test (60%/20%/20%) experiment with different data splits and model random states 100 times (28,800 test results in total) and calculated the Area Under the Receiver Operating Characteristic Curve (AUROC). Unlike binWidth and image normalization, the tumor subregion and imaging sequence significantly affected the performance of the models. The T1 contrast-enhanced sequence and the union of the necrotic and non-enhancing tumor core subregions resulted in the highest AUROCs (average test AUROC 0.951, 95% confidence interval (0.949, 0.952)). Although several settings and data splits (28 out of 28,800) yielded a test AUROC of 1, they were irreproducible. Our experiments demonstrate that the sources of variability in radiomics pipelines (e.g., tumor subregion) can have a significant impact on the results, which may lead to superficially perfect performances that are irreproducible.
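The repeated train/validation/test protocol used to probe reproducibility can be sketched as follows; the synthetic `X`/`y` below merely stand in for one of the 288 radiomics feature tables and its LGG/HGG labels:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def repeated_test_auroc(X, y, n_repeats=100):
    """Repeat a stratified 60/20/20 split experiment; collect test AUROCs."""
    aurocs = []
    for seed in range(n_repeats):
        # 60% train, then split the remaining 40% evenly into validation and test.
        X_tr, X_rest, y_tr, y_rest = train_test_split(
            X, y, test_size=0.4, stratify=y, random_state=seed)
        _, X_te, _, y_te = train_test_split(
            X_rest, y_rest, test_size=0.5, stratify=y_rest, random_state=seed)
        clf = RandomForestClassifier(random_state=seed).fit(X_tr, y_tr)
        aurocs.append(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
    return np.asarray(aurocs)

X, y = make_classification(n_samples=369, weights=[0.2], random_state=0)  # stand-in data
print(repeated_test_auroc(X, y, n_repeats=10).mean())
```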

Machine learning of whole-brain resting-state fMRI signatures for individualized grading of frontal gliomas.

Hu Y, Cao X, Chen H, Geng D, Lv K

PubMed · Aug 4, 2025
Accurate preoperative grading of gliomas is critical for therapeutic planning and prognostic evaluation. We developed a noninvasive machine learning model leveraging whole-brain resting-state functional magnetic resonance imaging (rs-fMRI) biomarkers to discriminate high-grade gliomas (HGGs) from low-grade gliomas (LGGs) in the frontal lobe. This retrospective study included 138 patients (78 LGGs, 60 HGGs) with left frontal gliomas. A total of 7,134 features were extracted from the mean amplitude of low-frequency fluctuation (mALFF), mean fractional ALFF, mean percentage amplitude of fluctuation (mPerAF), and mean regional homogeneity (mReHo) maps and the resting-state functional connectivity (RSFC) matrix. Twelve predictive features were selected through the Mann-Whitney U test, correlation analysis, and the least absolute shrinkage and selection operator (LASSO) method. The patients were stratified and randomized into training and testing datasets at a 7:3 ratio. Logistic regression, random forest, support vector machine (SVM), and adaptive boosting algorithms were used to establish models. Model performance was evaluated using the area under the receiver operating characteristic curve, accuracy, sensitivity, and specificity. The 12 selected features comprised 7 RSFC features, 4 mPerAF features, and 1 mReHo feature. Among the models built on these features, the SVM performed best: accuracy in the training and testing datasets was 0.957 and 0.727, respectively, and the area under the receiver operating characteristic curve was 0.972 and 0.799, respectively. Our whole-brain rs-fMRI radiomics approach provides an objective tool for preoperative glioma stratification. The biological interpretability of the selected features reflects distinct neuroplasticity patterns between LGGs and HGGs, advancing understanding of glioma-network interactions.
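The three-stage feature selection chain (Mann-Whitney U screening, correlation pruning, then LASSO) maps naturally onto scipy/scikit-learn calls. A minimal sketch under assumed thresholds; the paper's exact cut-offs are not stated here:

```python
import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

def select_features(X, y, p_thresh=0.05, corr_thresh=0.9):
    """Mann-Whitney U screening -> correlation pruning -> LASSO selection."""
    # 1) Keep features whose distributions differ between the two classes.
    keep = [j for j in range(X.shape[1])
            if mannwhitneyu(X[y == 0, j], X[y == 1, j]).pvalue < p_thresh]
    X = X[:, keep]
    # 2) For each highly correlated pair, drop the second feature.
    corr = np.abs(np.corrcoef(X, rowvar=False))
    drop = {b for a in range(corr.shape[0]) for b in range(a + 1, corr.shape[0])
            if corr[a, b] > corr_thresh}
    X = X[:, [j for j in range(X.shape[1]) if j not in drop]]
    # 3) LASSO retains the features with nonzero coefficients.
    lasso = LassoCV(cv=5).fit(StandardScaler().fit_transform(X), y)
    return X[:, lasso.coef_ != 0]
```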

An integrated predictive model for Alzheimer's disease progression from cognitively normal subjects using generated MRI and interpretable AI.

Aghaei A, Moghaddam ME

PubMed · Aug 4, 2025
Alzheimer's disease (AD) is a progressive neurodegenerative disorder that begins with subtle cognitive changes and advances to severe impairment. Early diagnosis is crucial for effective intervention and management. In this study, we propose an integrated framework that leverages ensemble transfer learning, generative modeling, and automatic ROI extraction techniques to predict the progression of Alzheimer's disease from cognitively normal (CN) subjects. Using the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset, we employ a three-stage process: (1) estimating the probability of transitioning from CN to mild cognitive impairment (MCI) using ensemble transfer learning, (2) generating future MRI images with a Vision Transformer-based Generative Adversarial Network (ViT-GAN) to simulate disease progression after two years, and (3) predicting AD using a 3D convolutional neural network (CNN) with probabilities calibrated by isotonic regression and with critical regions of interest (ROIs) interpreted via Gradient-weighted Class Activation Mapping (Grad-CAM). The method generalizes to longer horizons when sufficient data are available to simulate brain changes after three years or more; given the data available here, the training phase considered brain changes after two years. Our approach addresses the challenge of limited longitudinal data by creating high-quality synthetic images and improves model transparency by identifying the key brain regions involved in disease progression. The proposed method achieves a high accuracy and F1-score of 0.85 and 0.86, respectively, for CN-to-AD prediction up to 10 years out, offering a potential tool for early diagnosis and personalized intervention strategies in Alzheimer's disease.
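The probability calibration in stage 3 is a one-liner with scikit-learn's isotonic regression; the raw probabilities and labels below are invented for illustration:

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

# Invented raw CNN probabilities on a held-out calibration split,
# with the corresponding ground-truth AD labels.
raw_probs = np.array([0.91, 0.40, 0.75, 0.20, 0.62, 0.10, 0.85, 0.33])
labels    = np.array([1,    0,    1,    0,    1,    0,    1,    0])

iso = IsotonicRegression(out_of_bounds="clip").fit(raw_probs, labels)
calibrated = iso.predict(raw_probs)  # monotone remapping toward calibrated probabilities
```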

Glioblastoma Overall Survival Prediction With Vision Transformers

Yin Lin, Riccardo Barbieri, Domenico Aquino, Giuseppe Lauria, Marina Grisoli, Elena De Momi, Alberto Redaelli, Simona Ferrante

arXiv preprint · Aug 4, 2025
Glioblastoma is one of the most aggressive and common brain tumors, with a median survival of 10-15 months. Predicting Overall Survival (OS) is critical for personalizing treatment strategies and aligning clinical decisions with patient outcomes. In this study, we propose a novel Artificial Intelligence (AI) approach for OS prediction from Magnetic Resonance Imaging (MRI) images, exploiting Vision Transformers (ViTs) to extract hidden features directly from the images and eliminating the need for tumor segmentation. Unlike traditional approaches, our method simplifies the workflow and reduces computational resource requirements. The proposed model was evaluated on the BraTS dataset, reaching an accuracy of 62.5% on the test set, comparable to the top-performing methods. Additionally, it demonstrated balanced performance across precision, recall, and F1 score, outperforming the best model on these metrics. The dataset size limits the generalization of the ViT, which typically requires larger datasets than convolutional neural networks; this limitation is observed across all the cited studies. This work highlights the applicability of ViTs to downsampled medical imaging tasks and establishes a foundation for OS prediction models that are computationally efficient and do not rely on segmentation.
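A segmentation-free ViT classifier of this kind can be assembled from off-the-shelf parts; a sketch using the timm library, where the backbone name, single-channel input, and three survival bins are assumptions rather than the authors' configuration:

```python
import torch
import timm

# Minimal sketch: a ViT backbone classifying OS into survival bins directly
# from 2D MRI slices, with no tumor segmentation step.
model = timm.create_model("vit_base_patch16_224", pretrained=True,
                          in_chans=1, num_classes=3)  # short/medium/long OS (assumed bins)
slices = torch.randn(8, 1, 224, 224)  # batch of single-channel MRI slices
logits = model(slices)                # (8, 3) class scores
```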

Accurate and Interpretable Postmenstrual Age Prediction via Multimodal Large Language Model

Qifan Chen, Jin Cui, Cindy Duan, Yushuo Han, Yifei Shi

arXiv preprint · Aug 4, 2025
Accurate estimation of postmenstrual age (PMA) at scan is crucial for assessing neonatal development and health. While deep learning models have achieved high accuracy in predicting PMA from brain MRI, they often function as black boxes, offering limited transparency and interpretability in clinical decision support. In this work, we address the dual challenge of accuracy and interpretability by adapting a multimodal large language model (MLLM) to perform both precise PMA prediction and clinically relevant explanation generation. We introduce a parameter-efficient fine-tuning (PEFT) strategy using instruction tuning and Low-Rank Adaptation (LoRA) applied to the Qwen2.5-VL-7B model. The model is trained on four 2D cortical surface projection maps derived from neonatal MRI scans. By employing distinct prompts for training and inference, our approach enables the MLLM to handle a regression task during training and generate clinically relevant explanations during inference. The fine-tuned model achieves a low prediction error with a 95 percent confidence interval of 0.78 to 1.52 weeks, while producing interpretable outputs grounded in developmental features, marking a significant step toward transparent and trustworthy AI systems in perinatal neuroscience.
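The LoRA-based PEFT recipe can be reproduced with the Hugging Face peft library. A sketch in which the target modules, rank, and other hyperparameters are plausible assumptions, not the authors' exact setup:

```python
from peft import LoraConfig, get_peft_model
from transformers import Qwen2_5_VLForConditionalGeneration

# Load the base vision-language model, then wrap it with low-rank adapters
# so that only the adapter weights are trained during instruction tuning.
base = Qwen2_5_VLForConditionalGeneration.from_pretrained("Qwen/Qwen2.5-VL-7B-Instruct")
config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                    target_modules=["q_proj", "v_proj"])  # assumed targets
model = get_peft_model(base, config)
model.print_trainable_parameters()  # only the low-rank adapters are trainable
```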

Automated detection of lacunes in brain MR images using SAM with robust prompts using self-distillation and anatomy-informed priors.

Deepika P, Shanker G, Narayanan R, Sundaresan V

PubMed · Aug 4, 2025
Lacunes, which are small fluid-filled cavities in the brain, are signs of cerebral small vessel disease and have been clinically associated with various neurodegenerative and cerebrovascular diseases. Hence, accurate detection of lacunes is crucial and is one of the initial steps toward the precise diagnosis of these diseases. However, developing a robust and consistently reliable method for detecting lacunes is challenging because of the heterogeneity in their appearance, contrast, shape, and size. In this study, we propose a lacune detection method using the Segment Anything Model (SAM), guided by point prompts from a candidate prompt generator. The prompt generator initially detects potential lacunes with high sensitivity using a composite loss function. The true lacunes are then selected using SAM by discriminating their characteristics from mimics such as sulci and enlarged perivascular spaces, imitating the clinicians' strategy of examining potential lacunes along all three axes. False positives are further reduced by adaptive thresholds based on the region-wise prevalence of lacunes. We evaluated our method on two diverse, multi-centric MRI datasets, VALDO and ISLES, comprising only FLAIR sequences. Despite diverse imaging conditions and significant variations in slice thickness (0.5-6 mm), our method achieved sensitivities of 84% and 92%, with average false positive rates of 0.05 and 0.06 per slice, in the ISLES and VALDO datasets respectively. The proposed method demonstrated robust performance across varied imaging conditions and outperformed state-of-the-art methods, demonstrating its effectiveness in lacune detection and quantification.
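Prompting SAM with candidate points follows the standard segment-anything API; a sketch in which the checkpoint path, the input slice, and the point coordinates are placeholders:

```python
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

# Checkpoint path, image, and point coordinates are placeholders.
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)

image_rgb = np.zeros((256, 256, 3), dtype=np.uint8)  # stand-in for a FLAIR slice as 3-channel uint8
predictor.set_image(image_rgb)

point = np.array([[124, 87]])  # (x, y) candidate from the prompt generator
masks, scores, _ = predictor.predict(
    point_coords=point,
    point_labels=np.array([1]),  # 1 = foreground prompt
    multimask_output=False,
)
```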

Machine Learning and MRI-Based Whole-Organ Magnetic Resonance Imaging Score (WORMS): A Novel Approach to Enhancing Genicular Artery Embolization Outcomes in Knee Osteoarthritis.

Dablan A, Özgül H, Arslan MF, Türksayar O, Cingöz M, Mutlu IN, Erdim C, Guzelbey T, Kılıckesmez O

PubMed · Aug 4, 2025
To evaluate the feasibility of machine learning (ML) models using the preprocedural MRI-based Whole-Organ Magnetic Resonance Imaging Score (WORMS) and clinical parameters to predict treatment response after genicular artery embolization (GAE) in patients with knee osteoarthritis (OA). This retrospective study included 66 patients (72 knees) who underwent GAE between December 2022 and June 2024. Preprocedural assessments included WORMS and Kellgren-Lawrence grading. Clinical response was defined as a ≥ 50% reduction in Visual Analog Scale (VAS) score. Feature selection was performed using recursive feature elimination and correlation analysis. Multiple ML algorithms (Random Forest, Support Vector Machine, Logistic Regression) were trained using stratified fivefold cross-validation. Conventional statistical analyses assessed group differences and correlations. Of 72 knees, 33 (45.8%) achieved a clinically significant response. Responders showed significantly lower WORMS values for cartilage, bone marrow, and total joint damage (p < 0.05). The Random Forest model demonstrated the best performance, with an accuracy of 81.8%, AUC-ROC of 86.2%, sensitivity of 90%, and specificity of 75%. Key predictive features included total WORMS, ligament score, and baseline VAS. The bone marrow score showed the strongest correlation with VAS reduction (r = -0.430, p < 0.001). ML models integrating WORMS and clinical data suggest that greater cartilage loss, bone marrow edema, joint damage, and higher baseline VAS scores may help to identify patients less likely to respond to GAE for knee OA.
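The selection-and-validation pattern described here (recursive feature elimination evaluated with stratified fivefold cross-validation) can be sketched with scikit-learn; the synthetic `X`/`y` below merely stand in for the WORMS-plus-clinical feature table and the binary VAS response:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import Pipeline

# Stand-in for 72 knees with WORMS subscores and clinical variables.
X, y = make_classification(n_samples=72, n_features=12, random_state=0)

pipe = Pipeline([
    ("rfe", RFE(RandomForestClassifier(random_state=0), n_features_to_select=3)),
    ("clf", RandomForestClassifier(random_state=0)),
])
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(pipe, X, y, cv=cv, scoring="roc_auc")
print(f"Mean cross-validated AUC: {scores.mean():.3f}")
```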