Page 12 of 116 · 1159 results

BrainSignsNET: A Deep Learning Model for 3D Anatomical Landmark Detection in Human Brain Imaging

Shirzadeh Barough, S., Ventura, C., Bilgel, M., Albert, M., Miller, M. I., Moghekar, A.

medRxiv preprint · Aug 5, 2025
Accurate detection of anatomical landmarks in brain Magnetic Resonance Imaging (MRI) scans is essential for reliable spatial normalization, image alignment, and quantitative neuroimaging analyses. In this study, we introduce BrainSignsNET, a deep learning framework designed for robust three-dimensional (3D) landmark detection. Our approach leverages a multi-task 3D convolutional neural network that integrates an attention decoder branch with a multi-class decoder branch to generate precise 3D heatmaps, from which landmark coordinates are extracted. The model was trained and internally validated on T1-weighted Magnetization-Prepared Rapid Gradient-Echo (MPRAGE) scans from the Alzheimer's Disease Neuroimaging Initiative (ADNI), the Baltimore Longitudinal Study of Aging (BLSA), and the Biomarkers of Cognitive Decline in Adults at Risk for AD (BIOCARD) datasets, and externally validated on a clinical dataset from the Johns Hopkins Hydrocephalus Clinic. The study encompassed 14,472 scans from 6,299 participants, representing a diverse demographic profile with a significant proportion of older adults, particularly those over 70 years of age. Extensive preprocessing and data augmentation strategies, including traditional MRI corrections and tailored 3D transformations, ensured data consistency and improved model generalizability. BrainSignsNET achieved an overall mean Euclidean distance of 2.32 ± 0.41 mm on internal validation, and 94.8% of landmarks were localized within their anatomically defined 3D volumes in the external validation dataset. This improvement in anatomical landmark detection on brain MRI scans should benefit many imaging tasks, including registration, alignment, and quantitative analyses.
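The abstract describes extracting landmark coordinates from predicted 3D heatmaps. A minimal sketch of the standard extraction step (taking the heatmap peak; the paper's exact post-processing is not specified, so this is an illustrative assumption):

```python
def heatmap_peak(heatmap):
    """Return the (x, y, z) voxel index of the maximum value in a 3D heatmap.

    A landmark coordinate is then typically this peak, optionally refined
    by a local centre-of-mass around it (refinement omitted here)."""
    best, best_idx = float("-inf"), None
    for x, plane in enumerate(heatmap):
        for y, row in enumerate(plane):
            for z, v in enumerate(row):
                if v > best:
                    best, best_idx = v, (x, y, z)
    return best_idx

# Tiny 2x2x2 example with the peak placed at voxel (1, 0, 1)
hm = [[[0.1, 0.2], [0.0, 0.3]],
      [[0.2, 0.9], [0.1, 0.4]]]
print(heatmap_peak(hm))  # (1, 0, 1)
```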

Automated vertebral bone quality score measurement on lumbar MRI using deep learning: Development and validation of an AI algorithm.

Jayasuriya NM, Feng E, Nathani KR, Delawan M, Katsos K, Bhagra O, Freedman BA, Bydon M

PubMed paper · Aug 5, 2025
Bone health is a critical determinant of spine surgery outcomes, yet many patients undergo procedures without adequate preoperative assessment due to limitations in current bone quality assessment methods. This study aimed to develop and validate an artificial intelligence-based algorithm that predicts Vertebral Bone Quality (VBQ) scores from routine MRI scans, enabling improved preoperative identification of patients at risk for poor surgical outcomes. This study utilized 257 lumbar spine T1-weighted MRI scans from the SPIDER challenge dataset. VBQ scores were calculated through a three-step process: selecting the mid-sagittal slice, measuring vertebral body signal intensity from L1-L4, and normalizing by cerebrospinal fluid signal intensity. A YOLOv8 model was developed to automate region-of-interest placement and VBQ score calculation. The system was validated against manual annotations from 47 lumbar spine surgery patients, with performance evaluated using precision, recall, mean average precision, intraclass correlation coefficient, Pearson correlation, RMSE, and mean error. The YOLOv8 model demonstrated high accuracy in vertebral body detection (precision: 0.9429, recall: 0.9076, mAP@0.5: 0.9403, mAP@[0.5:0.95]: 0.8288). Strong interrater reliability was observed, with ICC values of 0.95 (human-human) and 0.88 and 0.93 (human-AI). Pearson correlations for VBQ scores between human and AI measurements were 0.86 and 0.90, with RMSE values of 0.58 and 0.42, respectively. The AI-based algorithm accurately predicts VBQ scores from routine lumbar MRIs. This approach has the potential to enhance early identification and intervention for patients with poor bone health, leading to improved surgical outcomes. Further external validation is recommended to ensure generalizability and clinical applicability.
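The three-step VBQ computation described above can be sketched as a small function. Using the median of the L1-L4 signal intensities follows the common VBQ definition and is an assumption here, as the abstract does not state the aggregation used:

```python
from statistics import median

def vbq_score(vertebral_si, csf_si):
    """Vertebral Bone Quality score: median signal intensity of the
    L1-L4 vertebral body ROIs, normalised by the CSF signal intensity
    (both measured on the mid-sagittal T1-weighted slice)."""
    if len(vertebral_si) != 4:
        raise ValueError("expected one ROI value per level L1-L4")
    return median(vertebral_si) / csf_si

# Illustrative ROI intensities for L1-L4 and a CSF reference value
print(round(vbq_score([320.0, 310.0, 330.0, 300.0], 130.0), 3))  # 2.423
```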

Utilizing 3D fast spin echo anatomical imaging to reduce the number of contrast preparations in $T_{1\rho}$ quantification of knee cartilage using learning-based methods.

Zhong J, Huang C, Yu Z, Xiao F, Blu T, Li S, Ong TM, Ho KK, Chan Q, Griffith JF, Chen W

PubMed paper · Aug 5, 2025
To propose and evaluate an accelerated $T_{1\rho}$ quantification method that combines $T_{1\rho}$-weighted fast spin echo (FSE) images and proton density (PD)-weighted anatomical FSE images, leveraging deep learning models for $T_{1\rho}$ mapping. The goal is to reduce scan time and facilitate integration into routine clinical workflows for osteoarthritis (OA) assessment. This retrospective study utilized MRI data from 40 participants (30 OA patients and 10 healthy volunteers). A volume of PD-weighted anatomical FSE images and a volume of $T_{1\rho}$-weighted images acquired at a non-zero spin-lock time were used as input to train deep learning models, including a 2D U-Net and a multi-layer perceptron (MLP). $T_{1\rho}$ maps generated by these models were compared with ground-truth maps derived from a traditional non-linear least squares (NLLS) fitting method using four $T_{1\rho}$-weighted images. Evaluation metrics included mean absolute error (MAE), mean absolute percentage error (MAPE), regional error (RE), and regional percentage error (RPE). The best-performing deep learning models achieved RPEs below 5% across all evaluated scenarios. This performance was consistent even in reduced acquisition settings that included only one PD-weighted image and one $T_{1\rho}$-weighted image, where NLLS methods cannot be applied. Furthermore, the results were comparable to those obtained with NLLS when longer acquisitions with four $T_{1\rho}$-weighted images were used. The proposed approach enables efficient $T_{1\rho}$ mapping using PD-weighted anatomical images, reducing scan time while maintaining clinical standards.
This method has the potential to facilitate the integration of quantitative MRI techniques into routine clinical practice, benefiting OA diagnosis and monitoring.
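The NLLS baseline fits a mono-exponential spin-lock decay, S(TSL) = S0 · exp(-TSL / T1ρ), to the multi-TSL images. A minimal stand-in using a log-linear least-squares fit on synthetic data (not the authors' implementation, which uses full non-linear fitting on image volumes):

```python
import math

def fit_t1rho(tsl_ms, signals):
    """Estimate T1rho (ms) from spin-lock times and signal intensities,
    assuming the mono-exponential model S(TSL) = S0 * exp(-TSL / T1rho).
    Uses a log-linear least-squares fit as a simple stand-in for NLLS."""
    xs = tsl_ms
    ys = [math.log(s) for s in signals]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return -1.0 / slope  # slope of log-signal vs TSL is -1/T1rho

# Synthetic four-TSL decay with a true T1rho of 40 ms
tsl = [0.0, 10.0, 30.0, 50.0]
sig = [1000.0 * math.exp(-t / 40.0) for t in tsl]
print(round(fit_t1rho(tsl, sig), 1))  # 40.0
```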

Multi-Center 3D CNN for Parkinson's disease diagnosis and prognosis using clinical and T1-weighted MRI data.

Basaia S, Sarasso E, Sciancalepore F, Balestrino R, Musicco S, Pisano S, Stankovic I, Tomic A, Micco R, Tessitore A, Salvi M, Meiburger KM, Kostic VS, Molinari F, Agosta F, Filippi M

PubMed paper · Aug 5, 2025
Parkinson's disease (PD) presents challenges in early diagnosis and progression prediction. Recent advancements in machine learning, particularly convolutional neural networks (CNNs), show promise in enhancing diagnostic accuracy and prognostic capabilities using neuroimaging data. The aims of this study were: (i) to develop a 3D-CNN based on MRI to distinguish controls from PD patients, and (ii) to employ the CNN to predict the progression of PD. Three cohorts were selected: 86 mild and 62 moderate-to-severe PD patients and 60 controls; 14 mild-PD patients and 14 controls from the Parkinson's Progression Markers Initiative database; and 38 de novo mild-PD patients and 38 controls. All participants underwent MRI scans and clinical evaluation at baseline and over 2 years. PD subjects were classified into two clusters of different progression using k-means clustering based on baseline and follow-up UPDRS-III scores. A 3D-CNN was built and tested on PD patients and controls, with binary classifications: controls vs moderate-to-severe PD, controls vs mild PD, and the two clusters of PD progression. The effect of transfer learning was also tested. The CNN effectively differentiated moderate-to-severe PD from controls (74% accuracy) using MRI data alone. Transfer learning significantly improved performance in distinguishing mild PD from controls (64% accuracy). For predicting disease progression, the model achieved over 70% accuracy by combining MRI and clinical data. The brain regions most influential in the CNN's decisions were visualized. A CNN integrating multimodal data and transfer learning provides encouraging results toward early-stage classification and progression monitoring in PD. Its explainability through activation maps offers potential for clinical application in early diagnosis and personalized monitoring.
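The progression labels come from k-means on baseline and follow-up UPDRS-III scores. A toy k = 2 k-means on such score pairs (the data, initialisation, and implementation below are illustrative, not the study's):

```python
def kmeans2(points, iters=20):
    """Minimal k = 2 k-means on (baseline, follow-up) UPDRS-III score
    pairs, mirroring the clustering step described above."""
    c0, c1 = points[0], points[-1]  # crude initialisation from the data
    for _ in range(iters):
        g0, g1 = [], []
        for p in points:
            d0 = sum((a - b) ** 2 for a, b in zip(p, c0))
            d1 = sum((a - b) ** 2 for a, b in zip(p, c1))
            (g0 if d0 <= d1 else g1).append(p)
        # Recompute each centroid as the mean of its assigned points
        c0 = tuple(sum(v) / len(g0) for v in zip(*g0))
        c1 = tuple(sum(v) / len(g1) for v in zip(*g1))
    return c0, c1

# Slow progressors (small score change) vs fast progressors (large change)
scores = [(20, 22), (18, 19), (21, 23), (25, 40), (30, 48), (28, 45)]
print(kmeans2(scores))
```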

Policy to Assist Iteratively Local Segmentation: Optimising Modality and Location Selection for Prostate Cancer Localisation

Xiangcen Wu, Shaheer U. Saeed, Yipei Wang, Ester Bonmati Coll, Yipeng Hu

arXiv preprint · Aug 5, 2025
Radiologists often mix medical image reading strategies, including inspection of individual modalities and local image regions, using information at different locations from different images independently as well as concurrently. In this paper, we propose a recommender system to assist machine learning-based segmentation models by suggesting appropriate image portions along with the best modality, such that prostate cancer segmentation performance can be maximised. Our approach trains a policy network that assists tumour localisation by recommending both the optimal imaging modality and the specific sections of interest for review. During training, a pre-trained segmentation network mimics radiologist inspection on individual or variable combinations of these imaging modalities and their sections, selected by the policy network. Taking the locally segmented regions as an input for the next step, this dynamic decision-making process iterates until all cancers are best localised. We validate our method using a dataset of 1325 labelled multiparametric MRI images from prostate cancer patients, demonstrating its potential to improve annotation efficiency and segmentation accuracy, especially when challenging pathology is present. Experimental results show that our approach can surpass standard segmentation networks. Perhaps more interestingly, our trained agent independently developed its own optimal strategy, which may or may not be consistent with current radiologist guidelines such as PI-RADS. This observation also suggests a promising interactive application, in which the proposed policy networks assist human radiologists.
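The paper trains a reinforcement-learning policy network; as a conceptual stand-in only, the iterative "inspect the most informative (modality, region) next" idea can be illustrated with a greedy loop over hypothetical expected Dice gains (all names and numbers below are invented for illustration):

```python
def greedy_policy(candidates, score_fn, steps=3):
    """Illustrative stand-in for the policy described above: at each step,
    pick the (modality, region) pair whose inspection most improves the
    segmentation score, then repeat on the remaining candidates."""
    chosen, remaining, total = [], list(candidates), 0.0
    for _ in range(steps):
        if not remaining:
            break
        best = max(remaining, key=score_fn)
        if score_fn(best) <= 0:
            break  # no candidate is expected to help any further
        chosen.append(best)
        total += score_fn(best)
        remaining.remove(best)
    return chosen, total

# Hypothetical expected Dice gains per (modality, region) pair
gains = {("T2w", "apex"): 0.10, ("DWI", "mid"): 0.25, ("ADC", "base"): 0.05}
picked, total = greedy_policy(gains, lambda c: gains[c], steps=2)
print(picked, round(total, 2))  # [('DWI', 'mid'), ('T2w', 'apex')] 0.35
```

A learned policy differs from this greedy sketch in that it estimates the gains itself and can plan beyond one step, but the selection loop conveys the iterative structure.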

Point-Based Shape Representation Generation with a Correspondence-Preserving Diffusion Model

Shen Zhu, Yinzhu Jin, Ifrah Zawar, P. Thomas Fletcher

arXiv preprint · Aug 5, 2025
We propose a diffusion model designed to generate point-based shape representations with correspondences. Traditional statistical shape models have considered point correspondences extensively, but current deep learning methods focus on unordered point clouds and do not take correspondences into account. In particular, current deep generative models for point clouds do not address generating shapes with point correspondences between the generated shapes. This work formulates a diffusion model capable of generating realistic point-based shape representations that preserve the point correspondences present in the training data. Using shape representation data with correspondences derived from the Open Access Series of Imaging Studies 3 (OASIS-3), we demonstrate that our correspondence-preserving model generates point-based hippocampal shape representations that are highly realistic compared to existing methods. We further demonstrate applications of our generative model in downstream tasks, such as conditional generation of healthy and AD subjects and prediction of morphological changes during disease progression via counterfactual generation.
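The correspondence-preserving property rests on the fact that diffusion noise is applied per point without permuting the point order. A minimal sketch of the standard forward noising step, x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps, on an ordered point set (the schedule and values are illustrative, not the paper's):

```python
import math
import random

def forward_diffuse(points, t, betas, seed=0):
    """Noise an ordered point set at diffusion step t:
    x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps.
    Noise is added per point, so index i in the output still corresponds
    to index i in the input -- the ordering (correspondence) is kept."""
    rng = random.Random(seed)
    alpha_bar = 1.0
    for b in betas[:t]:
        alpha_bar *= (1.0 - b)
    a, s = math.sqrt(alpha_bar), math.sqrt(1.0 - alpha_bar)
    return [tuple(a * c + s * rng.gauss(0.0, 1.0) for c in p) for p in points]

# Two corresponding 3D points noised at step 10 of a toy linear schedule
pts = [(0.0, 0.0, 0.0), (1.0, 1.0, 1.0)]
noisy = forward_diffuse(pts, t=10, betas=[0.02] * 10)
print(len(noisy), len(noisy[0]))  # 2 3
```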

A Novel Multimodal Framework for Early Detection of Alzheimer's Disease Using Deep Learning

Tatwadarshi P Nagarhalli, Sanket Patil, Vishal Pande, Uday Aswalekar, Prafulla Patil

arXiv preprint · Aug 5, 2025
Alzheimer's Disease (AD) is a progressive neurodegenerative disorder that poses significant challenges in its early diagnosis, often leading to delayed treatment and poorer outcomes for patients. Traditional diagnostic methods, typically reliant on single data modalities, fall short of capturing the multifaceted nature of the disease. In this paper, we propose a novel multimodal framework for the early detection of AD that integrates data from three primary sources: MRI imaging, cognitive assessments, and biomarkers. This framework employs Convolutional Neural Networks (CNNs) for analyzing MRI images and Long Short-Term Memory (LSTM) networks for processing cognitive and biomarker data. The system enhances diagnostic accuracy and reliability by aggregating results from these distinct modalities using techniques such as weighted averaging, even when data are incomplete. The multimodal approach not only improves the robustness of the detection process but also enables the identification of AD at its earliest stages, offering a significant advantage over conventional methods. The integration of biomarkers and cognitive tests is particularly crucial, as these can detect Alzheimer's long before the onset of clinical symptoms, thereby facilitating earlier intervention and potentially altering the course of the disease. This research demonstrates that the proposed framework has the potential to revolutionize the early detection of AD, paving the way for more timely and effective treatments.
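The fusion step aggregates per-modality outputs by weighted averaging while tolerating missing data. One plausible sketch (the paper's exact scheme is not given here; dropping missing modalities and renormalising the remaining weights is an assumption):

```python
def fuse_predictions(probs, weights):
    """Weighted-average late fusion of per-modality AD probabilities,
    e.g. from a CNN on MRI and LSTMs on cognitive/biomarker data.
    Modalities with missing data (None) are dropped and the remaining
    weights renormalised -- one common way to handle incomplete inputs."""
    pairs = [(p, w) for p, w in zip(probs, weights) if p is not None]
    if not pairs:
        raise ValueError("no modality available")
    total = sum(w for _, w in pairs)
    return sum(p * w for p, w in pairs) / total

# MRI-CNN, cognitive-LSTM, biomarker-LSTM; biomarker data missing
print(round(fuse_predictions([0.8, 0.6, None], [0.5, 0.3, 0.2]), 3))  # 0.725
```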

Innovative machine learning approach for liver fibrosis and disease severity evaluation in MAFLD patients using MRI fat content analysis.

Hou M, Zhu Y, Zhou H, Zhou S, Zhang J, Zhang Y, Liu X

PubMed paper · Aug 5, 2025
This study employed machine learning models to quantitatively analyze liver fat content from MRI images for the evaluation of liver fibrosis and disease severity in patients with metabolic dysfunction-associated fatty liver disease (MAFLD). A total of 26 confirmed MAFLD cases, along with MRI image sequences obtained from public repositories, were included in a comprehensive assessment. Radiomics features, such as contrast, correlation, homogeneity, energy, and entropy, were extracted and used to construct a random forest classification model with optimized hyperparameters. The model achieved strong performance, with an accuracy of 96.8%, sensitivity of 95.7%, specificity of 97.8%, and an F1-score of 96.8%, demonstrating its capability to accurately evaluate the degree of liver fibrosis and overall disease severity in MAFLD patients. The integration of machine learning with MRI-based analysis offers a promising approach to enhancing clinical decision-making and guiding treatment strategies, underscoring the potential of advanced technologies to improve diagnostic precision and disease management in MAFLD.
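Among the named radiomics features, energy and entropy can be computed from an ROI's intensity distribution. The sketch below uses a simple intensity histogram rather than the grey-level co-occurrence matrix (GLCM) pipeline such studies typically use, so treat it as a simplified stand-in:

```python
import math
from collections import Counter

def texture_features(pixels):
    """Compute two of the texture features named above (energy, entropy)
    from a flattened ROI intensity list: energy = sum(p^2), Shannon
    entropy = -sum(p * log2 p) over the intensity histogram."""
    n = len(pixels)
    probs = [c / n for c in Counter(pixels).values()]
    energy = sum(p * p for p in probs)
    entropy = -sum(p * math.log2(p) for p in probs)
    return {"energy": energy, "entropy": entropy}

# Two equally frequent intensity values: energy 0.5, entropy 1 bit
print(texture_features([1, 1, 2, 2]))  # {'energy': 0.5, 'entropy': 1.0}
```

Feature vectors like these (one per patient ROI) would then feed a random forest classifier for the fibrosis/severity labels.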

Modeling differences in neurodevelopmental maturity of the reading network using support vector regression on functional connectivity data

Lasnick, O. H. M., Luo, J., Kinnie, B., Kamal, S., Low, S., Marrouch, N., Hoeft, F.

bioRxiv preprint · Aug 5, 2025
The construction of growth charts trained to predict age or developmental deviation (the brain-age index) based on structural/functional properties of the brain may be informative of children's neurodevelopmental trajectories. When applied to both typically and atypically developing populations, results may indicate that a particular condition is associated with atypical maturation of certain brain networks. Here, we focus on the relationship between reading disorder (RD) and maturation of functional connectivity (FC) patterns in the prototypical reading/language network using a cross-sectional sample of N = 742 participants aged 6-21 years. A support vector regression model is trained to predict chronological age from FC data derived from a whole-brain model as well as multiple reduced models, which are trained on FC data generated from a successively smaller number of regions in the brain's reading network. We hypothesized that the trained models would show systematic underestimation of brain network maturity for poor readers, particularly for the models trained with reading/language regions. Comparisons of the different models' predictions revealed that while the whole-brain model outperforms the others in terms of overall prediction accuracy, all models successfully predicted brain maturity, including the one trained with the smallest amount of FC data. In addition, all models showed that reading ability affected the brain-age gap, with poor readers' ages being underestimated and advanced readers' ages being overestimated. Exploratory results demonstrated that the most important regions and connections for prediction were derived from the default mode and frontoparietal control networks.
Glossary
Developmental dyslexia / reading disorder (RD): A specific learning disorder affecting reading ability in the absence of any other explanatory condition such as intellectual disability or visual impairment.
Support vector regression (SVR): A supervised machine learning technique which predicts continuous outcomes (such as chronological age) rather than classifying each observation; finds the best-fit function within a defined error margin.
Principal component analysis (PCA): A dimensionality reduction technique that transforms a high-dimensional dataset with many features per observation into a reduced set of principal components for each observation; each component is a linear combination of several original (correlated) features, and the final set of components are all orthogonal (uncorrelated) to one another.
Brain-age index: A numerical index quantifying deviation from the brain's typical developmental trajectory for a single individual; may be based on a variety of morphometric or functional properties of the brain, resulting in different estimates for the same participant depending on the imaging modality used.
Brain-age gap (BAG): The difference, given in units of time, between a participant's true chronological age and a predictive model's estimated age for that participant based on brain data (Actual - Predicted); may be used as a brain-age index.
Highlights
- A machine learning model trained on functional data predicted participants' ages
- The model showed variability in age prediction accuracy based on reading skills
- The model highly weighted data from frontoparietal and default mode regions
- Neural markers of reading and language are diffusely represented in the brain
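The brain-age gap defined in the glossary (Actual - Predicted) is a one-line computation once model predictions exist; the ages below are illustrative, not study data:

```python
def brain_age_gap(actual_ages, predicted_ages):
    """Brain-age gap (BAG) per participant: actual minus model-predicted
    age. Under the study's convention, a positive BAG means the model
    underestimated the participant's brain maturity."""
    return [a - p for a, p in zip(actual_ages, predicted_ages)]

# A poor reader underestimated, an average reader on target,
# and an advanced reader overestimated (illustrative values)
print(brain_age_gap([10.0, 12.0, 14.0], [8.5, 12.0, 15.5]))  # [1.5, 0.0, -1.5]
```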

Brain tumor segmentation by optimizing deep learning U-Net model.

Asiri AA, Hussain L, Irfan M, Mehdar KM, Awais M, Alelyani M, Alshuhri M, Alghamdi AJ, Alamri S, Nadeem MA

PubMed paper · Aug 5, 2025
Background: Magnetic Resonance Imaging (MRI) is a cornerstone in diagnosing brain tumors. However, the complex nature of these tumors makes accurate segmentation in MRI images a demanding task, and early detection is crucial for improving patient outcomes. Objective: To develop and evaluate a novel UNet-based architecture for improved brain tumor segmentation in MRI images. Methods: This paper presents a novel UNet-based architecture for improved brain tumor segmentation. The architecture incorporates Leaky ReLU activation, batch normalization, and regularization to enhance training and performance. The model consists of varying numbers of layers and kernel sizes to capture different levels of detail. To address class imbalance in medical image segmentation, we employ focal loss and generalized Dice loss (GDL) functions. Results: The proposed model was evaluated on the BraTS 2020 dataset, achieving an accuracy of 99.64% and Dice coefficients of 0.8984, 0.8431, and 0.8824 for the necrotic core, edema, and enhancing tumor regions, respectively. Conclusion: These findings demonstrate the efficacy of our approach in accurately segmenting tumors, which has the potential to enhance diagnostic systems and improve patient outcomes.
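The generalized Dice loss used here for class imbalance builds on the Dice coefficient; a minimal binary soft-Dice sketch (the full GDL adds per-class inverse-volume weighting, omitted in this illustration):

```python
def dice_coefficient(pred, target, eps=1e-6):
    """Soft Dice coefficient for a binary mask pair given as flattened
    lists of probabilities/labels. The Dice *loss* used for training
    is simply 1 - Dice; eps guards against empty masks."""
    inter = sum(p * t for p, t in zip(pred, target))
    denom = sum(pred) + sum(target)
    return (2.0 * inter + eps) / (denom + eps)

# Prediction overlaps the single-voxel target plus one false positive
pred = [1, 1, 0, 0]
target = [1, 0, 0, 0]
print(round(dice_coefficient(pred, target), 4))  # 0.6667
```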
