
Enhanced Detection of Age-Related and Cognitive Declines Using Automated Hippocampal-To-Ventricle Ratio in Alzheimer's Patients.

Fernandez-Lozano S, Fonov V, Schoemaker D, Pruessner J, Potvin O, Duchesne S, Collins DL

PubMed | Aug 1, 2025
The hippocampal-to-ventricle ratio (HVR) is a biomarker of medial temporal atrophy, particularly useful in the assessment of neurodegeneration in diseases such as Alzheimer's disease (AD). To minimize subjectivity and inter-rater variability, an automated, accurate, precise, and reliable segmentation technique for the hippocampus (HC) and surrounding cerebrospinal fluid (CSF)-filled spaces, such as the temporal horns of the lateral ventricles, is essential. We trained and evaluated three automated methods for the segmentation of both HC and CSF: Multi-Atlas Label Fusion (MALF), Nonlinear Patch-Based Segmentation (NLPB), and a Convolutional Neural Network (CNN). We then evaluated these methods, together with the widely used FreeSurfer technique, using baseline T1w MRIs of 1641 participants from the AD Neuroimaging Initiative study with various degrees of atrophy associated with their cognitive status, on the spectrum from cognitively healthy to clinically probable AD. Our gold standard consisted of manual segmentations of HC and CSF from 80 cognitively healthy individuals. We calculated HC volumes and HVR and compared all methods in terms of segmentation reliability, similarity across methods, sensitivity in detecting between-group differences, and associations with age, scores on the learning subtest of the Rey Auditory Verbal Learning Test (RAVLT), and Alzheimer's Disease Assessment Scale 13 (ADAS13) scores. Cross-validation demonstrated that the CNN method yielded more accurate HC and CSF segmentations than MALF and NLPB, with higher volumetric overlap (Dice kappa = 0.94) and correlation (rho = 0.99) with the manual labels. It was also the most reliable method when applied to clinical data, showing minimal failures. Our comparisons yielded high correlations between FreeSurfer, CNN, and NLPB volumetric values. HVR yielded larger control:AD effect sizes than HC volumes for all segmentation methods, reinforcing the value of HVR for clinical distinction. The positive association with age was significantly stronger for HVR than for HC volumes for all methods except FreeSurfer. Memory associations with HC volumes or HVR were significant only for individuals with mild cognitive impairment. Finally, HC volumes and HVR showed comparable negative associations with ADAS13, particularly in the mild cognitive impairment cohort. This study provides an evaluation of automated segmentation methods centered on estimating HVR, emphasizing the superior performance of a CNN-based algorithm. The findings underscore the pivotal role of accurate segmentation in HVR calculation for precise clinical applications, contributing valuable insights into medial temporal lobe atrophy in neurodegenerative disorders, especially AD.
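To make the HVR computation concrete: the abstract does not spell out the exact formula, but a common formulation normalizes hippocampal volume by the combined hippocampus plus temporal-horn CSF volume. A minimal numpy sketch under that assumption (the label values 1 and 2 are hypothetical):

```python
import numpy as np

def hvr(label_map: np.ndarray, voxel_vol_mm3: float,
        hc_label: int = 1, csf_label: int = 2) -> float:
    """Hippocampal-to-ventricle ratio: HC volume normalized by
    HC + temporal-horn CSF volume (one common formulation; the
    paper's exact definition may differ)."""
    hc = (label_map == hc_label).sum() * voxel_vol_mm3
    csf = (label_map == csf_label).sum() * voxel_vol_mm3
    return hc / (hc + csf)

# toy example: 10x10x10 label volume, 1 mm isotropic voxels
labels = np.zeros((10, 10, 10), dtype=int)
labels[2:6, 2:6, 2:6] = 1   # "hippocampus"
labels[6:8, 2:6, 2:6] = 2   # "temporal-horn CSF"
print(hvr(labels, voxel_vol_mm3=1.0))  # ~0.667; larger values = less atrophy
```

Under this definition, ventricular enlargement (more CSF) lowers the ratio, which is why HVR can separate groups more sharply than HC volume alone.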

BEA-CACE: branch-endpoint-aware double-DQN for coronary artery centerline extraction in CT angiography images.

Zhang Y, Luo G, Wang W, Cao S, Dong S, Yu D, Wang X, Wang K

PubMed | Aug 1, 2025
To automate centerline extraction of the coronary tree, three challenges must be addressed: tracking branches automatically, passing through plaques successfully, and detecting endpoints accurately. This study aims to develop a method that solves all three. We propose a branch-endpoint-aware coronary centerline extraction framework consisting of a deep reinforcement learning-based tracker and a 3D dilated CNN-based detector. The tracker predicts the actions of an agent with the objective of tracking the centerline. The detector identifies bifurcation points and endpoints, helping the tracker follow branches and terminate the tracking process automatically; it can also estimate the radius of the coronary artery. The method achieves state-of-the-art performance in both centerline extraction and radius estimation. Furthermore, it requires minimal user interaction to extract a coronary tree, a feature that surpasses other interactive methods. The method tracks branches automatically, passes through plaques successfully, and detects endpoints accurately. Compared with other interactive methods that require multiple seeds, ours needs only one seed to extract the entire coronary tree.
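The tracker/detector split can be pictured as a loop: at each voxel the tracker picks a step direction, and the detector decides whether the current point is a bifurcation (spawn a new branch) or an endpoint (stop). The toy sketch below substitutes stubs for both networks; `predict_action`, `classify_point`, and the 26-neighborhood action set are illustrative assumptions, not the paper's implementation:

```python
import numpy as np
from collections import deque
from itertools import product

# 26-connected unit steps as a discrete action space (an assumption;
# the paper's agent may parameterize actions differently).
ACTIONS = np.array([d for d in product((-1, 0, 1), repeat=3) if any(d)])

def predict_action(volume, pos, visited):
    """Stub for the double-DQN tracker: pick the brightest unvisited
    neighbor. The real agent would output learned Q-values instead."""
    best, best_val = 0, -np.inf
    for i, a in enumerate(ACTIONS):
        nxt = tuple(np.clip(pos + a, 0, np.array(volume.shape) - 1))
        if nxt not in visited and volume[nxt] > best_val:
            best, best_val = i, volume[nxt]
    return best

def classify_point(volume, pos):
    """Stub for the 3D dilated-CNN detector: flags endpoints where
    intensity fades; the real detector also finds bifurcations."""
    return 'endpoint' if volume[tuple(pos)] < 0.1 else 'ordinary'

def extract_tree(volume, seed, max_steps=500):
    """Trace centerlines from one seed; branch at bifurcations,
    stop each branch at a detected endpoint."""
    queue, tree, visited = deque([np.array(seed)]), [], set()
    while queue:
        pos, branch = queue.popleft(), []
        for _ in range(max_steps):
            branch.append(pos.copy())
            visited.add(tuple(pos))
            label = classify_point(volume, pos)
            if label == 'endpoint':
                break
            if label == 'bifurcation':
                queue.append(pos.copy())  # trace the second child later
            pos = np.clip(pos + ACTIONS[predict_action(volume, pos, visited)],
                          0, np.array(volume.shape) - 1)
        tree.append(branch)
    return tree

# toy vessel: a straight, fading line through a 32^3 volume
vol = np.zeros((32, 32, 32))
vol[16, 16, 5:25] = np.linspace(1.0, 0.0, 20)
branches = extract_tree(vol, seed=(16, 16, 5))
print(len(branches), len(branches[0]))  # 1 branch, 19 points
```

The queue is what makes the single-seed property work: every detected bifurcation re-enters the queue, so the whole tree unrolls from one starting point.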

Acute lymphoblastic leukemia diagnosis using machine learning techniques based on selected features.

El Houby EMF

PubMed | Aug 1, 2025
Cancer is considered one of the deadliest diseases worldwide. Early detection can significantly improve patient survival rates. In recent years, computer-aided diagnosis (CAD) systems have been increasingly employed in cancer diagnosis across various medical image modalities. These systems play a critical role in enhancing diagnostic accuracy, reducing physician workload, providing consistent second opinions, and improving the efficiency of medical practice. Acute lymphoblastic leukemia (ALL) is a fast-progressing blood cancer that primarily affects children but can also occur in adults. Early and accurate diagnosis of ALL is crucial for effective treatment and improved outcomes, making it a vital area for CAD system development. In this research, a CAD system for ALL diagnosis was developed. It comprises four phases: preprocessing, segmentation, feature extraction and selection, and classification of suspicious regions as normal or abnormal. The proposed system was applied to microscopic blood images to classify each case as ALL or normal. Three classifiers, Naïve Bayes (NB), Support Vector Machine (SVM), and K-Nearest Neighbor (K-NN), were used to classify the images based on selected features. Ant Colony Optimization (ACO) was combined with the classifiers as a feature selection method to identify, among the features extracted from segmented cell parts, the optimal subset yielding the highest classification accuracy. The NB classifier achieved the best performance, with accuracy, sensitivity, and specificity of 96.15%, 97.56%, and 94.59%, respectively.
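The ACO wrapper idea can be sketched compactly: feature subsets are sampled with probability proportional to a pheromone level, subsets that raise cross-validated accuracy reinforce their features' pheromone, and pheromone evaporates each round. A minimal sketch around GaussianNB on synthetic data; the authors' actual ACO (ant paths, heuristic desirability, evaporation schedule) is richer than this:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=200, n_features=30,
                           n_informative=6, random_state=0)

n_feat, ants, iters, rho = X.shape[1], 10, 20, 0.1
pher = np.ones(n_feat)            # pheromone per feature
best_subset, best_acc = None, 0.0

for _ in range(iters):
    for _ in range(ants):
        # each "ant" samples a subset with prob ~ pheromone level
        p = pher / pher.sum()
        k = rng.integers(3, 10)
        subset = rng.choice(n_feat, size=k, replace=False, p=p)
        acc = cross_val_score(GaussianNB(), X[:, subset], y, cv=5).mean()
        if acc > best_acc:
            best_subset, best_acc = subset, acc
        pher[subset] += acc ** 2   # reinforce features in good subsets
    pher *= (1 - rho)              # evaporation
print(sorted(best_subset), round(best_acc, 3))
```

Because the classifier itself scores each candidate subset, the selection is tailored to the model that will ultimately be deployed, which is the usual argument for wrapper methods like this one.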

Brain Age Prediction: Deep Models Need a Hand to Generalize.

Rajabli R, Soltaninejad M, Fonov VS, Bzdok D, Collins DL

PubMed | Aug 1, 2025
Predicting brain age from T1-weighted MRI is a promising marker for understanding brain aging and its associated conditions. While deep learning models have shown success in reducing the mean absolute error (MAE) of predicted brain age, concerns about robust and accurate generalization to new data limit their clinical applicability. The large number of trainable parameters, combined with limited medical imaging training data, contributes to this challenge, often resulting in a generalization gap: a significant discrepancy between model performance on training data and on unseen data. In this study, we assess a deep model, SFCN-reg, based on the VGG-16 architecture, and address the generalization gap through comprehensive preprocessing, extensive data augmentation, and model regularization. Using training data from the UK Biobank, we demonstrate substantial improvements in model performance. Specifically, our approach reduces the generalization MAE by 47% (from 5.25 to 2.79 years) on the Alzheimer's Disease Neuroimaging Initiative dataset and by 12% (from 4.35 to 3.75 years) on the Australian Imaging, Biomarker and Lifestyle dataset. Furthermore, we achieve up to a 13% reduction in scan-rescan error (from 0.80 to 0.70 years) while enhancing the model's robustness to registration errors. Feature importance maps highlight the anatomical regions used to predict age. These results highlight the critical role of high-quality preprocessing and robust training techniques in improving accuracy and narrowing the generalization gap, both necessary steps toward the clinical use of brain age prediction models. Our study makes valuable contributions to neuroimaging research by offering a potential pathway to improve the clinical applicability of deep learning models.
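The generalization gap the authors target is simply the difference between in-distribution and out-of-distribution MAE. A small sketch of how one might quantify it; the predictions here are synthetic stand-ins, not SFCN-reg outputs:

```python
import numpy as np

def mae(age_true, age_pred):
    """Mean absolute error in years."""
    return np.abs(np.asarray(age_true) - np.asarray(age_pred)).mean()

# stand-in predictions; in the paper these come from SFCN-reg
rng = np.random.default_rng(1)
age_train = rng.uniform(45, 80, 500)
age_test  = rng.uniform(55, 90, 200)               # held-out cohort, e.g. ADNI
pred_train = age_train + rng.normal(0, 2.5, 500)
pred_test  = age_test  + rng.normal(0, 3.8, 200)   # larger error off-domain

gap = mae(age_test, pred_test) - mae(age_train, pred_train)
print(f"train MAE {mae(age_train, pred_train):.2f} y, "
      f"test MAE {mae(age_test, pred_test):.2f} y, gap {gap:.2f} y")
```

Reporting the gap alongside test MAE, as this abstract implicitly does, separates "the model is accurate" from "the model is accurate beyond its training distribution", which is the clinically relevant claim.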

Moving Beyond CT Body Composition Analysis: Using Style Transfer for Bringing CT-Based Fully-Automated Body Composition Analysis to T2-Weighted MRI Sequences.

Haubold J, Pollok OB, Holtkamp M, Salhöfer L, Schmidt CS, Bojahr C, Straus J, Schaarschmidt BM, Borys K, Kohnke J, Wen Y, Opitz M, Umutlu L, Forsting M, Friedrich CM, Nensa F, Hosch R

PubMed | Aug 1, 2025
Deep learning for body composition analysis (BCA) is gaining traction in clinical research, offering rapid and automated ways to measure body features such as muscle or fat volume. However, most current methods prioritize computed tomography (CT) over magnetic resonance imaging (MRI). This study presents a deep learning approach for automatic BCA using MR T2-weighted sequences. Initial BCA segmentations (10 body regions and 4 body parts) were generated by mapping CT segmentations from the body and organ analysis (BOA) model to synthetic MR images created with an in-house-trained CycleGAN. In total, 30 synthetic data pairs were used to train an initial nnU-Net V2 in 3D, and this preliminary model was then applied to segment 120 real T2-weighted MRI sequences from 120 patients (46% female) with a median age of 56 (interquartile range, 17.75), generating early segmentation proposals. These proposals were refined by human annotators, and nnU-Net V2 2D and 3D models were trained using 5-fold cross-validation on this optimized dataset of real MR images. Performance was evaluated using Sørensen-Dice, Surface Dice, and Hausdorff distance metrics, including 95% confidence intervals for cross-validation and ensemble models. The 3D ensemble segmentation model achieved the highest Dice scores for the body region classes: bone 0.926 (95% confidence interval [CI], 0.914-0.937), muscle 0.968 (95% CI, 0.961-0.975), subcutaneous fat 0.98 (95% CI, 0.971-0.986), nervous system 0.973 (95% CI, 0.965-0.98), thoracic cavity 0.978 (95% CI, 0.969-0.984), abdominal cavity 0.989 (95% CI, 0.986-0.991), mediastinum 0.92 (95% CI, 0.901-0.936), pericardium 0.945 (95% CI, 0.924-0.96), brain 0.966 (95% CI, 0.927-0.989), and glands 0.905 (95% CI, 0.886-0.921). Furthermore, the body-part 2D ensemble model reached the highest Dice scores for all labels: arms 0.952 (95% CI, 0.937-0.965), head + neck 0.965 (95% CI, 0.953-0.976), legs 0.978 (95% CI, 0.968-0.988), and torso 0.99 (95% CI, 0.988-0.991). The overall average Dice scores of the ensemble models across body parts (2D = 0.971, 3D = 0.969, P = ns) and body regions (2D = 0.935, 3D = 0.955, P < 0.001) indicate stable performance across all classes. The presented approach enables efficient and automated extraction of BCA parameters from T2-weighted MRI sequences, providing precise and detailed body composition information across various regions and body parts.
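Per-class Dice scores with 95% CIs of the kind reported above are often produced by bootstrapping case-level Dice values; the percentile bootstrap below is an assumption, since the paper's exact CI procedure is not restated here, and the per-case scores are simulated:

```python
import numpy as np

def bootstrap_ci(per_case_scores, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap CI for the mean of per-case Dice scores."""
    rng = np.random.default_rng(seed)
    scores = np.asarray(per_case_scores)
    means = [rng.choice(scores, size=scores.size, replace=True).mean()
             for _ in range(n_boot)]
    lo, hi = np.quantile(means, [alpha / 2, 1 - alpha / 2])
    return scores.mean(), lo, hi

# simulated per-patient Dice values for one class ("muscle", 120 cases)
muscle = np.random.default_rng(2).normal(0.968, 0.015, 120).clip(0, 1)
print("muscle Dice %.3f (95%% CI %.3f-%.3f)" % bootstrap_ci(muscle))
```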

Effect of spatial resolution on the diagnostic performance of machine-learning radiomics model in lung adenocarcinoma: comparisons between normal- and high-spatial-resolution imaging for predicting invasiveness.

Yanagawa M, Nagatani Y, Hata A, Sumikawa H, Moriya H, Iwano S, Tsuchiya N, Iwasawa T, Ohno Y, Tomiyama N

PubMed | Jul 31, 2025
To construct two machine-learning radiomics (MLR) models for invasive adenocarcinoma (IVA) prediction using normal-spatial-resolution (NSR) and high-spatial-resolution (HSR) training cohorts, and to validate the models (model-NSR and model-HSR) in a separate test cohort while comparing the performance of two independent radiologists (R1, R2) with and without model-HSR. In this retrospective multicenter study, all CT images were reconstructed using NSR data (512 matrix, 0.5-mm thickness) and HSR data (2048 matrix, 0.25-mm thickness). Nodules were divided into training (n = 61 non-IVA, n = 165 IVA) and test sets (n = 36 non-IVA, n = 203 IVA). Two MLR models were developed using random forest, with 18 significant factors selected for the NSR model and 19 for the HSR model from 172 radiomics features. The area under the receiver operating characteristic curve (AUC) was analyzed using DeLong's test in the test set. Accuracy (acc), sensitivity (sen), and specificity (spc) of R1 and R2 with and without model-HSR were compared using the McNemar test. A total of 437 patients (70 ± 9 years, 203 men) had 465 nodules (n = 368 IVA). Model-HSR AUCs were significantly higher than model-NSR AUCs in the training (0.839 vs. 0.723) and test sets (0.863 vs. 0.718) (p < 0.05). R1's acc (87.2%) and sen (93.1%) with model-HSR were significantly higher than without it (77.0% and 79.3%) (p < 0.0001). R2's acc (83.7%) and sen (86.7%) with model-HSR were equal to or higher than without it (83.7% and 85.7%, respectively), but the differences were not significant (p > 0.50). Spc of R1 (52.8%) and R2 (66.7%) with model-HSR was lower than without it (63.9% and 72.2%, respectively), but the differences were not significant (p > 0.21). The HSR-based MLR model significantly increased diagnostic performance for IVA compared with the NSR-based model, supporting radiologists without compromising accuracy and sensitivity. However, this benefit came at the cost of reduced specificity, potentially increasing false positives, which may lead to unnecessary examinations or overtreatment in clinical settings.
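Operationally, the NSR-vs-HSR comparison amounts to training the same random-forest pipeline on two feature sets and comparing test AUCs. A sketch with placeholder feature matrices (DeLong's test is not in scikit-learn; it is typically taken from R's pROC or a standalone implementation, so only AUC point estimates are shown):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
y_tr = np.r_[np.zeros(61), np.ones(165)].astype(int)   # training labels
y_te = np.r_[np.zeros(36), np.ones(203)].astype(int)   # test labels

def make_features(y, n_feat, signal):
    """Placeholder radiomics matrix: informative mean shift + noise."""
    return rng.normal(0, 1, (y.size, n_feat)) + signal * y[:, None]

# 18 selected NSR features vs. 19 HSR features (HSR = stronger signal here)
X_tr_nsr, X_te_nsr = make_features(y_tr, 18, 0.25), make_features(y_te, 18, 0.25)
X_tr_hsr, X_te_hsr = make_features(y_tr, 19, 0.45), make_features(y_te, 19, 0.45)

for name, (Xtr, Xte) in {"NSR": (X_tr_nsr, X_te_nsr),
                         "HSR": (X_tr_hsr, X_te_hsr)}.items():
    rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(Xtr, y_tr)
    auc = roc_auc_score(y_te, rf.predict_proba(Xte)[:, 1])
    print(f"{name} model test AUC: {auc:.3f}")
```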

A Trust-Guided Approach to MR Image Reconstruction with Side Information.

Atalik A, Chopra S, Sodickson DK

PubMed | Jul 31, 2025
Reducing MRI scan times can improve patient care and lower healthcare costs. Many acceleration methods are designed to reconstruct diagnostic-quality images from sparse k-space data, via an ill-posed or ill-conditioned linear inverse problem (LIP). To address the resulting ambiguities, it is crucial to incorporate prior knowledge into the optimization problem, e.g., in the form of regularization. Another form of prior knowledge less commonly used in medical imaging is the readily available auxiliary data (a.k.a. side information) obtained from sources other than the current acquisition. In this paper, we present the Trust-Guided Variational Network (TGVN), an end-to-end deep learning framework that effectively and reliably integrates side information into LIPs. We demonstrate its effectiveness in multi-coil, multi-contrast MRI reconstruction, where incomplete or low-SNR measurements from one contrast are used as side information to reconstruct high-quality images of another contrast from heavily under-sampled data. TGVN is robust across different contrasts, anatomies, and field strengths. Compared to baselines utilizing side information, TGVN achieves superior image quality while preserving subtle pathological features even at challenging acceleration levels, drastically speeding up acquisition while minimizing hallucinations. Source code and dataset splits are available on github.com/sodicksonlab/TGVN.
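TGVN itself is an end-to-end network, but the role side information plays in a linear inverse problem can be illustrated with a classical regularized least-squares toy: reconstruct x from undersampled measurements y = A x + noise, penalizing disagreement with a side signal x_side (e.g., another contrast). This is a conceptual stand-in, not the TGVN architecture:

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 128, 48                      # signal length, # of measurements (m < n)
x_true = np.convolve(rng.normal(0, 1, n), np.ones(9) / 9, mode="same")
x_side = x_true + rng.normal(0, 0.05, n)   # "other contrast" side information

A = rng.normal(0, 1 / np.sqrt(m), (m, n))  # toy undersampled forward operator
y = A @ x_true + rng.normal(0, 0.01, m)

# minimize ||A x - y||^2 + lam * ||x - x_side||^2  (closed-form solution)
lam = 0.1
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y + lam * x_side)
x_naive = np.linalg.lstsq(A, y, rcond=None)[0]   # no side information

err = lambda x: np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(f"relative error: naive {err(x_naive):.3f}, with side info {err(x_hat):.3f}")
```

With m < n the problem is underdetermined, so the unregularized solution is ambiguous; the side-information penalty resolves that ambiguity, which is the same intuition TGVN builds into a learned reconstruction.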

SAM-Med3D: A Vision Foundation Model for General-Purpose Segmentation on Volumetric Medical Images.

Wang H, Guo S, Ye J, Deng Z, Cheng J, Li T, Chen J, Su Y, Huang Z, Shen Y, Fu B, Zhang S, He J

PubMed | Jul 31, 2025
Existing volumetric medical image segmentation models are typically task-specific, excelling at specific targets but struggling to generalize across anatomical structures or modalities. This limitation restricts their broader clinical use. In this article, we introduce segment anything model (SAM)-Med3D, a vision foundation model (VFM) for general-purpose segmentation on volumetric medical images. Given only a few 3-D prompt points, SAM-Med3D can accurately segment diverse anatomical structures and lesions across various modalities. To achieve this, we gather and preprocess a large-scale 3-D medical image segmentation dataset, SA-Med3D-140K, from 70 public datasets and 8K licensed private cases from hospitals. This dataset includes 22K 3-D images and 143K corresponding masks. SAM-Med3D, a promptable segmentation model characterized by its fully learnable 3-D structure, is trained on this dataset using a two-stage procedure and exhibits impressive performance on both seen and unseen segmentation targets. We comprehensively evaluate SAM-Med3D on 16 datasets covering diverse medical scenarios, including different anatomical structures, modalities, targets, and zero-shot transferability to new/unseen tasks. The evaluation demonstrates the efficiency and efficacy of SAM-Med3D, as well as its promising application to diverse downstream tasks as a pretrained model. Our approach illustrates that substantial medical resources can be harnessed to develop a general-purpose medical AI for various potential applications. Our dataset, code, and models are available at: https://github.com/uni-medical/SAM-Med3D.
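SAM-Med3D is driven by a few 3-D point prompts; in evaluation pipelines such prompts are typically simulated by sampling voxels from the reference mask. A sketch of that sampling step (the model call is omitted; `segment_with_points` is a hypothetical wrapper, and the sampling strategy is an assumption rather than the paper's exact protocol):

```python
import numpy as np

def sample_point_prompts(mask: np.ndarray, n_points: int = 3, seed: int = 0):
    """Sample n foreground voxels from a binary 3-D mask to use as
    positive point prompts."""
    rng = np.random.default_rng(seed)
    fg = np.argwhere(mask > 0)                      # (z, y, x) coordinates
    idx = rng.choice(fg.shape[0], size=min(n_points, fg.shape[0]),
                     replace=False)
    return fg[idx]                                  # (n_points, 3) array

mask = np.zeros((64, 64, 64), dtype=np.uint8)
mask[20:40, 25:45, 30:50] = 1                       # toy lesion
points = sample_point_prompts(mask, n_points=3)
print(points)
# pred = segment_with_points(image, points)        # hypothetical model call
```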

Enhanced Detection, Using Deep Learning Technology, of Medial Meniscal Posterior Horn Ramp Lesions in Patients with ACL Injury.

Park HJ, Ham S, Shim E, Suh DH, Kim JG

PubMed | Jul 31, 2025
Meniscal ramp lesions can impact knee stability, particularly when associated with anterior cruciate ligament (ACL) injuries. Although magnetic resonance imaging (MRI) is the primary diagnostic tool, its diagnostic accuracy remains suboptimal. We aimed to determine whether deep learning technology could enhance MRI-based ramp lesion detection. We reviewed the records of 236 patients who underwent arthroscopic procedures documenting ACL injuries and the status of the medial meniscal posterior horn. A deep learning model for ramp lesion detection was developed using MRI data. Ramp lesion risk factors among patients who underwent ACL reconstruction were analyzed using logistic regression, extreme gradient boosting (XGBoost), and random forest models, and were integrated into a final prediction model built on the Swin Transformer Large architecture. The deep learning model using MRI data demonstrated superior overall diagnostic performance to the clinicians' assessment (accuracy of 73.3% compared with 68.1%, specificity of 78.0% compared with 62.9%, and sensitivity of 64.7% compared with 76.4%). Incorporating risk factors (age, posteromedial tibial bone marrow edema, and lateral meniscal tears) improved the model's accuracy to 80.7%, with a sensitivity of 81.8% and a specificity of 80.9%. Integrating deep learning with MRI data and risk factors significantly enhanced diagnostic accuracy for ramp lesions, surpassing that of the model using MRI alone and that of clinicians. This study highlights the potential of artificial intelligence to provide clinicians with more accurate diagnostic tools for detecting ramp lesions, potentially enhancing treatment and patient outcomes. Diagnostic Level III. See Instructions for Authors for a complete description of levels of evidence.
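Combining an image model with tabular risk factors can be as simple as stacking the network's output probability with the clinical variables in a second-stage classifier. A late-fusion sketch on simulated data using logistic regression; the paper's exact fusion with the Swin Transformer is not restated here, so this scheme is an assumption:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n = 236
y = rng.integers(0, 2, n)                       # ramp lesion present / absent

# simulated inputs: CNN probability plus the three reported risk factors
cnn_prob = np.clip(0.5 + 0.25 * (2 * y - 1) + rng.normal(0, 0.2, n), 0, 1)
age      = rng.normal(28, 8, n)                             # years
pm_edema = (rng.random(n) < 0.2 + 0.4 * y).astype(float)    # posteromedial edema
lat_tear = (rng.random(n) < 0.15 + 0.3 * y).astype(float)   # lateral meniscal tear

X_img  = cnn_prob[:, None]
X_full = np.column_stack([cnn_prob, age, pm_edema, lat_tear])

for name, X in [("image only", X_img), ("image + risk factors", X_full)]:
    acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
    print(f"{name}: CV accuracy {acc:.3f}")
```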

Effectiveness of Radiomics-Based Machine Learning Models in Differentiating Pancreatitis and Pancreatic Ductal Adenocarcinoma: Systematic Review and Meta-Analysis.

Zhang L, Li D, Su T, Xiao T, Zhao S

PubMed | Jul 31, 2025
Pancreatic ductal adenocarcinoma (PDAC) and mass-forming pancreatitis (MFP) share similar clinical, laboratory, and imaging features, making accurate diagnosis challenging. Nevertheless, PDAC is highly malignant with a poor prognosis, whereas MFP is an inflammatory condition that typically responds well to medical or interventional therapies. Some investigators have explored radiomics-based machine learning (ML) models for distinguishing PDAC from MFP. However, systematic evidence supporting the feasibility of these models is insufficient, presenting a notable challenge for clinical application. This study aimed to review the diagnostic performance of radiomics-based ML models in differentiating PDAC from MFP, summarize the methodological quality of the included studies, and provide evidence-based guidance for optimizing radiomics-based ML models and advancing their clinical use. PubMed, Embase, Cochrane, and Web of Science were searched for relevant studies up to June 29, 2024. Eligible studies comprised English-language cohort, case-control, or cross-sectional designs that applied fully developed radiomics-based ML models (including traditional and deep radiomics) to differentiate PDAC from MFP and reported their diagnostic performance. Studies without full text, limited to image segmentation, or with insufficient outcome metrics were excluded. Methodological quality was appraised using the radiomics quality score (RQS). Given the limited applicability of QUADAS-2 to radiomics-based ML studies, the risk of bias was not formally assessed. Pooled sensitivity, specificity, area under the summary receiver operating characteristic (SROC) curve, likelihood ratios, and diagnostic odds ratio were estimated using a bivariate mixed-effects model. Results were presented with forest plots, SROC curves, and Fagan's nomogram. Subgroup analysis was performed to appraise the diagnostic performance of radiomics-based ML models across imaging modalities, including computed tomography (CT), magnetic resonance imaging, positron emission tomography-CT, and endoscopic ultrasound. This meta-analysis included 24 studies with 14,406 cases, of which 7635 were PDAC cases. All studies adopted a case-control design, with 5 conducted across multiple centers. Most studies used CT as the primary imaging modality. RQS values ranged from 5 points (14%) to 17 points (47%), with an average of 9 (25%). The radiomics-based ML models demonstrated high diagnostic performance. Based on the independent validation sets, the pooled sensitivity, specificity, area under the SROC curve, positive likelihood ratio, negative likelihood ratio, and diagnostic odds ratio were 0.92 (95% CI 0.91-0.94), 0.90 (95% CI 0.85-0.94), 0.94 (95% CI 0.74-0.99), 9.3 (95% CI 6.0-14.2), 0.08 (95% CI 0.07-0.11), and 110 (95% CI 62-194), respectively. Radiomics-based ML models demonstrate high diagnostic accuracy in differentiating PDAC from MFP, underscoring their potential as noninvasive tools for clinical decision-making. Nonetheless, the overall methodological quality was moderate owing to limitations in external validation, standardized protocols, and reproducibility. These findings support the promise of radiomics in clinical diagnostics while highlighting the need for more rigorous, multicenter research to enhance model generalizability and clinical applicability.
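The pooled likelihood ratios translate directly into post-test probabilities via Bayes' rule, which is all a Fagan nomogram encodes: post-test odds = pre-test odds × LR. A small sketch using the pooled estimates reported above:

```python
def post_test_prob(pretest_prob: float, lr: float) -> float:
    """Convert a pre-test probability through a likelihood ratio
    (the computation a Fagan nomogram performs graphically)."""
    odds = pretest_prob / (1 - pretest_prob)
    post_odds = odds * lr
    return post_odds / (1 + post_odds)

LR_POS, LR_NEG = 9.3, 0.08     # pooled LR+ and LR- from the meta-analysis
for prev in (0.30, 0.50, 0.70):
    print(f"pre-test {prev:.0%}: "
          f"positive test -> {post_test_prob(prev, LR_POS):.1%}, "
          f"negative test -> {post_test_prob(prev, LR_NEG):.1%}")
```

At a 50% pre-test probability, a positive radiomics result raises the probability of PDAC to about 90%, while a negative result lowers it to about 7%, matching the high pooled sensitivity and specificity.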
