
Automated ventricular segmentation in pediatric hydrocephalus: how close are we?

Taha BR, Luo G, Naik A, Sabal L, Sun J, McGovern RA, Sandoval-Garcia C, Guillaume DJ

PubMed · May 23, 2025
The explosive growth of available high-quality imaging data, coupled with new progress in hardware capabilities, has enabled a new era of unprecedented performance in brain segmentation tasks. Despite the volume of new data released by consortia and groups around the world, most published, closed, or openly available segmentation models have either a limited or an unknown role in pediatric brains. This study explores the utility of state-of-the-art automated ventricular segmentation tools applied to pediatric hydrocephalus. Two popular, fast, whole-brain segmentation tools (FastSurfer and QuickNAT) were used to automatically segment the lateral ventricles and evaluate their accuracy in children with hydrocephalus. Forty scans from 32 patients were included in this study. The patients underwent imaging at the University of Minnesota Medical Center or satellite clinics, were between 0 and 18 years old, had an ICD-10 diagnosis that included the word hydrocephalus, and had at least one T1-weighted pre- or postcontrast MPRAGE sequence. Patients with poor-quality scans were excluded. Dice similarity coefficient (DSC) scores were used to compare segmentation outputs against manually segmented lateral ventricles. Overall, both models performed poorly, each with a DSC of 0.61. No statistically significant difference was noted between the models' performance (p = 0.86). In a multivariate linear regression examining factors associated with DSC performance, male gender (p = 0.66), presence of a ventricular catheter (p = 0.72), and MRI magnet strength (p = 0.23) were not statistically significant, whereas younger age (p = 0.03) and larger ventricular volumes (p = 0.01) were significantly associated with lower DSC values. A large-scale visualization of 196 scans in both models showed characteristic patterns of segmentation failure in larger ventricles. Significant gaps exist in current cutting-edge segmentation models when applied to pediatric hydrocephalus. Researchers will need to address these performance gaps through thoughtful consideration of their training data before reaching the ultimate goal of clinical deployment.
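For reference, the study's accuracy metric, the Dice similarity coefficient, is straightforward to compute from a predicted and a manually traced binary mask. Below is a minimal NumPy sketch using toy 3D masks (hypothetical data, not the study's scans).

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary segmentation masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / denom

# Toy 3D masks standing in for automated vs. manual lateral-ventricle segmentations
pred = np.zeros((64, 64, 64), dtype=np.uint8)
truth = np.zeros((64, 64, 64), dtype=np.uint8)
pred[20:40, 20:40, 20:40] = 1
truth[22:42, 20:40, 20:40] = 1
print(f"DSC = {dice_coefficient(pred, truth):.3f}")
```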

Deep Learning and Radiomic Signatures Associated with Tumor Immune Heterogeneity Predict Microvascular Invasion in Colon Cancer.

Jia J, Wang J, Zhang Y, Bai G, Han L, Niu Y

PubMed · May 23, 2025
This study aims to develop and validate a deep learning radiomics signature (DLRS) that integrates radiomics and deep learning features for the non-invasive prediction of microvascular invasion (MVI) in patients with colon cancer (CC). Furthermore, the study explores the potential association between DLRS and tumor immune heterogeneity. This study is a multi-center retrospective study that included a total of 1007 patients with colon cancer (CC) from three medical centers and The Cancer Genome Atlas (TCGA-COAD) database. Patients from Medical Centers 1 and 2 were divided into a training cohort (n = 592) and an internal validation cohort (n = 255) in a 7:3 ratio. Medical Center 3 (n = 135) and the TCGA-COAD database (n = 25) were used as external validation cohorts. Radiomics and deep learning features were extracted from contrast-enhanced venous-phase CT images. Feature selection was performed using machine learning algorithms, and three predictive models were developed: a radiomics model, a deep learning (DL) model, and a combined deep learning radiomics (DLR) model. The predictive performance of each model was evaluated using multiple metrics, including the area under the curve (AUC), sensitivity, and specificity. Additionally, differential gene expression analysis was conducted on RNA-seq data from the TCGA-COAD dataset to explore the association between the DLRS and tumor immune heterogeneity within the tumor microenvironment. Compared to the standalone radiomics and deep learning models, DLR fusion model demonstrated superior predictive performance. The AUC for the internal validation cohort was 0.883 (95% CI: 0.828-0.937), while the AUC for the external validation cohort reached 0.855 (95% CI: 0.775-0.935). Furthermore, stratifying patients from the TCGA-COAD dataset into high-risk and low-risk groups based on the DLRS revealed significant differences in immune cell infiltration and immune checkpoint expression between the two groups (P < 0.05). The contrast-enhanced CT-based DLR fusion model developed in this study effectively predicts the MVI status in patients with CC. This model serves as a non-invasive preoperative assessment tool and reveals a potential association between the DLRS and immune heterogeneity within the tumor microenvironment, providing insights to optimize individualized treatment strategies.
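As an illustration of the fusion idea behind a DLR-type model, the sketch below concatenates a synthetic radiomics feature matrix with a synthetic deep learning embedding and evaluates a simple logistic classifier by AUC. The data, feature counts, and classifier are illustrative assumptions, not the authors' feature-selection or modeling pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical feature matrices: rows = patients, columns = features
rng = np.random.default_rng(0)
n = 300
radiomics_feats = rng.normal(size=(n, 50))   # handcrafted radiomics features
dl_feats = rng.normal(size=(n, 128))         # deep learning embedding
y = rng.integers(0, 2, size=n)               # MVI status (0 = absent, 1 = present)

# Fuse the two feature sets by simple concatenation
X = np.hstack([radiomics_feats, dl_feats])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=42)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_tr, y_tr)
probs = model.predict_proba(X_te)[:, 1]
# With random synthetic labels the AUC will hover near 0.5; the point is the workflow, not the number
print(f"Fusion model AUC: {roc_auc_score(y_te, probs):.3f}")
```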

Renal Transplant Survival Prediction From Unsupervised Deep Learning-Based Radiomics on Early Dynamic Contrast-Enhanced MRI.

Milecki L, Bodard S, Kalogeiton V, Poinard F, Tissier AM, Boudhabhay I, Correas JM, Anglicheau D, Vakalopoulou M, Timsit MO

PubMed · May 23, 2025
End-stage renal disease is characterized by an irreversible decline in kidney function. Despite the risk of chronic dysfunction of the transplanted kidney, renal transplantation is considered the most effective solution among available treatment options. Clinical attributes for graft survival prediction, such as allocation variables or results of pathological examinations, have been widely studied. Nevertheless, medical imaging is used clinically only to assess current transplant status. This study investigated the use of unsupervised deep learning-based algorithms to identify rich radiomic features that may be linked to graft survival from early dynamic contrast-enhanced magnetic resonance imaging data of renal transplants. A retrospective cohort of 108 transplanted patients (mean age 50 ± 15 years; 67 men) undergoing systematic magnetic resonance imaging follow-up examinations (2013 to 2015) was used to train deep convolutional neural network models based on an unsupervised contrastive learning approach. Five-year graft survival analysis was performed on the obtained artificial intelligence radiomics features using penalized Cox models and Kaplan-Meier estimates. Using a validation set of 48 patients (mean age 54 ± 13 years; 30 men) with 1-month post-transplantation magnetic resonance imaging examinations, the proposed approach demonstrated promising 5-year graft survival prediction, with a concordance index of 72.7% from the artificial intelligence radiomics features. Unsupervised clustering of these radiomics features enabled statistically significant stratification of patients (p = 0.029). This proof-of-concept study demonstrated the promising capability of artificial intelligence algorithms to extract relevant radiomics features that enable renal transplant survival prediction. Further studies are needed to demonstrate the robustness of this technique and to identify appropriate procedures for integrating such an approach into multimodal and clinical settings.
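A minimal sketch of the survival-analysis step described here (penalized Cox modeling and Kaplan-Meier stratification over learned radiomics features) is shown below using the lifelines library on entirely synthetic data. Column names, the number of features, and the penalizer value are illustrative assumptions, not the authors' settings.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter

# Hypothetical radiomics features and graft survival data for 108 patients
rng = np.random.default_rng(1)
n = 108
feat_cols = [f"feat_{i}" for i in range(8)]
df = pd.DataFrame(rng.normal(size=(n, 8)), columns=feat_cols)
df["time_months"] = rng.uniform(1, 60, size=n)   # follow-up time
df["graft_loss"] = rng.integers(0, 2, size=n)    # event indicator

# Penalized Cox model on the learned radiomics features
cph = CoxPHFitter(penalizer=0.1)
cph.fit(df, duration_col="time_months", event_col="graft_loss")
print(f"Concordance index: {cph.concordance_index_:.3f}")

# Kaplan-Meier estimates stratified by the median predicted risk
risk = cph.predict_partial_hazard(df[feat_cols])
high = risk > risk.median()
for label, mask in [("high risk", high), ("low risk", ~high)]:
    km = KaplanMeierFitter()
    km.fit(df.loc[mask, "time_months"], df.loc[mask, "graft_loss"], label=label)
    print(label, "median survival:", km.median_survival_time_)
```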

Meta-analysis of AI-based pulmonary embolism detection: How reliable are deep learning models?

Lanza E, Ammirabile A, Francone M

PubMed · May 23, 2025
Deep learning (DL)-based methods show promise in detecting pulmonary embolism (PE) on CT pulmonary angiography (CTPA), potentially improving diagnostic accuracy and workflow efficiency. This meta-analysis aimed to (1) determine pooled performance estimates of DL algorithms for PE detection; and (2) compare the diagnostic efficacy of convolutional neural network (CNN)- versus U-Net-based architectures. Following PRISMA guidelines, we searched PubMed and EMBASE through April 15, 2025 for English-language studies (2010-2025) reporting DL models for PE detection with extractable 2 × 2 data or performance metrics. True/false positives and negatives were reconstructed when necessary under an assumed 50% PE prevalence (with 0.5 continuity correction). We approximated AUROC as the mean of sensitivity and specificity if not directly reported. Sensitivity, specificity, accuracy, PPV and NPV were pooled using a DerSimonian-Laird random-effects model with Freeman-Tukey transformation; AUROC values were combined via a fixed-effect inverse-variance approach. Heterogeneity was assessed by Cochran's Q and I². Subgroup analyses contrasted CNN versus U-Net models. Twenty-four studies (n = 22,984 patients) met inclusion criteria. Pooled estimates were: AUROC 0.895 (95% CI: 0.874-0.917), sensitivity 0.894 (0.856-0.923), specificity 0.871 (0.831-0.903), accuracy 0.857 (0.833-0.882), PPV 0.832 (0.794-0.869) and NPV 0.902 (0.874-0.929). Between-study heterogeneity was high (I² ≈ 97% for sensitivity/specificity). U-Net models exhibited higher sensitivity (0.899 vs 0.893) and CNN models higher specificity (0.926 vs 0.900); subgroup Q-tests confirmed significant differences for both sensitivity (p = 0.0002) and specificity (p < 0.001). DL algorithms demonstrate high diagnostic accuracy for PE detection on CTPA, with complementary strengths: U-Net architectures excel in true-positive identification, whereas CNNs yield fewer false positives. However, marked heterogeneity underscores the need for standardized, prospective validation before routine clinical implementation.
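The pooling procedure described here can be sketched in a few lines: transform each study's proportion with the Freeman-Tukey double arcsine, pool with DerSimonian-Laird random-effects weights, and back-transform. The example below uses hypothetical per-study counts and the simple sin² back-transform rather than an exact inverse, so it illustrates the method rather than reproducing the meta-analysis.

```python
import numpy as np

def freeman_tukey(x, n):
    """Freeman-Tukey double-arcsine transform of a proportion x/n and its variance."""
    t = 0.5 * (np.arcsin(np.sqrt(x / (n + 1))) + np.arcsin(np.sqrt((x + 1) / (n + 1))))
    var = 1.0 / (4 * n + 2)
    return t, var

def dersimonian_laird(effects, variances):
    """Random-effects pooling returning pooled effect, tau^2, Cochran's Q, and I^2 (%)."""
    w = 1.0 / variances
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed) ** 2)
    k = len(effects)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)
    w_star = 1.0 / (variances + tau2)
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0
    return pooled, tau2, q, i2

# Hypothetical per-study true positives and PE-positive counts (not the meta-analysis data)
tp = np.array([90, 45, 120, 60])
n_pos = np.array([100, 50, 140, 70])

t, v = freeman_tukey(tp, n_pos)
pooled_t, tau2, q, i2 = dersimonian_laird(t, v)
pooled_sens = np.sin(pooled_t) ** 2  # simple back-transform (approximation)
print(f"Pooled sensitivity ≈ {pooled_sens:.3f}, I² ≈ {i2:.1f}%")
```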

A general survey on medical image super-resolution via deep learning.

Yu M, Xu Z, Lukasiewicz T

PubMed · May 23, 2025
Medical image super-resolution (SR) is a classic regression task in low-level vision. Constrained by hardware, acquisition time, low radiation dose, and other factors, the spatial resolution of some medical images is insufficient. To address this problem, many different SR methods have been proposed. In recent years especially, medical image SR networks based on deep learning have been vigorously developed. This survey provides a modular and detailed introduction to the key components of deep learning-based medical image SR technology. In this paper, we first introduce the background concepts of deep learning and the medical image SR task. Subsequently, we present a comprehensive analysis of the key components of medical image SR networks from the perspectives of effective architecture, upsampling module, learning strategy, and image quality assessment. Furthermore, we focus on the urgent problems that remain to be addressed in deep learning-based medical image SR. Finally, we summarize the trends and challenges of future development.
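As one concrete example of the upsampling modules such surveys catalogue, the sketch below implements a sub-pixel convolution (PixelShuffle) block in PyTorch, a common choice in deep learning-based SR networks. The layer sizes and input shape are arbitrary assumptions.

```python
import torch
import torch.nn as nn

class SubPixelUpsampler(nn.Module):
    """Sub-pixel convolution upsampling block commonly used in SR networks."""
    def __init__(self, in_channels: int, scale: int):
        super().__init__()
        # Expand channels by scale**2, then rearrange channels into spatial resolution
        self.conv = nn.Conv2d(in_channels, in_channels * scale ** 2, kernel_size=3, padding=1)
        self.shuffle = nn.PixelShuffle(scale)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.shuffle(self.conv(x)))

# A low-resolution feature map (batch, channels, H, W) upsampled by a factor of 2
lr_feat = torch.randn(1, 64, 32, 32)
up = SubPixelUpsampler(in_channels=64, scale=2)
print(up(lr_feat).shape)  # torch.Size([1, 64, 64, 64])
```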

Validation and comparison of three different methods for automated identification of distal femoral landmarks in 3D.

Berger L, Brößner P, Ehreiser S, Tokunaga K, Okamoto M, Radermacher K

PubMed · May 23, 2025
Identification of bony landmarks in medical images is of high importance for 3D planning in orthopaedic surgery. Automated landmark identification has the potential to optimize clinical routines and allows for the scientific analysis of large databases. To the authors' knowledge, no direct comparison of different methods for automated landmark detection on the same dataset has been published to date. We compared three methods for automated femoral landmark identification: an artificial neural network, a statistical shape model (SSM), and a geometric approach. All methods were compared against manual measurements of two raters on the task of identifying six femoral landmarks on CT data or derived surface models of 202 femora. The accuracy of the methods was in the range of the manual measurements and comparable to that reported in previous studies. The geometric approach showed a significantly higher average deviation from the manually selected reference landmarks, while there was no statistically significant difference for the neural network and the SSM. All fully automated methods show potential for use, depending on the use case. Characteristics of the different methods, such as the required input data (raw CT or segmented bone surface models), the amount of training data required, and the methods' robustness, can guide method selection for the individual application.
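The evaluation described here reduces to measuring Euclidean deviations between automatically identified and manually selected landmarks. A minimal sketch with hypothetical coordinates (not the study's data) is shown below.

```python
import numpy as np

def landmark_deviation(pred: np.ndarray, ref: np.ndarray) -> np.ndarray:
    """Per-landmark Euclidean deviation (mm) between predicted and reference 3D points."""
    return np.linalg.norm(pred - ref, axis=1)

# Six hypothetical femoral landmarks in mm (predicted vs. manually selected reference)
ref = np.array([[10.0, 5.0, 2.0], [30.0, 8.0, 1.5], [15.0, 40.0, 3.0],
                [22.0, 18.0, 9.0], [5.0, 25.0, 7.0], [28.0, 33.0, 4.0]])
pred = ref + np.random.default_rng(0).normal(scale=1.5, size=ref.shape)

dev = landmark_deviation(pred, ref)
print("Per-landmark deviation (mm):", np.round(dev, 2))
print(f"Mean deviation: {dev.mean():.2f} mm")
```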

AI in Action: A Roadmap from the Radiology AI Council for Effective Model Evaluation and Deployment.

Trivedi H, Khosravi B, Gichoya J, Benson L, Dyckman D, Galt J, Howard B, Kikano E, Kunjummen J, Lall N, Li X, Patel S, Safdar N, Salastekar N, Segovis C, van Assen M, Harri P

PubMed · May 23, 2025
As the integration of artificial intelligence (AI) into radiology workflows continues to evolve, establishing standardized processes for the evaluation and deployment of AI models is crucial to ensure success. This paper outlines the creation of a Radiology AI Council at a large academic center and the subsequent development of a framework, in the form of a rubric, to formalize the evaluation of radiology AI models and onboard them into clinical workflows. The rubric aims to address the challenges faced during the deployment of AI models, such as real-world model performance, workflow implementation, resource allocation, return on investment (ROI), and impact on the broader health system. Using this comprehensive rubric, the council aims to ensure that the process for selecting AI models is both standardized and transparent. This paper outlines the steps taken to establish the rubric, its components, and initial results from the evaluation of 13 models over an 8-month period. We emphasize the importance of holistic model evaluation beyond performance metrics, and of transparency and objectivity in AI model evaluation, with the goal of improving the efficacy and safety of AI models in radiology.

MRI-based habitat analysis for intratumoral heterogeneity quantification combined with deep learning for HER2 status prediction in breast cancer.

Li QY, Liang Y, Zhang L, Li JH, Wang BJ, Wang CF

PubMed · May 23, 2025
Human epidermal growth factor receptor 2 (HER2) is a crucial determinant of breast cancer prognosis and treatment options. The study aimed to establish an MRI-based habitat model to quantify intratumoral heterogeneity (ITH) and evaluate its potential for predicting HER2 expression status. Data from 340 patients with pathologically confirmed invasive breast cancer were retrospectively analyzed. Two tasks were designed for this study: Task 1 distinguished HER2-positive from HER2-negative breast cancer, and Task 2 distinguished HER2-low from HER2-zero breast cancer. We developed the ITH, deep learning (DL), and radiomics signatures based on features extracted from dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). Independent clinical predictors were determined by multivariable logistic regression. Finally, a combined model was constructed by integrating the independent clinical predictors, the ITH signature, and the DL signature. The area under the receiver operating characteristic curve (AUC) served as the standard for assessing model performance. In Task 1, the ITH signature performed well in the training set (AUC = 0.855) and the validation set (AUC = 0.842). In Task 2, the AUCs of the ITH signature were 0.844 and 0.840, respectively, again showing good predictive performance. In the validation sets of both tasks, the combined model exhibited the best predictive performance, with AUCs of 0.912 and 0.917, respectively, making it the optimal model. A combined model integrating independent clinical predictors, the ITH signature, and the DL signature can predict HER2 expression status preoperatively and noninvasively.
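Habitat analysis is typically operationalized by clustering voxel-wise enhancement features within the tumor and summarizing the resulting subregions. The sketch below shows one common variant (k-means habitats plus a Shannon-entropy heterogeneity index) on synthetic voxel features; it is not the authors' ITH signature pipeline, and the feature columns and cluster count are assumptions.

```python
import numpy as np
from scipy.stats import entropy
from sklearn.cluster import KMeans

# Hypothetical voxel-wise DCE-MRI features within one tumor ROI
# (e.g., early, peak, and late enhancement values per voxel)
rng = np.random.default_rng(0)
voxel_feats = rng.normal(size=(5000, 3))

# Partition the tumor into k intensity "habitats"
k = 4
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(voxel_feats)

# Habitat proportions and a simple heterogeneity index (Shannon entropy)
props = np.bincount(labels, minlength=k) / labels.size
ith_index = entropy(props, base=2)
print("Habitat proportions:", np.round(props, 3))
print(f"ITH entropy (bits): {ith_index:.3f}")
```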

Highlights of the Society for Cardiovascular Magnetic Resonance (SCMR) 2025 Conference: leading the way to accessible, efficient and sustainable CMR.

Prieto C, Allen BD, Azevedo CF, Lima BB, Lam CZ, Mills R, Huisman M, Gonzales RA, Weingärtner S, Christodoulou AG, Rochitte C, Markl M

PubMed · May 23, 2025
The 28th Annual Scientific Sessions of the Society for Cardiovascular Magnetic Resonance (SCMR) took place from January 29 to February 1, 2025, in Washington, D.C. SCMR 2025 brought together a diverse group of 1714 cardiologists, radiologists, scientists, and technologists from more than 80 countries to discuss emerging trends and the latest developments in cardiovascular magnetic resonance (CMR). The conference centered on the theme "Leading the Way to Accessible, Sustainable, and Efficient CMR," highlighting innovations aimed at making CMR more clinically efficient, widely accessible, and environmentally sustainable. The program featured 728 abstracts and case presentations with an acceptance rate of 86% (728/849), including Early Career Award abstracts, oral abstracts, oral cases and rapid-fire sessions, covering a broad range of CMR topics. It also offered engaging invited lectures across eight main parallel tracks and included four plenary sessions, two gold medalists, and one keynote speaker, with a total of 826 faculty participating. Focused sessions on accessibility, efficiency, and sustainability provided a platform for discussing current challenges and exploring future directions, while the newly introduced CMR Innovations Track showcased innovative session formats and fostered greater collaboration between researchers, clinicians, and industry. For the first time, SCMR 2025 also offered the opportunity for attendees to obtain CMR Level 1 Training Verification, integrated into the program. Additionally, expert case reading sessions and hands-on interactive workshops allowed participants to engage with real-world clinical scenarios and deepen their understanding through practical experience. Key highlights included plenary sessions on a variety of important topics, such as expanding boundaries, health equity, women's cardiovascular disease and a patient-clinician testimonial that emphasized the profound value of patient-centered research and collaboration. The scientific sessions covered a wide range of topics, from clinical applications in cardiomyopathies, congenital heart disease, and vascular imaging to women's heart health and environmental sustainability. Technical topics included novel reconstruction, motion correction, quantitative CMR, contrast agents, novel field strengths, and artificial intelligence applications, among many others. This paper summarizes the key themes and discussions from SCMR 2025, highlighting the collaborative efforts that are driving the future of CMR and underscoring the Society's unwavering commitment to research, education, and clinical excellence.

Automated Detection of Severe Cerebral Edema Using Explainable Deep Transfer Learning after Hypoxic Ischemic Brain Injury.

Wang Z, Kulpanowski AM, Copen WA, Rosenthal ES, Dodelson JA, McCrory DE, Edlow BL, Kimberly WT, Amorim E, Westover M, Ning M, Zabihi M, Schaefer PW, Malhotra R, Giacino JT, Greer DM, Wu O

PubMed · May 23, 2025
Substantial gaps exist in the neuroprognostication of cardiac arrest patients who remain comatose after the restoration of spontaneous circulation. Most studies focus on predicting survival, a measure confounded by decisions to withdraw life-sustaining treatment. Severe cerebral edema (SCE) may serve as an objective, proximal, imaging-based surrogate of neurologic injury. We retrospectively analyzed data from 288 patients to automate SCE detection with machine learning (ML) and to test the hypothesis that the quantitative values produced by these algorithms (ML_SCE) can improve predictions of neurologic outcomes. Ground-truth SCE (GT_SCE) classification was based on radiology reports. The model attained a cross-validated testing accuracy of 87% [95% CI: 84%, 89%] for detecting SCE. Attention maps explaining SCE classification focused on cisternal regions (p < 0.05). Multivariable analyses showed that older age (p < 0.001), non-shockable initial cardiac rhythm (p = 0.004), and greater ML_SCE values (p < 0.001) were significant predictors of poor neurologic outcomes, with GT_SCE (p = 0.064) as a non-significant covariate. Our results support the feasibility of automated SCE detection. Future prospective studies with standardized neurologic assessments are needed to substantiate the utility of quantitative ML_SCE values for improving neuroprognostication.
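As a generic illustration of a deep transfer learning setup of the kind described here, the sketch below freezes an ImageNet-pretrained ResNet-18 backbone, replaces the classification head with a single binary output, and reads out a continuous probability analogous to an ML_SCE-style score. The backbone choice, input size, and data are assumptions (requiring a recent torchvision), not the authors' architecture.

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from an ImageNet-pretrained backbone and replace the classifier head
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in backbone.parameters():
    p.requires_grad = False                              # freeze pretrained layers
backbone.fc = nn.Linear(backbone.fc.in_features, 1)      # new trainable binary SCE head (logit)

# A batch of hypothetical 3-channel image slices resized to 224x224
x = torch.randn(4, 3, 224, 224)
backbone.eval()
with torch.no_grad():
    logits = backbone(x)
sce_prob = torch.sigmoid(logits)   # continuous score in [0, 1], analogous to ML_SCE
print(sce_prob.squeeze())
```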