Page 27 of 36351 results

Automated Detection of Severe Cerebral Edema Using Explainable Deep Transfer Learning after Hypoxic Ischemic Brain Injury.

Wang Z, Kulpanowski AM, Copen WA, Rosenthal ES, Dodelson JA, McCrory DE, Edlow BL, Kimberly WT, Amorim E, Westover M, Ning M, Zabihi M, Schaefer PW, Malhotra R, Giacino JT, Greer DM, Wu O

pubmed · May 23, 2025
Substantial gaps exist in the neuroprognostication of cardiac arrest patients who remain comatose after the restoration of spontaneous circulation. Most studies focus on predicting survival, a measure confounded by the withdrawal of life-sustaining treatment decisions. Severe cerebral edema (SCE) may serve as an objective proximal imaging-based surrogate of neurologic injury. We retrospectively analyzed data from 288 patients to automate SCE detection with machine learning (ML) and to test the hypothesis that the quantitative values produced by these algorithms (ML_SCE) can improve predictions of neurologic outcomes. Ground-truth SCE (GT_SCE) classification was based on radiology reports. The model attained a cross-validated testing accuracy of 87% [95% CI: 84%, 89%] for detecting SCE. Attention maps explaining SCE classification focused on cisternal regions (p<0.05). Multivariable analyses showed that older age (p<0.001), non-shockable initial cardiac rhythm (p=0.004), and greater ML_SCE values (p<0.001) were significant predictors of poor neurologic outcomes, with GT_SCE (p=0.064) as a non-significant covariate. Our results support the feasibility of automated SCE detection. Future prospective studies with standardized neurologic assessments are needed to substantiate the utility of quantitative ML_SCE values to improve neuroprognostication.
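Confidence intervals like the 87% [95% CI: 84%, 89%] reported above are commonly derived from held-out test counts. A minimal sketch of one standard approach, the Wilson score interval, in pure Python; the counts below are illustrative, not the study's actual data:

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score interval for a binomial proportion (z=1.96 for 95% CI)."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# Illustrative only: 87 correct out of 100 held-out cases
lo, hi = wilson_ci(87, 100)
print(f"accuracy 0.87, 95% CI [{lo:.2f}, {hi:.2f}]")
```

The Wilson interval is preferred over the naive normal approximation for proportions near 0 or 1, which is common with high-accuracy classifiers.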

Artificial Intelligence enhanced R1 maps can improve lesion detection in focal epilepsy in children

Doumou, G., D'Arco, F., Figini, M., Lin, H., Lorio, S., Piper, R., O'Muircheartaigh, J., Cross, H., Weiskopf, N., Alexander, D., Carmichael, D. W.

medrxiv preprint · May 23, 2025
Background and Purpose: MRI is critical for the detection of subtle cortical pathology in epilepsy surgery assessment. Detection can be aided by the improved image quality and resolution of ultra-high-field (7T) MRI, but poor access and long scan durations limit its widespread use, particularly in a paediatric setting. AI-based learning approaches may provide similar information by enhancing data obtained with conventional (3T) MRI. We used a convolutional neural network trained on matched 3T and 7T images to enhance quantitative R1 maps (longitudinal relaxation rate) obtained at 3T in paediatric epilepsy patients, and to determine their potential clinical value for lesion identification. Materials and Methods: A 3D U-Net was trained using paired patches from 3T and 7T R1 maps from n=10 healthy volunteers. The trained network was applied to enhance 3T R1 images of paediatric focal epilepsy patients from a different scanner/site (n=17 MRI lesion-positive / n=14 MR-negative). Radiological review, blinded to clinical information, assessed image quality, lesion identification, and visualization on the enhanced maps in comparison to the 3T R1 maps; lesion appearance was then compared to 3D-FLAIR. Results: AI-enhanced R1 maps were superior in image quality to the original 3T R1 maps, while preserving and enhancing the visibility of lesions. After exclusion of 5/31 patients (due to movement artefact or incomplete data), lesions were detected in AI-enhanced R1 maps for 14/15 (93%) MR-positive and 4/11 (36%) MR-negative patients. Conclusion: AI-enhanced R1 maps improved the visibility of lesions in MR-positive patients and provided higher sensitivity in the MR-negative group than either the original 3T R1 maps or 3D-FLAIR. This provides promising initial evidence that 3T quantitative maps can outperform conventional 3T imaging when enhanced by an AI model trained on 7T MRI data, without the need for pathology-specific information.

Multimodal fusion model for prognostic prediction and radiotherapy response assessment in head and neck squamous cell carcinoma.

Tian R, Hou F, Zhang H, Yu G, Yang P, Li J, Yuan T, Chen X, Chen Y, Hao Y, Yao Y, Zhao H, Yu P, Fang H, Song L, Li A, Liu Z, Lv H, Yu D, Cheng H, Mao N, Song X

pubmed · May 23, 2025
Accurate prediction of prognosis and postoperative radiotherapy response is critical for personalized treatment in head and neck squamous cell carcinoma (HNSCC). We developed a multimodal deep learning model (MDLM) integrating computed tomography, whole-slide images, and clinical features from 1087 HNSCC patients across multiple centers. The MDLM exhibited good performance in predicting overall survival (OS) and disease-free survival in external test cohorts. Additionally, the MDLM outperformed unimodal models. Patients with a high-risk score who underwent postoperative radiotherapy exhibited prolonged OS compared to those who did not (P = 0.016), whereas no significant improvement in OS was observed among patients with a low-risk score (P = 0.898). Biological exploration indicated that the model may be related to changes in the cytochrome P450 metabolic pathway, tumor microenvironment, and myeloid-derived cell subpopulations. Overall, the MDLM effectively predicts prognosis and postoperative radiotherapy response, offering a promising tool for personalized HNSCC therapy.

Integrating multi-omics data with artificial intelligence to decipher the role of tumor-infiltrating lymphocytes in tumor immunotherapy.

Xie T, Xue H, Huang A, Yan H, Yuan J

pubmed · May 23, 2025
Tumor-infiltrating lymphocytes (TILs) are capable of recognizing tumor antigens, impacting tumor prognosis, predicting the efficacy of neoadjuvant therapies, contributing to the development of new cell-based immunotherapies, studying the tumor immune microenvironment, and identifying novel biomarkers. Traditional methods for evaluating TILs primarily rely on histopathological examination using standard hematoxylin and eosin staining or immunohistochemical staining, with manual cell counting under a microscope. These methods are time-consuming and subject to significant observer variability and error. Recently, artificial intelligence (AI) has rapidly advanced in the field of medical imaging, particularly with deep learning algorithms based on convolutional neural networks. AI has shown promise as a powerful tool for the quantitative evaluation of tumor biomarkers. The advent of AI offers new opportunities for the automated and standardized assessment of TILs. This review provides an overview of the advancements in the application of AI for assessing TILs from multiple perspectives. It specifically focuses on AI-driven approaches for identifying TILs in tumor tissue images, automating TILs quantification, recognizing TILs subpopulations, and analyzing the spatial distribution patterns of TILs. The review aims to elucidate the prognostic value of TILs in various cancers, as well as their predictive capacity for responses to immunotherapy and neoadjuvant therapy. Furthermore, the review explores the integration of AI with other emerging technologies, such as single-cell sequencing, multiplex immunofluorescence, spatial transcriptomics, and multimodal approaches, to enhance the comprehensive study of TILs and further elucidate their clinical utility in tumor treatment and prognosis.

Brightness-Invariant Tracking Estimation in Tagged MRI

Zhangxing Bian, Shuwen Wei, Xiao Liang, Yuan-Chiao Lu, Samuel W. Remedios, Fangxu Xing, Jonghye Woo, Dzung L. Pham, Aaron Carass, Philip V. Bayly, Jiachen Zhuo, Ahmed Alshareef, Jerry L. Prince

arxiv preprint · May 23, 2025
Magnetic resonance (MR) tagging is an imaging technique for noninvasively tracking tissue motion in vivo by creating a visible pattern of magnetization saturation (tags) that deforms with the tissue. Due to longitudinal relaxation and progression to steady-state, the tags and tissue brightnesses change over time, which makes tracking with optical flow methods error-prone. Although Fourier methods can alleviate these problems, they are also sensitive to brightness changes as well as spectral spreading due to motion. To address these problems, we introduce the brightness-invariant tracking estimation (BRITE) technique for tagged MRI. BRITE disentangles the anatomy from the tag pattern in the observed tagged image sequence and simultaneously estimates the Lagrangian motion. The inherent ill-posedness of this problem is addressed by leveraging the expressive power of denoising diffusion probabilistic models to represent the probabilistic distribution of the underlying anatomy and the flexibility of physics-informed neural networks to estimate biologically-plausible motion. A set of tagged MR images of a gel phantom was acquired with various tag periods and imaging flip angles to demonstrate the impact of brightness variations and to validate our method. The results show that BRITE achieves more accurate motion and strain estimates as compared to other state of the art methods, while also being resistant to tag fading.
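The tag fading described above follows standard longitudinal (T1) relaxation. As a brief sketch (textbook Bloch recovery, not the paper's specific signal model), a saturation tag with initial longitudinal magnetization near zero recovers toward equilibrium as

```latex
M_z(t) = M_0 - \bigl(M_0 - M_z(0)\bigr)\, e^{-t/T_1}
```

so tagged (saturated) and untagged tissue brightnesses converge over time, violating the brightness-constancy assumption on which optical-flow tracking relies.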

Highlights of the Society for Cardiovascular Magnetic Resonance (SCMR) 2025 Conference: leading the way to accessible, efficient and sustainable CMR.

Prieto C, Allen BD, Azevedo CF, Lima BB, Lam CZ, Mills R, Huisman M, Gonzales RA, Weingärtner S, Christodoulou AG, Rochitte C, Markl M

pubmed · May 23, 2025
The 28th Annual Scientific Sessions of the Society for Cardiovascular Magnetic Resonance (SCMR) took place from January 29 to February 1, 2025, in Washington, D.C. SCMR 2025 brought together a diverse group of 1714 cardiologists, radiologists, scientists, and technologists from more than 80 countries to discuss emerging trends and the latest developments in cardiovascular magnetic resonance (CMR). The conference centered on the theme "Leading the Way to Accessible, Sustainable, and Efficient CMR," highlighting innovations aimed at making CMR more clinically efficient, widely accessible, and environmentally sustainable. The program featured 728 abstracts and case presentations with an acceptance rate of 86% (728/849), including Early Career Award abstracts, oral abstracts, oral cases, and rapid-fire sessions, covering a broad range of CMR topics. It also offered engaging invited lectures across eight main parallel tracks and included four plenary sessions, two gold medalists, and one keynote speaker, with a total of 826 faculty participating. Focused sessions on accessibility, efficiency, and sustainability provided a platform for discussing current challenges and exploring future directions, while the newly introduced CMR Innovations Track showcased innovative session formats and fostered greater collaboration between researchers, clinicians, and industry. For the first time, SCMR 2025 also offered attendees the opportunity to obtain CMR Level 1 Training Verification, integrated into the program. Additionally, expert case reading sessions and hands-on interactive workshops allowed participants to engage with real-world clinical scenarios and deepen their understanding through practical experience. Key highlights included plenary sessions on a variety of important topics, such as expanding boundaries, health equity, and women's cardiovascular disease, and a patient-clinician testimonial that emphasized the profound value of patient-centered research and collaboration. The scientific sessions covered a wide range of topics, from clinical applications in cardiomyopathies, congenital heart disease, and vascular imaging to women's heart health and environmental sustainability. Technical topics included novel reconstruction, motion correction, quantitative CMR, contrast agents, novel field strengths, and artificial intelligence applications, among many others. This paper summarizes the key themes and discussions from SCMR 2025, highlighting the collaborative efforts that are driving the future of CMR and underscoring the Society's unwavering commitment to research, education, and clinical excellence.

AutoMiSeg: Automatic Medical Image Segmentation via Test-Time Adaptation of Foundation Models

Xingjian Li, Qifeng Wu, Colleen Que, Yiran Ding, Adithya S. Ubaradka, Jianhua Xing, Tianyang Wang, Min Xu

arxiv preprint · May 23, 2025
Medical image segmentation is vital for clinical diagnosis, yet current deep learning methods often demand extensive expert effort, either through annotating large training datasets or providing prompts at inference time for each new case. This paper introduces a zero-shot, automatic segmentation pipeline that combines off-the-shelf vision-language and segmentation foundation models. Given a medical image and a task definition (e.g., "segment the optic disc in an eye fundus image"), our method uses a grounding model to generate an initial bounding box, followed by a visual prompt boosting module that enhances the prompts, which are then processed by a promptable segmentation model to produce the final mask. To address the challenges of domain gap and result verification, we introduce a test-time adaptation framework featuring a set of learnable adaptors that align the medical inputs with foundation model representations. Its hyperparameters are optimized via Bayesian Optimization, guided by a proxy validation model, without requiring ground-truth labels. Evaluated on seven diverse medical imaging datasets, our pipeline shows promising results, offering an annotation-efficient and scalable solution for zero-shot medical image segmentation across diverse tasks. Through this decomposition and test-time adaptation, our fully automatic pipeline performs competitively with weakly-prompted interactive foundation models.
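The label-free hyperparameter selection loop described above can be sketched as follows. This is a simplified stand-in: plain random search replaces the paper's Bayesian Optimization, and the adaptor parameters (`gamma`, `blur_sigma`) and the proxy score are hypothetical placeholders, not the paper's actual configuration:

```python
import random

def select_adaptor_hyperparams(proxy_score, space, n_trials=50, seed=0):
    """Label-free hyperparameter selection: sample adaptor configurations
    and keep the one maximizing a proxy validation score, so no ground-truth
    masks are needed. Random search stands in for Bayesian Optimization."""
    rng = random.Random(seed)
    best_cfg, best_score = None, float("-inf")
    for _ in range(n_trials):
        cfg = {k: rng.uniform(lo, hi) for k, (lo, hi) in space.items()}
        s = proxy_score(cfg)  # scored by a proxy model, not by labels
        if s > best_score:
            best_cfg, best_score = cfg, s
    return best_cfg, best_score

# Toy proxy that peaks at gamma=0.5, blur_sigma=1.0 (illustrative only)
toy_proxy = lambda c: -((c["gamma"] - 0.5) ** 2 + (c["blur_sigma"] - 1.0) ** 2)
cfg, score = select_adaptor_hyperparams(
    toy_proxy, {"gamma": (0.1, 2.0), "blur_sigma": (0.1, 3.0)}
)
```

A real Bayesian optimizer would fit a surrogate model over past trials to pick the next configuration; the key design point, mirrored here, is that the objective is a proxy score rather than label-dependent accuracy.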

Explainable Anatomy-Guided AI for Prostate MRI: Foundation Models and In Silico Clinical Trials for Virtual Biopsy-based Risk Assessment

Danial Khan, Zohaib Salahuddin, Yumeng Zhang, Sheng Kuang, Shruti Atul Mali, Henry C. Woodruff, Sina Amirrajab, Rachel Cavill, Eduardo Ibor-Crespo, Ana Jimenez-Pastor, Adrian Galiana-Bordera, Paula Jimenez Gomez, Luis Marti-Bonmati, Philippe Lambin

arxiv preprint · May 23, 2025
We present a fully automated, anatomically guided deep learning pipeline for prostate cancer (PCa) risk stratification using routine MRI. The pipeline integrates three key components: an nnU-Net module for segmenting the prostate gland and its zones on axial T2-weighted MRI; a classification module based on the UMedPT Swin Transformer foundation model, fine-tuned on 3D patches with optional anatomical priors and clinical data; and a VAE-GAN framework for generating counterfactual heatmaps that localize decision-driving image regions. The system was developed using 1,500 PI-CAI cases for segmentation and 617 biparametric MRIs with metadata from the CHAIMELEON challenge for classification (split into 70% training, 10% validation, and 20% testing). Segmentation achieved mean Dice scores of 0.95 (gland), 0.94 (peripheral zone), and 0.92 (transition zone). Incorporating gland priors improved AUC from 0.69 to 0.72, with a three-scale ensemble achieving top performance (AUC = 0.79, composite score = 0.76), outperforming the 2024 CHAIMELEON challenge winners. Counterfactual heatmaps reliably highlighted lesions within segmented regions, enhancing model interpretability. In a prospective multi-center in-silico trial with 20 clinicians, AI assistance increased diagnostic accuracy from 0.72 to 0.77 and Cohen's kappa from 0.43 to 0.53, while reducing review time per case by 40%. These results demonstrate that anatomy-aware foundation models with counterfactual explainability can enable accurate, interpretable, and efficient PCa risk assessment, supporting their potential use as virtual biopsies in clinical practice.
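Inter-rater agreement figures like the Cohen's kappa improvement reported above (0.43 to 0.53) come from a simple formula: observed agreement corrected for chance agreement. A minimal sketch in pure Python, with illustrative labels rather than the trial's data:

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two raters: (p_o - p_e) / (1 - p_e)."""
    assert len(a) == len(b) and a
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n  # observed agreement
    labels = set(a) | set(b)
    # chance agreement from each rater's marginal label frequencies
    p_e = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)
    return (p_o - p_e) / (1 - p_e)

# Illustrative binary ratings (e.g., clinically significant PCa yes/no)
print(cohens_kappa([1, 1, 1, 0, 0, 0], [1, 1, 0, 0, 0, 0]))  # ≈ 0.667
```

Kappa of 0 means chance-level agreement and 1 means perfect agreement, so a rise from 0.43 to 0.53 reflects meaningfully better consistency among the 20 readers.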

Renal Transplant Survival Prediction From Unsupervised Deep Learning-Based Radiomics on Early Dynamic Contrast-Enhanced MRI.

Milecki L, Bodard S, Kalogeiton V, Poinard F, Tissier AM, Boudhabhay I, Correas JM, Anglicheau D, Vakalopoulou M, Timsit MO

pubmed · May 23, 2025
End-stage renal disease is characterized by an irreversible decline in kidney function. Despite the risk of chronic dysfunction of the transplanted kidney, renal transplantation is considered the most effective of the available treatment options. Clinical attributes for graft survival prediction, such as allocation variables or results of pathological examinations, have been widely studied; nevertheless, medical imaging is used clinically only to assess current transplant status. This study investigated unsupervised deep learning-based algorithms to identify rich radiomic features that may be linked to graft survival from early dynamic contrast-enhanced magnetic resonance imaging data of renal transplants. A retrospective cohort of 108 transplanted patients (mean age 50 ± 15 years; 67 men) undergoing systematic magnetic resonance imaging follow-up examinations (2013 to 2015) was used to train deep convolutional neural network models based on an unsupervised contrastive learning approach. Five-year graft survival analysis was performed on the obtained artificial intelligence radiomics features using penalized Cox models and Kaplan-Meier estimates. Using a validation set of 48 patients (mean age 54 ± 13 years; 30 men) who underwent 1-month post-transplantation magnetic resonance imaging examinations, the proposed approach demonstrated promising 5-year graft survival prediction, with a 72.7% concordance index from the artificial intelligence radiomics features. Unsupervised clustering of these radiomics features enabled statistically significant stratification of patients (p=0.029). This proof-of-concept study demonstrated the capability of artificial intelligence algorithms to extract relevant radiomics features that enable renal transplant survival prediction. Further studies are needed to demonstrate the robustness of this technique and to identify appropriate procedures for integrating such an approach into multimodal and clinical settings.
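The 72.7% concordance index reported above measures how often the model correctly orders pairs of patients by risk. A minimal sketch of Harrell's C-index for right-censored survival data, in pure Python with toy inputs (not the study's data):

```python
def concordance_index(times, events, risks):
    """Harrell's C: among comparable pairs (the earlier time is an observed
    event, not a censoring), the fraction where the earlier-failing patient
    was assigned the higher risk. Risk ties count as half-concordant."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # pair (i, j) is comparable only if i had an event before time j
            if events[i] and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1.0
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / comparable

# Toy data: higher predicted risk fails earlier, so ordering is perfect
print(concordance_index([1, 2, 3, 4], [1, 1, 0, 1], [0.9, 0.7, 0.4, 0.2]))  # 1.0
```

A C-index of 0.5 corresponds to random ranking and 1.0 to perfect ranking, which is why 0.727 is read as promising but not definitive discrimination.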

Ovarian Cancer Screening: Recommendations and Future Prospects.

Chiu S, Staley H, Jeevananthan P, Mascarenhas S, Fotopoulou C, Rockall A

pubmed · May 23, 2025
Ovarian cancer remains a significant cause of mortality among women, largely due to challenges in early detection. Current screening strategies, including transvaginal ultrasound and CA125 testing, have limited sensitivity and specificity, particularly in asymptomatic women or those with early-stage disease. The European Society of Gynaecological Oncology, the European Society for Medical Oncology, the European Society of Pathology, and other health organizations currently do not recommend routine population-based screening for ovarian cancer, owing to the high rates of false-positives and the absence of a reliable early detection method. This review examines existing ovarian cancer screening guidelines and explores recent advances in diagnostic technologies, including radiomics, artificial intelligence, point-of-care testing, and novel detection methods. Emerging technologies show promise for improving ovarian cancer detection by enhancing sensitivity and specificity compared to traditional methods. Artificial intelligence and radiomics have the potential to revolutionize ovarian cancer screening by identifying subtle diagnostic patterns, while liquid biopsy-based approaches and cell-free DNA profiling enable tumor-specific biomarker detection. Minimally invasive methods, such as intrauterine lavage and salivary diagnostics, provide avenues for population-wide applicability. However, large-scale validation is required to establish these techniques as effective and reliable screening options. · Current ovarian cancer screening methods lack sensitivity and specificity for early-stage detection. · Emerging technologies like artificial intelligence, radiomics, and liquid biopsy offer improved diagnostic accuracy. · Large-scale clinical validation is required, particularly for baseline-risk populations. · Chiu S, Staley H, Jeevananthan P et al. Ovarian Cancer Screening: Recommendations and Future Prospects. Rofo 2025; DOI 10.1055/a-2589-5696.
