
Multi-view contrastive learning and symptom extraction insights for medical report generation.

Bai Q, Zou X, Alhaskawi A, Dong Y, Zhou H, Ezzi SHA, Kota VG, AbdullaAbdulla MHH, Abdalbary SA, Hu X, Lu H

PubMed | May 23, 2025
The task of generating medical reports automatically is of paramount importance in modern healthcare, offering a substantial reduction in radiologists' workload and accelerating clinical diagnosis and treatment. Current challenges include handling limited sample sizes and interpreting intricate multi-modal and multi-view medical data. To improve accuracy and efficiency for radiologists, this study presents a novel methodology for medical report generation that leverages Multi-View Contrastive Learning (MVCL) applied to MRI data, combined with a Symptom Consultant (SC) for extracting medical insights. We introduce an MVCL framework that exploits multi-view MRI data to enhance visual feature extraction. In parallel, the SC component distills critical medical insights from symptom descriptions. These components are integrated within a transformer decoder architecture, which is trained and evaluated on the Deep Wrist dataset. Our experimental analysis on the Deep Wrist dataset shows that the proposed integration of MVCL and SC significantly outperforms the baseline model in the accuracy and relevance of the generated reports. The results indicate that the approach is particularly effective at capturing and utilizing the complex information inherent in multi-modal and multi-view medical datasets. The combination of MVCL and SC therefore constitutes a powerful approach to medical report generation that addresses existing challenges in the field, and its demonstrated advantage over traditional methods holds promise for substantial improvements in clinical diagnosis and automated reporting.
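
The abstract does not spell out the contrastive objective, so the following is only a minimal sketch of what a multi-view contrastive loss can look like: an InfoNCE-style loss in PyTorch where embeddings of two MRI views of the same study are treated as positives and other studies in the batch as negatives. The function name `multiview_info_nce` and all shapes are illustrative, not taken from the paper.

```python
# Minimal sketch of an InfoNCE-style multi-view contrastive loss (PyTorch).
# Embeddings from two MRI views of the same study are treated as positives;
# all other studies in the batch act as negatives. Names are illustrative only.
import torch
import torch.nn.functional as F

def multiview_info_nce(z_view_a: torch.Tensor,
                       z_view_b: torch.Tensor,
                       temperature: float = 0.07) -> torch.Tensor:
    """z_view_a, z_view_b: (batch, dim) embeddings of two views of the same cases."""
    za = F.normalize(z_view_a, dim=1)
    zb = F.normalize(z_view_b, dim=1)
    logits = za @ zb.t() / temperature          # (batch, batch) cosine similarities
    targets = torch.arange(za.size(0), device=za.device)
    # Symmetric loss: each view should retrieve its paired view.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

if __name__ == "__main__":
    a, b = torch.randn(8, 128), torch.randn(8, 128)
    print(float(multiview_info_nce(a, b)))
```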

Ovarian Cancer Screening: Recommendations and Future Prospects.

Chiu S, Staley H, Jeevananthan P, Mascarenhas S, Fotopoulou C, Rockall A

PubMed | May 23, 2025
Ovarian cancer remains a significant cause of mortality among women, largely due to challenges in early detection. Current screening strategies, including transvaginal ultrasound and CA125 testing, have limited sensitivity and specificity, particularly in asymptomatic women or those with early-stage disease. The European Society of Gynaecological Oncology, the European Society for Medical Oncology, the European Society of Pathology, and other health organizations currently do not recommend routine population-based screening for ovarian cancer due to the high rates of false-positives and the absence of a reliable early detection method. This review examines existing ovarian cancer screening guidelines and explores recent advances in diagnostic technologies, including radiomics, artificial intelligence, point-of-care testing, and novel detection methods. Emerging technologies show promise with respect to improving ovarian cancer detection by enhancing sensitivity and specificity compared to traditional methods. Artificial intelligence and radiomics have potential for revolutionizing ovarian cancer screening by identifying subtle diagnostic patterns, while liquid biopsy-based approaches and cell-free DNA profiling enable tumor-specific biomarker detection. Minimally invasive methods, such as intrauterine lavage and salivary diagnostics, provide avenues for population-wide applicability. However, large-scale validation is required to establish these techniques as effective and reliable screening options.
· Current ovarian cancer screening methods lack sensitivity and specificity for early-stage detection.
· Emerging technologies like artificial intelligence, radiomics, and liquid biopsy offer improved diagnostic accuracy.
· Large-scale clinical validation is required, particularly for baseline-risk populations.
· Chiu S, Staley H, Jeevananthan P et al. Ovarian Cancer Screening: Recommendations and Future Prospects. Rofo 2025; DOI 10.1055/a-2589-5696.

Automated ventricular segmentation in pediatric hydrocephalus: how close are we?

Taha BR, Luo G, Naik A, Sabal L, Sun J, McGovern RA, Sandoval-Garcia C, Guillaume DJ

PubMed | May 23, 2025
The explosive growth of available high-quality imaging data coupled with new progress in hardware capabilities has enabled a new era of unprecedented performance in brain segmentation tasks. Despite the explosion of new data released by consortiums and groups around the world, most published, closed, or openly available segmentation models have either a limited or an unknown role in pediatric brains. This study explores the utility of state-of-the-art automated ventricular segmentation tools applied to pediatric hydrocephalus. Two popular, fast, whole-brain segmentation tools were used (FastSurfer and QuickNAT) to automatically segment the lateral ventricles and evaluate their accuracy in children with hydrocephalus. Forty scans from 32 patients were included in this study. The patients underwent imaging at the University of Minnesota Medical Center or satellite clinics, were between 0 and 18 years old, had an ICD-10 diagnosis that included the word hydrocephalus, and had at least one T1-weighted pre- or postcontrast MPRAGE sequence. Patients with poor quality scans were excluded. Dice similarity coefficient (DSC) scores were used to compare segmentation outputs against manually segmented lateral ventricles. Overall, both models performed poorly with DSCs of 0.61 for each segmentation tool. No statistically significant difference was noted between model performance (p = 0.86). Using a multivariate linear regression to examine factors associated with higher DSC performance, male gender (p = 0.66), presence of ventricular catheter (p = 0.72), and MRI magnet strength (p = 0.23) were not statistically significant factors. However, younger age (p = 0.03) and larger ventricular volumes (p = 0.01) were significantly associated with lower DSC values. A large-scale visualization of 196 scans in both models showed characteristic patterns of segmentation failure in larger ventricles. Significant gaps exist in current cutting-edge segmentation models when applied to pediatric hydrocephalus. Researchers will need to address these types of gaps in performance through thoughtful consideration of their training data before reaching the ultimate goal of clinical deployment.
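
For readers unfamiliar with the metric, a minimal sketch of the Dice similarity coefficient used here to compare automated and manual ventricular masks is given below; the mask arrays are synthetic and the helper name `dice` is illustrative.

```python
# Minimal sketch of the Dice similarity coefficient (DSC) used to compare an
# automated ventricular mask against a manual reference; illustrative only.
import numpy as np

def dice(pred: np.ndarray, ref: np.ndarray) -> float:
    """pred, ref: boolean arrays of the same shape (e.g., lateral-ventricle masks)."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(pred, ref).sum() / denom

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    auto = rng.random((64, 64, 64)) > 0.7
    manual = rng.random((64, 64, 64)) > 0.7
    print(f"DSC = {dice(auto, manual):.3f}")
```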

Deep Learning and Radiomic Signatures Associated with Tumor Immune Heterogeneity Predict Microvascular Invasion in Colon Cancer.

Jia J, Wang J, Zhang Y, Bai G, Han L, Niu Y

PubMed | May 23, 2025
This study aims to develop and validate a deep learning radiomics signature (DLRS) that integrates radiomics and deep learning features for the non-invasive prediction of microvascular invasion (MVI) in patients with colon cancer (CC). Furthermore, the study explores the potential association between the DLRS and tumor immune heterogeneity. This multi-center retrospective study included a total of 1007 patients with CC from three medical centers and The Cancer Genome Atlas (TCGA-COAD) database. Patients from Medical Centers 1 and 2 were divided into a training cohort (n = 592) and an internal validation cohort (n = 255) in a 7:3 ratio. Medical Center 3 (n = 135) and the TCGA-COAD database (n = 25) were used as external validation cohorts. Radiomics and deep learning features were extracted from contrast-enhanced venous-phase CT images. Feature selection was performed using machine learning algorithms, and three predictive models were developed: a radiomics model, a deep learning (DL) model, and a combined deep learning radiomics (DLR) model. The predictive performance of each model was evaluated using multiple metrics, including the area under the curve (AUC), sensitivity, and specificity. Additionally, differential gene expression analysis was conducted on RNA-seq data from the TCGA-COAD dataset to explore the association between the DLRS and tumor immune heterogeneity within the tumor microenvironment. Compared to the standalone radiomics and deep learning models, the DLR fusion model demonstrated superior predictive performance. The AUC for the internal validation cohort was 0.883 (95% CI: 0.828-0.937), while the AUC for the external validation cohort reached 0.855 (95% CI: 0.775-0.935). Furthermore, stratifying patients from the TCGA-COAD dataset into high-risk and low-risk groups based on the DLRS revealed significant differences in immune cell infiltration and immune checkpoint expression between the two groups (P < 0.05). The contrast-enhanced CT-based DLR fusion model developed in this study effectively predicts MVI status in patients with CC. This model serves as a non-invasive preoperative assessment tool and reveals a potential association between the DLRS and immune heterogeneity within the tumor microenvironment, providing insights to optimize individualized treatment strategies.
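
As a rough illustration of the kind of early fusion the DLR model describes, combining handcrafted radiomics with deep features in a single classifier and reporting AUC, here is a minimal sketch on synthetic data; the feature counts, classifier choice (plain logistic regression), and variable names are assumptions, not the study's actual pipeline.

```python
# Minimal sketch of fusing handcrafted radiomics and deep features into one
# classifier and reporting AUC; feature arrays and labels are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 400
radiomics = rng.normal(size=(n, 30))          # e.g., shape/texture features
deep = rng.normal(size=(n, 64))               # e.g., CNN embedding of the CT patch
mvi = (radiomics[:, 0] + deep[:, 0] + rng.normal(scale=1.5, size=n) > 0).astype(int)

X = np.hstack([radiomics, deep])              # simple early fusion of both feature sets
X_tr, X_te, y_tr, y_te = train_test_split(X, mvi, test_size=0.3, random_state=0,
                                           stratify=mvi)
scaler = StandardScaler().fit(X_tr)
clf = LogisticRegression(penalty="l2", C=1.0, max_iter=1000)
clf.fit(scaler.transform(X_tr), y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(scaler.transform(X_te))[:, 1])
print(f"fused-feature AUC on held-out split: {auc:.3f}")
```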

Renal Transplant Survival Prediction From Unsupervised Deep Learning-Based Radiomics on Early Dynamic Contrast-Enhanced MRI.

Milecki L, Bodard S, Kalogeiton V, Poinard F, Tissier AM, Boudhabhay I, Correas JM, Anglicheau D, Vakalopoulou M, Timsit MO

PubMed | May 23, 2025
End-stage renal disease is characterized by an irreversible decline in kidney function. Despite the risk of chronic dysfunction of the transplanted kidney, renal transplantation is considered the most effective solution among available treatment options. Clinical attributes for graft survival prediction, such as allocation variables or results of pathological examinations, have been widely studied. Nevertheless, medical imaging is clinically used only to assess current transplant status. This study investigated the use of unsupervised deep learning-based algorithms to identify rich radiomic features that may be linked to graft survival from early dynamic contrast-enhanced magnetic resonance imaging data of renal transplants. A retrospective cohort of 108 transplanted patients (mean age 50 ± 15 years; 67 men) undergoing systematic magnetic resonance imaging follow-up examinations (2013 to 2015) was used to train deep convolutional neural network models based on an unsupervised contrastive learning approach. Five-year graft survival analysis was performed on the obtained artificial intelligence radiomics features using penalized Cox models and Kaplan-Meier estimates. Using a validation set of 48 patients (mean age 54 ± 13 years; 30 men) with 1-month post-transplantation magnetic resonance imaging examinations, the proposed approach demonstrated promising 5-year graft survival prediction, with a 72.7% concordance index from the artificial intelligence radiomics features. Unsupervised clustering of these radiomics features enabled statistically significant stratification of patients (p = 0.029). This proof-of-concept study demonstrated the promising capability of artificial intelligence algorithms to extract relevant radiomics features that enable renal transplant survival prediction. Further studies are needed to demonstrate the robustness of this technique and to identify appropriate procedures for integrating such an approach into multimodal and clinical settings.
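
A minimal sketch of the survival-analysis step described above, a penalized Cox model on learned radiomics features with a concordance-index readout, is given below; it assumes the lifelines package and uses synthetic features and follow-up times, so it is not the authors' implementation.

```python
# Minimal sketch of a penalized Cox model on latent "AI radiomics" features with a
# concordance-index readout, assuming the lifelines package; data are synthetic.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

rng = np.random.default_rng(1)
n, d = 150, 8
feats = rng.normal(size=(n, d))                       # stand-ins for learned features
risk = feats[:, 0] - 0.5 * feats[:, 1]
time = rng.exponential(scale=np.exp(-risk) * 36.0)    # months; shorter for higher risk
event = rng.random(n) < 0.6                           # ~60% observed graft-loss events

df = pd.DataFrame(feats, columns=[f"f{i}" for i in range(d)])
df["time"], df["event"] = time, event.astype(int)

cph = CoxPHFitter(penalizer=0.1)                      # ridge-penalized Cox model
cph.fit(df, duration_col="time", event_col="event")
# Higher partial hazard means shorter expected survival, hence the minus sign.
cindex = concordance_index(df["time"], -cph.predict_partial_hazard(df), df["event"])
print(f"concordance index: {cindex:.3f}")
```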

Explainable Anatomy-Guided AI for Prostate MRI: Foundation Models and In Silico Clinical Trials for Virtual Biopsy-based Risk Assessment

Danial Khan, Zohaib Salahuddin, Yumeng Zhang, Sheng Kuang, Shruti Atul Mali, Henry C. Woodruff, Sina Amirrajab, Rachel Cavill, Eduardo Ibor-Crespo, Ana Jimenez-Pastor, Adrian Galiana-Bordera, Paula Jimenez Gomez, Luis Marti-Bonmati, Philippe Lambin

arXiv preprint | May 23, 2025
We present a fully automated, anatomically guided deep learning pipeline for prostate cancer (PCa) risk stratification using routine MRI. The pipeline integrates three key components: an nnU-Net module for segmenting the prostate gland and its zones on axial T2-weighted MRI; a classification module based on the UMedPT Swin Transformer foundation model, fine-tuned on 3D patches with optional anatomical priors and clinical data; and a VAE-GAN framework for generating counterfactual heatmaps that localize decision-driving image regions. The system was developed using 1,500 PI-CAI cases for segmentation and 617 biparametric MRIs with metadata from the CHAIMELEON challenge for classification (split into 70% training, 10% validation, and 20% testing). Segmentation achieved mean Dice scores of 0.95 (gland), 0.94 (peripheral zone), and 0.92 (transition zone). Incorporating gland priors improved AUC from 0.69 to 0.72, with a three-scale ensemble achieving top performance (AUC = 0.79, composite score = 0.76), outperforming the 2024 CHAIMELEON challenge winners. Counterfactual heatmaps reliably highlighted lesions within segmented regions, enhancing model interpretability. In a prospective multi-center in-silico trial with 20 clinicians, AI assistance increased diagnostic accuracy from 0.72 to 0.77 and Cohen's kappa from 0.43 to 0.53, while reducing review time per case by 40%. These results demonstrate that anatomy-aware foundation models with counterfactual explainability can enable accurate, interpretable, and efficient PCa risk assessment, supporting their potential use as virtual biopsies in clinical practice.
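
As a small illustration of the reader-study metrics quoted above (diagnostic accuracy and Cohen's kappa with and without AI assistance), the sketch below computes both with scikit-learn on synthetic labels; the error rates and the choice of kappa against a reference standard are assumptions for illustration only.

```python
# Minimal sketch of reader-study metrics: accuracy and Cohen's kappa for clinician
# calls with and without AI assistance; labels and error rates are synthetic.
import numpy as np
from sklearn.metrics import accuracy_score, cohen_kappa_score

rng = np.random.default_rng(2)
truth = rng.integers(0, 2, size=100)                       # reference risk labels

def flip(y: np.ndarray, p: float) -> np.ndarray:
    """Flip each label with probability p to simulate reader error."""
    return np.where(rng.random(y.size) < p, 1 - y, y)

unaided = flip(truth, 0.28)                                # clinicians alone
aided = flip(truth, 0.23)                                  # clinicians with AI support

for name, calls in [("unaided", unaided), ("AI-assisted", aided)]:
    print(name,
          f"accuracy={accuracy_score(truth, calls):.2f}",
          f"kappa={cohen_kappa_score(truth, calls):.2f}")
```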

Pixels to Prognosis: Harmonized Multi-Region CT-Radiomics and Foundation-Model Signatures Across Multicentre NSCLC Data

Shruti Atul Mali, Zohaib Salahuddin, Danial Khan, Yumeng Zhang, Henry C. Woodruff, Eduardo Ibor-Crespo, Ana Jimenez-Pastor, Luis Marti-Bonmati, Philippe Lambin

arXiv preprint | May 23, 2025
Purpose: To evaluate the impact of harmonization and multi-region CT image feature integration on survival prediction in non-small cell lung cancer (NSCLC) patients, using handcrafted radiomics, pretrained foundation model (FM) features, and clinical data from a multicenter dataset. Methods: We analyzed CT scans and clinical data from 876 NSCLC patients (604 training, 272 test) across five centers. Features were extracted from the whole lung, tumor, mediastinal nodes, coronary arteries, and coronary artery calcium (CAC). Handcrafted radiomics and FM deep features were harmonized using ComBat, reconstruction kernel normalization (RKN), and RKN+ComBat. Regularized Cox models predicted overall survival; performance was assessed using the concordance index (C-index), 5-year time-dependent area under the curve (t-AUC), and hazard ratio (HR). SHapley Additive exPlanations (SHAP) values explained feature contributions. A consensus model used agreement across top region of interest (ROI) models to stratify patient risk. Results: TNM staging showed prognostic utility (C-index = 0.67; HR = 2.70; t-AUC = 0.85). The clinical + tumor radiomics model with ComBat achieved a C-index of 0.7552 and t-AUC of 0.8820. FM features (50-voxel cubes) combined with clinical data yielded the highest performance (C-index = 0.7616; t-AUC = 0.8866). An ensemble of all ROIs and FM features reached a C-index of 0.7142 and t-AUC of 0.7885. The consensus model, covering 78% of valid test cases, achieved a t-AUC of 0.92, sensitivity of 97.6%, and specificity of 66.7%. Conclusion: Harmonization and multi-region feature integration improve survival prediction in multicenter NSCLC data. Combining interpretable radiomics, FM features, and consensus modeling enables robust risk stratification across imaging centers.
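
The consensus rule is only described at a high level, so the following is a minimal sketch of one plausible reading: a patient is stratified only when all top ROI models agree on the risk label, and otherwise left without a call (which is what limits coverage); thresholds, model outputs, and the unanimity criterion are assumptions.

```python
# Minimal sketch of a consensus rule across per-ROI risk models: stratify a patient
# only when all models agree, otherwise mark as "no call"; data are synthetic.
import numpy as np

rng = np.random.default_rng(3)
n_patients, n_roi_models = 50, 3
risk_scores = rng.random((n_roi_models, n_patients))       # e.g., Cox linear predictors
# Each ROI model votes "high risk" when above its own median score.
high_risk_votes = risk_scores > np.median(risk_scores, axis=1, keepdims=True)

consensus = np.full(n_patients, -1)                        # -1 = no consensus / excluded
all_high = high_risk_votes.all(axis=0)
all_low = (~high_risk_votes).all(axis=0)
consensus[all_high], consensus[all_low] = 1, 0

coverage = (consensus != -1).mean()
print(f"consensus reached for {coverage:.0%} of patients")
```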

A Foundation Model Framework for Multi-View MRI Classification of Extramural Vascular Invasion and Mesorectal Fascia Invasion in Rectal Cancer

Yumeng Zhang, Zohaib Salahuddin, Danial Khan, Shruti Atul Mali, Henry C. Woodruff, Sina Amirrajab, Eduardo Ibor-Crespo, Ana Jimenez-Pastor, Luis Marti-Bonmati, Philippe Lambin

arXiv preprint | May 23, 2025
Background: Accurate MRI-based identification of extramural vascular invasion (EVI) and mesorectal fascia invasion (MFI) is pivotal for risk-stratified management of rectal cancer, yet visual assessment is subjective and vulnerable to inter-institutional variability. Purpose: To develop and externally evaluate a multicenter, foundation-model-driven framework that automatically classifies EVI and MFI on axial and sagittal T2-weighted MRI. Methods: This retrospective study used 331 pre-treatment rectal cancer MRI examinations from three European hospitals. After TotalSegmentator-guided rectal patch extraction, a self-supervised frequency-domain harmonization pipeline was trained to minimize scanner-related contrast shifts. Four classifiers were compared: ResNet50, SeResNet, the universal biomedical pretrained transformer (UMedPT) with a lightweight MLP head, and a logistic-regression variant using frozen UMedPT features (UMedPT_LR). Results: UMedPT_LR achieved the best EVI detection when axial and sagittal features were fused (AUC = 0.82; sensitivity = 0.75; F1 score = 0.73), surpassing the Chaimeleon Grand-Challenge winner (AUC = 0.74). The highest MFI performance was attained by UMedPT on axial harmonized images (AUC = 0.77), surpassing the Chaimeleon Grand-Challenge winner (AUC = 0.75). Frequency-domain harmonization improved MFI classification but variably affected EVI performance. Conventional CNNs (ResNet50, SeResNet) underperformed, especially in F1 score and balanced accuracy. Conclusion: These findings demonstrate that combining foundation model features, harmonization, and multi-view fusion significantly enhances diagnostic performance in rectal MRI.
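
A minimal sketch of the UMedPT_LR idea as described, frozen foundation-model features from the axial and sagittal views concatenated and fed to a logistic regression, is shown below; the random feature vectors stand in for frozen encoder outputs, and the feature dimension and split are assumptions.

```python
# Minimal sketch of frozen-feature multi-view fusion: concatenate axial and sagittal
# feature vectors and fit a logistic regression; features and labels are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n, d = 300, 256
axial_feats = rng.normal(size=(n, d))            # stand-in for frozen encoder outputs
sagittal_feats = rng.normal(size=(n, d))
evi = (axial_feats[:, 0] + sagittal_feats[:, 0] + rng.normal(scale=1.0, size=n) > 0)

X = np.hstack([axial_feats, sagittal_feats])     # late fusion of the two views
X_tr, X_te, y_tr, y_te = train_test_split(X, evi.astype(int), test_size=0.3,
                                           random_state=0, stratify=evi)
clf = LogisticRegression(max_iter=2000).fit(X_tr, y_tr)
prob = clf.predict_proba(X_te)[:, 1]
print(f"AUC={roc_auc_score(y_te, prob):.2f}  F1={f1_score(y_te, prob > 0.5):.2f}")
```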

AutoMiSeg: Automatic Medical Image Segmentation via Test-Time Adaptation of Foundation Models

Xingjian Li, Qifeng Wu, Colleen Que, Yiran Ding, Adithya S. Ubaradka, Jianhua Xing, Tianyang Wang, Min Xu

arXiv preprint | May 23, 2025
Medical image segmentation is vital for clinical diagnosis, yet current deep learning methods often demand extensive expert effort, either through annotating large training datasets or providing prompts at inference time for each new case. This paper introduces a zero-shot and automatic segmentation pipeline that combines off-the-shelf vision-language and segmentation foundation models. Given a medical image and a task definition (e.g., "segment the optic disc in an eye fundus image"), our method uses a grounding model to generate an initial bounding box, followed by a visual prompt boosting module that enhances the prompts, which are then processed by a promptable segmentation model to produce the final mask. To address the challenges of domain gap and result verification, we introduce a test-time adaptation framework featuring a set of learnable adaptors that align the medical inputs with foundation model representations. Its hyperparameters are optimized via Bayesian optimization, guided by a proxy validation model without requiring ground-truth labels. Our pipeline offers an annotation-efficient and scalable solution for zero-shot medical image segmentation across diverse tasks. Evaluated on seven diverse medical imaging datasets, it shows promising results: through proper decomposition and test-time adaptation, the fully automatic pipeline performs competitively with weakly prompted interactive foundation models.
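
To make the data flow concrete, here is a minimal skeleton of the described zero-shot pipeline (task text to grounding box, prompt boosting, promptable segmentation); every function in it is a hypothetical stub standing in for an off-the-shelf model, and the test-time adaptation and Bayesian optimization stages are omitted.

```python
# Minimal skeleton of the described zero-shot flow: task text -> grounding box ->
# boosted prompts -> promptable segmentation. All functions are hypothetical stubs.
import numpy as np

def grounding_model(image: np.ndarray, task: str) -> tuple[int, int, int, int]:
    """Stub: would return a bounding box (x0, y0, x1, y1) for the described target."""
    h, w = image.shape[:2]
    return (w // 4, h // 4, 3 * w // 4, 3 * h // 4)

def boost_prompts(image: np.ndarray, box: tuple) -> dict:
    """Stub: would refine the box and add point prompts inside the target."""
    x0, y0, x1, y1 = box
    return {"box": box, "points": [((x0 + x1) // 2, (y0 + y1) // 2)]}

def promptable_segmenter(image: np.ndarray, prompts: dict) -> np.ndarray:
    """Stub: would run a promptable segmentation model; here it just fills the box."""
    mask = np.zeros(image.shape[:2], dtype=bool)
    x0, y0, x1, y1 = prompts["box"]
    mask[y0:y1, x0:x1] = True
    return mask

def segment(image: np.ndarray, task: str) -> np.ndarray:
    box = grounding_model(image, task)
    prompts = boost_prompts(image, box)
    return promptable_segmenter(image, prompts)

if __name__ == "__main__":
    fundus = np.zeros((256, 256, 3))
    mask = segment(fundus, "segment the optic disc in an eye fundus image")
    print("mask pixels:", int(mask.sum()))
```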

MRI-based habitat analysis for Intratumoral heterogeneity quantification combined with deep learning for HER2 status prediction in breast cancer.

Li QY, Liang Y, Zhang L, Li JH, Wang BJ, Wang CF

PubMed | May 23, 2025
Human epidermal growth factor receptor 2 (HER2) status is a crucial determinant of breast cancer prognosis and treatment options. This study aimed to establish an MRI-based habitat model to quantify intratumoral heterogeneity (ITH) and evaluate its potential for predicting HER2 expression status. Data from 340 patients with pathologically confirmed invasive breast cancer were retrospectively analyzed. Two tasks were designed: Task 1 distinguished HER2-positive from HER2-negative breast cancer, and Task 2 distinguished HER2-low from HER2-zero breast cancer. We developed ITH, deep learning (DL), and radiomics signatures based on features extracted from dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). Clinical independent predictors were determined by multivariable logistic regression. Finally, a combined model was constructed by integrating the clinical independent predictors, the ITH signature, and the DL signature. The area under the receiver operating characteristic curve (AUC) served as the standard for assessing model performance. In Task 1, the ITH signature performed well in the training set (AUC = 0.855) and the validation set (AUC = 0.842). In Task 2, the AUCs of the ITH signature were 0.844 and 0.840, respectively, again showing good predictive performance. In the validation sets of both tasks, the combined model exhibited the best prediction performance, with AUCs of 0.912 and 0.917, respectively. A combined model integrating clinical independent predictors, the ITH signature, and the DL signature can therefore predict HER2 expression status preoperatively and noninvasively.
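
A minimal sketch of the habitat-analysis idea, clustering per-voxel DCE-MRI enhancement features into subregions and summarizing their mixing as a simple ITH score, is given below; the three enhancement features, the choice of k-means with four clusters, and the entropy summary are illustrative assumptions, not the paper's protocol.

```python
# Minimal sketch of MRI habitat analysis: cluster per-voxel DCE-MRI enhancement
# features within the tumor mask into "habitats" and summarize their mixing as a
# simple intratumoral-heterogeneity (ITH) score. Data below are synthetic.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)
n_voxels = 5000
dce_features = rng.normal(size=(n_voxels, 3))     # e.g., wash-in, peak, wash-out per voxel

labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(dce_features)
fractions = np.bincount(labels, minlength=4) / n_voxels
ith_entropy = -np.sum(fractions * np.log(fractions + 1e-12))   # higher = more mixed habitats
print("habitat fractions:", np.round(fractions, 3), "entropy:", round(float(ith_entropy), 3))
```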