Page 87 of 1321316 results

Multimodal fusion model for prognostic prediction and radiotherapy response assessment in head and neck squamous cell carcinoma.

Tian R, Hou F, Zhang H, Yu G, Yang P, Li J, Yuan T, Chen X, Chen Y, Hao Y, Yao Y, Zhao H, Yu P, Fang H, Song L, Li A, Liu Z, Lv H, Yu D, Cheng H, Mao N, Song X

PubMed · May 23, 2025
Accurate prediction of prognosis and postoperative radiotherapy response is critical for personalized treatment in head and neck squamous cell carcinoma (HNSCC). We developed a multimodal deep learning model (MDLM) integrating computed tomography, whole-slide images, and clinical features from 1087 HNSCC patients across multiple centers. The MDLM exhibited good performance in predicting overall survival (OS) and disease-free survival in external test cohorts. Additionally, the MDLM outperformed unimodal models. Patients with a high-risk score who underwent postoperative radiotherapy exhibited prolonged OS compared to those who did not (P = 0.016), whereas no significant improvement in OS was observed among patients with a low-risk score (P = 0.898). Biological exploration indicated that the model may be related to changes in the cytochrome P450 metabolic pathway, tumor microenvironment, and myeloid-derived cell subpopulations. Overall, the MDLM effectively predicts prognosis and postoperative radiotherapy response, offering a promising tool for personalized HNSCC therapy.

Development and validation of a multi-omics hemorrhagic transformation model based on hyperattenuated imaging markers following mechanical thrombectomy.

Jiang L, Zhu G, Wang Y, Hong J, Fu J, Hu J, Xiao S, Chu J, Hu S, Xiao W

PubMed · May 23, 2025
This study aimed to develop a predictive model integrating clinical, radiomics, and deep learning (DL) features of hyperattenuated imaging markers (HIM) on computed tomography scans obtained immediately after mechanical thrombectomy (MT) to predict hemorrhagic transformation (HT). A total of 239 patients with HIM who underwent MT were enrolled, with 191 patients (80%) in the training cohort and 48 patients (20%) in the validation cohort. The model was additionally tested on an internal prospective cohort of 49 patients. A total of 1834 radiomics features and 2048 DL features were extracted from HIM images. Statistical methods, including analysis of variance, Pearson's correlation coefficient, principal component analysis, and the least absolute shrinkage and selection operator, were used to select the most significant features. A K-Nearest Neighbor classifier was employed to develop a combined model integrating clinical, radiomics, and DL features for HT prediction. Model performance was evaluated using accuracy, sensitivity, specificity, receiver operating characteristic curves, and area under the curve (AUC). In the training, validation, and test cohorts, the combined model achieved AUCs of 0.926, 0.923, and 0.887, respectively, outperforming the clinical, radiomics, and DL models, as well as hybrid models combining subsets of features (Clinical + Radiomics, DL + Radiomics, and Clinical + DL), in predicting HT. The combined model, which integrates clinical, radiomics, and DL features derived from HIM, demonstrated efficacy in noninvasively predicting HT. These findings suggest its potential utility in guiding clinical decision-making for patients undergoing MT.
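The final classification step described above, a K-Nearest Neighbor vote over the selected features, can be sketched in a few lines of pure Python. The feature values and labels below are illustrative toy data, not values from the study:

```python
import math
from collections import Counter

def knn_predict(train_X, train_y, x, k=5):
    """Classify sample x by majority vote among its k nearest training
    samples under Euclidean distance (basic K-Nearest Neighbor rule)."""
    dists = sorted(
        (math.dist(row, x), label) for row, label in zip(train_X, train_y)
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Toy 2-D feature vectors standing in for selected clinical/radiomics/DL features:
train_X = [[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9], [0.85, 0.75]]
train_y = [0, 0, 1, 1, 1]  # 0 = no HT, 1 = hemorrhagic transformation
print(knn_predict(train_X, train_y, [0.82, 0.8], k=3))  # → 1
```

In practice the features would first be standardized and reduced as the abstract describes (ANOVA, correlation filtering, PCA, LASSO) before the KNN step.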

Multi-view contrastive learning and symptom extraction insights for medical report generation.

Bai Q, Zou X, Alhaskawi A, Dong Y, Zhou H, Ezzi SHA, Kota VG, AbdullaAbdulla MHH, Abdalbary SA, Hu X, Lu H

PubMed · May 23, 2025
The task of generating medical reports automatically is of paramount importance in modern healthcare, offering a substantial reduction in radiologists' workload and accelerating clinical diagnosis and treatment. Current challenges include handling limited sample sizes and interpreting intricate multi-modal, multi-view medical data. This study presents a novel methodology for medical report generation that combines Multi-View Contrastive Learning (MVCL) applied to MRI data with a Symptom Consultant (SC) for extracting medical insights, improving the quality and efficiency of automated medical report generation. We introduce an advanced MVCL framework that maximizes the potential of multi-view MRI data to enhance visual feature extraction. Alongside it, the SC component distills critical medical insights from symptom descriptions. These components are integrated within a transformer decoder architecture, which is trained and evaluated on the Deep Wrist dataset. Our experimental analysis on the Deep Wrist dataset shows that the proposed integration of MVCL and SC significantly outperforms the baseline model in the accuracy and relevance of the generated medical reports. The results indicate that our approach is particularly effective at capturing and utilizing the complex information inherent in multi-modal, multi-view medical datasets. The combination of MVCL and SC constitutes a powerful approach to medical report generation that addresses existing challenges in the field. The demonstrated superiority of our model over traditional methods holds promise for substantial improvements in clinical diagnosis and automated report generation, marking a significant stride forward in medical technology.
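Multi-view contrastive learning typically rests on an InfoNCE-style objective: embeddings of two views of the same study are pulled together while embeddings of other studies are pushed apart. A minimal pure-Python sketch of that loss for a single anchor follows; the paper's exact formulation is not specified in the abstract, so this is the generic form, with toy 2-D embeddings:

```python
import math

def info_nce_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style contrastive loss for one anchor embedding: maximize
    cosine similarity to the positive view (another view of the same study)
    relative to negative embeddings from other studies."""
    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        return dot / (math.hypot(*u) * math.hypot(*v))

    logits = [cos(anchor, positive) / temperature] + [
        cos(anchor, n) / temperature for n in negatives
    ]
    m = max(logits)  # subtract max for numerical stability
    log_sum = m + math.log(sum(math.exp(l - m) for l in logits))
    return -(logits[0] - log_sum)  # negative log-softmax of the positive pair

# A well-aligned positive yields a smaller loss than a misaligned one:
loss_good = info_nce_loss([1, 0], [0.9, 0.1], [[-1, 0], [0, 1]])
loss_bad = info_nce_loss([1, 0], [-1, 0], [[0.9, 0.1], [0, 1]])
print(loss_good < loss_bad)  # → True
```

A full multi-view framework averages this loss over every anchor in a batch and over all view pairs.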

Ovarian Cancer Screening: Recommendations and Future Prospects.

Chiu S, Staley H, Jeevananthan P, Mascarenhas S, Fotopoulou C, Rockall A

PubMed · May 23, 2025
Ovarian cancer remains a significant cause of mortality among women, largely due to challenges in early detection. Current screening strategies, including transvaginal ultrasound and CA125 testing, have limited sensitivity and specificity, particularly in asymptomatic women or those with early-stage disease. The European Society of Gynaecological Oncology, the European Society for Medical Oncology, the European Society of Pathology, and other health organizations currently do not recommend routine population-based screening for ovarian cancer due to the high rates of false-positives and the absence of a reliable early detection method. This review examines existing ovarian cancer screening guidelines and explores recent advances in diagnostic technologies, including radiomics, artificial intelligence, point-of-care testing, and novel detection methods. Emerging technologies show promise for improving ovarian cancer detection by enhancing sensitivity and specificity compared to traditional methods. Artificial intelligence and radiomics have the potential to revolutionize ovarian cancer screening by identifying subtle diagnostic patterns, while liquid biopsy-based approaches and cell-free DNA profiling enable tumor-specific biomarker detection. Minimally invasive methods, such as intrauterine lavage and salivary diagnostics, provide avenues for population-wide applicability. However, large-scale validation is required to establish these techniques as effective and reliable screening options.
· Current ovarian cancer screening methods lack sensitivity and specificity for early-stage detection.
· Emerging technologies such as artificial intelligence, radiomics, and liquid biopsy offer improved diagnostic accuracy.
· Large-scale clinical validation is required, particularly for baseline-risk populations.
· Chiu S, Staley H, Jeevananthan P et al. Ovarian Cancer Screening: Recommendations and Future Prospects. Rofo 2025; DOI 10.1055/a-2589-5696.

Automated ventricular segmentation in pediatric hydrocephalus: how close are we?

Taha BR, Luo G, Naik A, Sabal L, Sun J, McGovern RA, Sandoval-Garcia C, Guillaume DJ

PubMed · May 23, 2025
The explosive growth of available high-quality imaging data, coupled with progress in hardware capabilities, has enabled an era of unprecedented performance in brain segmentation tasks. Despite the explosion of new data released by consortiums and groups around the world, most published, closed, or openly available segmentation models have either a limited or an unknown role in pediatric brains. This study explores the utility of state-of-the-art automated ventricular segmentation tools applied to pediatric hydrocephalus. Two popular, fast, whole-brain segmentation tools (FastSurfer and QuickNAT) were used to automatically segment the lateral ventricles and evaluate their accuracy in children with hydrocephalus. Forty scans from 32 patients were included in this study. The patients underwent imaging at the University of Minnesota Medical Center or satellite clinics, were between 0 and 18 years old, had an ICD-10 diagnosis that included the word hydrocephalus, and had at least one T1-weighted pre- or postcontrast MPRAGE sequence. Patients with poor-quality scans were excluded. Dice similarity coefficient (DSC) scores were used to compare segmentation outputs against manually segmented lateral ventricles. Overall, both models performed poorly, with a DSC of 0.61 for each segmentation tool. No statistically significant difference was noted between the models' performance (p = 0.86). In a multivariate linear regression examining factors associated with DSC performance, male gender (p = 0.66), presence of a ventricular catheter (p = 0.72), and MRI magnet strength (p = 0.23) were not statistically significant factors, whereas younger age (p = 0.03) and larger ventricular volumes (p = 0.01) were significantly associated with lower DSC values. A large-scale visualization of 196 scans in both models showed characteristic patterns of segmentation failure in larger ventricles.
Significant gaps exist in current cutting-edge segmentation models when applied to pediatric hydrocephalus. Researchers will need to address these types of gaps in performance through thoughtful consideration of their training data before reaching the ultimate goal of clinical deployment.
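The DSC metric used throughout the study above has a compact definition, 2|A∩B| / (|A| + |B|), which can be computed directly from two binary masks. A minimal sketch with illustrative toy voxel sets (not the study's data):

```python
def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient (DSC) between two binary segmentation
    masks given as iterables of voxel coordinates: 2|A∩B| / (|A| + |B|)."""
    a, b = set(mask_a), set(mask_b)
    if not a and not b:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2 * len(a & b) / (len(a) + len(b))

auto = {(0, 0), (0, 1), (1, 0), (1, 1)}    # hypothetical automated mask
manual = {(0, 1), (1, 0), (1, 1), (2, 1)}  # hypothetical manual reference
print(dice_coefficient(auto, manual))  # → 0.75
```

A DSC of 1.0 means perfect overlap and 0.0 means none, which puts the study's reported 0.61 well below what is usually considered clinically acceptable for ventricular segmentation.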

Deep Learning and Radiomic Signatures Associated with Tumor Immune Heterogeneity Predict Microvascular Invasion in Colon Cancer.

Jia J, Wang J, Zhang Y, Bai G, Han L, Niu Y

PubMed · May 23, 2025
This study aims to develop and validate a deep learning radiomics signature (DLRS) that integrates radiomics and deep learning features for the non-invasive prediction of microvascular invasion (MVI) in patients with colon cancer (CC), and to explore the potential association between the DLRS and tumor immune heterogeneity. This multi-center retrospective study included 1007 patients with CC from three medical centers and The Cancer Genome Atlas (TCGA-COAD) database. Patients from Medical Centers 1 and 2 were divided into a training cohort (n = 592) and an internal validation cohort (n = 255) in a 7:3 ratio. Medical Center 3 (n = 135) and the TCGA-COAD database (n = 25) served as external validation cohorts. Radiomics and deep learning features were extracted from contrast-enhanced venous-phase CT images. Feature selection was performed using machine learning algorithms, and three predictive models were developed: a radiomics model, a deep learning (DL) model, and a combined deep learning radiomics (DLR) model. The predictive performance of each model was evaluated using multiple metrics, including the area under the curve (AUC), sensitivity, and specificity. Additionally, differential gene expression analysis was conducted on RNA-seq data from the TCGA-COAD dataset to explore the association between the DLRS and tumor immune heterogeneity within the tumor microenvironment. Compared to the standalone radiomics and deep learning models, the DLR fusion model demonstrated superior predictive performance, with an AUC of 0.883 (95% CI: 0.828-0.937) in the internal validation cohort and 0.855 (95% CI: 0.775-0.935) in the external validation cohort. Furthermore, stratifying patients from the TCGA-COAD dataset into high-risk and low-risk groups based on the DLRS revealed significant differences in immune cell infiltration and immune checkpoint expression between the two groups (P < 0.05).
The contrast-enhanced CT-based DLR fusion model developed in this study effectively predicts the MVI status in patients with CC. This model serves as a non-invasive preoperative assessment tool and reveals a potential association between the DLRS and immune heterogeneity within the tumor microenvironment, providing insights to optimize individualized treatment strategies.

Renal Transplant Survival Prediction From Unsupervised Deep Learning-Based Radiomics on Early Dynamic Contrast-Enhanced MRI.

Milecki L, Bodard S, Kalogeiton V, Poinard F, Tissier AM, Boudhabhay I, Correas JM, Anglicheau D, Vakalopoulou M, Timsit MO

PubMed · May 23, 2025
End-stage renal disease is characterized by an irreversible decline in kidney function. Despite the risk of chronic dysfunction of the transplanted kidney, renal transplantation is considered the most effective of the available treatment options. Clinical predictors of graft survival, such as allocation variables or results of pathological examinations, have been widely studied; nevertheless, medical imaging is used clinically only to assess current transplant status. This study investigated the use of unsupervised deep learning-based algorithms to identify, from early dynamic contrast-enhanced magnetic resonance imaging (MRI) of renal transplants, rich radiomic features that may be linked to graft survival. A retrospective cohort of 108 transplanted patients (mean age 50 ± 15 years; 67 men) undergoing systematic MRI follow-up examinations (2013 to 2015) was used to train deep convolutional neural network models based on an unsupervised contrastive learning approach. Five-year graft survival analysis was performed on the resulting artificial intelligence (AI) radiomics features using penalized Cox models and Kaplan-Meier estimates. On a validation set of 48 patients (mean age 54 ± 13 years; 30 men) with 1-month post-transplantation MRI examinations, the proposed approach demonstrated promising 5-year graft survival prediction, with a concordance index of 72.7% from the AI radiomics features. Unsupervised clustering of these radiomics features enabled statistically significant stratification of patients (p = 0.029). This proof-of-concept study demonstrated the capability of AI algorithms to extract radiomics features that enable renal transplant survival prediction. Further studies are needed to establish the robustness of this technique and to identify appropriate procedures for integrating such an approach into multimodal and clinical settings.
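The concordance index reported above (Harrell's C) measures how often, among comparable patient pairs, the patient with the higher risk score fails earlier. A pure-Python sketch with illustrative survival data (not the study's cohort):

```python
from itertools import combinations

def concordance_index(times, events, risks):
    """Harrell's concordance index: fraction of comparable pairs in which
    the patient with the higher risk score has the shorter survival time.
    A pair is comparable when the earlier time is an observed event."""
    concordant = ties = usable = 0
    for p1, p2 in combinations(zip(times, events, risks), 2):
        if p1[0] == p2[0]:
            continue  # simplest handling: skip tied times
        if p1[0] > p2[0]:
            p1, p2 = p2, p1  # order so p1 has the shorter follow-up
        if not p1[1]:
            continue  # earlier time censored: pair not comparable
        usable += 1
        if p1[2] > p2[2]:
            concordant += 1
        elif p1[2] == p2[2]:
            ties += 1
    return (concordant + 0.5 * ties) / usable

times = [5, 8, 12, 20]        # months to event or censoring (toy data)
events = [1, 1, 0, 1]         # 1 = graft failure observed, 0 = censored
risks = [0.9, 0.7, 0.4, 0.2]  # model risk scores
print(concordance_index(times, events, risks))  # → 1.0
```

A value of 0.5 corresponds to random ranking and 1.0 to perfect ranking, so the study's 72.7% indicates moderately good discrimination.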

Meta-analysis of AI-based pulmonary embolism detection: How reliable are deep learning models?

Lanza E, Ammirabile A, Francone M

PubMed · May 23, 2025
Deep learning (DL)-based methods show promise in detecting pulmonary embolism (PE) on CT pulmonary angiography (CTPA), potentially improving diagnostic accuracy and workflow efficiency. This meta-analysis aimed to (1) determine pooled performance estimates of DL algorithms for PE detection; and (2) compare the diagnostic efficacy of convolutional neural network (CNN)- versus U-Net-based architectures. Following PRISMA guidelines, we searched PubMed and EMBASE through April 15, 2025 for English-language studies (2010-2025) reporting DL models for PE detection with extractable 2 × 2 data or performance metrics. True/false positives and negatives were reconstructed when necessary under an assumed 50% PE prevalence (with 0.5 continuity correction). We approximated AUROC as the mean of sensitivity and specificity if not directly reported. Sensitivity, specificity, accuracy, PPV, and NPV were pooled using a DerSimonian-Laird random-effects model with Freeman-Tukey transformation; AUROC values were combined via a fixed-effect inverse-variance approach. Heterogeneity was assessed by Cochran's Q and I². Subgroup analyses contrasted CNN versus U-Net models. Twenty-four studies (n = 22,984 patients) met the inclusion criteria. Pooled estimates were: AUROC 0.895 (95% CI: 0.874-0.917), sensitivity 0.894 (0.856-0.923), specificity 0.871 (0.831-0.903), accuracy 0.857 (0.833-0.882), PPV 0.832 (0.794-0.869), and NPV 0.902 (0.874-0.929). Between-study heterogeneity was high (I² ≈ 97% for sensitivity/specificity). U-Net models exhibited higher sensitivity (0.899 vs 0.893) and CNN models higher specificity (0.926 vs 0.900); subgroup Q-tests confirmed significant differences for both sensitivity (p = 0.0002) and specificity (p < 0.001). DL algorithms demonstrate high diagnostic accuracy for PE detection on CTPA, with complementary strengths: U-Net architectures excel in true-positive identification, whereas CNNs yield fewer false positives.
However, marked heterogeneity underscores the need for standardized, prospective validation before routine clinical implementation.
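The Freeman-Tukey step used in the pooling above stabilizes the variance of per-study proportions (e.g., sensitivities) via a double-arcsine transform before averaging. The sketch below uses simple inverse-variance (fixed-effect) weights and the basic sin²(t/2) back-transform for brevity; the meta-analysis itself uses a DerSimonian-Laird random-effects model, which additionally estimates between-study variance, and more refined back-transforms exist. The study counts are illustrative:

```python
import math

def ft_transform(x, n):
    """Freeman-Tukey double-arcsine transform of the proportion x/n,
    with approximate variance 1/(n + 0.5)."""
    t = math.asin(math.sqrt(x / (n + 1))) + math.asin(math.sqrt((x + 1) / (n + 1)))
    return t, 1.0 / (n + 0.5)

def pooled_proportion(counts):
    """Inverse-variance pooling of Freeman-Tukey transformed proportions,
    back-transformed with the simple sin^2(t/2) approximation."""
    transformed = [ft_transform(x, n) for x, n in counts]
    weights = [1.0 / v for _, v in transformed]
    t_bar = sum(t * w for (t, _), w in zip(transformed, weights)) / sum(weights)
    return math.sin(t_bar / 2) ** 2

# Hypothetical (TP, TP+FN) pairs, i.e., per-study sensitivity counts:
studies = [(90, 100), (170, 200), (45, 50)]
print(round(pooled_proportion(studies), 3))
```

For a single balanced study such as (50, 100), the transform is symmetric and the back-transformed pooled value is exactly 0.5, which makes the round trip easy to sanity-check.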

A general survey on medical image super-resolution via deep learning.

Yu M, Xu Z, Lukasiewicz T

PubMed · May 23, 2025
Medical image super-resolution (SR) is a classic regression task in low-level vision. Constrained by hardware, acquisition time, low radiation dose, and other factors, the spatial resolution of some medical images is insufficient. To address this problem, many SR methods have been proposed; in recent years especially, deep learning-based medical image SR networks have developed rapidly. This survey provides a modular and detailed introduction to the key components of deep learning-based medical image SR. We first introduce the background concepts of deep learning and the medical image SR task. We then present a comprehensive analysis of the key components of medical image SR networks from the perspectives of effective architecture, upsampling module, learning strategy, and image quality assessment. Furthermore, we highlight urgent open problems in deep learning-based medical image SR. Finally, we summarize the trends and challenges of future development.
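The upsampling module named among the key components above is, at its simplest, a fixed interpolation rule. A pure-Python sketch of nearest-neighbor upsampling on a toy 2 × 2 image follows; learned alternatives (transposed convolution, sub-pixel shuffling) replace this fixed rule with trainable parameters:

```python
def nearest_neighbor_upsample(img, scale):
    """Nearest-neighbor upsampling of a 2D image (list of lists) by an
    integer scale factor: each output pixel copies the nearest input pixel."""
    return [[img[i // scale][j // scale]
             for j in range(len(img[0]) * scale)]
            for i in range(len(img) * scale)]

lowres = [[1, 2],
          [3, 4]]
for row in nearest_neighbor_upsample(lowres, 2):
    print(row)
# → [1, 1, 2, 2]
#   [1, 1, 2, 2]
#   [3, 3, 4, 4]
#   [3, 3, 4, 4]
```

SR networks differ mainly in where this operation sits (pre-, post-, or progressive upsampling) and in whether the interpolation weights are learned.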

Validation and comparison of three different methods for automated identification of distal femoral landmarks in 3D.

Berger L, Brößner P, Ehreiser S, Tokunaga K, Okamoto M, Radermacher K

PubMed · May 23, 2025
Identification of bony landmarks in medical images is of high importance for 3D planning in orthopaedic surgery. Automated landmark identification has the potential to streamline clinical routines and enables the scientific analysis of large databases. To the authors' knowledge, no direct comparison of different methods for automated landmark detection on the same dataset has been published to date. We compared three methods for automated femoral landmark identification: an artificial neural network, a statistical shape model (SSM), and a geometric approach. All methods were compared against manual measurements by two raters on the task of identifying six femoral landmarks on CT data, or surface models derived from them, of 202 femora. The accuracy of the methods was in the range of the manual measurements and comparable to that reported in previous studies. The geometric approach showed a significantly higher average deviation from the manually selected reference landmarks, while there was no statistically significant difference for the neural network and the SSM. All fully automated methods show potential for use, depending on the use case. Characteristics of the different methods, such as the input data required (raw CT or segmented bone surface models, amount of training data) and the methods' robustness, can guide method selection for the individual application.