
Right Ventricular Strain as a Key Feature in Interpretable Machine Learning for Identification of Takotsubo Syndrome: A Multicenter CMR-based Study.

Du Z, Hu H, Shen C, Mei J, Feng Y, Huang Y, Chen X, Guo X, Hu Z, Jiang L, Su Y, Biekan J, Lyv L, Chong T, Pan C, Liu K, Ji J, Lu C

PubMed · May 21, 2025
To develop an interpretable machine learning (ML) model based on cardiac magnetic resonance (CMR) multimodal parameters and clinical data to discriminate Takotsubo syndrome (TTS), acute myocardial infarction (AMI), and acute myocarditis (AM), and to further assess the diagnostic value of right ventricular (RV) strain in TTS. This study analyzed CMR and clinical data of 130 patients from three centers. Key features were selected using least absolute shrinkage and selection operator (LASSO) regression and random forest. Data were split into a training cohort and an internal testing cohort (ITC) in a 7:3 ratio, with overfitting mitigated using leave-one-out cross-validation and bootstrap methods. Nine ML models were evaluated using standard performance metrics, with Shapley additive explanations (SHAP) analysis used for model interpretation. A total of 11 key features were identified. The extreme gradient boosting (XGBoost) model showed the best performance, with an area under the curve (AUC) of 0.94 (95% CI: 0.85-0.97) in the ITC. Right ventricular basal circumferential strain (RVCS-basal) was the most important feature for identifying TTS. Its absolute value was significantly higher in TTS patients than in AMI and AM patients (-9.93%, -5.21%, and -6.18%, respectively, p < 0.001), with values above -6.55% contributing to a diagnosis of TTS. This study developed an interpretable ternary classification ML model for identifying TTS and used SHAP analysis to elucidate the significant value of RVCS-basal in TTS diagnosis. An online calculator (https://lsszxyy.shinyapps.io/XGboost/) based on this model was developed to provide immediate decision support for clinical use.
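A toy illustration, not the authors' model: the abstract's reported RVCS-basal cut-off applied as a single-feature rule. The real study used an XGBoost classifier over 11 features with SHAP interpretation; only the cut-off and group means below come from the abstract, and the sign convention (strain reported as negative, larger magnitude = more negative) is an assumption.

```python
# Hypothetical single-feature rule based on the reported RVCS-basal cut-off.
# The actual study used an 11-feature XGBoost model; this only illustrates
# how the -6.55% threshold separates the reported group means.
TTS_CUTOFF = -6.55  # RVCS-basal (%); strain values are negative by convention

def flags_tts(rvcs_basal_pct: float) -> bool:
    """True when RVCS-basal magnitude exceeds the reported cut-off."""
    return rvcs_basal_pct < TTS_CUTOFF  # more negative = larger strain magnitude

# Group means from the abstract: TTS -9.93%, AMI -5.21%, AM -6.18%
print(flags_tts(-9.93), flags_tts(-5.21), flags_tts(-6.18))  # True False False
```

Only the TTS group mean crosses the threshold, matching the abstract's claim that RVCS-basal was the most discriminative feature.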

Enhancing nuclei segmentation in breast histopathology images using U-Net with backbone architectures.

C V LP, V G B, Bhooshan RS

PubMed · May 21, 2025
Breast cancer remains a leading cause of mortality among women worldwide, underscoring the need for accurate and timely diagnostic methods. Precise segmentation of nuclei in breast histopathology images is crucial for effective diagnosis and prognosis, offering critical insights into tumor characteristics and informing treatment strategies. This paper presents an enhanced U-Net architecture utilizing ResNet-34 as an advanced backbone, aimed at improving nuclei segmentation performance. The proposed model is evaluated and compared with standard U-Net and its other variants, including U-Net with VGG-16 and Inception-v3 backbones, using the BreCaHad dataset with nuclei masks generated through ImageJ software. The U-Net model with ResNet-34 backbone achieved superior performance, recording an Intersection over Union (IoU) score of 0.795, significantly outperforming the basic U-Net's IoU score of 0.725. The integration of advanced backbones and data augmentation techniques substantially improved segmentation accuracy, especially on limited medical imaging datasets. Comparative analysis demonstrated that ResNet-34 consistently surpassed other configurations across multiple metrics, including IoU, accuracy, precision, and F1 score. Further validation on the BNS and MoNuSeg-2018 datasets confirmed the robustness of the proposed model. This study highlights the potential of advanced deep learning architectures combined with augmentation methods to address challenges in nuclei segmentation, contributing to the development of more effective clinical diagnostic tools and improved patient care outcomes.
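The Intersection over Union (IoU) score used to compare the U-Net variants above can be sketched as follows; masks are shown as flat 0/1 lists for brevity, and the handling of two empty masks as a perfect match is a common convention, not something the paper specifies.

```python
# Minimal IoU sketch for binary segmentation masks (flattened, values 0/1).
def iou(pred, target):
    """Intersection over Union between two equal-length binary masks."""
    inter = sum(p & t for p, t in zip(pred, target))
    union = sum(p | t for p, t in zip(pred, target))
    return inter / union if union else 1.0  # convention: two empty masks match

pred   = [1, 1, 0, 1, 0, 0]
target = [1, 0, 0, 1, 1, 0]
print(iou(pred, target))  # 2 overlapping pixels / 4 in union = 0.5
```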

Large medical image database impact on generalizability of synthetic CT scan generation.

Boily C, Mazellier JP, Meyer P

PubMed · May 21, 2025
This study systematically examines the impact of training database size and the generalizability of deep learning models for synthetic medical image generation. Specifically, we employ a Cycle-Consistency Generative Adversarial Network (CycleGAN) with softly paired data to synthesize kilovoltage computed tomography (kVCT) images from megavoltage computed tomography (MVCT) scans. Unlike previous works, which were constrained by limited data availability, our study uses an extensive database comprising 4,000 patient CT scans, an order of magnitude larger than prior research, allowing for a more rigorous assessment of database size in medical image translation. We quantitatively evaluate the fidelity of the generated synthetic images using established image similarity metrics, including Mean Absolute Error (MAE) and Structural Similarity Index Measure (SSIM). Beyond assessing image quality, we investigate the model's capacity for generalization by analyzing its performance across diverse patient subgroups, considering factors such as sex, age, and anatomical region. This approach enables a more granular understanding of how dataset composition influences model robustness.
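Of the two similarity metrics named above, Mean Absolute Error (MAE) is simple enough to sketch directly; SSIM requires local luminance/contrast statistics and is omitted. The Hounsfield-unit values below are hypothetical.

```python
# Sketch of the MAE image-similarity metric over flattened voxel arrays.
def mae(synthetic, reference):
    """Mean absolute voxel-wise difference between two equal-length arrays."""
    assert len(synthetic) == len(reference)
    return sum(abs(s - r) for s, r in zip(synthetic, reference)) / len(reference)

# Hypothetical HU values for a handful of voxels
print(mae([10, -5, 30], [12, -5, 26]))  # (2 + 0 + 4) / 3 = 2.0
```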

BrainView: A Cloud-based Deep Learning System for Brain Image Segmentation, Tumor Detection and Visualization.

Ghose P, Jamil HM

PubMed · May 21, 2025
A brain tumor is an abnormal growth in the brain that disrupts its functionality and poses a significant threat to human life by damaging neurons. Early detection and classification of brain tumors are crucial to prevent complications and maintain good health. Recent advancements in deep learning have shown immense potential in image classification and segmentation for tumor identification and classification. In this study, we present BrainView, a platform for detection and segmentation of brain tumors from Magnetic Resonance Images (MRI) using deep learning. We utilized the EfficientNetB7 pre-trained model to design our proposed DeepBrainNet classification model, which analyzes brain MRI images to classify tumor type. We also propose an EfficientNetB7-based image segmentation model, EffB7-UNet, for tumor localization. Experimental results show significantly high classification (99.96%) and segmentation (92.734%) accuracies for the proposed models. Finally, we outline a cloud application for BrainView, built with Flask and Flutter, that allows researchers and clinicians to use our machine learning models online for research purposes.

Deep learning radiopathomics based on pretreatment MRI and whole slide images for predicting overall survival in locally advanced nasopharyngeal carcinoma.

Yi X, Yu X, Li C, Li J, Cao H, Lu Q, Li J, Hou J

PubMed · May 21, 2025
To develop an integrative radiopathomic model based on deep learning to predict overall survival (OS) in locally advanced nasopharyngeal carcinoma (LANPC) patients. A cohort of 343 LANPC patients with pretreatment MRI and whole slide images (WSI) were randomly divided into training (n = 202), validation (n = 91), and external test (n = 50) sets. For WSIs, a self-attention mechanism was employed to assess the significance of different patches for the prognostic task, aggregating them into a WSI-level representation. For MRI, a multilayer perceptron was used to encode the extracted radiomic features, resulting in an MRI-level representation. These were combined in a multimodal fusion model to produce prognostic predictions. Model performance was evaluated using the concordance index (C-index), and Kaplan-Meier curves were employed for risk stratification. To enhance model interpretability, attention-based and Integrated Gradients techniques were applied to explain how WSI and MRI features contribute to prognosis predictions. The radiopathomics model achieved high predictive accuracy for OS, with a C-index of 0.755 (95% CI: 0.673-0.838) and 0.744 (95% CI: 0.623-0.808) in the training and validation sets, respectively, outperforming single-modality models (radiomic signature: 0.636, 95% CI: 0.584-0.688; deep pathomic signature: 0.736, 95% CI: 0.684-0.810). In the external test, similar findings were observed for the predictive performance of the radiopathomics model, radiomic signature, and deep pathomic signature, with C-indices of 0.735, 0.626, and 0.660, respectively. The radiopathomics model effectively stratified patients into high- and low-risk groups (P < 0.001). Additionally, attention heatmaps revealed that high-attention regions corresponded with tumor areas in both risk groups. The radiopathomics model holds promise for predicting clinical outcomes in LANPC patients, offering a potential tool for improving clinical decision-making.
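The C-index used throughout the evaluation above can be sketched as Harrell's concordance index: among comparable patient pairs, the fraction where the higher predicted risk belongs to the patient with the shorter observed survival. The toy cohort below is hypothetical.

```python
# Sketch of Harrell's concordance index for right-censored survival data.
def c_index(times, events, risks):
    """times: observed times; events: 1 = event observed, 0 = censored;
    risks: predicted risk scores (higher = worse prognosis)."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # a pair is comparable when patient i has an observed event
            # strictly before the observed time of patient j
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    concordant += 0.5  # ties in risk count half
    return concordant / comparable

# Toy cohort: shorter survival carries higher predicted risk
print(c_index([2, 4, 6], [1, 1, 1], [0.9, 0.5, 0.1]))  # perfectly concordant -> 1.0
```

A C-index of 0.5 corresponds to random ranking, 1.0 to perfect ranking, which puts the reported 0.735 external-test value in context.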

Update on the detection of frailty in older adults: a multicenter cohort machine learning-based study protocol.

Fernández-Carnero S, Martínez-Pozas O, Pecos-Martín D, Pardo-Gómez A, Cuenca-Zaldívar JN, Sánchez-Romero EA

PubMed · May 21, 2025
This study aims to investigate the relationship between muscle activation variables assessed via ultrasound and the comprehensive geriatric assessment, and to analyze ultrasound images to determine their correlation with morbidity and mortality factors in frail patients. The present multicenter cohort study will include 500 older adults diagnosed with frailty, recruited across day care centers and nursing homes. Frail older adults will be evaluated via instrumental and functional tests, along with specific ultrasound images to study sarcopenia and nutrition, followed by a detailed analysis of the correlation among all collected variables. The study addresses the limitations of previous research by including a large sample of 500 patients and measuring muscle parameters beyond thickness; it also aims to identify ultrasound markers associated with a higher risk of complications in frail patients. A comprehensive analysis of functional, ultrasound, and nutritional variables will be conducted to understand their correlation with overall health and risk of complications in frail older patients. The study was approved by the Research Ethics Committee of the Hospital Universitario Puerta de Hierro, Madrid, Spain (Act nº 18/2023), and registered at https://clinicaltrials.gov/ (NCT06218121).

Performance of multimodal prediction models for intracerebral hemorrhage outcomes using real-world data.

Matsumoto K, Suzuki M, Ishihara K, Tokunaga K, Matsuda K, Chen J, Yamashiro S, Soejima H, Nakashima N, Kamouchi M

PubMed · May 21, 2025
We aimed to develop and validate multimodal models integrating computed tomography (CT) images, text, and tabular clinical data to predict poor functional outcomes and in-hospital mortality in patients with intracerebral hemorrhage (ICH). These models were designed to assist non-specialists in emergency settings with limited access to stroke specialists. A retrospective analysis of 527 patients with ICH admitted to a Japanese tertiary hospital between April 2019 and February 2022 was conducted. Deep learning techniques were used to extract features from three-dimensional CT images and unstructured data, which were then combined with tabular data to develop an L1-regularized logistic regression model to predict poor functional outcomes (modified Rankin scale score 3-6) and in-hospital mortality. Model performance was evaluated using discrimination metrics, calibration plots, and decision curve analysis (DCA) on temporal validation data. The multimodal model utilizing both imaging and text data, such as medical interviews, exhibited the highest performance in predicting poor functional outcomes. In contrast, the model that combined imaging with tabular data, including physiological and laboratory results, demonstrated the best predictive performance for in-hospital mortality. These models exhibited high discriminative performance, with areas under the receiver operating characteristic curve (AUROC) of 0.86 (95% CI: 0.79-0.92) and 0.91 (95% CI: 0.84-0.96) for poor functional outcomes and in-hospital mortality, respectively. Calibration was satisfactory for predicting poor functional outcomes, but requires refinement for mortality prediction. The models performed similarly to or better than conventional risk scores, and DCA curves supported their clinical utility. Multimodal prediction models have the potential to aid non-specialists in making informed decisions regarding ICH cases in emergency departments as part of clinical decision support systems. Enhancing real-world data infrastructure and improving model calibration are essential for successful implementation in clinical practice.
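The decision curve analysis mentioned above rests on a net-benefit calculation that can be sketched directly: at a threshold probability p_t, true positives are traded against false positives weighted by the odds of the threshold. The counts below are hypothetical.

```python
# Sketch of the net-benefit formula behind decision curve analysis (DCA).
def net_benefit(tp, fp, n, p_t):
    """Net benefit at threshold probability p_t for a cohort of n patients:
    benefit of true positives minus harm of false positives, weighted by
    the threshold odds p_t / (1 - p_t)."""
    return tp / n - (fp / n) * (p_t / (1.0 - p_t))

# Hypothetical counts: 40 true positives, 10 false positives among 100 patients
print(round(net_benefit(40, 10, 100, 0.2), 3))  # 0.4 - 0.1 * 0.25 = 0.375
```

Plotting net benefit across a range of p_t values, against "treat all" and "treat none" strategies, yields the decision curves the authors report.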

Three-Blind Validation Strategy of Deep Learning Models for Image Segmentation.

Larroza A, Pérez-Benito FJ, Tendero R, Perez-Cortes JC, Román M, Llobet R

PubMed · May 21, 2025
Image segmentation plays a central role in computer vision applications such as medical imaging, industrial inspection, and environmental monitoring. However, evaluating segmentation performance can be particularly challenging when ground truth is not clearly defined, as is often the case in tasks involving subjective interpretation. These challenges are amplified by inter- and intra-observer variability, which complicates the use of human annotations as a reliable reference. To address this, we propose a novel validation framework-referred to as the three-blind validation strategy-that enables rigorous assessment of segmentation models in contexts where subjectivity and label variability are significant. The core idea is to have a third independent expert, blind to the labeler identities, assess a shuffled set of segmentations produced by multiple human annotators and/or automated models. This allows for the unbiased evaluation of model performance and helps uncover patterns of disagreement that may indicate systematic issues with either human or machine annotations. The primary objective of this study is to introduce and demonstrate this validation strategy as a generalizable framework for robust model evaluation in subjective segmentation tasks. We illustrate its practical implementation in a mammography use case involving dense tissue segmentation while emphasizing its potential applicability to a broad range of segmentation scenarios.
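The blinding step at the core of the strategy above can be sketched as a seeded shuffle-and-relabel: segmentations from several annotators (human or model) are anonymized before the third reviewer scores them, with a key retained for later unblinding. The function and identifiers below are illustrative, not from the paper.

```python
import random

# Illustrative sketch of the blinding step in a three-blind validation:
# anonymize segmentations so the reviewer cannot tell human from model output.
def blind_segmentations(segmentations, seed=0):
    """segmentations: dict annotator_id -> mask.
    Returns (anonymized dict, unblinding key)."""
    items = sorted(segmentations.items())   # deterministic starting order
    rng = random.Random(seed)               # seeded for reproducibility
    rng.shuffle(items)
    anonymized = {f"case_{i}": mask for i, (_, mask) in enumerate(items)}
    key = {f"case_{i}": annotator for i, (annotator, _) in enumerate(items)}
    return anonymized, key

blinded, key = blind_segmentations(
    {"expert_A": "maskA", "expert_B": "maskB", "model": "maskM"})
print(sorted(key.values()))  # identities are preserved in the key, hidden in `blinded`
```

After review, scores assigned to `case_i` are mapped back through `key` to attribute them to the original annotator or model.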

The Desmoid Dilemma: Challenges and Opportunities in Assessing Tumor Burden and Therapeutic Response.

Chang YC, Nixon B, Souza F, Cardoso FN, Dayan E, Geiger EJ, Rosenberg A, D'Amato G, Subhawong T

PubMed · May 21, 2025
Desmoid tumors are rare, locally invasive soft-tissue tumors with unpredictable clinical behavior. Imaging plays a crucial role in their diagnosis, measurement of disease burden, and assessment of treatment response. However, desmoid tumors' unique imaging features present challenges to conventional imaging metrics. The heterogeneous nature of these tumors, with a variable composition (fibrous, myxoid, or cellular), complicates accurate delineation of tumor boundaries and volumetric assessment. Furthermore, desmoid tumors can demonstrate prolonged stability or spontaneous regression, and biologic quiescence is often manifested by collagenization rather than bulk size reduction, making traditional size-based response criteria, such as Response Evaluation Criteria in Solid Tumors (RECIST), suboptimal. To overcome these limitations, advanced imaging techniques offer promising opportunities. Functional and parametric imaging methods, such as diffusion-weighted MRI, dynamic contrast-enhanced MRI, and T2 relaxometry, can provide insights into tumor cellularity and maturation. Radiomics and artificial intelligence approaches may enhance quantitative analysis by extracting and correlating complex imaging features with biological behavior. Moreover, imaging biomarkers could facilitate earlier detection of treatment efficacy or resistance, enabling tailored therapy. By integrating advanced imaging into clinical practice, it may be possible to refine the evaluation of disease burden and treatment response, ultimately improving the management and outcomes of patients with desmoid tumors.
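The size-based RECIST categories the article calls suboptimal for desmoid tumors can be sketched in simplified form (ignoring RECIST 1.1 refinements such as the nadir reference for progression and the 5 mm absolute-increase rule):

```python
# Simplified sketch of RECIST size-based response categories, using the sum
# of target-lesion longest diameters. RECIST 1.1 details (nadir reference,
# 5 mm absolute-increase rule, non-target lesions) are deliberately omitted.
def recist_category(baseline_sum_mm, current_sum_mm):
    if current_sum_mm == 0:
        return "CR"  # complete response: all target lesions disappeared
    change = (current_sum_mm - baseline_sum_mm) / baseline_sum_mm
    if change <= -0.30:
        return "PR"  # partial response: >= 30% decrease
    if change >= 0.20:
        return "PD"  # progressive disease: >= 20% increase
    return "SD"      # stable disease

print(recist_category(100, 65), recist_category(100, 125), recist_category(100, 95))
```

A collagenizing desmoid tumor may remain "SD" by these thresholds despite genuine biologic response, which is exactly the limitation the functional and parametric imaging methods above are meant to address.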

Feasibility of an AI-driven Classification of Tuberous Breast Deformity: A Siamese Network Approach with a Continuous Tuberosity Score.

Vaccari S, Paderno A, Furlan S, Cavallero MF, Lupacchini AM, Di Giuli R, Klinger M, Klinger F, Vinci V

PubMed · May 20, 2025
Tuberous breast deformity (TBD) is a congenital condition characterized by constriction of the breast base, parenchymal hypoplasia, and areolar herniation. The absence of a universally accepted classification system complicates diagnosis and surgical planning, leading to variability in clinical outcomes. Artificial intelligence (AI) has emerged as a powerful adjunct in medical imaging, enabling objective, reproducible, and data-driven diagnostic assessments. This study introduces an AI-driven diagnostic tool for TBD classification using a Siamese Network trained on paired frontal and lateral images. Additionally, the model generates a continuous Tuberosity Score (ranging from 0 to 1) based on embedding vector distances, offering an objective measure to enhance surgical planning and improve clinical outcomes. A dataset of 200 expertly classified frontal and lateral breast images (100 tuberous, 100 non-tuberous) was used to train a Siamese Network with contrastive loss. The model extracted high-dimensional feature embeddings to differentiate tuberous from non-tuberous breasts. Five-fold cross-validation ensured robust performance evaluation. Performance metrics included accuracy, precision, recall, and F1-score. Visualization techniques, such as t-SNE clustering and occlusion sensitivity mapping, were employed to interpret model decisions. The model achieved an average accuracy of 96.2% ± 5.5%, with balanced precision and recall. The Tuberosity Score, derived from the Euclidean distance between embeddings, provided a continuous measure of deformity severity, correlating well with clinical assessments. This AI-based framework offers an objective, high-accuracy classification system for TBD. The Tuberosity Score enhances diagnostic precision, potentially aiding surgical planning and improving patient outcomes.
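The two ingredients named above, contrastive loss on embedding distances and a distance-derived score, can be sketched as follows. The Hadsell-style loss form is a standard choice for Siamese training, and the d/(1+d) squashing used for the score is an assumption; the abstract does not specify how distances are mapped to [0, 1].

```python
import math

# Sketch of Siamese training ingredients: a Hadsell-style contrastive loss
# and a distance-derived severity score. The 0-1 squashing is an assumed
# mapping, not the paper's.
def contrastive_loss(d, similar, margin=1.0):
    """Contrastive loss on an embedding distance d: pull similar pairs
    together, push dissimilar pairs beyond the margin."""
    if similar:
        return 0.5 * d ** 2
    return 0.5 * max(0.0, margin - d) ** 2

def tuberosity_score(emb, ref_non_tuberous):
    """Map Euclidean distance from a non-tuberous reference into (0, 1)."""
    d = math.dist(emb, ref_non_tuberous)
    return d / (1.0 + d)  # assumed squashing: larger distance -> closer to 1

print(round(contrastive_loss(0.2, similar=True), 4))   # 0.02
print(round(contrastive_loss(0.2, similar=False), 4))  # 0.32
print(round(tuberosity_score([3.0, 4.0], [0.0, 0.0]), 3))  # d = 5 -> 0.833
```

A well-separated embedding space then makes the score monotone in deformity severity, which is what lets a single scalar summarize the continuum between non-tuberous and tuberous anatomy.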