
Matsumoto K, Suzuki M, Ishihara K, Tokunaga K, Matsuda K, Chen J, Yamashiro S, Soejima H, Nakashima N, Kamouchi M

pubmed · May 21, 2025
We aimed to develop and validate multimodal models integrating computed tomography (CT) images, text, and tabular clinical data to predict poor functional outcomes and in-hospital mortality in patients with intracerebral hemorrhage (ICH). These models were designed to assist non-specialists in emergency settings with limited access to stroke specialists. A retrospective analysis of 527 patients with ICH admitted to a Japanese tertiary hospital between April 2019 and February 2022 was conducted. Deep learning techniques were used to extract features from three-dimensional CT images and unstructured data, which were then combined with tabular data to develop an L1-regularized logistic regression model to predict poor functional outcomes (modified Rankin scale score 3-6) and in-hospital mortality. The model's performance was evaluated by assessing discrimination metrics, calibration plots, and decision curve analysis (DCA) using temporal validation data. The multimodal model utilizing both imaging and text data, such as medical interviews, exhibited the highest performance in predicting poor functional outcomes. In contrast, the model that combined imaging with tabular data, including physiological and laboratory results, demonstrated the best predictive performance for in-hospital mortality. These models exhibited high discriminative performance, with areas under the receiver operating characteristic curve (AUROCs) of 0.86 (95% CI: 0.79-0.92) and 0.91 (95% CI: 0.84-0.96) for poor functional outcomes and in-hospital mortality, respectively. Calibration was satisfactory for predicting poor functional outcomes, but requires refinement for mortality prediction. The models performed similarly to or better than conventional risk scores, and DCA curves supported their clinical utility. Multimodal prediction models have the potential to aid non-specialists in making informed decisions regarding ICH cases in emergency departments as part of clinical decision support systems.
Enhancing real-world data infrastructure and improving model calibration are essential for successful implementation in clinical practice.
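The fusion step described above (deep image/text features concatenated with tabular variables, then an L1-regularized logistic regression) can be sketched as follows. This is a minimal illustration on synthetic data, not the study's actual pipeline; the feature dimensions and regularization strength `C` are arbitrary choices.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 200
img_feat = rng.normal(size=(n, 32))   # stand-in for CT deep features
txt_feat = rng.normal(size=(n, 16))   # stand-in for text embeddings
tab_feat = rng.normal(size=(n, 8))    # stand-in for tabular clinical data
X = np.hstack([img_feat, txt_feat, tab_feat])
# synthetic binary outcome driven by a few of the features
y = (X[:, 0] + 0.5 * X[:, 33] + rng.normal(scale=0.5, size=n) > 0).astype(int)

X = StandardScaler().fit_transform(X)
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, y)
n_selected = int(np.count_nonzero(clf.coef_))
print(n_selected, "of", X.shape[1], "features retained by the L1 penalty")
```

The L1 penalty drives uninformative coefficients to exactly zero, which is why this model family is attractive when deep features vastly outnumber patients.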

Sirui Li, Linkai Peng, Zheyuan Zhang, Gorkem Durak, Ulas Bagci

arxiv preprint · May 21, 2025
Foundation models (FMs) such as CLIP and SAM have recently shown great promise in image segmentation tasks, yet their adaptation to 3D medical imaging, particularly for pathology detection and segmentation, remains underexplored. A critical challenge arises from the domain gap between natural images and medical volumes: existing FMs, pre-trained on 2D data, struggle to capture 3D anatomical context, limiting their utility in clinical applications like tumor segmentation. To address this, we propose an adaptation framework called TAGS: Tumor Adaptive Guidance for SAM, which unlocks 2D FMs for 3D medical tasks through multi-prompt fusion. By preserving most of the pre-trained weights, our approach enhances SAM's spatial feature extraction using CLIP's semantic insights and anatomy-specific prompts. Extensive experiments on three open-source tumor segmentation datasets demonstrate that our model surpasses the state-of-the-art medical image segmentation models (+46.88% over nnUNet), interactive segmentation frameworks, and other established medical FMs, including SAM-Med2D, SAM-Med3D, SegVol, Universal, 3D-Adapter, and SAM-B (at least +13% over them). This highlights the robustness and adaptability of our proposed framework across diverse medical segmentation tasks.
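The abstract does not spell out the multi-prompt fusion mechanism, but the general idea of guiding a frozen spatial feature map with a set of prompt embeddings can be sketched generically. The function below is an illustrative residual softmax fusion, not TAGS itself; all names and shapes are assumptions.

```python
import numpy as np

def multi_prompt_fusion(spatial_feat, prompt_embs):
    """Generic prompt-guided fusion sketch: weight each voxel's feature by its
    softmax affinity to a set of prompt embeddings, then add the prompt mixture
    residually so the pretrained spatial features are preserved.
    spatial_feat: (C, N) flattened 3D feature map; prompt_embs: (P, C)."""
    affinity = prompt_embs @ spatial_feat                # (P, N) prompt-voxel affinity
    affinity -= affinity.max(axis=0, keepdims=True)      # numerical stability
    weights = np.exp(affinity)
    weights /= weights.sum(axis=0, keepdims=True)        # softmax over prompts
    guidance = prompt_embs.T @ weights                   # (C, N) prompt mixture per voxel
    return spatial_feat + guidance                       # residual fusion
```

The residual form mirrors the paper's stated design goal of preserving most of the pretrained weights while injecting semantic guidance.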

Yifan Liu, Wuyang Li, Weihao Yu, Chenxin Li, Alexandre Alahi, Max Meng, Yixuan Yuan

arxiv preprint · May 21, 2025
Computed Tomography serves as an indispensable tool in clinical workflows, providing non-invasive visualization of internal anatomical structures. Existing CT reconstruction works are limited to small-capacity model architectures and inflexible volume representations. In this work, we present X-GRM (X-ray Gaussian Reconstruction Model), a large feedforward model for reconstructing 3D CT volumes from sparse-view 2D X-ray projections. X-GRM employs a scalable transformer-based architecture to encode sparse-view X-ray inputs, where tokens from different views are integrated efficiently. Then, these tokens are decoded into a novel volume representation, named Voxel-based Gaussian Splatting (VoxGS), which enables efficient CT volume extraction and differentiable X-ray rendering. This combination of a high-capacity model and flexible volume representation empowers our model to produce high-quality reconstructions from various testing inputs, including in-domain and out-of-domain X-ray projections. Our codes are available at: https://github.com/CUHK-AIM-Group/X-GRM.

Full, P. M., Schirrmeister, R. T., Hein, M., Russe, M. F., Reisert, M., Ammann, C., Greiser, K. H., Niendorf, T., Pischon, T., Schulz-Menger, J., Maier-Hein, K. H., Bamberg, F., Rospleszcz, S., Schlett, C. L., Schuppert, C.

medrxiv preprint · May 21, 2025
Purpose: To develop a segmentation and quality control pipeline for short-axis cardiac magnetic resonance (CMR) cine images from the prospective, multi-center German National Cohort (NAKO). Materials and Methods: A deep learning model for semantic segmentation, based on the nnU-Net architecture, was applied to full-cycle short-axis cine images from 29,908 baseline participants. The primary objective was to derive measures of structure and function for both ventricles (LV, RV), including end-diastolic volumes (EDV), end-systolic volumes (ESV), and LV myocardial mass. Quality control measures included a visual assessment of outliers in morphofunctional parameters, inter- and intra-ventricular phase differences, and LV time-volume curves (TVC). These were adjudicated using a five-point rating scale, ranging from five (excellent) to one (non-diagnostic), with ratings of three or lower subject to exclusion. The predictive value of outlier criteria for inclusion and exclusion was analyzed using receiver operating characteristics. Results: The segmentation model generated complete data for 29,609 participants (incomplete in 1.0%), and 5,082 cases (17.0%) were visually assessed. Quality assurance yielded a sample of 26,899 participants with excellent or good quality (89.9%; exclusion of 1,875 participants due to image quality issues and 835 cases due to segmentation quality issues). TVC was the strongest single discriminator between included and excluded participants (AUC: 0.684). Of the two-category combinations, the pairing of TVC and phases provided the greatest improvement over TVC alone (AUC difference: 0.044; p<0.001). The best performance was observed when all three categories were combined (AUC: 0.748). Extending the quality-controlled sample to include acceptable quality ratings, a total of 28,413 (95.0%) participants were available.
Conclusion: The implemented pipeline facilitated the automated segmentation of an extensive CMR dataset with integrated quality control measures. This methodology ensures that ensuing quantitative analyses are conducted with a diminished risk of bias.
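The ROC analysis above (single outlier criterion vs. a combination of criteria as predictors of exclusion) can be sketched as follows. All labels and flag rates below are synthetic stand-ins, not the NAKO data, and combining the three binary flags via logistic regression is an illustrative choice.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 1000
excluded = rng.random(n) < 0.1                       # hypothetical exclusion labels
# three hypothetical binary outlier criteria (1 = flagged), each more likely
# to fire for excluded cases: TVC, phase differences, morphofunctional parameters
tvc    = (rng.random(n) < np.where(excluded, 0.6, 0.2)).astype(float)
phase  = (rng.random(n) < np.where(excluded, 0.4, 0.15)).astype(float)
params = (rng.random(n) < np.where(excluded, 0.3, 0.1)).astype(float)

auc_tvc = roc_auc_score(excluded, tvc)               # single criterion
X = np.c_[tvc, phase, params]
score3 = LogisticRegression().fit(X, excluded).predict_proba(X)[:, 1]
auc_all = roc_auc_score(excluded, score3)            # all three combined
print(f"TVC alone: {auc_tvc:.3f}  combined: {auc_all:.3f}")
```

As in the study, a single binary criterion yields a modest AUC, and pooling complementary criteria into one score typically improves discrimination.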

Larroza A, Pérez-Benito FJ, Tendero R, Perez-Cortes JC, Román M, Llobet R

pubmed · May 21, 2025
Image segmentation plays a central role in computer vision applications such as medical imaging, industrial inspection, and environmental monitoring. However, evaluating segmentation performance can be particularly challenging when ground truth is not clearly defined, as is often the case in tasks involving subjective interpretation. These challenges are amplified by inter- and intra-observer variability, which complicates the use of human annotations as a reliable reference. To address this, we propose a novel validation framework-referred to as the three-blind validation strategy-that enables rigorous assessment of segmentation models in contexts where subjectivity and label variability are significant. The core idea is to have a third independent expert, blind to the labeler identities, assess a shuffled set of segmentations produced by multiple human annotators and/or automated models. This allows for the unbiased evaluation of model performance and helps uncover patterns of disagreement that may indicate systematic issues with either human or machine annotations. The primary objective of this study is to introduce and demonstrate this validation strategy as a generalizable framework for robust model evaluation in subjective segmentation tasks. We illustrate its practical implementation in a mammography use case involving dense tissue segmentation while emphasizing its potential applicability to a broad range of segmentation scenarios.
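The core mechanic of the three-blind strategy is presenting the third expert with a shuffled, de-identified pool of segmentations while retaining a key for later un-blinding. A minimal stdlib-only sketch (function name and key format are illustrative, not from the paper):

```python
import random

def blind_shuffle(segmentations, seed=0):
    """Hide labeler identity: return shuffled, re-labeled masks plus a key
    that the third expert never sees, so ratings can be mapped back to
    their human or model source after adjudication.
    segmentations: {source_id: mask}."""
    items = list(segmentations.items())
    rng = random.Random(seed)
    rng.shuffle(items)
    key = {f"case_{i:03d}": src for i, (src, _) in enumerate(items)}
    blinded = {f"case_{i:03d}": mask for i, (_, mask) in enumerate(items)}
    return blinded, key
```

Keeping the key separate from the blinded set is what makes the evaluation unbiased: disagreement patterns are scored first and attributed to human or machine annotators only afterwards.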

Chang YC, Nixon B, Souza F, Cardoso FN, Dayan E, Geiger EJ, Rosenberg A, D'Amato G, Subhawong T

pubmed · May 21, 2025
Desmoid tumors are rare, locally invasive soft-tissue tumors with unpredictable clinical behavior. Imaging plays a crucial role in their diagnosis, measurement of disease burden, and assessment of treatment response. However, desmoid tumors' unique imaging features present challenges to conventional imaging metrics. The heterogeneous nature of these tumors, with a variable composition (fibrous, myxoid, or cellular), complicates accurate delineation of tumor boundaries and volumetric assessment. Furthermore, desmoid tumors can demonstrate prolonged stability or spontaneous regression, and biologic quiescence is often manifested by collagenization rather than bulk size reduction, making traditional size-based response criteria, such as Response Evaluation Criteria in Solid Tumors (RECIST), suboptimal. To overcome these limitations, advanced imaging techniques offer promising opportunities. Functional and parametric imaging methods, such as diffusion-weighted MRI, dynamic contrast-enhanced MRI, and T2 relaxometry, can provide insights into tumor cellularity and maturation. Radiomics and artificial intelligence approaches may enhance quantitative analysis by extracting and correlating complex imaging features with biological behavior. Moreover, imaging biomarkers could facilitate earlier detection of treatment efficacy or resistance, enabling tailored therapy. By integrating advanced imaging into clinical practice, it may be possible to refine the evaluation of disease burden and treatment response, ultimately improving the management and outcomes of patients with desmoid tumors.

Vaccari S, Paderno A, Furlan S, Cavallero MF, Lupacchini AM, Di Giuli R, Klinger M, Klinger F, Vinci V

pubmed · May 20, 2025
Tuberous breast deformity (TBD) is a congenital condition characterized by constriction of the breast base, parenchymal hypoplasia, and areolar herniation. The absence of a universally accepted classification system complicates diagnosis and surgical planning, leading to variability in clinical outcomes. Artificial intelligence (AI) has emerged as a powerful adjunct in medical imaging, enabling objective, reproducible, and data-driven diagnostic assessments. This study introduces an AI-driven diagnostic tool for TBD classification using a Siamese Network trained on paired frontal and lateral images. Additionally, the model generates a continuous Tuberosity Score (ranging from 0 to 1) based on embedding vector distances, offering an objective measure to enhance surgical planning and improve clinical outcomes. A dataset of 200 expertly classified frontal and lateral breast images (100 tuberous, 100 non-tuberous) was used to train a Siamese Network with contrastive loss. The model extracted high-dimensional feature embeddings to differentiate tuberous from non-tuberous breasts. Five-fold cross-validation ensured robust performance evaluation. Performance metrics included accuracy, precision, recall, and F1-score. Visualization techniques, such as t-SNE clustering and occlusion sensitivity mapping, were employed to interpret model decisions. The model achieved an average accuracy of 96.2% ± 5.5%, with balanced precision and recall. The Tuberosity Score, derived from the Euclidean distance between embeddings, provided a continuous measure of deformity severity, correlating well with clinical assessments. This AI-based framework offers an objective, high-accuracy classification system for TBD. The Tuberosity Score enhances diagnostic precision, potentially aiding in surgical planning and improving patient outcomes.
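The abstract states that the Tuberosity Score maps embedding distances into a 0-1 range but does not give the mapping, so the sketch below is one plausible construction: distance to a non-tuberous reference centroid, squashed exponentially. Both the reference centroid and the squashing function are assumptions.

```python
import numpy as np

def tuberosity_score(embedding, non_tuberous_centroid, scale=1.0):
    """Illustrative mapping of embedding distance to a [0, 1) score:
    0 at the non-tuberous reference, approaching 1 as distance grows.
    The exponential squashing and `scale` are assumptions, not the paper's."""
    d = np.linalg.norm(np.asarray(embedding) - np.asarray(non_tuberous_centroid))
    return float(1.0 - np.exp(-d / scale))
```

Any monotone squashing of distance would serve; the point is that a contrastively trained embedding space turns a binary classifier into a continuous severity measure.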

Abudalou S, Choi J, Gage K, Pow-Sang J, Yilmaz Y, Balagurunathan Y

pubmed · May 20, 2025
Deep learning methods provide enormous promise in automating manually intensive tasks such as medical image segmentation and provide workflow assistance to clinical experts. Deep neural networks (DNN) require a significant number of training examples and a variety of expert opinions to capture the nuances and the context, a challenging proposition in oncological studies (H. Wang et al., Nature, vol. 620, no. 7972, pp. 47-60, Aug 2023). Inter-reader variability among clinical experts is a real-world problem that severely impacts DNN generalization and reproducibility. This study proposes quantifying the variability in DNN performance using expert opinions and exploring strategies to train the network and adapt between expert opinions. We address the inter-reader variability problem in the context of prostate gland segmentation using a well-studied DNN, the 3D U-Net model. Reference data includes magnetic resonance imaging (MRI, T2-weighted) with prostate glandular anatomy annotations from two expert readers (R#1, n = 342 and R#2, n = 204). 3D U-Net was trained and tested with individual expert examples (R#1 and R#2) and had an average Dice coefficient of 0.825 (CI, [0.81 0.84]) and 0.85 (CI, [0.82 0.88]), respectively. Combined training with a representative cohort proportion (R#1, n = 100 and R#2, n = 150) yielded enhanced model reproducibility across readers, achieving an average test Dice coefficient of 0.863 (CI, [0.85 0.87]) for R#1 and 0.869 (CI, [0.87 0.88]) for R#2. We re-evaluated the model performance across the gland volumes (large, small) and found improved performance for large glands, with average Dice coefficients of 0.846 (CI, [0.82 0.87]) and 0.872 (CI, [0.86 0.89]) for R#1 and R#2, respectively, estimated using fivefold cross-validation. Performance for small gland sizes diminished, with average Dice coefficients of 0.80 (CI, [0.79 0.82]) and 0.80 (CI, [0.79 0.83]) for R#1 and R#2, respectively.
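The metric used throughout the study is the Dice coefficient between a reference and a predicted binary mask. A minimal implementation:

```python
import numpy as np

def dice(a, b):
    """Dice coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|), with 1.0 for two empty masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom else 1.0
```

Because Dice normalizes overlap by total mask size, small glands are penalized more heavily per misclassified voxel, which is consistent with the drop the authors report for small gland volumes.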

Liu J, Jiang S, Wu Y, Zou R, Bao Y, Wang N, Tu J, Xiong J, Liu Y, Li Y

pubmed · May 20, 2025
Glioblastoma (GBM) is a highly aggressive brain tumor with poor prognosis. This study aimed to construct and validate a radiomics-based machine learning model for predicting overall survival (OS) in IDH-wildtype GBM after maximal safe surgical resection using magnetic resonance imaging. A total of 582 patients were retrospectively enrolled, comprising 301 in the training cohort, 128 in the internal validation cohort, and 153 in the external validation cohort. Volumes of interest (VOIs) from contrast-enhanced T1-weighted imaging (CE-T1WI) were segmented into three regions: contrast-enhancing tumor, necrotic non-enhancing core, and peritumoral edema using a ResNet-based segmentation network. A total of 4,227 radiomic features were extracted and filtered using LASSO-Cox regression to identify signatures. The prognostic model was constructed using the Mime prediction framework, categorizing patients into high- and low-risk groups based on the median OS. Model performance was assessed using the concordance index (CI) and Kaplan-Meier survival analysis. Independent prognostic factors were identified through multivariable Cox regression analysis, and a nomogram was developed for individualized risk assessment. The Step Cox [backward] + RSF model achieved CIs of 0.89, 0.81, and 0.76 in the training, internal validation, and external validation cohorts. Log-rank tests demonstrated significant survival differences between high- and low-risk groups across all cohorts (P < 0.05). Multivariate Cox analysis identified age (HR: 1.022; 95% CI: 0.979, 1.009, P < 0.05), KPS score (HR: 0.970, 95% CI: 0.960, 0.978, P < 0.05), rad-scores of the necrotic non-enhancing core (HR: 8.164; 95% CI: 2.439, 27.331, P < 0.05), and peritumoral edema (HR: 3.748; 95% CI: 1.212, 11.594, P < 0.05) as independent predictors of OS. A nomogram integrating these predictors provided individualized risk assessment.
This deep learning segmentation-based radiomics model demonstrated robust performance in predicting OS in GBM after maximal safe surgical resection. By incorporating radiomic signatures and advanced machine learning algorithms, it offers a non-invasive tool for personalized prognostic assessment and supports clinical decision-making.
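The concordance index reported above measures how often a model's predicted risk ordering agrees with observed survival ordering. A minimal O(n²) sketch of Harrell's C (the basic form; the study's framework may compute it differently):

```python
def concordance_index(times, events, risk):
    """Harrell's C: fraction of comparable pairs whose predicted risk
    ordering matches observed survival ordering; risk ties count 0.5.
    A pair (i, j) is comparable when i has an observed event and
    j's follow-up time exceeds i's event time."""
    conc = ties = total = 0
    n = len(times)
    for i in range(n):
        if not events[i]:
            continue                        # i must have an observed event
        for j in range(n):
            if times[j] > times[i]:         # j outlived i: comparable pair
                total += 1
                if risk[i] > risk[j]:
                    conc += 1
                elif risk[i] == risk[j]:
                    ties += 1
    return (conc + 0.5 * ties) / total
```

A value of 0.5 corresponds to random ordering and 1.0 to perfect ordering, so the reported external-cohort CI of 0.76 indicates substantially better-than-chance risk ranking.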

Li X, Zou C, Wang C, Chang C, Lin Y, Liang S, Zheng H, Liu L, Deng K, Zhang L, Liu B, Gao M, Cai P, Lao J, Xu L, Wu D, Zhao X, Wu X, Li X, Luo Y, Zhong W, Lin T

pubmed · May 20, 2025
The clinical benefits of neoadjuvant chemoimmunotherapy (NACI) are demonstrated in patients with bladder cancer (BCa); however, more than half fail to achieve a pathological complete response (pCR). This study utilizes multi-center cohorts of 2322 patients with pathologically diagnosed BCa, collected between January 1, 2014, and December 31, 2023, to explore the correlation between tumor budding (TB) status and NACI response and disease prognosis. A deep learning model is developed to noninvasively evaluate TB status based on CT images. The deep learning model accurately predicts the TB status, with area under the curve values of 0.932 (95% confidence interval: 0.898-0.965) in the training cohort, 0.944 (0.897-0.991) in the internal validation cohort, 0.882 (0.832-0.933) in external validation cohort 1, 0.944 (0.908-0.981) in external validation cohort 2, and 0.854 (0.739-0.970) in the NACI validation cohort. Patients predicted to have a high TB status exhibit a worse prognosis (p < 0.05) and a lower pCR rate of 25.9% (7/20) than those predicted to have a low TB status (pCR rate: 73.9% [17/23]; p < 0.001). Hence, this model may be a reliable, noninvasive tool for predicting TB status, aiding clinicians in prognosis assessment and NACI strategy formulation.
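The pCR comparison between predicted high- and low-TB groups is a 2x2 contingency problem, for which Fisher's exact test is a standard choice with counts this small. The counts below are hypothetical stand-ins chosen to roughly match the reported rates, not the study's exact table, and the abstract does not name the test used.

```python
from scipy.stats import fisher_exact

# hypothetical counts: (pCR, no pCR) in predicted high- vs low-TB groups
high_tb = (7, 20)    # ~26% pCR
low_tb  = (17, 6)    # ~74% pCR
odds_ratio, p = fisher_exact([list(high_tb), list(low_tb)])
print(f"OR={odds_ratio:.3f}, p={p:.4f}")
```

An odds ratio well below 1 with a small p-value would support the abstract's conclusion that predicted high TB status is associated with failure to achieve pCR.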
