
Radiomics prediction of surgery in ulcerative colitis refractory to medical treatment.

Sakamoto K, Okabayashi K, Seishima R, Shigeta K, Kiyohara H, Mikami Y, Kanai T, Kitagawa Y

PubMed · May 10, 2025
Surgery in drug-refractory ulcerative colitis (UC) is determined by complex factors. This study evaluated the predictive performance of radiomics analysis for classifying hospitalized patients with UC into surgical or medical treatment groups by the time of discharge. This single-center retrospective cohort study used admission CT scans of patients with UC admitted from 2015 to 2022. The prediction target was whether the patient would undergo surgery by discharge. Radiomics features were extracted using the rectal wall at the level of the tailbone tip on CT as the region of interest. CT data were randomly split into a training cohort and a validation cohort, and LASSO regression was performed on the training cohort to derive a formula for the radiomics score. A total of 147 patients were selected, and data from 184 CT scans were collected; 157 CT scans met the selection criteria and were included. Five features were used for the radiomics score. Univariate logistic regression analysis of clinical information detected a significant influence of severity (p < 0.001), number of drugs used until surgery (p < 0.001), Lichtiger score (p = 0.024), and hemoglobin (p = 0.010). Using a nomogram combining these items, the discriminatory power between the surgery and medical treatment groups was AUC 0.822 (95% confidence interval (CI) 0.841-0.951) for the training cohort and AUC 0.868 (95% CI 0.729-1.000) for the validation cohort, indicating good discrimination. Radiomics analysis of admission CT images of patients with UC, combined with clinical data, showed high predictive ability for a treatment strategy of surgery versus medical treatment.
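A LASSO-derived radiomics score of the kind described here is just a sparse linear combination of the selected features, optionally passed through a logistic link to yield a surgery probability. A minimal sketch, assuming hypothetical feature names and coefficients (none taken from the study):

```python
import math

# Hypothetical LASSO coefficients for five selected radiomics
# features (illustrative values, not from the paper).
COEFS = {"glcm_contrast": 0.82, "glrlm_rle": -0.41, "firstorder_entropy": 0.57,
         "shape_flatness": -0.23, "ngtdm_busyness": 0.36}
INTERCEPT = -1.10

def radiomics_score(features):
    """Sparse linear combination of the selected features, as LASSO produces."""
    return INTERCEPT + sum(COEFS[k] * features[k] for k in COEFS)

def surgery_probability(features):
    """Logistic transform of the score into a surgery-risk probability."""
    return 1.0 / (1.0 + math.exp(-radiomics_score(features)))

p = surgery_probability({"glcm_contrast": 1.2, "glrlm_rle": 0.5,
                         "firstorder_entropy": 0.9, "shape_flatness": 0.3,
                         "ngtdm_busyness": 0.7})
print(round(p, 3))
```

In the actual study, the score would then enter a nomogram together with the significant clinical variables (severity, drug count, Lichtiger score, hemoglobin).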

UltrasOM: A mamba-based network for 3D freehand ultrasound reconstruction using optical flow.

Sun R, Liu C, Wang W, Song Y, Sun T

PubMed · May 10, 2025
Three-dimensional (3D) ultrasound (US) reconstruction is of significant value in clinical diagnosis, characterized by its safety, portability, low cost, and real-time capability. 3D freehand ultrasound reconstruction aims to eliminate the need for tracking devices, relying solely on image data to infer the spatial relationships between frames. However, inherent jitter during handheld scanning introduces significant inaccuracies, making current methods ineffective in precisely predicting the spatial motions of ultrasound image frames. This leads to substantial cumulative errors over long sequence modeling, resulting in deformations or artifacts in the reconstructed volume. To address these challenges, we propose UltrasOM, a 3D ultrasound reconstruction network designed for spatial relative motion estimation. First, we designed a video embedding module that integrates optical flow dynamics with original static information to enhance motion change features between frames. Next, we developed a Mamba-based spatiotemporal attention module, utilizing multi-layer stacked Space-Time Blocks to effectively capture global spatiotemporal correlations within video frame sequences. Finally, we incorporated correlation loss and motion speed loss to prevent overfitting related to scanning speed and pose, enhancing the model's generalization capability. Experimental results on a dataset of 200 forearm cases, comprising 58,011 frames, demonstrated that the proposed method achieved a final drift rate (FDR) of 10.24%, a frame-to-frame distance error (DE) of 7.34 mm, a symmetric Hausdorff distance error (HD) of 10.81 mm, and a mean angular error (MEA) of 2.05°, outperforming state-of-the-art methods by 13.24%, 15.11%, 3.57%, and 6.32%, respectively.
By integrating optical flow features and deeply exploring contextual spatiotemporal dependencies, the proposed network can directly predict the relative motions between multiple frames of ultrasound images without the need for tracking, surpassing the accuracy of existing methods.
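The final drift rate reported above can be illustrated on toy trajectories. The sketch below assumes the common freehand-US definition (endpoint error between accumulated predicted and true translations, as a percentage of scan length); the paper's exact formulation may differ in detail:

```python
import math

def accumulate(steps):
    """Integrate per-frame relative translations into absolute positions."""
    pos, path = [0.0, 0.0, 0.0], [(0.0, 0.0, 0.0)]
    for dx, dy, dz in steps:
        pos = [pos[0] + dx, pos[1] + dy, pos[2] + dz]
        path.append(tuple(pos))
    return path

def final_drift_rate(pred_steps, true_steps):
    """Endpoint error between predicted and true trajectories,
    as a percentage of the true scan length."""
    p_end = accumulate(pred_steps)[-1]
    t_end = accumulate(true_steps)[-1]
    drift = math.dist(p_end, t_end)
    length = sum(math.dist((0, 0, 0), s) for s in true_steps)
    return 100.0 * drift / length

true = [(1.0, 0.0, 0.0)] * 10   # 10 mm straight sweep, 1 mm per frame
pred = [(1.0, 0.05, 0.0)] * 10  # small per-frame lateral bias
print(round(final_drift_rate(pred, true), 2))
```

A constant 0.05 mm per-frame bias over a 10 mm sweep accumulates to a 5% drift rate, which is why per-frame errors dominate long-sequence reconstruction quality.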

Evaluating an information theoretic approach for selecting multimodal data fusion methods.

Zhang T, Ding R, Luong KD, Hsu W

PubMed · May 10, 2025
Interest has grown in combining radiology, pathology, genomic, and clinical data to improve the accuracy of diagnostic and prognostic predictions toward precision health. However, most existing works choose their datasets and modeling approaches empirically and in an ad hoc manner. A prior study proposed four partial information decomposition (PID)-based metrics to provide a theoretical understanding of multimodal data interactions: redundancy, uniqueness of each modality, and synergy. However, these metrics have only been evaluated on a limited collection of biomedical data, and the existing work does not elucidate the effect of parameter selection when calculating the PID metrics. In this work, we evaluate PID metrics on a wider range of biomedical data, including clinical, radiology, pathology, and genomic data, and propose potential improvements to the PID metrics. We apply the PID metrics to seven modality pairs across four distinct cohorts (datasets). We compare and interpret trends in the resulting PID metrics and downstream model performance in these multimodal cohorts. The downstream tasks include predicting the prognosis (either overall survival or recurrence) of patients with non-small cell lung cancer, prostate cancer, and glioblastoma. We found that, while PID metrics are informative, relying solely on them to decide on a fusion approach does not always yield a machine learning model with optimal performance. Of the seven modality pairs, three had poor (0%), three had moderate (66%-89%), and only one had perfect (100%) consistency between the PID values and model performance. We propose two improvements to the PID metrics (determining the optimal parameters and uncertainty estimation) and identify areas where they could be further improved. The current PID metrics are not accurate enough for estimating multimodal data interactions and need to be improved before they can serve as a reliable tool.
We propose improvements and provide suggestions for future work. Code: https://github.com/zhtyolivia/pid-multimodal.
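The four PID quantities can be made concrete on discrete toy data. The sketch below uses a Williams-and-Beer-style proxy (redundancy as the minimum of the single-modality informations), which is an assumption rather than the estimator used in the paper; the XOR example exhibits pure synergy:

```python
from collections import Counter
from math import log2

def mutual_information(pairs):
    """I(X;Y) in bits, estimated from a list of (x, y) samples."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def pid_proxies(samples):
    """Redundancy, per-modality uniqueness, and synergy for (x1, x2, y)
    triples, using min() redundancy as a simple proxy (an assumption)."""
    i1 = mutual_information([(x1, y) for x1, _, y in samples])
    i2 = mutual_information([(x2, y) for _, x2, y in samples])
    i12 = mutual_information([((x1, x2), y) for x1, x2, y in samples])
    red = min(i1, i2)
    return {"redundancy": red, "unique1": i1 - red, "unique2": i2 - red,
            "synergy": i12 - i1 - i2 + red}

# XOR: neither modality alone predicts Y, together they do -> pure synergy.
xor = [(a, b, a ^ b) for a in (0, 1) for b in (0, 1)]
print(pid_proxies(xor))
```

A high-synergy pair like this is exactly the case where a fusion model should outperform either unimodal model, which is the intuition the PID metrics try to formalize.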

Intra- and Peritumoral Radiomics Based on Ultrasound Images for Preoperative Differentiation of Follicular Thyroid Adenoma, Carcinoma, and Follicular Tumor With Uncertain Malignant Potential.

Fu Y, Mei F, Shi L, Ma Y, Liang H, Huang L, Fu R, Cui L

PubMed · May 10, 2025
Differentiating between follicular thyroid adenoma (FTA), carcinoma (FTC), and follicular tumor with uncertain malignant potential (FT-UMP) remains challenging due to their overlapping ultrasound characteristics. This retrospective study aimed to enhance preoperative diagnostic accuracy by utilizing intra- and peritumoral radiomics based on ultrasound images. We collected post-thyroidectomy ultrasound images from 774 patients diagnosed with FTA (n = 429), FTC (n = 158), or FT-UMP (n = 187) between January 2018 and December 2023. Six peritumoral regions were expanded by 5%-30% in 5% increments, with the Segment Anything Model utilizing prompt learning to detect the field of view and constrain the expanded boundaries. A stepwise classification strategy addressing three tasks was implemented: distinguishing FTA from the other types (task 1), differentiating FTC from FT-UMP (task 2), and classifying all three tumors. Diagnostic models were developed by combining radiomic features from tumor and peritumoral regions with clinical characteristics. Clinical characteristics combined with intratumoral and 5% peritumoral radiomic features performed best across all tasks (test set: area under the curves, 0.93 for task 1 and 0.90 for task 2; diagnostic accuracy, 79.9%). The DeLong test indicated that all peritumoral radiomics significantly improved the performance of intratumoral radiomics and clinical characteristics (p < 0.04). The 5% peritumoral regions showed the best performance, though not all results were significant (p = 0.01-0.91). Ultrasound-based intratumoral and peritumoral radiomics can significantly enhance preoperative diagnostic accuracy for FTA, FTC, and FT-UMP, leading to improved treatment strategies and patient outcomes. Furthermore, the 5% peritumoral area may indicate regions of potential tumor invasion requiring further investigation.
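Peritumoral expansion by a fixed percentage can be sketched as binary mask dilation scaled to the tumor's equivalent diameter. This is a simplification: the study constrains the expanded boundaries with a prompt-driven segmentation model, which the sketch omits:

```python
import math

def dilate(mask, steps):
    """4-neighbourhood binary dilation of a set of (row, col) pixels."""
    out = set(mask)
    for _ in range(steps):
        out |= {(r + dr, c + dc) for r, c in out
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))}
    return out

def peritumoral_ring(mask, fraction):
    """Ring around the tumour whose width is `fraction` of the tumour's
    equivalent diameter (a simplifying assumption for illustration)."""
    diameter = 2.0 * math.sqrt(len(mask) / math.pi)  # pixels
    steps = max(1, round(fraction * diameter))
    return dilate(mask, steps) - set(mask)

tumor = {(r, c) for r in range(10, 20) for c in range(10, 20)}  # 10x10 blob
ring = peritumoral_ring(tumor, 0.05)   # the 5% region that performed best
print(len(ring))
```

Radiomic features would then be extracted separately from `tumor` and `ring`, matching the paper's intra- versus peritumoral feature split.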

Error correcting 2D-3D cascaded network for myocardial infarct scar segmentation on late gadolinium enhancement cardiac magnetic resonance images.

Schwab M, Pamminger M, Kremser C, Obmann D, Haltmeier M, Mayr A

PubMed · May 10, 2025
Late gadolinium enhancement (LGE) cardiac magnetic resonance (CMR) imaging is considered the in vivo reference standard for assessing infarct size (IS) and microvascular obstruction (MVO) in ST-elevation myocardial infarction (STEMI) patients. However, the exact quantification of these markers of myocardial infarct severity remains challenging and very time-consuming. As LGE distribution patterns can be quite complex and hard to delineate from the blood pool or epicardial fat, automatic segmentation of LGE CMR images is challenging. In this work, we propose a cascaded framework of two-dimensional and three-dimensional convolutional neural networks (CNNs) that enables fully automated calculation of the extent of myocardial infarction. By artificially generating segmentation errors characteristic of 2D CNNs during training of the cascaded framework, we enforce the detection and correction of 2D segmentation errors and thereby improve the segmentation accuracy of the entire method. The proposed method was trained and evaluated on two publicly available datasets. In comparative experiments, we show that our framework outperforms state-of-the-art reference methods in segmentation of myocardial infarction. Furthermore, extensive ablation studies demonstrate the advantages of the proposed error-correcting cascaded method. The code of this project is publicly available at https://github.com/matthi99/EcorC.git.
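The error-injection idea can be sketched on stacks of binary slice masks: corrupt slices the way slice-wise 2D CNNs typically fail, then train the 3D stage to restore them. The corruption model below (zeroing whole slices at random) is an illustrative assumption, not the paper's exact scheme:

```python
import random

def inject_2d_errors(volume, drop_prob, seed=0):
    """Corrupt a stack of binary slice masks by zeroing random slices,
    mimicking characteristic 2D-CNN failures on ambiguous slices.
    Returns the corrupted stack and the indices of dropped slices."""
    rng = random.Random(seed)
    corrupted, dropped = [], []
    for z, sl in enumerate(volume):
        if rng.random() < drop_prob:
            corrupted.append([[0] * len(row) for row in sl])
            dropped.append(z)
        else:
            corrupted.append([row[:] for row in sl])
    return corrupted, dropped

volume = [[[1, 1], [1, 1]] for _ in range(8)]  # 8 all-foreground slices
corrupted, dropped = inject_2d_errors(volume, drop_prob=0.3, seed=42)
print(dropped)
```

Training pairs of (`corrupted`, `volume`) would then teach the 3D network to detect and repair exactly this class of error, which is the mechanism the cascade exploits.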

Performance of fully automated deep-learning-based coronary artery calcium scoring in ECG-gated calcium CT and non-gated low-dose chest CT.

Kim S, Park EA, Ahn C, Jeong B, Lee YS, Lee W, Kim JH

PubMed · May 10, 2025
This study aimed to validate the agreement and diagnostic performance of a deep-learning-based coronary artery calcium scoring (DL-CACS) system for ECG-gated and non-gated low-dose chest CT (LDCT) across multivendor datasets. In this retrospective study, datasets from Seoul National University Hospital (SNUH, 652 paired ECG-gated and non-gated CT scans) and the Stanford public dataset (425 ECG-gated and 199 non-gated CT scans) were analyzed. Agreement metrics included intraclass correlation coefficient (ICC), coefficient of determination (R²), and categorical agreement (κ). Diagnostic performance was assessed using categorical accuracy and the area under the receiver operating characteristic curve (AUROC). DL-CACS demonstrated excellent performance for ECG-gated CT in both datasets (SNUH: R² = 0.995, ICC = 0.997, κ = 0.97, AUROC = 0.99; Stanford: R² = 0.989, ICC = 0.990, κ = 0.97, AUROC = 0.99). For non-gated CT using manual LDCT CAC scores as a reference, performance was similarly high (R² = 0.988, ICC = 0.994, κ = 0.96, AUROC = 0.98-0.99). When using ECG-gated CT scores as the reference, performance for non-gated CT was slightly lower but remained robust (SNUH: R² = 0.948, ICC = 0.968, κ = 0.88, AUROC = 0.98-0.99; Stanford: R² = 0.949, ICC = 0.948, κ = 0.71, AUROC = 0.89-0.98). DL-CACS provides a reliable and automated solution for CACS, potentially reducing workload while maintaining robust performance in both ECG-gated and non-gated CT settings.

Question: How accurate and reliable is deep-learning-based coronary artery calcium scoring (DL-CACS) in ECG-gated CT and non-gated low-dose chest CT (LDCT) across multivendor datasets?
Findings: DL-CACS showed near-perfect performance for ECG-gated CT. For non-gated LDCT, performance was excellent using manual scores as the reference and lower but reliable when using ECG-gated CT scores.
Clinical relevance: DL-CACS provides a reliable and automated solution for CACS, potentially reducing workload and improving diagnostic workflow. It supports cardiovascular risk stratification and broader clinical adoption, especially in settings where ECG-gated CT is unavailable.
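The agreement metrics reported in this study (R² and categorical κ) are straightforward to compute from paired scores. A minimal sketch with hypothetical Agatston risk-category labels from two scorers (the data below is invented for illustration):

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination between continuous scores."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

def cohen_kappa(a, b):
    """Unweighted Cohen's kappa for two categorical raters."""
    n = len(a)
    cats = sorted(set(a) | set(b))
    po = sum(x == y for x, y in zip(a, b)) / n          # observed agreement
    pe = sum((a.count(c) / n) * (b.count(c) / n) for c in cats)  # chance
    return (po - pe) / (1.0 - pe)

# Hypothetical risk categories (0-3) from manual vs automated scoring.
manual = [0, 1, 1, 2, 3, 3, 2, 0, 1, 2]
auto   = [0, 1, 1, 2, 3, 2, 2, 0, 1, 2]
print(round(cohen_kappa(manual, auto), 3))
```

κ corrects raw percentage agreement for agreement expected by chance, which is why it is preferred for categorical CAC risk classes.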

Reproducing and Improving CheXNet: Deep Learning for Chest X-ray Disease Classification

Daniel Strick, Carlos Garcia, Anthony Huang

arXiv preprint · May 10, 2025
Deep learning for radiologic image analysis is a rapidly growing field in biomedical research and is likely to become a standard practice in modern medicine. On the publicly available NIH ChestX-ray14 dataset, containing X-ray images that are classified by the presence or absence of 14 different diseases, we reproduced an algorithm known as CheXNet, as well as explored other algorithms that outperform CheXNet's baseline metrics. Model performance was primarily evaluated using the F1 score and AUC-ROC, both of which are critical metrics for imbalanced, multi-label classification tasks in medical imaging. The best model achieved an average AUC-ROC score of 0.85 and an average F1 score of 0.39 across all 14 disease classifications present in the dataset.
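Macro-averaged F1 over the 14 labels, as used in this evaluation, is a per-class binary F1 averaged across label columns. A minimal sketch on a toy multi-label matrix (3 labels, 4 samples, invented values):

```python
def f1_per_class(y_true, y_pred):
    """Binary F1 for one disease label (lists of 0/1).
    Returns 0.0 when there are no true positives."""
    tp = sum(t and p for t, p in zip(y_true, y_pred))
    fp = sum((not t) and p for t, p in zip(y_true, y_pred))
    fn = sum(t and (not p) for t, p in zip(y_true, y_pred))
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

def macro_f1(true_matrix, pred_matrix):
    """Average per-class F1 over the label columns (one per disease)."""
    n_labels = len(true_matrix[0])
    scores = [f1_per_class([row[j] for row in true_matrix],
                           [row[j] for row in pred_matrix])
              for j in range(n_labels)]
    return sum(scores) / n_labels

true = [[1, 0, 1], [0, 1, 0], [1, 1, 0], [0, 0, 1]]
pred = [[1, 0, 0], [0, 1, 0], [1, 0, 0], [0, 0, 1]]
print(round(macro_f1(true, pred), 3))
```

Because F1 ignores true negatives, it stays informative under the heavy class imbalance typical of ChestX-ray14, which is why a modest average F1 (0.39 here) can accompany a respectable AUC-ROC (0.85).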

Batch Augmentation with Unimodal Fine-tuning for Multimodal Learning

H M Dipu Kabir, Subrota Kumar Mondal, Mohammad Ali Moni

arXiv preprint · May 10, 2025
This paper proposes batch augmentation with unimodal fine-tuning to detect fetal organs from ultrasound images and associated clinical textual information. We also prescribe pre-training the initial layers on the investigated medical data before multimodal training. First, we apply a transferred initialization to the unimodal image portion of the dataset with batch augmentation; this step adjusts the initial layer weights for medical data. Then, we apply neural networks (NNs) with the fine-tuned initial layers to images in batches with batch augmentation to obtain features. We also extract information from the image descriptions and combine it with the image features to train the head layer. We write a dataloader script to load the multimodal data, reusing existing unimodal image augmentation techniques with batch augmentation for the multimodal data. The dataloader draws a new random augmentation for each batch to improve generalization. We investigate the FPU23 ultrasound and UPMC Food-101 multimodal datasets. The multimodal large language model (LLM) with the proposed training provides the best results among the investigated methods, achieving near state-of-the-art (SOTA) performance on the UPMC Food-101 dataset. We share the scripts of the proposed method and traditional counterparts at the following repository: github.com/dipuk0506/multimodal
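Batch augmentation as described, one fresh augmentation drawn per batch and applied to every sample in it, can be sketched with a simple generator. The augmentation set and images below are toy placeholders, not the transforms used in the paper:

```python
import random

# Toy augmentations on 2D lists of pixel values (illustrative only).
AUGMENTATIONS = {
    "flip": lambda img: [row[::-1] for row in img],
    "identity": lambda img: [row[:] for row in img],
    "invert": lambda img: [[1 - v for v in row] for row in img],
}

def batches_with_batch_augmentation(images, batch_size, seed=0):
    """Yield (augmentation_name, batch) pairs where one randomly chosen
    augmentation is applied to every image in the batch (fresh draw per
    batch), so each epoch sees a different augmented view of the data."""
    rng = random.Random(seed)
    for start in range(0, len(images), batch_size):
        name = rng.choice(sorted(AUGMENTATIONS))
        aug = AUGMENTATIONS[name]
        yield name, [aug(img) for img in images[start:start + batch_size]]

imgs = [[[0, 1], [1, 0]]] * 6
for name, batch in batches_with_batch_augmentation(imgs, batch_size=2, seed=1):
    print(name, len(batch))
```

Drawing one augmentation per batch (rather than per sample) keeps the batch internally consistent while still varying the views across batches, which is the generalization mechanism the paper relies on.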

Improving Generalization of Medical Image Registration Foundation Model

Jing Hu, Kaiwei Yu, Hongjiang Xian, Shu Hu, Xin Wang

arXiv preprint · May 10, 2025
Deformable registration is a fundamental task in medical image processing, aiming to achieve precise alignment by establishing nonlinear correspondences between images. Traditional methods offer good adaptability and interpretability but are limited by computational efficiency. Although deep learning approaches have significantly improved registration speed and accuracy, they often lack flexibility and generalizability across different datasets and tasks. In recent years, foundation models have emerged as a promising direction, leveraging large and diverse datasets to learn universal features and transformation patterns for image registration, thus demonstrating strong cross-task transferability. However, these models still face challenges in generalization and robustness when encountering novel anatomical structures, varying imaging conditions, or unseen modalities. To address these limitations, this paper incorporates Sharpness-Aware Minimization (SAM) into foundation models to enhance their generalization and robustness in medical image registration. By optimizing the flatness of the loss landscape, SAM improves model stability across diverse data distributions and strengthens its ability to handle complex clinical scenarios. Experimental results show that foundation models integrated with SAM achieve significant improvements in cross-dataset registration performance, offering new insights for the advancement of medical image registration technology. Our code is available at https://github.com/Promise13/fm_sam.
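A single SAM update can be sketched in one dimension: ascend to the worst-case neighbor within radius ρ, evaluate the gradient there, and apply that gradient at the original point. The toy loss below is illustrative only; note that with ρ > 0 the iterate settles within about ρ of the minimizer rather than exactly on it:

```python
def grad(w):
    """Gradient of the toy loss f(w) = (w - 3)**2."""
    return 2.0 * (w - 3.0)

def loss(w):
    return (w - 3.0) ** 2

def sam_step(w, lr=0.1, rho=0.05):
    """One Sharpness-Aware Minimization step: perturb the weight toward
    the worst-case neighbour within radius rho, take the gradient at the
    perturbed point, and apply that gradient at the original weight."""
    g = grad(w)
    eps = rho * g / (abs(g) + 1e-12)  # normalised ascent direction
    g_adv = grad(w + eps)             # gradient at the perturbed point
    return w - lr * g_adv

w = 0.0
for _ in range(50):
    w = sam_step(w)
print(w)
```

In the high-dimensional case the same logic applies per parameter tensor, with ε = ρ · ∇L / ‖∇L‖₂; the payoff is that minima reached this way sit in flatter basins, which is the stability property the paper exploits for cross-dataset registration.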

Preoperative radiomics models using CT and MRI for microsatellite instability in colorectal cancer: a systematic review and meta-analysis.

Capello Ingold G, Martins da Fonseca J, Kolenda Zloić S, Verdan Moreira S, Kago Marole K, Finnegan E, Yoshikawa MH, Daugėlaitė S, Souza E Silva TX, Soato Ratti MA

PubMed · May 10, 2025
Microsatellite instability (MSI) is a novel predictive biomarker for chemotherapy and immunotherapy response, as well as a prognostic indicator in colorectal cancer (CRC). The current standard for MSI identification is polymerase chain reaction (PCR) testing or the immunohistochemical analysis of tumor biopsy samples. However, tumor heterogeneity and procedure complications pose challenges to these techniques. CT- and MRI-based radiomics models offer a promising non-invasive alternative. A systematic search of PubMed, Embase, Cochrane Library, and Scopus was conducted to identify studies evaluating the diagnostic performance of CT- and MRI-based radiomics models for detecting MSI status in CRC. Pooled area under the curve (AUC), sensitivity, and specificity were calculated in RStudio using a random-effects model. Forest plots and a summary ROC curve were generated. Heterogeneity was assessed using I² statistics and explored through sensitivity analyses, threshold effect assessment, subgroup analyses, and meta-regression. Seventeen studies with a total of 6,045 subjects were included in the analysis. All studies extracted radiomic features from CT or MRI images of CRC patients with confirmed MSI status to train machine learning models. The pooled AUC was 0.815 (95% CI: 0.784-0.840) for CT-based studies and 0.900 (95% CI: 0.819-0.943) for MRI-based studies. Significant heterogeneity was identified and addressed through extensive analysis. Radiomics models represent a novel and promising tool for predicting MSI status in CRC patients. These findings may serve as a foundation for future studies aimed at developing and validating improved models, ultimately enhancing the diagnosis, treatment, and prognosis of colorectal cancer.
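Random-effects pooling of study-level estimates can be sketched with the DerSimonian-Laird estimator, a standard choice for this kind of meta-analysis, though the paper does not state its exact estimator; the AUCs and variances below are hypothetical:

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate with DerSimonian-Laird tau^2.
    Returns (pooled effect, standard error)."""
    w = [1.0 / v for v in variances]
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))  # Cochran's Q
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)            # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, se

# Hypothetical study-level AUCs and variances (not the meta-analysis data).
aucs = [0.78, 0.84, 0.81, 0.90, 0.79]
variances = [0.002, 0.003, 0.0015, 0.004, 0.0025]
pooled, se = dersimonian_laird(aucs, variances)
print(round(pooled, 3))
```

When Q falls below its degrees of freedom, τ² collapses to zero and the estimate reduces to the fixed-effect pooled value; otherwise between-study heterogeneity widens the weights and the confidence interval.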
