Page 7 of 3313305 results

Exploring Radiologists' Use of AI Chatbots for Assistance in Image Interpretation: Patterns of Use and Trust Evaluation.

Alarifi M

PubMed · Aug 13 2025
This study investigated radiologists' perceptions of AI-generated, patient-friendly radiology reports across three modalities: MRI, CT, and mammogram/ultrasound. The evaluation focused on report correctness, completeness, terminology complexity, and emotional impact. Seventy-nine radiologists from four major Saudi Arabian hospitals assessed AI-simplified versions of clinical radiology reports. Each participant reviewed one report from each modality and completed a structured questionnaire covering factual correctness, completeness, terminology complexity, and emotional impact. A structured and detailed prompt was used to guide ChatGPT-4 in generating the reports, which included clear findings, a lay summary, glossary, and clarification of ambiguous elements. Statistical analyses included descriptive summaries, Friedman tests, and Pearson correlations. Radiologists rated mammogram reports highest for correctness (M = 4.22), followed by CT (4.05) and MRI (3.95). Completeness scores followed a similar trend. Statistically significant differences were found in correctness (χ²(2) = 17.37, p < 0.001) and completeness (χ²(2) = 13.13, p = 0.001). Anxiety and complexity ratings were moderate, with MRI reports linked to slightly higher concern. A weak positive correlation emerged between radiologists' experience and mammogram correctness ratings (r = .235, p = .037). Radiologists expressed overall support for AI-generated simplified radiology reports when created using a structured prompt that includes summaries, glossaries, and clarification of ambiguous findings. While mammography and CT reports were rated favorably, MRI reports showed higher emotional impact, highlighting a need for clearer and more emotionally supportive language.
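The Friedman test this abstract reports (comparing ratings of the same raters across three modalities) can be sketched in plain Python. This is a minimal illustration with toy ratings, not the study's data; the tie-correction term is omitted for brevity.

```python
def friedman_chi2(ratings):
    """Friedman chi-square statistic for k related samples.
    ratings: list of per-rater score tuples, one score per condition
    (e.g. one rating each for MRI, CT, mammogram).
    Note: the tie-correction factor is omitted in this sketch."""
    n = len(ratings)
    k = len(ratings[0])
    rank_sums = [0.0] * k
    for row in ratings:
        # rank the k scores within this rater, averaging tied ranks
        order = sorted(range(k), key=lambda j: row[j])
        ranks = [0.0] * k
        i = 0
        while i < k:
            j = i
            while j + 1 < k and row[order[j + 1]] == row[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1  # ranks are 1-based
            for m in range(i, j + 1):
                ranks[order[m]] = avg
            i = j + 1
        for c in range(k):
            rank_sums[c] += ranks[c]
    # chi2 = 12 / (n k (k+1)) * sum(R_j^2) - 3 n (k+1)
    return (12.0 / (n * k * (k + 1))) * sum(r * r for r in rank_sums) - 3.0 * n * (k + 1)
```

In practice one would use `scipy.stats.friedmanchisquare`, which also returns the p-value; the hand-rolled version above just makes the rank-sum mechanics explicit.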

Pathology-Guided AI System for Accurate Segmentation and Diagnosis of Cervical Spondylosis.

Zhang Q, Chen X, He Z, Wu L, Wang K, Sun J, Shen H

PubMed · Aug 13 2025
Cervical spondylosis, a complex and prevalent condition, demands precise and efficient diagnostic techniques for accurate assessment. While MRI offers detailed visualization of cervical spine anatomy, manual interpretation remains labor-intensive and prone to error. To address this, we developed an innovative AI-assisted Expert-based Diagnosis System that automates both segmentation and diagnosis of cervical spondylosis using MRI. Leveraging multi-center datasets of cervical MRI images from patients with cervical spondylosis, our system features a pathology-guided segmentation model capable of accurately segmenting key cervical anatomical structures. The segmentation is followed by an expert-based diagnostic framework that automates the calculation of critical clinical indicators. Our segmentation model achieved an impressive average Dice coefficient exceeding 0.90 across four cervical spinal anatomies and demonstrated enhanced accuracy in herniation areas. Diagnostic evaluation further showcased the system's precision, with the lowest mean average errors (MAE) for the C2-C7 Cobb angle and the Maximum Spinal Cord Compression (MSCC) coefficient. In addition, our method delivered high accuracy, precision, recall, and F1 scores in herniation localization, K-line status assessment, T2 hyperintensity detection, and Kang grading. Comparative analysis and external validation demonstrate that our system outperforms existing methods, establishing a new benchmark for segmentation and diagnostic tasks for cervical spondylosis.
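One of the clinical indicators this system automates, the Maximum Spinal Cord Compression (MSCC) coefficient, has a commonly used formulation that can be computed directly from segmented cord diameters. The abstract does not give the formula; the version below is the standard one (compressed-level diameter relative to the mean of the normal levels above and below) and is offered as an assumed sketch.

```python
def mscc(di, da, db):
    """Maximum Spinal Cord Compression (%) from anteroposterior cord
    diameters: di at the most compressed level, da and db at normal
    levels above and below the compression (all in mm).
    Standard formulation; assumed here, not quoted from the paper."""
    return (1.0 - di / ((da + db) / 2.0)) * 100.0

# e.g. a cord narrowed to 5 mm between 8 mm segments:
# mscc(5.0, 8.0, 8.0) -> 37.5
```

In the described pipeline, the diameters would come from the pathology-guided segmentation masks rather than manual calipers.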

CT-Based radiomics and deep learning for the preoperative prediction of peritoneal metastasis in ovarian cancers.

Liu Y, Yin H, Li J, Wang Z, Wang W, Cui S

PubMed · Aug 13 2025
To develop a CT-based deep learning radiomics nomogram (DLRN) for the preoperative prediction of peritoneal metastasis (PM) in patients with ovarian cancer (OC). A total of 296 patients with OCs were randomly divided into a training dataset (N = 207) and a test dataset (N = 89). The radiomics features and DL features were extracted from CT images of each patient. Specifically, radiomics features were extracted from the 3D tumor regions, while DL features were extracted from the 2D slice with the largest tumor region of interest (ROI). The least absolute shrinkage and selection operator (LASSO) algorithm was used to select radiomics and DL features, and the radiomics score (Radscore) and DL score (Deepscore) were calculated. Multivariate logistic regression was employed to construct the clinical model. The important clinical factors and the radiomics and DL features were integrated to build the DLRN. The predictive performance of the models was evaluated using the area under the receiver operating characteristic curve (AUC) and DeLong's test. Nine radiomics features and 10 DL features were selected. Carbohydrate antigen 125 (CA-125) was the independent clinical predictor. In the training dataset, the AUC values of the clinical, radiomics and DL models were 0.618, 0.842, and 0.860, respectively. In the test dataset, the AUC values of these models were 0.591, 0.819 and 0.917, respectively. The DLRN showed better performance than the other models in both training and test datasets, with AUCs of 0.943 and 0.951, respectively. Decision curve analysis and calibration curves showed that the DLRN provided relatively high clinical benefit in both the training and test datasets. The DLRN demonstrated superior performance in predicting preoperative PM in patients with OC.
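The DLRN described here combines Radscore, Deepscore, and CA-125 through multivariate logistic regression. A minimal sketch of that combination step is below; the coefficients and intercept are placeholders for illustration (the paper fits them on the training set), not values from the study.

```python
import math

def dlrn_probability(radscore, deepscore, ca125,
                     coef=(0.9, 1.4, 0.002), intercept=-2.0):
    """Hypothetical DLRN-style logistic combination of a radiomics
    score, a deep learning score, and CA-125 (U/mL).
    Coefficients here are illustrative placeholders, not fitted values."""
    z = intercept + coef[0] * radscore + coef[1] * deepscore + coef[2] * ca125
    return 1.0 / (1.0 + math.exp(-z))  # predicted probability of PM
```

A nomogram is essentially a graphical rendering of this linear predictor, with each term contributing points that map to a probability.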
This model offers a highly accurate and noninvasive tool for preoperative prediction, with substantial clinical potential to provide critical information for individualized treatment planning, thereby enabling more precise and effective management of OC patients.

MammosighTR: Nationwide Breast Cancer Screening Mammogram Dataset with BI-RADS Annotations for Artificial Intelligence Applications.

Koç U, Beşler MS, Sezer EA, Karakaş E, Özkaya YA, Evrimler Ş, Yalçın A, Kızıloğlu A, Kesimal U, Oruç M, Çankaya İ, Koç Keleş D, Merd N, Özkan E, Çevik Nİ, Gökhan MB, Boyraz Hayat B, Özer M, Tokur O, Işık F, Tezcan A, Battal F, Yüzkat M, Sebik NB, Karademir F, Topuz Y, Sezer Ö, Varlı S, Ülgü MM, Akdoğan E, Birinci Ş

PubMed · Aug 13 2025
"Just Accepted" papers have undergone full peer review and have been accepted for publication in Radiology: Artificial Intelligence. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content. The MammosighTR dataset, derived from Türkiye's national breast cancer screening mammography program, provides BI-RADS-labeled mammograms with detailed annotations on breast composition and lesion quadrant location, which may be useful for developing and testing AI models in breast cancer detection. ©RSNA, 2025.

Differentiation Between Fibro-Adipose Vascular Anomaly and Intramuscular Venous Malformation Using Grey-Scale Ultrasound-Based Radiomics and Machine Learning.

Hu WJ, Wu G, Yuan JJ, Ma BX, Liu YH, Guo XN, Dong CX, Kang H, Yang X, Li JC

PubMed · Aug 13 2025
To establish an ultrasound-based radiomics model to differentiate fibro-adipose vascular anomaly (FAVA) from intramuscular venous malformation (VM). The clinical data of 65 patients with VM and 31 patients with FAVA who were treated and pathologically confirmed were retrospectively analyzed. Radiomics features were extracted from grey-scale ultrasound images, and dimensionality reduction was performed using the least absolute shrinkage and selection operator (LASSO). An ultrasound-based radiomics model was established using support vector machine (SVM) and random forest (RF) models. The diagnostic efficiency of the model was evaluated using the receiver operating characteristic (ROC) curve. A total of 851 features were obtained by feature extraction, and 311 features were screened out using the t-test and Mann-Whitney U test. Dimensionality reduction was performed on the remaining features using LASSO. Finally, seven features were included to establish the diagnostic prediction model. In the testing group, the AUC, accuracy and specificity of the SVM model were higher than those of the RF model (0.841 [0.815-0.867] vs. 0.791 [0.759-0.824], 96.6% vs. 93.1%, and 100.0% vs. 90.5%, respectively). However, the sensitivity of the SVM model was lower than that of the RF model (88.9% vs. 100.0%). In this study, a prediction model based on ultrasound radiomics was developed to distinguish FAVA from VM. The study achieved high classification accuracy, sensitivity, and specificity. The SVM model outperformed the RF model and provides a new perspective and tool for clinical diagnosis.
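The model comparison above hinges on AUC. As a small aside, AUC can be computed without any ROC machinery via its Mann-Whitney interpretation: the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. A plain-Python sketch (toy scores, not the study's data):

```python
def auc(scores_pos, scores_neg):
    """AUC via the Mann-Whitney formulation: fraction of
    (positive, negative) pairs where the positive case scores
    higher; ties count as half a win."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

Library implementations such as `sklearn.metrics.roc_auc_score` compute the same quantity more efficiently via ranking.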

Development of a multimodal vision transformer model for predicting traumatic versus degenerative rotator cuff tears on magnetic resonance imaging: A single-centre retrospective study.

Oettl FC, Malayeri AB, Furrer PR, Wieser K, Fürnstahl P, Bouaicha S

PubMed · Aug 13 2025
The differentiation between traumatic and degenerative rotator cuff tears (RCTs) remains a diagnostic challenge with significant implications for treatment planning. While magnetic resonance imaging (MRI) is standard practice, traditional radiological interpretation has shown limited reliability in distinguishing these etiologies. This study evaluates the potential of artificial intelligence (AI) models, specifically a multimodal vision transformer (ViT), to differentiate between traumatic and degenerative RCTs. In this retrospective, single-centre study, 99 shoulder MRIs were analysed from patients who underwent surgery at a specialised university shoulder unit between 2016 and 2019. The cohort was divided into training (n = 79) and validation (n = 20) sets. The traumatic group required a documented relevant trauma (excluding simple lifting injuries), a previously asymptomatic shoulder and MRI within 3 months posttrauma. The degenerative group was of similar age and injured tendon, with patients presenting with at least 1 year of constant shoulder pain prior to imaging and no trauma history. The ViT was then combined with demographic data to form a multimodal ViT. Saliency maps were used as an explainability tool. The multimodal ViT model achieved an accuracy of 0.75 ± 0.08 with a recall of 0.8 ± 0.08, a specificity of 0.71 ± 0.11 and an F1 score of 0.76 ± 0.1. The model maintained consistent performance across different patient subsets, demonstrating robust generalisation. Saliency maps did not show a consistent focus on the rotator cuff. AI shows potential in supporting the challenging differentiation between traumatic and degenerative RCTs on MRI. The achieved accuracy of 75% is particularly notable given the closely matched groups, which presented a challenging diagnostic scenario. Saliency maps were used to ensure explainability; the lack of consistent focus on the rotator cuff tendons hints at underappreciated aspects of the differentiation.

AST-n: A Fast Sampling Approach for Low-Dose CT Reconstruction using Diffusion Models

Tomás de la Sotta, José M. Saavedra, Héctor Henríquez, Violeta Chang, Aline Xavier

arXiv preprint · Aug 13 2025
Low-dose CT (LDCT) protocols reduce radiation exposure but increase image noise, compromising diagnostic confidence. Diffusion-based generative models have shown promise for LDCT denoising by learning image priors and performing iterative refinement. In this work, we introduce AST-n, an accelerated inference framework that initiates reverse diffusion from intermediate noise levels, and integrate high-order ODE solvers within conditioned models to further reduce sampling steps. We evaluate two acceleration paradigms, AST-n sampling and standard scheduling with high-order solvers, on the Low Dose CT Grand Challenge dataset, covering head, abdominal, and chest scans at 10-25% of standard dose. Conditioned models using only 25 steps (AST-25) achieve a peak signal-to-noise ratio (PSNR) above 38 dB and a structural similarity index (SSIM) above 0.95, closely matching standard baselines while cutting inference time from roughly 16 s to under 1 s per slice. Unconditional sampling suffers substantial quality loss, underscoring the necessity of conditioning. We also assess DDIM inversion, which yields marginal PSNR gains at the cost of doubling inference time, limiting its clinical practicality. Our results demonstrate that AST-n with high-order samplers enables rapid LDCT reconstruction without significant loss of image fidelity, advancing the feasibility of diffusion-based methods in clinical workflows.
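The core AST-n idea, starting the reverse diffusion from an intermediate noise level rather than pure noise, can be illustrated with a toy one-dimensional deterministic DDIM loop. Everything here (the scalar signal, the cumulative-alpha schedule, the dummy noise predictor) is a simplification for exposition, not the paper's implementation or solver.

```python
import math

def ddim_from_intermediate(x_t, t_start, alphas_bar, eps_model, n_steps):
    """Deterministic DDIM updates starting at intermediate timestep
    t_start instead of the terminal noise level (the AST-n idea of
    skipping the earliest part of the reverse chain).
    alphas_bar: cumulative-product noise schedule, alphas_bar[0] == 1.
    eps_model(x, t): predicts the noise component of x at step t."""
    ts = [round(t_start * (1 - i / n_steps)) for i in range(n_steps + 1)]
    x = x_t
    for t_cur, t_next in zip(ts[:-1], ts[1:]):
        a_cur, a_next = alphas_bar[t_cur], alphas_bar[t_next]
        eps = eps_model(x, t_cur)
        # estimate the clean signal, then step to the next noise level
        x0 = (x - math.sqrt(1 - a_cur) * eps) / math.sqrt(a_cur)
        x = math.sqrt(a_next) * x0 + math.sqrt(1 - a_next) * eps
    return x
```

With a perfect noise predictor this recovers the clean signal exactly; in the LDCT setting, the conditioning on the low-dose input is what keeps the truncated trajectory anchored to the patient's anatomy.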

Automated Segmentation of Coronal Brain Tissue Slabs for 3D Neuropathology

Jonathan Williams Ramirez, Dina Zemlyanker, Lucas Deden-Binder, Rogeny Herisse, Erendira Garcia Pallares, Karthik Gopinath, Harshvardhan Gazula, Christopher Mount, Liana N. Kozanno, Michael S. Marshall, Theresa R. Connors, Matthew P. Frosch, Mark Montine, Derek H. Oakley, Christine L. Mac Donald, C. Dirk Keene, Bradley T. Hyman, Juan Eugenio Iglesias

arXiv preprint · Aug 13 2025
Advances in image registration and machine learning have recently enabled volumetric analysis of postmortem brain tissue from conventional photographs of coronal slabs, which are routinely collected in brain banks and neuropathology laboratories worldwide. One caveat of this methodology is the requirement of segmentation of the tissue from photographs, which currently requires costly manual intervention. In this article, we present a deep learning model to automate this process. The automatic segmentation tool relies on a U-Net architecture that was trained with a combination of (i) 1,414 manually segmented images of both fixed and fresh tissue, from specimens with varying diagnoses, photographed at two different sites; and (ii) 2,000 synthetic images with randomized contrast and corresponding masks generated from MRI scans for improved generalizability to unseen photographic setups. Automated model predictions on a subset of photographs not seen in training were analyzed to estimate performance compared to manual labels, including both inter- and intra-rater variability. Our model achieved a median Dice score over 0.98, a mean surface distance under 0.4 mm, and a 95% Hausdorff distance under 1.60 mm, which approaches inter-/intra-rater levels. Our tool is publicly available at surfer.nmr.mgh.harvard.edu/fswiki/PhotoTools.
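The headline metric here, the Dice score, is simple enough to state in a few lines. A minimal sketch on binary masks represented as sets of foreground pixel coordinates:

```python
def dice(mask_a, mask_b):
    """Dice coefficient between two binary masks given as sets of
    (row, col) foreground pixel coordinates:
    2 * |A & B| / (|A| + |B|), ranging from 0 (disjoint) to 1 (identical)."""
    inter = len(mask_a & mask_b)
    return 2.0 * inter / (len(mask_a) + len(mask_b))
```

Surface distance and Hausdorff distance, the other two metrics reported, instead measure how far the boundaries of the two masks are from each other, which is why they are quoted in millimetres rather than as a ratio.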

Quest for a clinically relevant medical image segmentation metric: the definition and implementation of Medical Similarity Index

Szuzina Fazekas, Bettina Katalin Budai, Viktor Bérczi, Pál Maurovich-Horvat, Zsolt Vizi

arXiv preprint · Aug 13 2025
Background: In the field of radiology and radiotherapy, accurate delineation of tissues and organs plays a crucial role in both diagnostics and therapeutics. While the gold standard remains expert-driven manual segmentation, many automatic segmentation methods are emerging. The evaluation of these methods primarily relies on traditional metrics that only incorporate geometrical properties and fail to adapt to various applications. Aims: This study aims to develop and implement a clinically relevant segmentation metric that can be adapted for use in various medical imaging applications. Methods: Bidirectional local distance was defined, and the points of the test contour were paired with points of the reference contour. After correcting for the distance between the test and reference center of mass, Euclidean distance was calculated between the paired points, and a score was given to each test point. The overall medical similarity index was calculated as the average score across all the test points. For demonstration, we used myoma and prostate datasets; nnUNet neural networks were trained for segmentation. Results: An easy-to-use, sustainable image processing pipeline was created using Python. The code is available in a public GitHub repository along with Google Colaboratory notebooks. The algorithm can handle multislice images with multiple masks per slice. A mask-splitting algorithm is also provided that can separate concave masks. We demonstrate the adaptability with prostate segmentation evaluation. Conclusions: A novel segmentation evaluation metric was implemented, and an open-access image processing pipeline was also provided, which can be easily used for automatic measurement of the clinical relevance of medical image segmentation.
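The recipe in the Methods section (center-of-mass correction, point pairing, per-point distance scoring, averaging) can be sketched directly. The nearest-point pairing and the linear distance-to-score falloff with a `tolerance` parameter are assumptions of this sketch; the authors' exact pairing and scoring functions are in their repository.

```python
import math

def medical_similarity_index(test_pts, ref_pts, tolerance=5.0):
    """Sketch of the MSI recipe from the abstract: shift the test
    contour so its center of mass matches the reference, pair each
    test point with its nearest reference point, score each point by
    distance, and average. Pairing rule and linear falloff are
    assumptions for illustration."""
    cx_t = sum(p[0] for p in test_pts) / len(test_pts)
    cy_t = sum(p[1] for p in test_pts) / len(test_pts)
    cx_r = sum(p[0] for p in ref_pts) / len(ref_pts)
    cy_r = sum(p[1] for p in ref_pts) / len(ref_pts)
    dx, dy = cx_r - cx_t, cy_r - cy_t  # center-of-mass correction
    total = 0.0
    for (x, y) in test_pts:
        x, y = x + dx, y + dy
        d = min(math.hypot(x - rx, y - ry) for (rx, ry) in ref_pts)
        total += max(0.0, 1.0 - d / tolerance)
    return total / len(test_pts)
```

Because of the center-of-mass correction, a contour that is merely translated relative to the reference still scores perfectly; only shape discrepancies are penalised.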

Multi-Contrast Fusion Module: An attention mechanism integrating multi-contrast features for fetal torso plane classification

Shengjun Zhu, Siyu Liu, Runqing Xiong, Liping Zheng, Duo Ma, Rongshang Chen, Jiaxin Cai

arXiv preprint · Aug 13 2025
Purpose: Prenatal ultrasound is a key tool in evaluating fetal structural development and detecting abnormalities, contributing to reduced perinatal complications and improved neonatal survival. Accurate identification of standard fetal torso planes is essential for reliable assessment and personalized prenatal care. However, limitations such as low contrast and unclear texture details in ultrasound imaging pose significant challenges for fine-grained anatomical recognition. Methods: We propose a novel Multi-Contrast Fusion Module (MCFM) to enhance the model's ability to extract detailed information from ultrasound images. MCFM operates exclusively on the lower layers of the neural network, directly processing raw ultrasound data. By assigning attention weights to image representations under different contrast conditions, the module enhances feature modeling while explicitly maintaining minimal parameter overhead. Results: The proposed MCFM was evaluated on a curated dataset of fetal torso plane ultrasound images. Experimental results demonstrate that MCFM substantially improves recognition performance, with a minimal increase in model complexity. The integration of multi-contrast attention enables the model to better capture subtle anatomical structures, contributing to higher classification accuracy and clinical reliability. Conclusions: Our method provides an effective solution for improving fetal torso plane recognition in ultrasound imaging. By enhancing feature representation through multi-contrast fusion, the proposed approach supports clinicians in achieving more accurate and consistent diagnoses, demonstrating strong potential for clinical adoption in prenatal screening. The codes are available at https://github.com/sysll/MCFM.
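The fusion step MCFM describes, assigning attention weights to image representations under different contrast conditions, can be illustrated with a toy softmax-weighted sum of feature vectors. In the actual module the logits are learned inside the network's lower layers; here they are passed in explicitly, so this is a structural sketch only.

```python
import math

def fuse_contrasts(variants, logits):
    """Toy multi-contrast fusion: softmax the per-variant attention
    logits and return the weighted sum of the feature vectors.
    variants: list of equal-length feature vectors, one per contrast
    condition; logits: one attention logit per variant (illustrative,
    learned in the real MCFM)."""
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    weights = [e / z for e in exps]
    fused = [sum(w * v[i] for w, v in zip(weights, variants))
             for i in range(len(variants[0]))]
    return fused, weights
```

With equal logits this degrades to a plain average of the contrast variants; learned logits let the module emphasise whichever contrast rendering exposes the anatomy best.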