Page 6 of 41408 results

Improving radiology reporting accuracy: use of GPT-4 to reduce errors in reports.

Mayes CJ, Reyes C, Truman ME, Dodoo CA, Adler CR, Banerjee I, Khandelwal A, Alexander LF, Sheedy SP, Thompson CP, Varner JA, Zulfiqar M, Tan N

PubMed · Jun 27, 2025
Radiology reports are essential for communicating imaging findings to guide diagnosis and treatment. Although most radiology reports are accurate, errors can occur in final reports because of high workloads, use of dictation software, and human error. Advanced artificial intelligence models, such as GPT-4, show potential as tools to improve report accuracy. This retrospective study evaluated how GPT-4 performed in detecting and correcting errors in finalized abdominopelvic computed tomography (CT) reports in a real-world setting. We evaluated finalized abdominopelvic CT reports from a tertiary health system by using GPT-4 with zero-shot learning techniques. Six radiologists each reviewed 100 of their finalized reports (randomly selected), evaluating GPT-4's suggested revisions for agreement, acceptance, and clinical impact. The radiologists' responses were compared by years in practice and by sex. GPT-4 identified issues and suggested revisions for 91% of the 600 reports; most revisions addressed grammar (74%). The radiologists agreed with 27% of the revisions and accepted 23%. Most revisions were rated as having no (44%) or low (46%) clinical impact. Potential harm was rare (8%), with only 2 cases of potentially severe harm. Radiologists with less experience (≤ 7 years of practice) were more likely than those with more experience to agree with the revisions suggested by GPT-4 (34% vs. 20%, P = .003) and accepted a greater percentage of the revisions (32% vs. 15%, P = .003). Although GPT-4 showed promise in identifying errors and improving the clarity of finalized radiology reports, most errors were minor, with no or low clinical impact. Collectively, the radiologists accepted 23% of the suggested revisions in their finalized reports. This study highlights the potential of GPT-4 as a prospective tool for radiology reporting, with further refinement needed for consistent use in clinical practice.
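The abstract does not publish the zero-shot prompt used in the study, so the wording below is purely illustrative: a minimal sketch of how a proofreading instruction might be assembled for a finalized report before being sent to a model.

```python
def build_proofread_prompt(report_text: str) -> str:
    """Assemble a hypothetical zero-shot instruction asking a model to flag
    and correct errors in a finalized radiology report."""
    instructions = (
        "You are reviewing a finalized abdominopelvic CT report. "
        "Identify any grammatical, dictation, or factual inconsistencies "
        "and suggest a corrected version. If the report is error-free, "
        "reply 'NO REVISIONS'."
    )
    return f"{instructions}\n\nREPORT:\n{report_text}"

# A dictation-style typo the model would be expected to flag.
prompt = build_proofread_prompt("Liver: No focal lession.")
```

Zero-shot here simply means the prompt contains instructions but no worked examples; the model must generalize from the instruction alone.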

3D Auto-segmentation of pancreas cancer and surrounding anatomical structures for surgical planning.

Rhu J, Oh N, Choi GS, Kim JM, Choi SY, Lee JE, Lee J, Jeong WK, Min JH

PubMed · Jun 27, 2025
This multicenter study aimed to develop a deep learning-based autosegmentation model for pancreatic cancer and surrounding anatomical structures on computed tomography (CT) to enhance surgical planning. We included patients with pancreatic cancer who underwent pancreatic surgery at three tertiary referral hospitals. A hierarchical Swin Transformer V2 model was implemented to segment the pancreas, pancreatic cancers, and peripancreatic structures from preoperative contrast-enhanced CT scans. Data were divided into training and internal validation sets at a 3:1 ratio (from one tertiary institution), with a separately prepared external validation set (from two separate institutions). Segmentation performance was quantitatively assessed using the dice similarity coefficient (DSC) and qualitatively evaluated (complete vs partial vs absent). A total of 275 patients (51.6% male, mean age 65.8 ± 9.5 years) were included (176 in the training group, 59 in the internal validation group, and 40 in the external validation group). No significant differences in baseline characteristics were observed between the groups. The model achieved an overall mean DSC of 75.4 ± 6.0 and 75.6 ± 4.8 in the internal and external validation groups, respectively. Accuracy was high for the pancreas parenchyma in particular (84.8 ± 5.3 and 86.1 ± 4.1) and lower for pancreatic cancer (57.0 ± 28.7 and 54.5 ± 23.5). The DSC scores for pancreatic cancer tended to increase with larger tumor sizes. Moreover, the qualitative assessments revealed high accuracy for the superior mesenteric artery (complete segmentation, 87.5%-100%), portal and superior mesenteric vein (97.5%-100%), and pancreas parenchyma (83.1%-87.5%), but lower accuracy for cancers (62.7%-65.0%). The deep learning-based autosegmentation model for 3D visualization of pancreatic cancer and peripancreatic structures showed robust performance. Further refinement could support a range of promising applications in clinical research.
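The dice similarity coefficient (DSC) reported throughout this abstract is 2|A∩B| / (|A| + |B|) for predicted and ground-truth masks. A minimal sketch, computed over sets of voxel indices rather than full 3D volumes for brevity:

```python
def dice_similarity(pred: set, truth: set) -> float:
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|),
    here over sets of voxel indices standing in for 3D masks."""
    if not pred and not truth:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2 * len(pred & truth) / (len(pred) + len(truth))

# Toy example: 2 of 3 predicted voxels overlap the ground truth.
score = dice_similarity({1, 2, 3}, {2, 3, 4})  # 2*2 / (3+3) = 0.667
```

Because the denominator is the sum of both mask sizes, DSC penalizes both over- and under-segmentation, which is why small, hard-to-delineate tumors (as here) score much lower than large organs like the pancreas parenchyma.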

Early prediction of adverse outcomes in liver cirrhosis using a CT-based multimodal deep learning model.

Xie N, Liang Y, Luo Z, Hu J, Ge R, Wan X, Wang C, Zou G, Guo F, Jiang Y

PubMed · Jun 27, 2025
Early-stage cirrhosis frequently presents without symptoms, making timely identification of high-risk patients challenging. We aimed to develop a deep learning-based triple-modal fusion liver cirrhosis network (TMF-LCNet) for the prediction of adverse outcomes, offering a promising tool to enhance early risk assessment and improve clinical management strategies. This retrospective study included 243 patients with early-stage cirrhosis across two centers. Adverse outcomes were defined as the development of severe complications such as ascites, hepatic encephalopathy, and variceal bleeding. TMF-LCNet was developed by integrating three types of data: non-contrast abdominal CT images, radiomic features extracted from the liver and spleen, and clinical text detailing laboratory parameters and adipose tissue composition measurements. TMF-LCNet was compared with conventional methods on the same dataset, and single-modality versions of TMF-LCNet were tested to determine the impact of each data type. Model effectiveness was measured using the area under the receiver operating characteristic curve (AUC) for discrimination, calibration curves for model fit, and decision curve analysis (DCA) for clinical utility. TMF-LCNet demonstrated superior predictive performance compared to conventional image-based, radiomics-based, and multimodal methods, achieving an AUC of 0.797 in the training cohort (n = 184) and 0.747 in the external test cohort (n = 59). Only TMF-LCNet exhibited robust model calibration in both cohorts. Of the three data types, the imaging modality contributed the most, as the image-only version of TMF-LCNet achieved performance closest to the complete version (AUC = 0.723 and 0.716, respectively; p > 0.05). This was followed by the text modality, with radiomics contributing the least, a pattern consistent with the clinical utility trends observed in DCA.
TMF-LCNet represents an accurate and robust tool for predicting adverse outcomes in early-stage cirrhosis by integrating multiple data types. It holds potential for early identification of high-risk patients, guiding timely interventions, and ultimately improving patient prognosis.
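The AUC used to compare TMF-LCNet against its single-modality variants has a simple probabilistic reading: it equals the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one. A minimal pairwise sketch of that computation (illustrative data, not from the study):

```python
def auc_mann_whitney(pos_scores, neg_scores):
    """AUC as the fraction of (positive, negative) pairs ranked
    correctly by the model's scores; ties count as half a win."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# 3 of 4 pairs are ranked correctly (0.4 < 0.5 is the one miss) -> 0.75.
auc = auc_mann_whitney([0.8, 0.4], [0.5, 0.2])
```

This O(n²) form is fine for illustration; production code would use a rank-based formulation or a library routine, which computes the same quantity.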

Practical applications of AI in body imaging.

Mervak BM, Fried JG, Neshewat J, Wasnik AP

PubMed · Jun 27, 2025
Artificial intelligence (AI) algorithms and deep learning continue to change the landscape of radiology. New algorithms promise to enhance diagnostic accuracy, improve workflow efficiency, and automate repetitive tasks. This article provides a narrative review of the FDA-cleared AI algorithms that are commercially available in the United States as of late 2024 and targeted toward assessment of abdominopelvic organs and related diseases, evaluates potential advantages of using AI, and suggests future directions for the field.

A two-step automatic identification of contrast phases for abdominal CT images based on residual networks.

Liu Q, Jiang J, Wu K, Zhang Y, Sun N, Luo J, Ba T, Lv A, Liu C, Yin Y, Yang Z, Xu H

PubMed · Jun 27, 2025
To develop a deep learning model based on residual networks (ResNet) for the automated and accurate identification of contrast phases in abdominal CT images. A dataset of 1175 abdominal contrast-enhanced CT scans was retrospectively collected for model development, and an independent dataset of 215 scans from five hospitals was collected for external testing. Each contrast phase was independently annotated by two radiologists. A ResNet-based model was developed to automatically classify phases into the early arterial phase (EAP) or late arterial phase (LAP), portal venous phase (PVP), and delayed phase (DP). Strategy A identified EAP or LAP, PVP, and DP in one step. Strategy B used a two-step approach: first classifying images as arterial phase (AP), PVP, or DP, then further classifying AP images into EAP or LAP. Model performance and strategy comparison were evaluated. In the internal test set, the overall accuracy of the two-step strategy was 98.3% (283/288), significantly higher than that of the one-step strategy (91.7%, 264/288; p < 0.001). In the external test set, the two-step model achieved an overall accuracy of 99.1% (639/645), with sensitivities of 95.1% (EAP), 99.4% (LAP), 99.5% (PVP), and 99.5% (DP). The proposed two-step ResNet-based model provides highly accurate and robust identification of contrast phases in abdominal CT images, outperforming the conventional one-step strategy. Automated and accurate identification of contrast phases provides a robust tool for improving image quality control and establishes a strong foundation for AI-driven applications, particularly those leveraging contrast-enhanced abdominal imaging data. Accurate identification of contrast phases is crucial in abdominal CT imaging. The two-step ResNet-based model achieved superior accuracy across internal and external datasets. Automated phase classification strengthens imaging quality control and supports precision AI applications.
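The two-step strategy is essentially a routing decision: a coarse classifier separates AP, PVP, and DP, and only arterial scans are passed to a second, finer classifier. A minimal control-flow sketch, with hypothetical dictionary stubs standing in for the two ResNet classifiers:

```python
# Stubs standing in for the paper's two ResNet models; the mapping of
# scan IDs to phases here is invented purely for illustration.
def coarse_classifier(scan_id):
    """Step 1: arterial (AP) vs portal venous (PVP) vs delayed (DP)."""
    return {"s1": "AP", "s2": "PVP", "s3": "DP"}[scan_id]

def fine_classifier(scan_id):
    """Step 2: early vs late arterial, applied to AP scans only."""
    return {"s1": "LAP"}[scan_id]

def classify_phase_two_step(scan_id):
    phase = coarse_classifier(scan_id)
    if phase == "AP":  # only arterial scans get the second, finer pass
        return fine_classifier(scan_id)
    return phase
```

Splitting the hardest distinction (EAP vs LAP) into a dedicated second stage lets each model solve a simpler problem, which is consistent with the accuracy gain the study reports over the single-step variant.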

HyperSORT: Self-Organising Robust Training with hyper-networks

Samuel Joutard, Marijn Stollenga, Marc Balle Sanchez, Mohammad Farid Azampour, Raphael Prevost

arXiv preprint · Jun 26, 2025
Medical imaging datasets often contain heterogeneous biases, ranging from erroneous labels to inconsistent labeling styles. Such biases can degrade the performance of deep segmentation networks. Yet identifying and characterizing these biases is a particularly tedious and challenging task. In this paper, we introduce HyperSORT, a framework using a hyper-network that predicts UNet parameters from latent vectors representing both image and annotation variability. The hyper-network parameters and the latent vector collection corresponding to each data sample from the training set are jointly learned. Hence, instead of optimizing a single neural network to fit a dataset, HyperSORT learns a complex distribution of UNet parameters in which low-density areas can capture noise-specific patterns while larger modes robustly segment organs in differentiated but meaningful manners. We validate our method on two public 3D abdominal CT datasets: a synthetically perturbed version of the AMOS dataset, and TotalSegmentator, a large-scale dataset containing real unknown biases and errors. Our experiments show that HyperSORT creates a structured mapping of the dataset, allowing the identification of relevant systematic biases and erroneous samples. Latent-space clusters yield UNet parameters that perform the segmentation task in accordance with the underlying learned systematic bias. The code and our analysis of the TotalSegmentator dataset are available at https://github.com/ImFusionGmbH/HyperSORT
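The core mechanism, stripped to its essentials, is that a shared hyper-network maps a per-sample latent vector to the weights of the task network. A toy sketch with a single linear unit standing in for the UNet; all numbers and names are illustrative, not from the paper:

```python
def hyper_network(z, h):
    """Map a latent vector z to predictor weights via the shared
    hyper-network parameters h (here a plain matrix-vector product)."""
    return [sum(h[i][j] * z[j] for j in range(len(z))) for i in range(len(h))]

def predict(x, w):
    """Apply the generated weights w to an input feature vector x."""
    return sum(wi * xi for wi, xi in zip(w, x))

h = [[1.0, 0.0], [0.0, 2.0]]    # shared hyper-network parameters
z_sample = [0.5, 0.25]          # one latent vector per training sample,
                                # learned jointly with h
w = hyper_network(z_sample, h)  # generated "UNet" weights: [0.5, 0.5]
y = predict([2.0, 4.0], w)      # sample-specific prediction: 3.0
```

Because every sample gets its own latent vector but shares h, samples with similar annotation styles are pushed toward nearby latents, which is what makes the learned latent space usable for spotting systematic biases and outliers.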

A machine learning model integrating clinical-radiomics-deep learning features accurately predicts postoperative recurrence and metastasis of primary gastrointestinal stromal tumors.

Xie W, Zhang Z, Sun Z, Wan X, Li J, Jiang J, Liu Q, Yang G, Fu Y

PubMed · Jun 26, 2025
Post-surgical prediction of recurrence or metastasis of primary gastrointestinal stromal tumors (GISTs) remains challenging. We aim to develop individualized clinical follow-up strategies for primary GIST patients, such as shortening follow-up intervals or extending drug administration, based on the clinical deep learning radiomics model (CDLRM). Clinical information on primary GISTs was collected from two independent centers. Postoperative recurrence or metastasis in GIST patients was defined as the endpoint of the study. A total of nine machine learning models were established based on the selected features. The performance of the models was assessed by calculating the area under the curve (AUC). The CDLRM with the best predictive performance was constructed. Decision curve analysis (DCA) and calibration curves were analyzed separately. Ultimately, our model was applied to the high-malignant-potential group versus the low-malignant-potential group. The optimal clinical application scenarios of the model were further explored by comparing the DCA performance of the two subgroups. A total of 526 patients, 260 men and 266 women, with a mean age of 62 years, were enrolled in the study. CDLRM performed excellently, with AUC values of 0.999, 0.963, and 0.995 for the training, external validation, and aggregated sets, respectively. The calibration curve indicated good agreement between predicted and observed probabilities in the validation cohort. The DCA results across subgroups show that the model was more clinically valuable in populations with high malignant potential. CDLRM could support the development of personalized treatment and improved follow-up of patients with a high probability of recurrence or metastasis in the future.
This model uses imaging features extracted from CT scans (including radiomic and deep features) together with clinical data to accurately predict postoperative recurrence and metastasis in patients with primary GISTs, which may play an auxiliary role in clinical decision-making. We developed and validated a model to predict recurrence or metastasis in patients taking oral imatinib after GIST surgery. We demonstrate that CT image features were associated with recurrence or metastasis. The model had good predictive performance and clinical benefit.

Constructing high-quality enhanced 4D-MRI with personalized modeling for liver cancer radiotherapy.

Yao Y, Chen B, Wang K, Cao Y, Zuo L, Zhang K, Chen X, Kuo M, Dai J

PubMed · Jun 26, 2025
For magnetic resonance imaging (MRI), short acquisition times and good image quality are difficult to achieve simultaneously. Thus, reconstructing time-resolved volumetric MRI (4D-MRI) to delineate and monitor thoracic and upper abdominal tumor movements is a challenge, and existing MRI sequences have limited applicability to 4D-MRI. We propose a method for reconstructing high-quality personalized enhanced 4D-MR images: low-quality 4D-MR images are acquired first and then refined by deep learning-based personalization to generate high-quality 4D-MR images. High-speed multiphase 3D fast spoiled gradient recalled echo (FSPGR) sequences were used to generate low-quality enhanced free-breathing 4D-MR images and paired low-/high-quality breath-holding 4D-MR images for 58 liver cancer patients. A personalized model guided by the paired breath-holding 4D-MR images was then developed for each patient to cope with patient heterogeneity. The 4D-MR images generated by the personalized model were of much higher quality than the low-quality 4D-MR images obtained by conventional scanning, as demonstrated by significant improvements in peak signal-to-noise ratio, structural similarity, normalized root mean square error, and cumulative probability of blur detection. The introduction of individualized information helped the personalized model achieve a statistically significant improvement over the general model (p < 0.001). The proposed method can quickly reconstruct high-quality 4D-MR images and is potentially applicable to radiotherapy for liver cancer.
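Of the quality metrics listed, peak signal-to-noise ratio (PSNR) is the simplest: 10·log10(MAX² / MSE) between a reference and a reconstructed image. A minimal sketch over flattened pixel lists (toy values, not study data):

```python
import math

def psnr(reference, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two same-sized images,
    given here as flat lists of pixel intensities."""
    mse = sum((r - t) ** 2 for r, t in zip(reference, test)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images: no noise to measure
    return 10.0 * math.log10(max_val ** 2 / mse)

# A uniform error of 10 grey levels gives MSE = 100, i.e. PSNR ≈ 28.13 dB.
value = psnr([0, 50, 200, 255], [10, 60, 210, 245])
```

Higher PSNR means the reconstruction is closer to the reference, so the personalized model's gain over conventional scanning shows up directly as an increase in this number.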

EAGLE: An Efficient Global Attention Lesion Segmentation Model for Hepatic Echinococcosis

Jiayan Chen, Kai Li, Yulu Zhao, Jianqiang Huang, Zhan Wang

arXiv preprint · Jun 25, 2025
Hepatic echinococcosis (HE) is a widespread parasitic disease in underdeveloped pastoral areas with limited medical resources. While CNN-based and Transformer-based models have been widely applied to medical image segmentation, CNNs lack global context modeling due to local receptive fields, and Transformers, though capable of capturing long-range dependencies, are computationally expensive. Recently, state space models (SSMs), such as Mamba, have gained attention for their ability to model long sequences with linear complexity. In this paper, we propose EAGLE, a U-shaped network composed of a Progressive Visual State Space (PVSS) encoder and a Hybrid Visual State Space (HVSS) decoder that work collaboratively to achieve efficient and accurate segmentation of hepatic echinococcosis (HE) lesions. The proposed Convolutional Vision State Space Block (CVSSB) module is designed to fuse local and global features, while the Haar Wavelet Transformation Block (HWTB) module compresses spatial information into the channel dimension to enable lossless downsampling. Due to the lack of publicly available HE datasets, we collected CT slices from 260 patients at a local hospital. Experimental results show that EAGLE achieves state-of-the-art performance with a Dice Similarity Coefficient (DSC) of 89.76%, surpassing MSVM-UNet by 1.61%.
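The abstract describes the HWTB module as compressing spatial information into the channel dimension for lossless downsampling via the Haar wavelet transform. A minimal sketch of that idea on a single 2x2 patch: the four sub-band values (LL, LH, HL, HH) halve the spatial resolution while quadrupling the channel count, and the original patch is exactly recoverable (the paper's actual module is of course a full network layer, not this toy):

```python
def haar_block(a, b, c, d):
    """One-level 2D Haar transform of a 2x2 patch [[a, b], [c, d]]."""
    ll = (a + b + c + d) / 2.0  # low-frequency average
    lh = (a - b + c - d) / 2.0  # horizontal detail
    hl = (a + b - c - d) / 2.0  # vertical detail
    hh = (a - b - c + d) / 2.0  # diagonal detail
    return ll, lh, hl, hh

def haar_inverse(ll, lh, hl, hh):
    """Exact reconstruction of the original 2x2 patch: nothing is lost."""
    a = (ll + lh + hl + hh) / 2.0
    b = (ll - lh + hl - hh) / 2.0
    c = (ll + lh - hl - hh) / 2.0
    d = (ll - lh - hl + hh) / 2.0
    return a, b, c, d
```

Because the transform is invertible, downsampling this way discards nothing, unlike strided convolution or pooling, which is the design motivation the abstract cites.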

[Thyroid nodule segmentation method integrating receptance weighted key-value architecture and spherical geometric features].

Zhu L, Wei G

PubMed · Jun 25, 2025
To address the high computational complexity of the Transformer in the segmentation of ultrasound thyroid nodules, and the loss of image detail or omission of key spatial information that traditional image sampling techniques cause on high-resolution, complex-texture, or uneven-density two-dimensional ultrasound images, this paper proposes a thyroid nodule segmentation method that integrates the receptance weighted key-value (RWKV) architecture and spherical geometric feature (SGF) sampling. The method effectively captures the details of adjacent regions through two-dimensional offset prediction and pixel-level sampling position adjustment, achieving precise segmentation. Additionally, this study introduces a patch attention module (PAM) to optimize the decoder feature map using a regional cross-attention mechanism, enabling it to focus more precisely on the high-resolution features of the encoder. Experiments on the thyroid nodule segmentation dataset (TN3K) and the digital database for thyroid images (DDTI) show that the proposed method achieves dice similarity coefficients (DSC) of 87.24% and 80.79%, respectively, outperforming existing models while maintaining lower computational complexity. This approach may provide an efficient solution for the precise segmentation of thyroid nodules.