Page 25 of 47469 results

Measurement of adipose body composition using an artificial intelligence-based CT Protocol and its association with severe acute pancreatitis in hospitalized patients.

Cortés P, Mistretta TA, Jackson B, Olson CG, Al Qady AM, Stancampiano FF, Korfiatis P, Klug JR, Harris DM, Dan Echols J, Carter RE, Ji B, Hardway HD, Wallace MB, Kumbhari V, Bi Y

PubMed · Jun 1 2025
The clinical utility of body composition in predicting the severity of acute pancreatitis (AP) remains unclear. We aimed to measure body composition using artificial intelligence (AI) to predict severe AP in hospitalized patients. We performed a retrospective study of patients hospitalized with AP at three tertiary care centers in 2018. Patients with computed tomography (CT) imaging of the abdomen at admission were included. A fully automated and validated abdominal segmentation algorithm was used for body composition analysis. The primary outcome was severe AP, defined as having persistent single- or multi-organ failure as per the revised Atlanta classification. 352 patients were included. Severe AP occurred in 35 patients (9.9%). In multivariable analysis, adjusting for male sex and first episode of AP, intermuscular adipose tissue (IMAT) was associated with severe AP, OR = 1.06 per 5 cm<sup>2</sup>, p = 0.0207. Subcutaneous adipose tissue (SAT) area approached significance, OR = 1.05, p = 0.17. Neither visceral adipose tissue (VAT) nor skeletal muscle (SM) was associated with severe AP. In obese patients, a higher SM was associated with severe AP in unadjusted analysis (86.7 vs 75.1 and 70.3 cm<sup>2</sup> in moderate and mild, respectively, p = 0.009). In this multi-site retrospective study using AI to measure body composition, we found elevated IMAT to be associated with severe AP. Although SAT was non-significant for severe AP, it approached statistical significance. Neither VAT nor SM was significant. Further research in larger prospective studies may be beneficial.
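The abstract reports an odds ratio per 5 cm<sup>2</sup> of IMAT rather than per unit area. As a minimal sketch (not the study's code), this shows how a logistic-regression coefficient on the per-cm<sup>2</sup> scale converts to an odds ratio per any increment; the function name and the per-unit coefficient are illustrative.

```python
import math

def odds_ratio_per_increment(beta_per_unit: float, increment: float) -> float:
    """Convert a logistic-regression coefficient (log-odds per unit of the
    predictor, e.g. per cm^2 of IMAT) into an odds ratio per `increment` units."""
    return math.exp(beta_per_unit * increment)

# The reported OR of 1.06 per 5 cm^2 implies a per-cm^2 coefficient of:
beta = math.log(1.06) / 5.0
print(round(odds_ratio_per_increment(beta, 5.0), 2))  # 1.06
```

The same conversion explains why rescaling a predictor (per 5 cm<sup>2</sup> instead of per cm<sup>2</sup>) changes the reported OR without changing the model.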

A new method for placental volume measurements using tracked 2D ultrasound and automatic image segmentation.

Sagberg K, Lie T, F Peterson H, Hillestad V, Eskild A, Bø LE

PubMed · Jun 1 2025
Placental volume measurements can potentially identify high-risk pregnancies. We aimed to develop and validate a new method for placental volume measurements using tracked 2D ultrasound and automatic image segmentation. We included 43 pregnancies at gestational week 27 and acquired placental images using a 2D ultrasound probe with position tracking, and trained a convolutional neural network (CNN) for automatic image segmentation. The automatically segmented 2D images were combined with tracking data to calculate placental volume. For 15 of the included pregnancies, placental volume was also estimated based on MRI examinations, 3D ultrasound and manually segmented 2D ultrasound images. The ultrasound methods were compared to MRI (gold standard). The CNN demonstrated good performance in automatic image segmentation (F1-score 0.84). The correlation with MRI-based placental volume was similar for tracked 2D ultrasound using automatically segmented images (absolute agreement intraclass correlation coefficient [ICC] 0.58, 95% CI 0.13-0.84) and manually segmented images (ICC 0.59, 95% CI 0.13-0.84). The 3D ultrasound method showed lower ICC (0.35, 95% CI -0.11 to 0.74) than the methods based on tracked 2D ultrasound. Tracked 2D ultrasound with automatic image segmentation is a promising new method for placental volume measurements and has potential for further improvement.
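The core of the tracked-2D approach is that each segmented frame contributes a cross-sectional area at a known probe position, so volume is an integral of area over the sweep. A minimal numerical sketch, assuming roughly parallel frames (this is a toy stand-in, not the paper's reconstruction pipeline; function and variable names are illustrative):

```python
import numpy as np

def volume_from_tracked_slices(areas_cm2, positions_cm):
    """Approximate volume from tracked 2D segmentations by trapezoidal
    integration of segmented area over sweep position.

    areas_cm2    : segmented placental area in each frame (cm^2)
    positions_cm : tracked probe position of each frame along the sweep (cm)
    """
    order = np.argsort(positions_cm)
    a = np.asarray(areas_cm2, float)[order]
    p = np.asarray(positions_cm, float)[order]
    # Trapezoid rule: average adjacent areas times the spacing between frames.
    return float(np.sum((a[1:] + a[:-1]) / 2.0 * np.diff(p)))

# A constant 10 cm^2 cross-section swept over 5 cm -> 50 cm^3
print(volume_from_tracked_slices([10, 10, 10], [0.0, 2.5, 5.0]))  # 50.0
```

In the actual method the tracking data give full 3D poses per frame rather than positions along a single axis, but the area-times-spacing intuition is the same.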

Coarse for Fine: Bounding Box Supervised Thyroid Ultrasound Image Segmentation Using Spatial Arrangement and Hierarchical Prediction Consistency.

Chi J, Lin G, Li Z, Zhang W, Chen JH, Huang Y

PubMed · Jun 1 2025
Weakly-supervised learning methods have become increasingly attractive for medical image segmentation, but suffer from a high dependence on quantifying the pixel-wise affinities of low-level features, which are easily corrupted in thyroid ultrasound images, resulting in segmentation over-fitting to weakly annotated regions without precise delineation of target boundaries. We propose a dual-branch weakly-supervised learning framework to optimize the backbone segmentation network by calibrating semantic features into rational spatial distribution under the indirect, coarse guidance of the bounding box mask. Specifically, in the spatial arrangement consistency branch, the maximum activations sampled from the preliminary segmentation prediction and the bounding box mask along the horizontal and vertical dimensions are compared to measure the rationality of the approximate target localization. In the hierarchical prediction consistency branch, the target and background prototypes are encapsulated from the semantic features under the combined guidance of the preliminary segmentation prediction and the bounding box mask. The secondary segmentation prediction induced from the prototypes is compared with the preliminary prediction to quantify the rationality of the elaborated target and background semantic feature perception. Experiments on three thyroid datasets illustrate that our model outperforms existing weakly-supervised methods for thyroid gland and nodule segmentation and is comparable to the performance of fully-supervised methods with reduced annotation time. The proposed method provides a weakly-supervised segmentation strategy by simultaneously considering the target's location and the rationality of target and background semantic feature distribution. It can improve the applicability of deep-learning-based segmentation in clinical practice.
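The spatial arrangement consistency branch compares maximum activations of the prediction and the box mask along the two image axes. A toy version of that comparison on binary/soft 2D maps (illustrative names, not the paper's code; the paper's actual loss formulation may differ):

```python
import numpy as np

def spatial_arrangement_loss(pred, box_mask):
    """Compare per-row and per-column maximum activations of a soft
    prediction against a bounding-box mask: if the prediction's footprint
    matches the box extent along both axes, the loss is zero."""
    ph, bh = pred.max(axis=1), box_mask.max(axis=1)  # per-row maxima (vertical extent)
    pv, bv = pred.max(axis=0), box_mask.max(axis=0)  # per-column maxima (horizontal extent)
    return float(np.mean(np.abs(ph - bh)) + np.mean(np.abs(pv - bv)))

box = np.zeros((4, 4))
box[1:3, 1:3] = 1.0
print(spatial_arrangement_loss(box.copy(), box))  # 0.0: prediction matches the box profile
```

A prediction that activates outside the box raises the per-row/per-column maxima where the box has none, so the loss grows, which is exactly the coarse localization signal the branch exploits.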

A Multimodal Model Based on Transvaginal Ultrasound-Based Radiomics to Predict the Risk of Peritoneal Metastasis in Ovarian Cancer: A Multicenter Study.

Zhou Y, Duan Y, Zhu Q, Li S, Zhang C

PubMed · Jun 1 2025
This study aimed to develop a predictive model for peritoneal metastasis (PM) in ovarian cancer using a combination of radiomics and clinical biomarkers to improve diagnostic accuracy. This retrospective cohort study of 619 ovarian cancer patients involved demographic data, radiomics, O-RADS standardized description, clinical biomarkers, and histological findings. Radiomics features were extracted using 3D Slicer and Pyradiomics, with selective feature extraction using Least Absolute Shrinkage and Selection Operator (LASSO) regression. Model development and validation were carried out using logistic regression and machine learning methods. Interobserver agreement was high for radiomics features, with 1049 features initially extracted and 7 features selected through regression analysis. Multi-modal information such as ascites, fallopian tube invasion, greatest diameter, and HE4 and D-dimer levels were significant predictors of PM. The developed radiomics nomogram demonstrated strong discriminatory power, with AUC values of 0.912, 0.883, and 0.831 in the training, internal test, and external test sets, respectively. The nomogram displayed superior diagnostic performance compared to single-modality models. The integration of multimodal information in a predictive model for PM in ovarian cancer shows promise for enhancing diagnostic accuracy and guiding personalized treatment. This multi-modal approach offers a potential strategy for improving patient outcomes in ovarian cancer management with PM.
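LASSO's ability to shrink 1049 radiomics features down to 7 comes from its soft-thresholding behavior: coefficients whose contribution falls below the penalty are set exactly to zero. A minimal sketch of that operator (the orthonormal-design special case, not the study's fitted model; the coefficient values are made up):

```python
import numpy as np

def soft_threshold(beta, lam):
    """LASSO soft-thresholding: shrink each coefficient toward zero by lam,
    zeroing any coefficient smaller than lam in magnitude. This is why LASSO
    performs feature selection rather than mere shrinkage."""
    return np.sign(beta) * np.maximum(np.abs(beta) - lam, 0.0)

betas = np.array([0.9, -0.05, 0.02, -1.2])  # hypothetical unpenalized coefficients
selected = soft_threshold(betas, 0.1)
print(np.count_nonzero(selected))  # 2: the two weak features are dropped
```

In a full LASSO fit this thresholding happens inside a coordinate-descent loop over correlated features, but the selection mechanism is the same.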

GAN Inversion for Data Augmentation to Improve Colonoscopy Lesion Classification.

Golhar MV, Bobrow TL, Ngamruengphong S, Durr NJ

PubMed · Jun 1 2025
A major challenge in applying deep learning to medical imaging is the paucity of annotated data. This study explores the use of synthetic images for data augmentation to address the challenge of limited annotated data in colonoscopy lesion classification. We demonstrate that synthetic colonoscopy images generated by Generative Adversarial Network (GAN) inversion can be used as training data to improve polyp classification performance by deep learning models. We invert pairs of images with the same label to a semantically rich and disentangled latent space and manipulate latent representations to produce new synthetic images. These synthetic images maintain the same label as the input pairs. We perform image modality translation (style transfer) between white light and narrow-band imaging (NBI). We also generate realistic synthetic lesion images by interpolating between original training images to increase the variety of lesion shapes in the training dataset. Our experiments show that GAN inversion can produce multiple colonoscopy data augmentations that improve the downstream polyp classification performance by 2.7% in F1-score and 4.9% in sensitivity over other methods, including state-of-the-art data augmentation. Testing on unseen out-of-domain data also showcased an improvement of 2.9% in F1-score and 2.7% in sensitivity. This approach outperforms other colonoscopy data augmentation techniques and does not require re-training multiple generative models. It also effectively uses information from diverse public datasets, even those not specifically designed for the targeted downstream task, resulting in strong domain generalizability. Project code and model: https://github.com/DurrLab/GAN-Inversion.
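The label-preserving augmentation rests on interpolating between the inverted latent codes of two same-label images. A minimal sketch of that interpolation step (variable names are illustrative; the paper operates in a GAN's disentangled latent space, not on raw vectors like these):

```python
import numpy as np

def interpolate_latents(z1, z2, alpha=0.5):
    """Linear interpolation between two inverted latent codes. Because both
    codes come from images with the same label, the synthetic image decoded
    from the interpolated code is assumed to keep that label."""
    return (1.0 - alpha) * z1 + alpha * z2

z_a, z_b = np.zeros(4), np.ones(4)
print(interpolate_latents(z_a, z_b, 0.25))  # every component is 0.25
```

Sweeping alpha across (0, 1) yields a family of synthetic lesions between the two originals, which is how the method increases the variety of lesion shapes in the training set.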

[Capabilities and Advances of Transrectal Ultrasound in 2025].

Kaufmann S, Kruck S

PubMed · Jun 1 2025
Transrectal ultrasound, particularly the combination of high-frequency ultrasound and MR-TRUS fusion technologies, provides a highly precise and effective method for correlation and targeted biopsy of suspicious intraprostatic lesions detected by MRI. Advances in imaging technology, driven by 29 MHz micro-ultrasound transducers, robotic-assisted systems, and the integration of AI-based analyses, promise further improvements in diagnostic accuracy and a reduction in unnecessary biopsies. Further technological advancements and improved TRUS training could contribute to a decentralized and cost-effective diagnostic evaluation of prostate cancer in the future.

Habitat Radiomics Based on MRI for Predicting Metachronous Liver Metastasis in Locally Advanced Rectal Cancer: a Two‑center Study.

Shi S, Jiang T, Liu H, Wu Y, Singh A, Wang Y, Xie J, Li X

PubMed · Jun 1 2025
This study aimed to explore the feasibility of using habitat radiomics based on magnetic resonance imaging (MRI) to predict metachronous liver metastasis (MLM) in locally advanced rectal cancer (LARC) patients. A nomogram was developed by integrating multiple factors to enhance predictive accuracy. Retrospective data from 385 LARC patients across two centers were gathered. The data from Center 1 were split into a training set of 203 patients and an internal validation set of 87 patients, while Center 2 provided an external test set of 95 patients. K-means clustering was used on T2-weighted images, and the region of interest was extended at different thicknesses. After feature extraction and selection, four machine-learning algorithms were utilized to build radiomics models. A nomogram was created by combining habitat radiomics, conventional radiomics, and clinical independent predictors. Model performance was evaluated by the AUC, and clinical utility was assessed through calibration curve and DCA. Habitat radiomics outperformed other single models in predicting MLM, with AUCs of 0.926, 0.864, and 0.851 in respective sets. The integrated nomogram achieved even higher AUCs of 0.959, 0.925, and 0.889. DCA and calibration curve analysis showed its high net benefit and good calibration. MRI-based habitat radiomics can effectively predict MLM in LARC patients. The integrated nomogram has optimal predictive performance and improves model accuracy significantly.
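The "habitat" step partitions the tumor into subregions by clustering voxel intensities, here via K-means on T2-weighted images. A toy 1D K-means over made-up intensity values (a stand-in to show the clustering mechanism, not the study's multi-dimensional pipeline; all names and numbers are illustrative):

```python
import numpy as np

def kmeans_1d(values, k=2, iters=20, seed=0):
    """Minimal K-means over scalar voxel intensities: alternately assign
    each value to its nearest center, then recompute centers as cluster
    means. Each final cluster is one intensity 'habitat'."""
    rng = np.random.default_rng(seed)
    v = np.asarray(values, float)
    centers = rng.choice(v, size=k, replace=False)  # initialize from the data
    for _ in range(iters):
        labels = np.argmin(np.abs(v[:, None] - centers[None, :]), axis=1)
        centers = np.array([v[labels == j].mean() for j in range(k)])
    return labels, np.sort(centers)

# Two well-separated intensity populations -> two habitats near 1.0 and 5.0
labels, centers = kmeans_1d([1.0, 1.1, 0.9, 5.0, 5.2, 4.8], k=2)
print(centers)
```

Radiomics features are then extracted per habitat (and, per the abstract, from the region of interest extended at different thicknesses), rather than from the tumor as a whole.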

3-D contour-aware U-Net for efficient rectal tumor segmentation in magnetic resonance imaging.

Lu Y, Dang J, Chen J, Wang Y, Zhang T, Bai X

PubMed · Jun 1 2025
Magnetic resonance imaging (MRI), as a non-invasive detection method, is crucial for the clinical diagnosis and treatment planning of rectal cancer. However, due to the low contrast of the rectal tumor signal in MRI, segmentation is often inaccurate. In this paper, we propose a new three-dimensional rectal tumor segmentation method, CAU-Net, based on T2-weighted MRI images. The method adopts a convolutional neural network to extract multi-scale features from MRI images and uses a Contour-Aware decoder and attention fusion block (AFB) for contour enhancement. We also introduce an adversarial constraint to improve augmentation performance. Furthermore, we construct a dataset of 108 MRI-T2 volumes for the segmentation of locally advanced rectal cancer. Finally, CAU-Net achieved a DSC of 0.7112 and an ASD of 2.4707, outperforming other state-of-the-art methods. Various experiments on this dataset show that CAU-Net has high accuracy and efficiency in rectal tumor segmentation. In summary, the proposed method has important clinical application value and can provide important support for medical image analysis and clinical treatment of rectal cancer. With further development and application, this method has the potential to improve the accuracy of rectal cancer diagnosis and treatment.
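The headline metric here, the Dice similarity coefficient (DSC, 0.7112 for CAU-Net), is a standard overlap measure on binary masks: 2|A∩B| / (|A| + |B|). A minimal reference implementation (not the paper's evaluation code):

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient between two binary masks:
    2 * |intersection| / (|pred| + |gt|). Returns 1.0 for two empty masks
    by convention."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom else 1.0

a = np.array([[1, 1, 0, 0]])
b = np.array([[0, 1, 1, 0]])
print(dice(a, b))  # 0.5: one overlapping voxel out of two per mask
```

The paper's other metric, average surface distance (ASD), instead measures the mean distance between the two mask boundaries, so the pair together captures both volumetric overlap and contour accuracy.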

Internal Target Volume Estimation for Liver Cancer Radiation Therapy Using an Ultra Quality 4-Dimensional Magnetic Resonance Imaging.

Liao YP, Xiao H, Wang P, Li T, Aguilera TA, Visak JD, Godley AR, Zhang Y, Cai J, Deng J

PubMed · Jun 1 2025
Accurate internal target volume (ITV) estimation is essential for effective and safe radiation therapy in liver cancer. This study evaluates the clinical value of an ultraquality 4-dimensional magnetic resonance imaging (UQ 4D-MRI) technique for ITV estimation. The UQ 4D-MRI technique maps motion information from a low spatial resolution dynamic volumetric MRI onto a high-resolution 3-dimensional MRI used for radiation treatment planning. It was validated using a motion phantom and data from 13 patients with liver cancer. ITV generated from UQ 4D-MRI (ITV<sub>4D</sub>) was compared with those obtained through isotropic expansions (ITV<sub>2 mm</sub> and ITV<sub>5 mm</sub>) and those measured using conventional 4D-computed tomography (computed tomography-based ITV, ITV<sub>CT</sub>) for each patient. Phantom studies showed a displacement measurement difference of <5% between UQ 4D-MRI and single-slice 2-dimensional cine MRI. In patient studies, the maximum superior-inferior displacements of the tumor on UQ 4D-MRI showed no significant difference compared with single-slice 2-dimensional cine imaging (<i>P</i> = .985). Computed tomography-based ITV showed no significant difference (<i>P</i> = .72) with ITV<sub>4D</sub>, whereas ITV<sub>2 mm</sub> and ITV<sub>5 mm</sub> significantly overestimated the volume by 29.0% (<i>P</i> = .002) and 120.7% (<i>P</i> < .001) compared with ITV<sub>4D</sub>, respectively. UQ 4D-MRI enables accurate motion assessment for liver tumors, facilitating precise ITV delineation for radiation treatment planning. Despite uncertainties from artificial intelligence-based delineation and variations in patients' respiratory patterns, UQ 4D-MRI excels at capturing tumor motion trajectories, potentially improving treatment planning accuracy and reducing margins in liver cancer radiation therapy.
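The comparison in this abstract hinges on the 4D definition of the ITV: the voxelwise union of the tumor volume across respiratory phases, which the isotropic 2 mm and 5 mm expansions overestimate. A toy sketch of that union on 1D binary masks (illustrative, not the paper's UQ 4D-MRI pipeline):

```python
import numpy as np

def itv_from_phases(phase_masks):
    """ITV as the voxelwise union (logical OR) of the tumor mask across all
    respiratory phases: a voxel belongs to the ITV if the tumor occupies it
    in any phase."""
    itv = np.zeros_like(phase_masks[0], dtype=bool)
    for m in phase_masks:
        itv |= m.astype(bool)
    return itv

# A 1-voxel tumor shifting one voxel inferiorly between two phases
p1 = np.array([0, 1, 0, 0])
p2 = np.array([0, 0, 1, 0])
print(int(itv_from_phases([p1, p2]).sum()))  # 2: the ITV spans both positions
```

An isotropic expansion would instead dilate the single-phase mask in all directions, including ones the tumor never moves toward, which is the geometric source of the 29.0% and 120.7% volume overestimates reported above.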

Large Language Models for Diagnosing Focal Liver Lesions From CT/MRI Reports: A Comparative Study With Radiologists.

Sheng L, Chen Y, Wei H, Che F, Wu Y, Qin Q, Yang C, Wang Y, Peng J, Bashir MR, Ronot M, Song B, Jiang H

PubMed · Jun 1 2025
Whether large language models (LLMs) could be integrated into the diagnostic workflow of focal liver lesions (FLLs) remains unclear. We aimed to investigate two generic LLMs (ChatGPT-4o and Gemini) regarding their diagnostic accuracies referring to the CT/MRI reports, compared to and combined with radiologists of different experience levels. From April 2022 to April 2024, this single-center retrospective study included consecutive adult patients who underwent contrast-enhanced CT/MRI for single FLL and subsequent histopathologic examination. The LLMs were prompted by clinical information and the "findings" section of radiology reports three times to provide differential diagnoses in the descending order of likelihood, with the first considered the final diagnosis. In the research setting, six radiologists (three junior and three middle-level) independently reviewed the CT/MRI images and clinical information in two rounds (first alone, then with LLM assistance). In the clinical setting, diagnoses were retrieved from the "impressions" section of radiology reports. Diagnostic accuracy was investigated against histopathology. 228 patients (median age, 59 years; 155 males) with 228 FLLs (median size, 3.6 cm) were included. Regarding the final diagnosis, the accuracy of two-step ChatGPT-4o (78.9%) was higher than single-step ChatGPT-4o (68.0%, p < 0.001) and single-step Gemini (73.2%, p = 0.004), similar to real-world radiology reports (80.0%, p = 0.34) and junior radiologists (78.9%-82.0%; p-values, 0.21 to > 0.99), but lower than middle-level radiologists (84.6%-85.5%; p-values, 0.001 to 0.02). No incremental diagnostic value of ChatGPT-4o was observed for any radiologist (p-values, 0.63 to > 0.99). Two-step ChatGPT-4o showed matching accuracies to real-world radiology reports and junior radiologists for diagnosing FLLs but was less accurate than middle-level radiologists and demonstrated little incremental diagnostic value.
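The study scores the LLMs by taking the first item of the ranked differential as the final diagnosis and checking it against histopathology. A minimal sketch of that scoring rule (the diagnosis strings below are invented examples, not study data):

```python
def top1_accuracy(ranked_differentials, truths):
    """Accuracy under the rule that the first (most likely) item of each
    ranked differential counts as the final diagnosis."""
    hits = sum(ranked[0] == truth for ranked, truth in zip(ranked_differentials, truths))
    return hits / len(truths)

preds = [["HCC", "FNH"], ["hemangioma", "HCC"], ["FNH", "adenoma"]]
truth = ["HCC", "HCC", "FNH"]
print(top1_accuracy(preds, truth))  # 2 of 3 correct
```

Real-world scoring also has to reconcile synonymous diagnosis wordings before string comparison, a step omitted here for brevity.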