
Enhancing diagnostic accuracy of thyroid nodules: integrating self-learning and artificial intelligence in clinical training.

Kim D, Hwang YA, Kim Y, Lee HS, Lee E, Lee H, Yoon JH, Park VY, Rho M, Yoon J, Lee SE, Kwak JY

PubMed · Jun 1 2025
This study explores a self-learning method as an auxiliary approach in residency training for distinguishing between benign and malignant thyroid nodules. Between March and December 2022, internal medicine residents underwent three repeated learning sessions with a "learning set" comprising 3000 thyroid nodule images. Diagnostic performance of internal medicine residents was assessed before the study and after every learning session, and of radiology residents before and after one-on-one education, using a "test set" comprising 120 thyroid nodule images. Finally, all residents repeated the same test using artificial intelligence computer-assisted diagnosis (AI-CAD). Twenty-one internal medicine and eight radiology residents participated. Initially, internal medicine residents had a lower area under the receiver operating characteristic curve (AUROC) than radiology residents (0.578 vs. 0.701, P < 0.001), improving after the learning sessions (0.578 to 0.709, P < 0.001) to a level comparable with radiology residents (0.709 vs. 0.735, P = 0.17). Further improvement occurred with AI-CAD for both groups (0.709 to 0.755, P < 0.001; 0.735 to 0.768, P = 0.03). The proposed iterative self-learning method using a large volume of ultrasonographic images can assist beginners in thyroid imaging, such as residents, in differentiating benign from malignant thyroid nodules. Additionally, AI-CAD can improve diagnostic performance across varied levels of experience in thyroid imaging.
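Reader performance in a study like this is summarized as AUROC over the test-set reads. A minimal sketch of how such a score can be computed with scikit-learn, assuming per-nodule suspicion scores are available (the synthetic data and variable names below are ours, not the study's):

```python
# Minimal sketch: score a reader's calls against cytopathology ground truth
# with AUROC, over a 120-image test set. Data are illustrative placeholders.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
truth = rng.integers(0, 2, size=120)           # 0 = benign, 1 = malignant
scores = truth * 0.3 + rng.random(120) * 0.7   # reader suspicion scores in [0, 1]

auroc = roc_auc_score(truth, scores)
print(f"AUROC: {auroc:.3f}")
```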

Measurement of adipose body composition using an artificial intelligence-based CT Protocol and its association with severe acute pancreatitis in hospitalized patients.

Cortés P, Mistretta TA, Jackson B, Olson CG, Al Qady AM, Stancampiano FF, Korfiatis P, Klug JR, Harris DM, Dan Echols J, Carter RE, Ji B, Hardway HD, Wallace MB, Kumbhari V, Bi Y

PubMed · Jun 1 2025
The clinical utility of body composition in predicting the severity of acute pancreatitis (AP) remains unclear. We aimed to measure body composition using artificial intelligence (AI) to predict severe AP in hospitalized patients. We performed a retrospective study of patients hospitalized with AP at three tertiary care centers in 2018. Patients with computed tomography (CT) imaging of the abdomen at admission were included. A fully automated and validated abdominal segmentation algorithm was used for body composition analysis. The primary outcome was severe AP, defined as persistent single- or multi-organ failure per the revised Atlanta classification. 352 patients were included. Severe AP occurred in 35 patients (9.9%). In multivariable analysis adjusting for male sex and first episode of AP, intermuscular adipose tissue (IMAT) was associated with severe AP (OR = 1.06 per 5 cm², p = 0.0207). Subcutaneous adipose tissue (SAT) area approached significance (OR = 1.05, p = 0.17). Neither visceral adipose tissue (VAT) nor skeletal muscle (SM) was associated with severe AP. In obese patients, higher SM was associated with severe AP in unadjusted analysis (86.7 vs 75.1 and 70.3 cm² in moderate and mild AP, respectively; p = 0.009). In this multi-site retrospective study using AI to measure body composition, we found elevated IMAT to be associated with severe AP. Although SAT was not significantly associated with severe AP, it approached statistical significance. Neither VAT nor SM was significant. Further research in larger prospective studies may be beneficial.
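The per-5 cm² odds ratio comes from a multivariable logistic model; one way to make exp(coef) read on that scale is to rescale the predictor before fitting. A hedged sketch with synthetic data (the model form follows the covariates named above, but everything else is our assumption):

```python
# Illustrative sketch (not the authors' code): logistic regression where IMAT
# is rescaled to 5 cm^2 units so exp(coef) is the odds ratio per 5 cm^2.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "severe_ap": rng.integers(0, 2, 352),
    "imat_cm2": rng.normal(40, 15, 352),   # intermuscular adipose tissue area
    "male": rng.integers(0, 2, 352),
    "first_episode": rng.integers(0, 2, 352),
})
df["imat_per5"] = df["imat_cm2"] / 5.0     # one unit = 5 cm^2

X = sm.add_constant(df[["imat_per5", "male", "first_episode"]])
fit = sm.Logit(df["severe_ap"], X).fit(disp=0)
print(np.exp(fit.params))                  # odds ratios per unit of each predictor
```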

A new method for placental volume measurements using tracked 2D ultrasound and automatic image segmentation.

Sagberg K, Lie T, F Peterson H, Hillestad V, Eskild A, Bø LE

PubMed · Jun 1 2025
Placental volume measurements can potentially identify high-risk pregnancies. We aimed to develop and validate a new method for placental volume measurements using tracked 2D ultrasound and automatic image segmentation. We included 43 pregnancies at gestational week 27, acquired placental images using a 2D ultrasound probe with position tracking, and trained a convolutional neural network (CNN) for automatic image segmentation. The automatically segmented 2D images were combined with tracking data to calculate placental volume. For 15 of the included pregnancies, placental volume was also estimated based on MRI examinations, 3D ultrasound, and manually segmented 2D ultrasound images. The ultrasound methods were compared to MRI (gold standard). The CNN demonstrated good performance in automatic image segmentation (F1-score 0.84). The correlation with MRI-based placental volume was similar for tracked 2D ultrasound using automatically segmented images (absolute-agreement intraclass correlation coefficient [ICC] 0.58, 95% CI 0.13-0.84) and manually segmented images (ICC 0.59, 95% CI 0.13-0.84). The 3D ultrasound method showed a lower ICC (0.35, 95% CI -0.11 to 0.74) than the methods based on tracked 2D ultrasound. Tracked 2D ultrasound with automatic image segmentation is a promising new method for placental volume measurements and has potential for further improvement.
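The agreement statistic here is the absolute-agreement ICC between volume estimates from two methods. A minimal sketch of how such an ICC can be computed with pingouin, on synthetic volumes rather than the study's measurements:

```python
# Hedged sketch: absolute-agreement ICC between MRI-based and tracked-2D-US
# placental volumes. ICC2 = two-way random effects, absolute agreement.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(2)
mri = rng.normal(450, 90, 15)          # placental volume, ml (synthetic)
us = mri + rng.normal(0, 60, 15)       # tracked 2D US estimate (synthetic)

long = pd.DataFrame({
    "subject": np.tile(np.arange(15), 2),
    "method": ["MRI"] * 15 + ["US"] * 15,
    "volume": np.concatenate([mri, us]),
})
icc = pg.intraclass_corr(data=long, targets="subject",
                         raters="method", ratings="volume")
print(icc[icc["Type"] == "ICC2"][["ICC", "CI95%"]])
```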

Coarse for Fine: Bounding Box Supervised Thyroid Ultrasound Image Segmentation Using Spatial Arrangement and Hierarchical Prediction Consistency.

Chi J, Lin G, Li Z, Zhang W, Chen JH, Huang Y

PubMed · Jun 1 2025
Weakly-supervised learning methods have become increasingly attractive for medical image segmentation, but they depend heavily on quantifying the pixel-wise affinities of low-level features, which are easily corrupted in thyroid ultrasound images; segmentation then over-fits to weakly annotated regions without precise delineation of target boundaries. We propose a dual-branch weakly-supervised learning framework that optimizes the backbone segmentation network by calibrating semantic features into a rational spatial distribution under the indirect, coarse guidance of the bounding box mask. Specifically, in the spatial arrangement consistency branch, the maximum activations sampled from the preliminary segmentation prediction and the bounding box mask along the horizontal and vertical dimensions are compared to measure the rationality of the approximate target localization. In the hierarchical prediction consistency branch, target and background prototypes are encapsulated from the semantic features under the combined guidance of the preliminary segmentation prediction and the bounding box mask. The secondary segmentation prediction induced from the prototypes is compared with the preliminary prediction to quantify the rationality of the elaborated target and background semantic feature perception. Experiments on three thyroid datasets show that our model outperforms existing weakly-supervised methods for thyroid gland and nodule segmentation and is comparable to fully-supervised methods while reducing annotation time. The proposed method provides a weakly-supervised segmentation strategy that simultaneously considers the target's location and the rationality of the target and background semantic feature distribution, and it can improve the applicability of deep learning-based segmentation in clinical practice.
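As we read the spatial arrangement consistency branch, the prediction and the box mask are each projected onto the image axes by taking maxima, and the resulting profiles are compared. A PyTorch sketch of that idea (our paraphrase, not the authors' released code):

```python
# Sketch of a projection-consistency loss: max-pool the predicted foreground
# probability and the box mask along each axis, then penalize disagreement.
import torch
import torch.nn.functional as F

def projection_consistency_loss(pred: torch.Tensor, box_mask: torch.Tensor) -> torch.Tensor:
    """pred, box_mask: (B, 1, H, W); pred in [0, 1], box_mask binary."""
    pred_v, box_v = pred.amax(dim=3), box_mask.amax(dim=3)  # max over width  -> vertical profile
    pred_h, box_h = pred.amax(dim=2), box_mask.amax(dim=2)  # max over height -> horizontal profile
    return F.binary_cross_entropy(pred_v, box_v) + F.binary_cross_entropy(pred_h, box_h)

pred = torch.rand(2, 1, 64, 64)
box = torch.zeros(2, 1, 64, 64)
box[:, :, 16:48, 20:44] = 1.0       # a coarse bounding-box mask
print(projection_consistency_loss(pred, box))
```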

Comparison of Sarcopenia Assessment in Liver Transplant Recipients by Computed Tomography Freehand Region-of-Interest versus an Automated Deep Learning System.

Miller W, Fate K, Fisher J, Thul J, Ko Y, Kim KW, Pruett T, Teigen L

PubMed · Jun 1 2025
Sarcopenia, the loss of muscle quality and quantity, has been associated with poor clinical outcomes in liver transplantation such as infection, increased length of stay, and increased patient mortality. Abdominal computed tomography (CT) scans are used to measure core musculature as a measure of sarcopenia. Information on core body musculature can be extracted either with freehand region-of-interest (ROI) tools or with machine learning algorithms that quantify total body muscle within a given area. This study directly compares these two collection methods, leveraging length of stay (LOS) outcomes previously found to be associated with freehand ROI measurements. A total of 50 individuals who underwent liver transplantation at our single center between January 1, 2016, and May 30, 2021, and had a non-contrast abdominal CT scan within 6 months of surgery were included. CT-derived skeletal muscle measures at the third lumbar vertebra (L3) were obtained using freehand ROI and an automated deep learning system. The freehand psoas muscle measures, psoas area index (PAI) and mean Hounsfield units (mHU), were significantly correlated with the automated deep learning system's total skeletal muscle measures at L3, skeletal muscle index (SMI) and skeletal muscle density (SMD), respectively (R² = 0.4221, p < 0.0001; R² = 0.6297, p < 0.0001). The automated deep learning system's SMI predicted ~20% of the variability in hospital length of stay (R² = 0.2013), while PAI predicted only about 10% of the variability in total healthcare length of stay (R² = 0.0919). In contrast, both the freehand ROI mHU and the automated system's muscle density variables were associated with ~20% of the variability in inpatient length of stay (R² = 0.2383 and 0.1810, respectively) and in total healthcare length of stay (R² = 0.2190 and 0.1947, respectively). Sarcopenia measurements represent an important risk stratification tool for liver transplantation outcomes. For the association of sarcopenia assessment with LOS, freehand measures perform similarly to automated deep learning system measurements.
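The "variability explained" figures are R² values from regressions of LOS on each muscle measure. A minimal illustrative sketch (synthetic data; the variable names and the linear-model form are ours):

```python
# Illustrative sketch: the kind of simple fit behind the reported R^2 values,
# regressing length of stay on a single muscle measure.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
smi = rng.normal(50, 10, 50)                   # skeletal muscle index (synthetic)
los = 8 - 0.1 * smi + rng.normal(0, 2, 50)     # hospital length of stay, days

X = smi.reshape(-1, 1)
model = LinearRegression().fit(X, los)
print(f"R^2 = {model.score(X, los):.3f}")      # share of LOS variability explained
```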

A Large Convolutional Neural Network for Clinical Target and Multi-organ Segmentation in Gynecologic Brachytherapy with Multi-stage Learning

Mingzhe Hu, Yuan Gao, Yuheng Li, Richard LJ Qiu, Chih-Wei Chang, Keyur D. Shah, Priyanka Kapoor, Beth Bradshaw, Yuan Shao, Justin Roper, Jill Remick, Zhen Tian, Xiaofeng Yang

arXiv preprint · Jun 1 2025
Purpose: Accurate segmentation of clinical target volumes (CTV) and organs-at-risk is crucial for optimizing gynecologic brachytherapy (GYN-BT) treatment planning. However, anatomical variability, low soft-tissue contrast in CT imaging, and limited annotated datasets pose significant challenges. This study presents GynBTNet, a novel multi-stage learning framework designed to enhance segmentation performance through self-supervised pretraining and hierarchical fine-tuning strategies. Methods: GynBTNet employs a three-stage training strategy: (1) self-supervised pretraining on large-scale CT datasets using sparse submanifold convolution to capture robust anatomical representations, (2) supervised fine-tuning on a comprehensive multi-organ segmentation dataset to refine feature extraction, and (3) task-specific fine-tuning on a dedicated GYN-BT dataset to optimize segmentation performance for clinical applications. The model was evaluated against state-of-the-art methods using the Dice Similarity Coefficient (DSC), 95th percentile Hausdorff Distance (HD95), and Average Surface Distance (ASD). Results: Our GynBTNet achieved superior segmentation performance, significantly outperforming nnU-Net and Swin-UNETR. Notably, it yielded a DSC of 0.837 ± 0.068 for CTV, 0.940 ± 0.052 for the bladder, 0.842 ± 0.070 for the rectum, and 0.871 ± 0.047 for the uterus, with reduced HD95 and ASD compared to baseline models. Self-supervised pretraining led to consistent performance improvements, particularly for structures with complex boundaries. However, segmentation of the sigmoid colon remained challenging, likely due to anatomical ambiguities and inter-patient variability. Statistical significance analysis confirmed that GynBTNet's improvements were significant compared to baseline models.
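The three reported metrics are standard overlap and surface-distance measures; the Dice Similarity Coefficient in particular is easy to state in code. A generic sketch (not taken from the paper's codebase):

```python
# Minimal sketch of the Dice Similarity Coefficient on binary masks.
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8) -> float:
    """pred, gt: boolean masks of equal shape."""
    inter = np.logical_and(pred, gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)

pred = np.zeros((64, 64), dtype=bool)
pred[10:40, 10:40] = True
gt = np.zeros((64, 64), dtype=bool)
gt[15:45, 12:42] = True
print(f"DSC = {dice(pred, gt):.3f}")
```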

Modality Translation and Registration of MR and Ultrasound Images Using Diffusion Models

Xudong Ma, Nantheera Anantrasirichai, Stefanos Bolomytis, Alin Achim

arXiv preprint · Jun 1 2025
Multimodal MR-US registration is critical for prostate cancer diagnosis. However, this task remains challenging due to significant modality discrepancies. Existing methods often fail to align critical boundaries while being overly sensitive to irrelevant details. To address this, we propose an anatomically coherent modality translation (ACMT) network based on a hierarchical feature disentanglement design. We leverage shallow-layer features for texture consistency and deep-layer features for boundary preservation. Unlike conventional modality translation methods that convert one modality into another, our ACMT introduces the customized design of an intermediate pseudo modality. Both MR and US images are translated toward this intermediate domain, effectively addressing the bottlenecks faced by traditional translation methods in the downstream registration task. Experiments demonstrate that our method mitigates modality-specific discrepancies while preserving crucial anatomical boundaries for accurate registration. Quantitative evaluations show superior modality similarity compared to state-of-the-art modality translation methods. Furthermore, downstream registration experiments confirm that our translated images achieve the best alignment performance, highlighting the robustness of our framework for multi-modal prostate image registration.
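A rough sketch of the hierarchical feature disentanglement idea as described, with shallow features constraining texture and deep features constraining boundaries between the two images translated toward the intermediate pseudo modality; the tiny encoder and loss terms below are our illustration under those assumptions, not the ACMT implementation:

```python
# Our paraphrase in code: compare shallow (texture) and deep (boundary)
# features of the MR- and US-derived pseudo-modality images.
import torch
import torch.nn as nn

class TinyEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.shallow = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
        self.deep = nn.Sequential(
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())

    def forward(self, x):
        s = self.shallow(x)
        return s, self.deep(s)

enc = TinyEncoder()
mr_translated = torch.rand(1, 1, 128, 128)   # MR mapped toward the pseudo modality
us_translated = torch.rand(1, 1, 128, 128)   # US mapped toward the pseudo modality

s_mr, d_mr = enc(mr_translated)
s_us, d_us = enc(us_translated)
texture_loss = nn.functional.l1_loss(s_mr, s_us)    # shallow: texture consistency
boundary_loss = nn.functional.l1_loss(d_mr, d_us)   # deep: boundary preservation
print(texture_loss.item(), boundary_loss.item())
```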

Aiding Medical Diagnosis through Image Synthesis and Classification

Kanishk Choudhary

arXiv preprint · Jun 1 2025
Medical professionals, especially those in training, often depend on visual reference materials to support accurate diagnosis and develop pattern recognition skills. However, existing resources may lack the diversity and accessibility needed for broad and effective clinical learning. This paper presents a system designed to generate realistic medical images from textual descriptions and validate their accuracy through a classification model. A pretrained stable diffusion model was fine-tuned using Low-Rank Adaptation (LoRA) on the PathMNIST dataset, consisting of nine colorectal histopathology tissue types. The generative model was trained multiple times using different training parameter configurations, guided by domain-specific prompts to capture meaningful features. To ensure quality control, a ResNet-18 classification model was trained on the same dataset, achieving 99.76% accuracy in detecting the correct label of a colorectal histopathological image. Generated images were then filtered using the trained classifier in an iterative process, where inaccurate outputs were discarded and regenerated until they were correctly classified. The highest-performing version of the generative model achieved an F1 score of 0.6727, with precision and recall of 0.6817 and 0.7111, respectively. Some tissue types, such as adipose tissue and lymphocytes, reached perfect classification scores, while others proved more challenging due to structural complexity. The self-validating approach demonstrates a reliable method for synthesizing domain-specific medical images, given the high accuracy of both the generation and classification components of the system, with potential applications in diagnostic support and clinical education. Future work includes improving prompt-specific accuracy and extending the system to other areas of medical imaging.
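The filtering loop is straightforward to express: sample, classify, and keep only images whose predicted label matches the prompt. A hedged sketch in which generate and classify are placeholders standing in for the LoRA-fine-tuned diffusion pipeline and the ResNet-18:

```python
# Sketch of the self-validating loop. `generate` and `classify` are
# placeholders, not the paper's models.
import random

TISSUES = ["adipose", "lymphocytes", "mucus"]  # subset of PathMNIST classes

def generate(label: str):
    return {"label_hint": label}               # placeholder for a generated image

def classify(image) -> str:
    return random.choice(TISSUES)              # placeholder for the classifier

def sample_validated(label: str, max_tries: int = 10):
    for _ in range(max_tries):
        img = generate(label)
        if classify(img) == label:             # keep only agreeing samples
            return img
    return None                                # give up after max_tries

print(sample_validated("adipose") is not None)
```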

Deep learning based on ultrasound images predicting cervical lymph node metastasis in postoperative patients with differentiated thyroid carcinoma.

Fan F, Li F, Wang Y, Liu T, Wang K, Xi X, Wang B

PubMed · Jun 1 2025
To develop a deep learning (DL) model based on ultrasound (US) images of lymph nodes for predicting cervical lymph node metastasis (CLNM) in postoperative patients with differentiated thyroid carcinoma (DTC). We retrospectively collected 352 lymph nodes with cytopathology findings from 330 patients between June 2021 and December 2023 at our institution. The database was randomly divided into training and test cohorts at an 8:2 ratio. Basic DL models were constructed from longitudinal and cross-sectional US images of lymph nodes, respectively, based on ResNet50, and the outputs of the two basic models were fused (1:1) to construct a longitudinal + cross-sectional DL model. Univariate and multivariate analyses were used to assess US features and construct a conventional US model. Subsequently, a combined model was constructed by integrating DL and US. The diagnostic accuracy of the longitudinal + cross-sectional DL model was higher than that of either view alone. The area under the curve (AUC) of the combined model (US + DL) was 0.855 (95% CI, 0.767-0.942), and the accuracy, sensitivity, and specificity were 0.786 (95% CI, 0.671-0.875), 0.972 (95% CI, 0.855-0.999), and 0.588 (95% CI, 0.407-0.754), respectively. Compared with the US and DL models, the integrated discrimination improvement and net reclassification improvement of the combined model were both positive. This preliminary study shows that a DL model based on US images of lymph nodes has high diagnostic efficacy for predicting CLNM in postoperative patients with DTC, and the combined US + DL model is superior to conventional US or DL alone for predicting CLNM in this population. We innovatively used DL on lymph node US images to predict the status of cervical lymph nodes in postoperative patients with DTC.
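The described 1:1 fusion amounts to averaging the two view-specific models' class probabilities. A minimal sketch using generic torchvision ResNet50 backbones (untrained here, standing in for the paper's trained models):

```python
# Sketch of 1:1 fusion: average the softmax outputs of the longitudinal-view
# and cross-sectional-view ResNet50 models.
import torch
from torchvision.models import resnet50

long_model = resnet50(num_classes=2)      # longitudinal-view model
cross_model = resnet50(num_classes=2)     # cross-sectional-view model
long_img = torch.rand(1, 3, 224, 224)     # longitudinal US view (3-ch as expected)
cross_img = torch.rand(1, 3, 224, 224)    # cross-sectional US view

with torch.no_grad():
    p_long = long_model(long_img).softmax(dim=1)
    p_cross = cross_model(cross_img).softmax(dim=1)
p_fused = 0.5 * p_long + 0.5 * p_cross    # 1:1 fusion of class probabilities
print(p_fused)
```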

MRI and CT radiomics for the diagnosis of acute pancreatitis.

Tartari C, Porões F, Schmidt S, Abler D, Vetterli T, Depeursinge A, Dromain C, Violi NV, Jreige M

PubMed · Jun 1 2025
To evaluate the single and combined diagnostic performances of CT and MRI radiomics for the diagnosis of acute pancreatitis (AP). We prospectively enrolled 78 patients (mean age 55.7 ± 17 years, 48.7% male) diagnosed with AP between 2020 and 2022. Patients underwent contrast-enhanced CT (CECT) within 48-72 h of symptom onset and MRI ≤ 24 h after CECT. The entire pancreas was manually segmented tridimensionally by two operators on portal venous phase (PVP) CECT images, the T2-weighted imaging (WI) MR sequence, and the non-enhanced and PVP T1-WI MR sequences. A matched control group (n = 77) with normal pancreas was used. The dataset was randomly split into training and test sets, and various machine learning algorithms were compared. Receiver operating characteristic curve analysis was performed. The T2WI model exhibited significantly better diagnostic performance than CECT and the non-enhanced and venous T1WI models, with sensitivity, specificity, and AUC of 73.3% (95% CI: 71.5-74.7), 80.1% (78.2-83.2), and 0.834 (0.819-0.844) for T2WI (p = 0.001); 74.4% (71.5-76.4), 58.7% (56.3-61.1), and 0.654 (0.630-0.677) for non-enhanced T1WI; 62.1% (60.1-64.2), 78.7% (77.1-81), and 0.787 (0.771-0.810) for venous T1WI; and 66.4% (64.8-50.9), 48.4% (46-50.9), and 0.610 (0.586-0.626) for CECT, respectively. The combination of T2WI with CECT enhanced diagnostic performance compared to T2WI alone, achieving sensitivity, specificity, and AUC of 81.4% (80-80.3), 78.1% (75.9-80.2), and 0.911 (0.902-0.920) (p = 0.001). The MRI radiomics model outperformed the CT radiomics model for the diagnosis of AP, and the combination of MRI with CECT performed better than the single models. Translating radiomics into clinical practice may improve detection of AP, particularly with MRI radiomics.
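A radiomics pipeline of this kind typically begins by extracting quantitative features from the segmented pancreas before any model training. A minimal sketch with pyradiomics; the file names and enabled feature classes are illustrative, not the study's actual protocol:

```python
# Minimal sketch: radiomic feature extraction from a segmented pancreas.
# Paths below are hypothetical placeholders.
import SimpleITK as sitk
from radiomics import featureextractor

extractor = featureextractor.RadiomicsFeatureExtractor()
extractor.disableAllFeatures()
extractor.enableFeatureClassByName("firstorder")   # intensity statistics
extractor.enableFeatureClassByName("glcm")         # texture features

image = sitk.ReadImage("t2wi_volume.nii.gz")       # hypothetical T2WI volume
mask = sitk.ReadImage("pancreas_mask.nii.gz")      # hypothetical segmentation
features = extractor.execute(image, mask)
print(len(features), "features extracted")
```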