Page 55 of 100991 results

Accurate and Efficient Fetal Birth Weight Estimation from 3D Ultrasound

Jian Wang, Qiongying Ni, Hongkui Yu, Ruixuan Yao, Jinqiao Ying, Bin Zhang, Xingyi Yang, Jin Peng, Jiongquan Chen, Junxuan Yu, Wenlong Shi, Chaoyu Chen, Zhongnuo Yan, Mingyuan Luo, Gaocheng Cai, Dong Ni, Jing Lu, Xin Yang

arXiv preprint · Jul 1, 2025
Accurate fetal birth weight (FBW) estimation is essential for optimizing delivery decisions and reducing perinatal mortality. However, clinical methods for FBW estimation are inefficient, operator-dependent, and challenging to apply in cases of complex fetal anatomy. Existing deep learning methods are based on 2D standard ultrasound (US) images or videos that lack spatial information, limiting their prediction accuracy. In this study, we propose the first method for directly estimating FBW from 3D fetal US volumes. Our approach integrates a multi-scale feature fusion network (MFFN) and a synthetic sample-based learning framework (SSLF). The MFFN effectively extracts and fuses multi-scale features under sparse supervision by incorporating channel attention, spatial attention, and a ranking-based loss function. SSLF generates synthetic samples by simply combining fetal head and abdomen data from different fetuses, utilizing semi-supervised learning to improve prediction performance. Experimental results demonstrate that our method achieves superior performance, with a mean absolute error of $166.4\pm155.9$ g and a mean absolute percentage error of $5.1\pm4.6$%, outperforming existing methods and approaching the accuracy of a senior doctor. Code is available at: https://github.com/Qioy-i/EFW.
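The abstract gives no implementation details for the attention blocks, so the following is a minimal, hypothetical sketch of a channel-plus-spatial attention module over 3D US feature maps, in the spirit of the MFFN description; the class name, reduction ratio, and kernel size are illustrative assumptions, not the authors' configuration.

```python
# Hypothetical channel + spatial attention over 3D feature maps (PyTorch).
import torch
import torch.nn as nn

class ChannelSpatialAttention3D(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        # Channel attention: squeeze spatial dims, excite per-channel weights.
        self.channel_mlp = nn.Sequential(
            nn.AdaptiveAvgPool3d(1),
            nn.Conv3d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        # Spatial attention: collapse channels, weight each voxel.
        self.spatial = nn.Sequential(
            nn.Conv3d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.channel_mlp(x)                       # reweight channels
        pooled = torch.cat([x.mean(dim=1, keepdim=True),  # avg over channels
                            x.amax(dim=1, keepdim=True)], # max over channels
                           dim=1)
        return x * self.spatial(pooled)                   # reweight voxels
```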

CT-free attenuation and Monte Carlo-based scatter correction-guided quantitative ⁹⁰Y-SPECT imaging for improved dose calculation using deep learning.

Mansouri Z, Salimi Y, Wolf NB, Mainta I, Zaidi H

PubMed · Jul 1, 2025
This work aimed to develop deep learning (DL) models for CT-free attenuation and Monte Carlo-based scatter correction (AC, SC) in quantitative ⁹⁰Y SPECT imaging for improved dose calculation. Data from 190 patients who underwent ⁹⁰Y selective internal radiation therapy (SIRT) with glass microspheres were studied. Voxel-level dosimetry was performed on uncorrected and corrected SPECT images using the local energy deposition method. Three deep learning models were trained individually for AC, SC, and joint ASC using a modified 3D shifted-window UNet Transformer (Swin UNETR) architecture, with uncorrected dose maps as inputs and corrected dose maps as reference. The data were split into a training set (~80%) and an unseen test set (~20%), and training was conducted in a five-fold cross-validation scheme. Model performance was thoroughly evaluated by comparing organ- and voxel-level dosimetry results between the reference and DL-generated dose maps on the unseen test set. The voxel- and organ-level evaluations also included gamma analysis with three different distance-to-agreement (DTA, mm) and dose-difference (DD, %) criteria to explore suitable criteria for SIRT dosimetry using SPECT. For the AC task, the voxel-level quantitative metrics (mean ± SD) were: mean error (ME): -0.026 ± 0.06 Gy, structural similarity index (SSIM): 99.5 ± 0.25%, and peak signal-to-noise ratio (PSNR): 47.28 ± 3.31 dB. For the SC task these values were -0.014 ± 0.05 Gy, 99.88 ± 0.099%, and 55.9 ± 4 dB, and for the ASC task -0.04 ± 0.06 Gy, 99.57 ± 0.33%, and 47.97 ± 3.6 dB, respectively. Voxel-level gamma pass rates under three criteria, namely "DTA: 4.79 mm, DD: 1%", "DTA: 10 mm, DD: 5%", and "DTA: 15 mm, DD: 10%", were around 98%. The mean absolute errors (MAE, Gy) for tumor and whole normal liver were 7.22 ± 5.9 and 1.09 ± 0.86 for AC, 8 ± 9.3 and 0.9 ± 0.8 for SC, and 11.8 ± 12.02 and 1.3 ± 0.98 for ASC, respectively. We developed models for three clinical scenarios, namely AC, SC, and ASC, using patient-specific Monte Carlo scatter-corrected and CT-based attenuation-corrected images as reference. After training with a larger dataset, these task-specific models could perform the essential corrections where CT images are either unavailable or unreliable due to misalignment.
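For readers unfamiliar with gamma analysis, the pass-rate evaluation described above can be reproduced in outline with the open-source pymedphys package (the abstract does not name its toolkit). In this minimal sketch, the dose maps, voxel size, and noise level are placeholders; only the three DD/DTA criteria come from the abstract.

```python
# Voxel-level gamma pass rates between a reference and a DL-generated dose map.
import numpy as np
import pymedphys

voxel_mm = 2.0                                      # assumed isotropic voxel size
shape = (32, 32, 32)
axes = tuple(np.arange(n) * voxel_mm for n in shape)

rng = np.random.default_rng(0)
reference = rng.random(shape) * 50                  # placeholder reference dose (Gy)
predicted = reference + rng.normal(0, 0.5, shape)   # placeholder predicted dose (Gy)

for dd, dta in [(1, 4.79), (5, 10.0), (10, 15.0)]:  # criteria from the abstract
    gamma = pymedphys.gamma(axes, reference, axes, predicted,
                            dose_percent_threshold=dd,
                            distance_mm_threshold=dta)
    valid = ~np.isnan(gamma)                        # voxels above the dose cutoff
    print(f"DD {dd}% / DTA {dta} mm: "
          f"pass rate {100 * np.mean(gamma[valid] <= 1):.1f}%")
```

A voxel passes when its gamma index is at most 1, i.e. the predicted dose agrees with the reference within the DD tolerance somewhere inside the DTA radius.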

Artificial intelligence enhances diagnostic accuracy of contrast enemas in Hirschsprung disease compared to clinical experts.

Vargova P, Varga M, Izquierdo Hernandez B, Gutierrez Alonso C, Gonzalez Esgueda A, Cobos Hernandez MV, Fernandez R, González-Ruiz Y, Bragagnini Rodriguez P, Del Peral Samaniego M, Corona Bellostas C

PubMed · Jul 1, 2025
Introduction: Contrast enema (CE) is widely used in the evaluation of suspected Hirschsprung disease (HD). Deep learning is a promising tool to standardize image assessment and support clinical decision-making. This study assesses the diagnostic performance of a deep neural network (DNN), with and without clinical data, and compares its interpretation with that of pediatric surgeons and radiologists. Materials and Methods: In this retrospective study, 1471 contrast enema images from patients <15 years of age were analysed, with 218 images used for testing. A deep neural network, pediatric radiologists, and surgeons independently reviewed the testing set, with and without clinical data. Diagnostic performance was assessed using ROC and PR curves, and interobserver agreement was evaluated using Fleiss' kappa. Results: The deep neural network achieved high diagnostic accuracy (AUC-ROC = 0.87) in contrast enema interpretation, with improved performance when combining anteroposterior and lateral images (AUC-ROC = 0.92). Clinical data integration further enhanced model sensitivity and negative predictive value. The super-surgeon (majority voting of colorectal surgeons) outperformed most individual clinicians (sensitivity 81.8%, specificity 79.1%), while the super-radiologist (majority voting of radiologists) showed moderate accuracy. Interobserver analysis revealed strong agreement between the model and surgeons (Cohen's kappa = 0.73) and overall consistency among experts and the model (Fleiss' kappa = 0.62). Conclusions: AI-assisted CE interpretation achieved higher specificity than, and comparable sensitivity to, the clinicians. Its consistent performance and substantial agreement with experts support its potential role in improving CE assessment in HD.
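As a concrete illustration of the agreement statistics reported above, the sketch below builds a majority-vote "super-surgeon" and computes Cohen's and Fleiss' kappa with scikit-learn and statsmodels; the rating arrays are random placeholders, not study data, and the number of surgeon readers is an assumption.

```python
# Majority voting and interobserver agreement metrics.
import numpy as np
from sklearn.metrics import cohen_kappa_score
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

rng = np.random.default_rng(0)
n_cases = 218
model = rng.integers(0, 2, n_cases)          # DNN predictions (0/1), placeholder
surgeons = rng.integers(0, 2, (3, n_cases))  # three surgeon readers, placeholder

# Majority vote across surgeons -> one "super-surgeon" label per case.
super_surgeon = (surgeons.sum(axis=0) >= 2).astype(int)

# Pairwise agreement between the model and the pooled surgeons.
print("Cohen's kappa (model vs. super-surgeon):",
      cohen_kappa_score(model, super_surgeon))

# Overall agreement across all raters (model + individual surgeons).
ratings = np.vstack([model, surgeons]).T     # shape: cases x raters
table, _ = aggregate_raters(ratings)
print("Fleiss' kappa (all raters):", fleiss_kappa(table))
```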

Enhancing ultrasonographic detection of hepatocellular carcinoma with artificial intelligence: current applications, challenges and future directions.

Wongsuwan J, Tubtawee T, Nirattisaikul S, Danpanichkul P, Cheungpasitporn W, Chaichulee S, Kaewdech A

PubMed · Jul 1, 2025
Hepatocellular carcinoma (HCC) remains a leading cause of cancer-related mortality worldwide, with early detection playing a crucial role in improving survival rates. Artificial intelligence (AI), particularly in medical image analysis, has emerged as a potential tool for HCC diagnosis and surveillance. Recent advancements in deep learning-driven medical imaging have demonstrated significant potential in enhancing early HCC detection, particularly in ultrasound (US)-based surveillance. This review provides a comprehensive analysis of the current landscape, challenges, and future directions of AI in HCC surveillance, with a specific focus on its application in US imaging. Additionally, it explores AI's transformative potential in clinical practice and its implications for improving patient outcomes. We examine various AI models developed for HCC diagnosis, highlighting their strengths and limitations, with a particular emphasis on deep learning approaches. Among these, convolutional neural networks have shown notable success in detecting and characterising different focal liver lesions on B-mode US, often outperforming conventional radiological assessments. Despite these advancements, several challenges hinder AI integration into clinical practice, including data heterogeneity, a lack of standardisation, concerns regarding model interpretability, regulatory constraints, and barriers to real-world clinical adoption. Addressing these issues necessitates the development of large, diverse, and high-quality data sets to enhance the robustness and generalisability of AI models. Emerging trends in AI for HCC surveillance, such as multimodal integration, explainable AI, and real-time diagnostics, offer promising advancements. These innovations have the potential to significantly improve the accuracy, efficiency, and clinical applicability of AI-driven HCC surveillance, ultimately contributing to enhanced patient outcomes.

Ultrasound-based classification of follicular thyroid cancer using deep convolutional neural networks with transfer learning.

Agyekum EA, Yuzhi Z, Fang Y, Agyekum DN, Wang X, Issaka E, Li C, Shen X, Qian X, Wu X

PubMed · Jul 1, 2025
This study aimed to develop and validate convolutional neural network (CNN) models for distinguishing follicular thyroid carcinoma (FTC) from follicular thyroid adenoma (FTA), and compared their performance with the American College of Radiology Thyroid Imaging Reporting and Data System (ACR-TIRADS) and Chinese Thyroid Imaging Reporting and Data System (C-TIRADS) ultrasound-based malignancy risk stratification systems. A total of 327 eligible patients with FTC or FTA who underwent preoperative thyroid ultrasound examination were retrospectively enrolled between August 2017 and August 2024. Patients were randomly assigned to a training cohort (n = 263) and a test cohort (n = 64) in an 8:2 ratio using stratified sampling. Five CNN models pre-trained on ImageNet (VGG16, ResNet101, MobileNetV2, ResNet152, and ResNet50) were developed and tested to distinguish FTC from FTA. The CNN models exhibited good performance, yielding areas under the receiver operating characteristic curve (AUC) ranging from 0.64 to 0.77. The ResNet152 model demonstrated the highest AUC (0.77; 95% CI, 0.67-0.87) for distinguishing between FTC and FTA. Decision curve and calibration curve analyses demonstrated the models' favorable clinical value and calibration. Furthermore, the models developed in this study outperformed both the C-TIRADS and ACR-TIRADS systems. These findings can potentially guide appropriate management of FTC in patients with follicular neoplasms.
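A minimal sketch of this transfer-learning setup, assuming torchvision and scikit-learn, is shown below: an ImageNet-pretrained ResNet152 with its 1000-way head replaced for the binary FTC-vs-FTA task, plus a stratified 8:2 split. The label array is a random placeholder, and the training loop is omitted.

```python
# Transfer learning: pretrained backbone + new binary head + stratified split.
import numpy as np
import torch.nn as nn
from sklearn.model_selection import train_test_split
from torchvision import models

labels = np.random.randint(0, 2, 327)  # placeholder: 0 = FTA, 1 = FTC
train_idx, test_idx = train_test_split(
    np.arange(len(labels)), test_size=0.2, stratify=labels, random_state=42)

# ImageNet-pretrained ResNet152; swap the classifier for a 2-class head.
net = models.resnet152(weights=models.ResNet152_Weights.IMAGENET1K_V1)
net.fc = nn.Linear(net.fc.in_features, 2)
```

Stratified sampling keeps the FTC/FTA ratio identical in both cohorts, which matters when the two classes are imbalanced.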

Deep learning model for grading carcinoma with Gini-based feature selection and linear production-inspired feature fusion.

Kundu S, Mukhopadhyay S, Talukdar R, Kaplun D, Voznesensky A, Sarkar R

PubMed · Jul 1, 2025
The most common types of kidney and liver cancer are renal cell carcinoma (RCC) and hepatocellular carcinoma (HCC), respectively. Accurate grading of these carcinomas is essential for determining the most appropriate treatment strategy, including surgery or pharmacological intervention. Traditional deep learning methods often struggle with the intricate and complex patterns seen in histopathology images of RCC and HCC, leading to inaccurate classification. To enhance grading accuracy for liver and renal cell carcinoma, this research introduces a novel feature selection and fusion framework inspired by economic theories, incorporating attention mechanisms into three Convolutional Neural Network (CNN) architectures (MobileNetV2, DenseNet121, and InceptionV3) as foundational models. The attention mechanisms dynamically identify crucial image regions, leveraging each CNN's unique strengths. Additionally, a Gini-based feature selection method is implemented to prioritize the most discriminative features, and the extracted features from each network are optimally combined using a fusion technique modeled after a linear production function, maximizing each model's contribution to the final prediction. Experimental evaluations demonstrate that the proposed approach outperforms existing state-of-the-art models, achieving high accuracies of 93.04% for RCC and 98.24% for HCC grading. This underscores the method's robustness and effectiveness in accurately grading these types of cancer. The code is publicly available at https://github.com/GHOSTCALL983/GRADE-CLASSIFICATION.
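One plausible reading of the selection-and-fusion pipeline (an assumption, since the abstract gives no formulas): rank each backbone's features by Gini importance, keep the top k, and combine the backbones with a weighted linear, production-function-style sum. Everything in the sketch (feature arrays, k, and the fusion weights) is an illustrative placeholder.

```python
# Gini-based feature selection + linear production-style fusion (sketch).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
y = rng.integers(0, 4, 200)  # placeholder grade labels
# Placeholder deep features from the three backbones.
feats = {name: rng.normal(size=(200, 256))
         for name in ("mobilenet", "densenet", "inception")}

def gini_select(X, y, k=64):
    """Keep the k features with the highest Gini importance."""
    rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    top = np.argsort(rf.feature_importances_)[-k:]
    return X[:, top]

selected = {name: gini_select(X, y) for name, X in feats.items()}

# Linear production-style fusion: output = sum_i w_i * f_i, where the
# weights reflect each backbone's contribution (fixed here for illustration;
# they could equally be learned).
weights = {"mobilenet": 0.3, "densenet": 0.4, "inception": 0.3}
fused = sum(weights[n] * selected[n] for n in selected)  # 200 x 64 fused matrix
```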

Personalized prediction model generated with machine learning for kidney function one year after living kidney donation.

Oki R, Hirai T, Iwadoh K, Kijima Y, Hashimoto H, Nishimura Y, Banno T, Unagami K, Omoto K, Shimizu T, Hoshino J, Takagi T, Ishida H, Hirai T

PubMed · Jul 1, 2025
Living kidney donors typically experience an approximately 30% reduction in kidney function after donation, although the degree of reduction varies among individuals. This study aimed to develop a machine learning (ML) model to predict serum creatinine (Cre) levels at one year post-donation using preoperative clinical data, including kidney, fat, and muscle volumetry values from computed tomography. A total of 204 living kidney donors were included. Symbolic regression via genetic programming was employed to create an ML-based Cre prediction model from preoperative clinical variables, and validation was conducted using a 7:3 training-to-test data split. The ML model achieved a median absolute error of 0.079 mg/dL for predicting Cre. In the validation cohort, it outperformed the conventional method (which assumes post-donation eGFR to be 70% of the preoperative value), with a higher R² (0.58 vs. 0.27), lower root mean squared error (5.27 vs. 6.89), and lower mean absolute error (3.92 vs. 5.8). Key predictive variables included preoperative Cre and remnant kidney volume. The model was deployed as a web application for clinical use. The ML model offers accurate predictions of post-donation kidney function and may assist in monitoring donor outcomes, enhancing personalized care after kidney donation.
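Symbolic regression via genetic programming is available off the shelf, for instance in the gplearn package (the abstract does not name its toolkit). The sketch below shows the general shape of such a model under that assumption, with synthetic placeholder data standing in for the preoperative variables.

```python
# Symbolic regression via genetic programming (sketch with gplearn).
import numpy as np
from gplearn.genetic import SymbolicRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Placeholder predictors, e.g. [preop Cre, remnant kidney volume, age].
X = rng.normal(size=(204, 3))
y = 0.7 * X[:, 0] + 0.1 * X[:, 1] + rng.normal(0, 0.05, 204)  # synthetic target

# 7:3 training-to-test split, mirroring the study design.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model = SymbolicRegressor(population_size=1000, generations=20,
                          function_set=("add", "sub", "mul", "div"),
                          random_state=0)
model.fit(X_tr, y_tr)
print(model._program)                  # the evolved closed-form expression
print("Test R^2:", model.score(X_te, y_te))
```

Unlike a black-box regressor, the evolved program is an explicit formula, which is what makes deployment as a simple web calculator straightforward.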

A superpixel-based self-attention network for uterine fibroid segmentation in high-intensity focused ultrasound guidance images.

Wen S, Zhang D, Lei Y, Yang Y

PubMed · Jul 1, 2025
Ultrasound guidance images are widely used for high-intensity focused ultrasound (HIFU) therapy; however, speckle, acoustic shadows, and signal attenuation in these images hinder interpretation by radiologists and make segmentation more difficult. To address these issues, we proposed a superpixel-based attention network, which integrates superpixels and self-attention mechanisms to automatically segment tumor regions in ultrasound guidance images. The method follows a region splitting-and-merging framework. The ultrasound guidance image is first over-segmented into superpixels; features within each superpixel are then extracted and encoded into superpixel feature matrices of uniform size. The network takes the superpixel feature matrices and their positional information as input and classifies superpixels using self-attention modules and convolutional layers. Finally, superpixels are merged based on the classification results to obtain the tumor region, achieving automatic segmentation. The method was applied to a local dataset of 140 ultrasound guidance images from uterine fibroid HIFU therapy, and its performance was quantitatively evaluated against pixel-wise segmentation networks. The proposed method achieved a mean intersection over union (IoU) of 75.95% and a mean normalized Hausdorff distance (NormHD) of 7.34%; compared with the segmentation transformer (SETR), this represents improvements of 5.52% in IoU and 1.49% in NormHD. Paired t-tests comparing IoU and NormHD between the proposed method and the comparison methods yielded p-values below 0.05 in all cases. The evaluation metrics and segmentation results indicate that the proposed method outperforms existing pixel-wise segmentation networks in segmenting tumor regions in ultrasound guidance images.
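The over-segmentation and superpixel-encoding steps can be sketched with scikit-image's SLIC. The hand-crafted feature vector here (intensity statistics plus normalized centroid as positional information) is an illustrative stand-in for the paper's encoded features, and the self-attention classifier and merging step are omitted.

```python
# Over-segment an image into superpixels and encode each as a feature vector.
import numpy as np
from skimage.segmentation import slic

image = np.random.rand(256, 256)  # placeholder grayscale ultrasound image
labels = slic(image, n_segments=200, compactness=0.1, channel_axis=None)

features = []
for sp in np.unique(labels):
    mask = labels == sp
    ys, xs = np.nonzero(mask)
    features.append([
        image[mask].mean(),            # mean intensity within the superpixel
        image[mask].std(),             # intensity spread (speckle proxy)
        ys.mean() / image.shape[0],    # normalized centroid row (position)
        xs.mean() / image.shape[1],    # normalized centroid column (position)
    ])
features = np.asarray(features)        # superpixels x features, uniform size
```

Classifying a few hundred superpixels instead of ~65k pixels is what makes a self-attention module tractable here: the attention matrix scales with the number of superpixels, not the number of pixels.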

Radiomics analysis based on dynamic contrast-enhanced MRI for predicting early recurrence after hepatectomy in hepatocellular carcinoma patients.

Wang KD, Guan MJ, Bao ZY, Shi ZJ, Tong HH, Xiao ZQ, Liang L, Liu JW, Shen GL

PubMed · Jul 1, 2025
This study aimed to develop a machine learning model based on magnetic resonance imaging (MRI) radiomics for predicting early recurrence after curative surgery in patients with hepatocellular carcinoma (HCC). A retrospective analysis was conducted on 200 patients with HCC who underwent curative hepatectomy. Patients were randomly allocated to training (n = 140) and validation (n = 60) cohorts. Preoperative arterial, portal venous, and delayed phase images were acquired. Tumor regions of interest (ROIs) were manually delineated, with an additional ROI obtained by expanding the tumor boundary by 5 mm. Radiomic features were extracted and selected using the Least Absolute Shrinkage and Selection Operator (LASSO). Multiple machine learning algorithms were employed to develop predictive models, and model performance was evaluated using receiver operating characteristic (ROC) curves, decision curve analysis, and calibration curves. The 20 most discriminative radiomic features were integrated with tumor size and satellite nodules for model development. In the validation cohort, the clinical-peritumoral radiomics model demonstrated superior predictive accuracy (AUC = 0.85, 95% CI: 0.74-0.95) compared to the clinical-intratumoral radiomics model (AUC = 0.82, 95% CI: 0.68-0.93) and the radiomics-only model (AUC = 0.82, 95% CI: 0.69-0.93). Furthermore, calibration curves and decision curve analyses indicated superior calibration and clinical benefit. The MRI-based peritumoral radiomics model demonstrates significant potential for predicting early recurrence of HCC.
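The LASSO selection step is a standard radiomics move; a minimal sketch with scikit-learn's LassoCV is shown below, assuming a standardized feature matrix. The data are random placeholders for the extracted radiomic features and recurrence labels.

```python
# LASSO-based radiomic feature selection and a per-patient radiomics score.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(140, 800))   # training cohort: radiomic features (placeholder)
y = rng.integers(0, 2, 140)       # early recurrence labels (placeholder)

X_std = StandardScaler().fit_transform(X)      # L1 penalties need scaled inputs
lasso = LassoCV(cv=5, random_state=0).fit(X_std, y)
kept = np.flatnonzero(lasso.coef_)             # features surviving L1 shrinkage
print(f"{kept.size} features selected")

# Radiomics score per patient, usable alone or alongside clinical variables.
rad_score = X_std[:, kept] @ lasso.coef_[kept]
print("Apparent AUC:", roc_auc_score(y, rad_score))
```

The nonzero coefficients define the radiomic signature; features shrunk exactly to zero are discarded, which is how the pipeline gets from ~hundreds of features down to the 20 most discriminative ones.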

Muscle-driven prognostication in gastric cancer: a multicenter deep learning framework integrating iliopsoas and erector spinae radiomics for 5-year survival prediction.

Hong Y, Zhang P, Teng Z, Cheng K, Zhang Z, Cheng Y, Cao G, Chen B

PubMed · Jul 1, 2025
This study developed a 5-year survival prediction model for gastric cancer patients by combining radiomics and deep learning, focusing on CT-based 2D and 3D features of the iliopsoas and erector spinae muscles. Retrospective data from 705 patients across two centers were analyzed, with clinical variables assessed via Cox regression and radiomic features extracted using deep learning. The 2D model outperformed the 3D approach, so features across five dimensions were fused and the combination was optimized via logistic regression. Results showed no significant association between clinical baseline characteristics and survival, but the 2D model demonstrated strong prognostic performance (AUC ≈ 0.8), with attention heatmaps emphasizing spinal muscle regions. The 3D model underperformed due to irrelevant data. The final integrated model achieved stable predictive accuracy, confirming the link between muscle mass and survival. This approach advances precision medicine by enabling personalized prognosis and exploring the feasibility of 3D imaging, offering insights for gastric cancer research.
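Fusing several feature dimensions with logistic regression might look like the following sketch; the score matrix, labels, and the semantics of the five columns are placeholders, not the study's actual features.

```python
# Late fusion of five feature dimensions via logistic regression (sketch).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 705
# One score per dimension per patient (e.g. 2D/3D iliopsoas and erector
# spinae features plus clinical variables), stacked as columns.
scores = rng.normal(size=(n, 5))
survived_5y = rng.integers(0, 2, n)  # placeholder 5-year survival labels

fusion = LogisticRegression().fit(scores, survived_5y)
proba = fusion.predict_proba(scores)[:, 1]
print("Apparent AUC:", roc_auc_score(survived_5y, proba))
```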