Page 34 of 74733 results

A Deep Learning Model Based on High-Frequency Ultrasound Images for Classification of Different Stages of Liver Fibrosis.

Zhang L, Tan Z, Li C, Mou L, Shi YL, Zhu XX, Luo Y

PubMed · Jul 1 2025
To develop a deep learning model based on high-frequency ultrasound images to classify different stages of liver fibrosis in chronic hepatitis B patients. This retrospective multicentre study included chronic hepatitis B patients who underwent both high-frequency and low-frequency liver ultrasound examinations between January 2014 and August 2024 at six hospitals. Paired images were used to train the HF-DL and LF-DL models independently. Three binary classification tasks were conducted: (1) significant fibrosis (S0-1 vs. S2-4); (2) advanced fibrosis (S0-2 vs. S3-4); (3) cirrhosis (S0-3 vs. S4). Hepatic pathological results constituted the ground truth for algorithm development and evaluation. The diagnostic value of high-frequency and low-frequency liver ultrasound images was compared across commonly used CNN architectures. The HF-DL model was compared against the LF-DL model, FIB-4, APRI, and SWE (external test set). Model calibration was plotted and clinical benefit was calculated. Subgroup analyses were conducted for patients with different characteristics (BMI, ALT, inflammation level, alcohol consumption level). The HF-DL model demonstrated consistently superior diagnostic performance across all stages of liver fibrosis compared with the LF-DL model, FIB-4, APRI, and SWE, particularly in classifying advanced fibrosis (0.93 [95% CI 0.90-0.95], 0.93 [95% CI 0.89-0.96], p < 0.01). The HF-DL model demonstrated significantly improved performance in both target patient detection and negative population exclusion. The HF-DL model based on high-frequency ultrasound images outperforms other routinely used non-invasive modalities across different stages of liver fibrosis, particularly advanced fibrosis, and may offer considerable clinical value.
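The two serum-based indices the HF-DL model was benchmarked against, FIB-4 and APRI, have standard closed-form definitions; a minimal sketch (variable names are my own, and the cut-off comment is illustrative, not from this study):

```python
import math

def fib4(age_years, ast_u_l, alt_u_l, platelets_1e9_l):
    """FIB-4 index: (age x AST) / (platelets x sqrt(ALT))."""
    return (age_years * ast_u_l) / (platelets_1e9_l * math.sqrt(alt_u_l))

def apri(ast_u_l, ast_uln_u_l, platelets_1e9_l):
    """APRI: (AST / upper limit of normal) / platelets x 100."""
    return (ast_u_l / ast_uln_u_l) / platelets_1e9_l * 100

# Example: a 60-year-old with AST 80 U/L, ALT 40 U/L, platelets 150x10^9/L
print(round(fib4(60, 80, 40, 150), 2))  # 5.06
print(round(apri(80, 40, 150), 2))      # 1.33
```

Unlike the imaging models, both indices need only routine blood work, which is why they serve as the low-cost baseline in comparisons like this one.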

LUNETR: Language-Infused UNETR for precise pancreatic tumor segmentation in 3D medical image.

Shi Z, Zhang R, Wei X, Yu C, Xie H, Hu Z, Chen X, Zhang Y, Xie B, Luo Z, Peng W, Xie X, Li F, Long X, Li L, Hu L

PubMed · Jul 1 2025
The identification of early micro-lesions and adjacent blood vessels in CT scans plays a pivotal role in the clinical diagnosis of pancreatic cancer, considering its aggressive nature and high fatality rate. Despite the widespread application of deep learning methods for this task, several challenges persist: (1) the complex background environment in abdominal CT scans complicates the accurate localization of potential micro-tumors; (2) the subtle contrast between micro-lesions within pancreatic tissue and the surrounding tissues makes it challenging for models to capture these features accurately; and (3) tumors that invade adjacent blood vessels pose significant barriers to surgical procedures. To address these challenges, we propose LUNETR (Language-Infused UNETR), an advanced multimodal encoder model that combines textual and image information for precise medical image segmentation. The integration of an autoencoding language model with cross-attention enables our model to effectively leverage semantic associations between textual and image data, thereby facilitating precise localization of potential pancreatic micro-tumors. Additionally, we designed a Multi-scale Aggregation Attention (MSAA) module to comprehensively capture both spatial and channel characteristics of global multi-scale image data, enhancing the model's capacity to extract features from micro-lesions embedded within pancreatic tissue. Furthermore, to facilitate precise segmentation of pancreatic tumors and nearby blood vessels and to address the scarcity of multimodal medical datasets, we collaborated with Zhuzhou Central Hospital to construct a multimodal dataset comprising CT images and corresponding pathology reports from 135 pancreatic cancer patients. Our experimental results surpass those of current state-of-the-art models, with the incorporation of the semantic encoder improving the average Dice score for pancreatic tumor segmentation by 2.23%. On the Medical Segmentation Decathlon (MSD) liver and lung cancer datasets, our model achieved average Dice score improvements of 4.31% and 3.67%, respectively, demonstrating the efficacy of LUNETR.
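The text-to-image fusion described above rests on cross-attention, where image tokens query report tokens. A generic scaled dot-product sketch in NumPy, assuming a shared embedding dimension; this illustrates the mechanism, not the authors' implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(img_tokens, txt_tokens):
    """img_tokens: (N_img, d) act as queries; txt_tokens: (N_txt, d) as keys/values."""
    d = img_tokens.shape[-1]
    scores = img_tokens @ txt_tokens.T / np.sqrt(d)  # (N_img, N_txt) similarities
    weights = softmax(scores, axis=-1)               # each image token attends over text
    return weights @ txt_tokens                      # text-conditioned image features

rng = np.random.default_rng(0)
img = rng.normal(size=(16, 32))  # 16 image patch tokens
txt = rng.normal(size=(8, 32))   # 8 report tokens
out = cross_attention(img, txt)
print(out.shape)  # (16, 32)
```

In a full model the queries, keys, and values would each pass through learned linear projections first; the attention arithmetic is unchanged.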

Preoperative discrimination of absence or presence of myometrial invasion in endometrial cancer with an MRI-based multimodal deep learning radiomics model.

Chen Y, Ruan X, Wang X, Li P, Chen Y, Feng B, Wen X, Sun J, Zheng C, Zou Y, Liang B, Li M, Long W, Shen Y

PubMed · Jul 1 2025
Accurate preoperative evaluation of myometrial invasion (MI) is essential for treatment decisions in endometrial cancer (EC). However, the diagnostic accuracy of commonly utilized magnetic resonance imaging (MRI) techniques for this assessment varies considerably. This study aims to enhance preoperative discrimination of the absence or presence of MI by developing and validating an MRI-based multimodal deep learning radiomics (MDLR) model. Between March 2010 and February 2023, 1139 EC patients (age 54.771 ± 8.465 years; range 24-89 years) from five independent centers were enrolled retrospectively. We used ResNet18 to extract multi-scale deep learning features from T2-weighted imaging, followed by feature selection via the Mann-Whitney U test. A Deep Learning Signature (DLS) was then formulated using an Integrated Sparse Bayesian Extreme Learning Machine. Furthermore, we developed a Clinical Model (CM) based on clinical characteristics and an MDLR model that integrates clinical characteristics with the DLS. The area under the curve (AUC) was used to evaluate the diagnostic performance of the models. Decision curve analysis (DCA) and the integrated discrimination index (IDI) were used to assess clinical benefit and compare the predictive performance of the models. The MDLR model, comprising age, histopathologic grade, subjective MR findings (TMD and reading for MI status), and the DLS, demonstrated the best predictive performance. The AUC values for the MDLR in the training set, internal validation set, external validation set 1, and external validation set 2 were 0.899 (95% CI, 0.866-0.926), 0.874 (95% CI, 0.829-0.912), 0.862 (95% CI, 0.817-0.899), and 0.867 (95% CI, 0.806-0.914), respectively. The IDI and DCA showed higher diagnostic performance and greater clinical net benefit for the MDLR than for the CM or DLS, indicating that the MDLR may enhance decision-making support.
The MDLR model, which incorporated clinical characteristics and the DLS, improved preoperative accuracy in discriminating the absence or presence of MI. This improvement may facilitate individualized treatment decision-making for EC.
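The Mann-Whitney U screening step used here reduces to a per-feature rank-sum computation between the MI-absent and MI-present groups. A minimal NumPy sketch of the U statistic (tie handling simplified; the full test also converts U to a p-value):

```python
import numpy as np

def mann_whitney_u(x, y):
    """Two-sided U statistic for one feature across two groups."""
    combined = np.concatenate([x, y]).astype(float)
    order = combined.argsort()
    ranks = np.empty(len(combined))
    ranks[order] = np.arange(1, len(combined) + 1)
    for v in np.unique(combined):        # average ranks over ties
        tie = combined == v
        ranks[tie] = ranks[tie].mean()
    r1 = ranks[: len(x)].sum()
    u1 = r1 - len(x) * (len(x) + 1) / 2  # U for group x
    return min(u1, len(x) * len(y) - u1)

# Fully separated groups give U = 0, the strongest possible group difference
print(mann_whitney_u(np.array([1., 2., 3.]), np.array([4., 5., 6.])))  # 0.0
```

Features whose U statistic (or derived p-value) indicates a group difference survive the screen; the rest are dropped before the signature is built.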

Multi-machine learning model based on radiomics features to predict prognosis of muscle-invasive bladder cancer.

Wang B, Gong Z, Su P, Zhen G, Zeng T, Ye Y

PubMed · Jul 1 2025
This study aims to construct a survival prognosis prediction model for muscle-invasive bladder cancer based on CT imaging features. A total of 91 patients with muscle-invasive bladder cancer were sourced from the TCGA and TCIA datasets and divided into a training group (64 cases) and a validation group (27 cases). Additionally, 54 patients with muscle-invasive bladder cancer were retrospectively collected from our hospital to serve as an external test group; their enhanced CT imaging data were analyzed and processed to identify the most relevant radiomic features. Five distinct machine learning methods were employed to develop the optimal radiomics model, which was then combined with clinical data to create a nomogram model aimed at accurately predicting the overall survival (OS) of patients with muscle-invasive bladder cancer. The model's performance was assessed using the ROC curve, calibration curve, decision curve, and Kaplan-Meier (KM) analysis. Eight radiomic features were identified for modeling analysis. Among the models evaluated, the Gradient Boosting Machine (GBM) performed best in predicting OS: the 2-year AUCs were 0.859 (95% CI: 0.767-0.952) for the training group, 0.850 (95% CI: 0.705-0.995) for the validation group, and 0.700 (95% CI: 0.520-0.880) for the external test group; the 3-year AUCs were 0.809 (95% CI: 0.704-0.913), 0.895 (95% CI: 0.768-1.000), and 0.730 (95% CI: 0.569-0.891), respectively.
The nomogram model incorporating clinical data achieved superior results: the AUCs for predicting 2-year OS were 0.913 (95% CI: 0.83-0.98) for the training group, 0.86 (95% CI: 0.78-0.96) for the validation group, and 0.778 (95% CI: 0.69-0.94) for the external test group; for 3-year OS, the AUCs were 0.837 (95% CI: 0.83-0.98), 0.982 (95% CI: 0.84-1.0), and 0.785 (95% CI: 0.75-0.96), respectively. The calibration curve demonstrated excellent calibration, while the decision curve and KM analysis indicated substantial clinical utility. The GBM model, based on radiomic features from enhanced CT imaging, holds significant potential for predicting the prognosis of patients with muscle-invasive bladder cancer. Furthermore, the combined model, which incorporates clinical features, demonstrates enhanced performance and is beneficial for clinical decision-making.
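The KM analysis used to evaluate risk groups is the standard product-limit estimator; a minimal sketch assuming distinct event times (real survival libraries also group tied events):

```python
def kaplan_meier(times, events):
    """Product-limit survival estimate.
    times: follow-up times; events: 1 = death observed, 0 = censored.
    Returns [(time, S(t))] at each observed event time."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    s = 1.0
    curve = []
    for i in order:
        if events[i] == 1:
            s *= 1 - 1 / at_risk  # multiply by conditional survival at this event
            curve.append((times[i], s))
        at_risk -= 1              # subject leaves the risk set either way
    return curve

# Three patients: deaths at t=1 and t=2, censoring at t=3
print(kaplan_meier([1, 2, 3], [1, 1, 0]))  # [(1, 0.666...), (2, 0.333...)]
```

Plotting the curves for high- vs low-risk groups (split on the model's predicted score) and comparing them with a log-rank test is the usual way such a prognostic model is validated.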

Generative Artificial Intelligence in Prostate Cancer Imaging.

Haque F, Simon BD, Özyörük KB, Harmon SA, Türkbey B

PubMed · Jul 1 2025
Prostate cancer (PCa) is the second most common cancer in men and has a significant health and social burden, necessitating advances in early detection, prognosis, and treatment strategies. Improvement in medical imaging has significantly impacted early PCa detection, characterization, and treatment planning. However, with an increasing number of patients with PCa and comparatively fewer PCa imaging experts, interpreting large numbers of imaging data is burdensome, time-consuming, and prone to variability among experts. With the revolutionary advances of artificial intelligence (AI) in medical imaging, image interpretation tasks are becoming easier and exhibit the potential to reduce the workload on physicians. Generative AI (GenAI) is a recently popular sub-domain of AI that creates new data instances, often to resemble patterns and characteristics of the real data. This new field of AI has shown significant potential for generating synthetic medical images with diverse and clinically relevant information. In this narrative review, we discuss the basic concepts of GenAI and cover the recent application of GenAI in the PCa imaging domain. This review will help the readers understand where the PCa research community stands in terms of various medical image applications like generating multi-modal synthetic images, image quality improvement, PCa detection, classification, and digital pathology image generation. We also address the current safety concerns, limitations, and challenges of GenAI for technical and clinical adaptation, as well as the limitations of current literature, potential solutions, and future directions with GenAI for the PCa community.

Multiparametric MRI for Assessment of the Biological Invasiveness and Prognosis of Pancreatic Ductal Adenocarcinoma in the Era of Artificial Intelligence.

Zhao B, Cao B, Xia T, Zhu L, Yu Y, Lu C, Tang T, Wang Y, Ju S

PubMed · Jul 1 2025
Pancreatic ductal adenocarcinoma (PDAC) is the deadliest malignant tumor, with a grim 5-year overall survival rate of about 12%. As its incidence and mortality rates rise, it is likely to become the second-leading cause of cancer-related death. Radiological assessment determines the stage and management of PDAC. However, PDAC is a highly heterogeneous disease with a complex tumor microenvironment, and morphological evaluation alone cannot adequately reflect its biological aggressiveness and prognosis. With the rapid development of artificial intelligence (AI), multiparametric magnetic resonance imaging (mpMRI) using specific contrast media and specialized techniques can provide morphological and functional information with high image quality and has become a powerful tool for quantifying intratumoral characteristics. AI has also become widespread in medical imaging analysis. Radiomics is the high-throughput mining of quantitative image features from medical imaging, enabling data to be extracted and applied for better decision support. Deep learning is a subset of artificial neural network algorithms that can automatically learn feature representations from data. AI-enabled imaging biomarkers derived from mpMRI hold enormous promise for bridging the gap between medical imaging and personalized medicine, and demonstrate clear advantages in predicting the biological characteristics and prognosis of PDAC. However, current AI-based models of PDAC operate mainly on a single modality with relatively small sample sizes, and technical reproducibility and biological interpretation present new challenges.
In the future, the integration of multi-omics data, such as radiomics and genomics, alongside the establishment of standardized analytical frameworks will provide opportunities to increase the robustness and interpretability of AI-enabled image biomarkers and bring these biomarkers closer to clinical practice. EVIDENCE LEVEL: 3 TECHNICAL EFFICACY: Stage 4.

Artificial Intelligence in Obstetric and Gynecological MR Imaging.

Saida T, Gu W, Hoshiai S, Ishiguro T, Sakai M, Amano T, Nakahashi Y, Shikama A, Satoh T, Nakajima T

PubMed · Jul 1 2025
This review explores the significant progress and applications of artificial intelligence (AI) in obstetrics and gynecological MRI, charting its development from foundational algorithmic techniques to deep learning strategies and advanced radiomics. This review features research published over the last few years that has used AI with MRI to identify specific conditions such as uterine leiomyosarcoma, endometrial cancer, cervical cancer, ovarian tumors, and placenta accreta. In addition, it covers studies on the application of AI for segmentation and quality improvement in obstetrics and gynecology MRI. The review also outlines the existing challenges and envisions future directions for AI research in this domain. The growing accessibility of extensive datasets across various institutions and the application of multiparametric MRI are significantly enhancing the accuracy and adaptability of AI. This progress has the potential to enable more accurate and efficient diagnosis, offering opportunities for personalized medicine in the field of obstetrics and gynecology.

Intraindividual Comparison of Image Quality Between Low-Dose and Ultra-Low-Dose Abdominal CT With Deep Learning Reconstruction and Standard-Dose Abdominal CT Using Dual-Split Scan.

Lee TY, Yoon JH, Park JY, Park SH, Kim H, Lee CM, Choi Y, Lee JM

PubMed · Jul 1 2025
The aim of this study was to intraindividually compare the conspicuity of focal liver lesions (FLLs) between low- and ultra-low-dose computed tomography (CT) with deep learning reconstruction (DLR) and standard-dose CT with model-based iterative reconstruction (MBIR) from a single CT using dual-split scan in patients with suspected liver metastasis via a noninferiority design. This prospective study enrolled participants who met the eligibility criteria at 2 tertiary hospitals in South Korea from June 2022 to January 2023. The criteria included (a) being aged between 20 and 85 years and (b) having suspected or known liver metastases. Dual-source CT scans were conducted, with the standard radiation dose divided in a 2:1 ratio between tubes A and B (67% and 33%, respectively). Voltage settings of 100/120 kVp were selected based on the participant's body mass index (<30 vs ≥30 kg/m²). For image reconstruction, MBIR was used for standard-dose (100%) images, whereas DLR was employed for both low-dose (67%) and ultra-low-dose (33%) images. Three radiologists independently evaluated FLL conspicuity, the probability of metastasis, and subjective image quality using a 5-point Likert scale, in addition to quantitative signal-to-noise and contrast-to-noise ratios. The noninferiority margins were set at -0.5 for conspicuity and -0.1 for detection. One hundred thirty-three participants (male = 58, mean body mass index = 23.0 ± 3.4 kg/m²) were included in the analysis. The low- and ultra-low-dose scans delivered lower radiation doses than the standard dose (median CT dose index volume: 3.75 and 1.87 vs 5.62 mGy, respectively, in the arterial phase; 3.89 and 1.95 vs 5.84 mGy in the portal venous phase; P < 0.001 for all).
Median FLL conspicuity was lower in the low- and ultra-low-dose scans than with the standard dose (3.0 [interquartile range, IQR: 2.0, 4.0] and 3.0 [IQR: 1.0, 4.0] vs 3.0 [IQR: 2.0, 4.0] in the arterial phase; 4.0 [IQR: 1.0, 5.0] and 3.0 [IQR: 1.0, 4.0] vs 4.0 [IQR: 2.0, 5.0] in the portal venous phase), yet within the noninferiority margin (P < 0.001 for all). FLL detection was also lower but remained within the margin (lesion detection rate: 0.772 [95% confidence interval, CI: 0.727, 0.812] and 0.754 [95% CI: 0.708, 0.795], respectively) compared with the standard dose (0.810 [95% CI: 0.770, 0.844]). Sensitivity for liver metastasis differed between the standard dose (80.6% [95% CI: 76.0, 84.5]) and the low and ultra-low doses (75.7% [95% CI: 70.2, 80.5] and 73.7% [95% CI: 68.3, 78.5], respectively; P < 0.001 for both), whereas specificity was similar (P > 0.05). Low- and ultra-low-dose CT with DLR showed noninferior FLL conspicuity and detection compared with standard-dose CT with MBIR. Caution is needed because of a potential decrease in sensitivity for metastasis (clinicaltrials.gov/NCT05324046).
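The noninferiority logic in this design (e.g., margin -0.5 for conspicuity) amounts to checking whether the lower confidence bound of the difference (reduced-dose minus standard) clears the margin. A schematic sketch with illustrative numbers, not the study's data:

```python
def noninferior(diff, se, margin, z=1.96):
    """diff: mean(test) - mean(standard); se: standard error of the difference.
    Noninferior if the lower 95% confidence bound stays above the margin."""
    lower = diff - z * se
    return lower > margin, lower

# Illustrative: conspicuity drops by 0.2 points with SE 0.1, margin -0.5
ok, lower = noninferior(diff=-0.2, se=0.1, margin=-0.5)
print(ok, round(lower, 3))  # True -0.396
```

Note the asymmetry of the design: a score can be statistically lower than the standard (as observed here) yet still count as noninferior, so long as the whole confidence interval sits above the prespecified margin.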

Accurate and Efficient Fetal Birth Weight Estimation from 3D Ultrasound

Jian Wang, Qiongying Ni, Hongkui Yu, Ruixuan Yao, Jinqiao Ying, Bin Zhang, Xingyi Yang, Jin Peng, Jiongquan Chen, Junxuan Yu, Wenlong Shi, Chaoyu Chen, Zhongnuo Yan, Mingyuan Luo, Gaocheng Cai, Dong Ni, Jing Lu, Xin Yang

arXiv preprint · Jul 1 2025
Accurate fetal birth weight (FBW) estimation is essential for optimizing delivery decisions and reducing perinatal mortality. However, clinical methods for FBW estimation are inefficient, operator-dependent, and challenging to apply in cases of complex fetal anatomy. Existing deep learning methods are based on 2D standard ultrasound (US) images or videos that lack spatial information, limiting their prediction accuracy. In this study, we propose the first method for directly estimating FBW from 3D fetal US volumes. Our approach integrates a multi-scale feature fusion network (MFFN) and a synthetic sample-based learning framework (SSLF). The MFFN effectively extracts and fuses multi-scale features under sparse supervision by incorporating channel attention, spatial attention, and a ranking-based loss function. SSLF generates synthetic samples by simply combining fetal head and abdomen data from different fetuses, utilizing semi-supervised learning to improve prediction performance. Experimental results demonstrate that our method achieves superior performance, with a mean absolute error of 166.4 ± 155.9 g and a mean absolute percentage error of 5.1 ± 4.6%, outperforming existing methods and approaching the accuracy of a senior doctor. Code is available at: https://github.com/Qioy-i/EFW.
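The two error metrics reported above are the standard regression measures for weight estimation; a minimal sketch with made-up weights:

```python
import numpy as np

def mae_mape(pred_g, true_g):
    """Mean absolute error (grams) and mean absolute percentage error (%)."""
    pred_g, true_g = np.asarray(pred_g, float), np.asarray(true_g, float)
    abs_err = np.abs(pred_g - true_g)
    return abs_err.mean(), (abs_err / true_g * 100).mean()

# Illustrative predictions vs. actual birth weights in grams
mae, mape = mae_mape([3100, 2900, 3550], [3000, 3000, 3500])
print(round(mae, 1), round(mape, 2))  # 83.3 2.7
```

MAPE is the more clinically interpretable of the two here, since the same absolute error matters more for a small fetus than a large one.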
