
Multi-scale fusion semantic enhancement network for medical image segmentation.

Zhang Z, Xu C, Li Z, Chen Y, Nie C

pubmed · Jul 2, 2025
The application of sophisticated computer vision techniques to medical image segmentation (MIS) plays a vital role in clinical diagnosis and treatment. Although Transformer-based models are effective at capturing global context, they often struggle to model local feature dependencies. To address this problem, we design a Multi-scale Fusion and Semantic Enhancement Network (MFSE-Net) for endoscopic image segmentation, which aims to capture global information while enhancing local detail. MFSE-Net uses a dual-encoder architecture, with PVTv2 as the primary encoder to capture global features and a CNN as the secondary encoder to capture local details. The primary encoder includes an LGDA (Large-kernel Grouped Deformable Attention) module for filtering noise and enhancing the semantic extraction of the four hierarchical features. The auxiliary encoder leverages an MLCF (Multi-Layered Cross-attention Fusion) module to integrate high-level semantic information from the deep CNN layers with fine spatial details from the shallow layers, improving boundary delineation and positioning. On the decoder side, we introduce a PSE (Parallel Semantic Enhancement) module, which embeds the boundary and position information of the secondary encoder into the output features of the backbone network. In the multi-scale decoding process, we also add a SAM (Scale Aware Module) to recover global semantic information and compensate for lost boundary detail. Extensive experiments show that MFSE-Net decisively outperforms state-of-the-art methods on renal tumor and polyp datasets.
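
To make the cross-attention fusion idea concrete, here is a minimal PyTorch sketch of injecting local (CNN-branch) detail into global (transformer-branch) feature maps. The module name, dimensions, and wiring are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    """Fuse a global feature map (queries) with a local feature map (keys/values)."""
    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, global_feat, local_feat):
        # (B, C, H, W) feature maps -> (B, H*W, C) token sequences
        B, C, H, W = global_feat.shape
        q = global_feat.flatten(2).transpose(1, 2)   # queries from the global branch
        kv = local_feat.flatten(2).transpose(1, 2)   # keys/values from the CNN branch
        fused, _ = self.attn(q, kv, kv)              # inject local detail into global tokens
        fused = self.norm(fused + q)                 # residual connection
        return fused.transpose(1, 2).reshape(B, C, H, W)

# usage: fuse matching-shape stage outputs from the two encoders
fusion = CrossAttentionFusion(dim=64)
out = fusion(torch.randn(2, 64, 16, 16), torch.randn(2, 64, 16, 16))  # (2, 64, 16, 16)
```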

Clinical validation of AI assisted animal ultrasound models for diagnosis of early liver trauma.

Song Q, He X, Wang Y, Gao H, Tan L, Ma J, Kang L, Han P, Luo Y, Wang K

pubmed · Jul 2, 2025
The study aimed to develop an AI-assisted ultrasound model for early liver trauma identification, using data from Bama miniature pigs and patients in Beijing, China. A deep learning model was created and fine-tuned with animal and clinical data, achieving high accuracy metrics. In internal tests, the model outperformed both junior and senior sonographers. External tests confirmed the model's effectiveness, with a Dice Similarity Coefficient of 0.74, True Positive Rate of 0.80, Positive Predictive Value of 0.74, and 95% Hausdorff distance of 14.84. In this setting, the model's performance was comparable to junior sonographers and slightly lower than senior sonographers. This AI model shows promise for liver injury detection, offering a valuable tool with diagnostic capabilities similar to those of less experienced human operators.
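
For reference, the three overlap metrics quoted above can be computed from binary segmentation masks as follows; this is a generic sketch (mask preparation and any thresholding are assumptions), not the study's evaluation code.

```python
import numpy as np

def dice_tpr_ppv(pred: np.ndarray, gt: np.ndarray):
    """Dice Similarity Coefficient, True Positive Rate, and Positive Predictive
    Value between a predicted and a ground-truth binary mask."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    dice = 2.0 * tp / (pred.sum() + gt.sum())
    tpr = tp / gt.sum()     # sensitivity / recall
    ppv = tp / pred.sum()   # precision
    return dice, tpr, ppv

# e.g. dice_tpr_ppv(model_mask, expert_mask) would yield the abstract's
# (0.74, 0.80, 0.74) triple on the external test masks
```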

Automated grading of rectocele with an MRI radiomics model.

Lai W, Wang S, Li J, Qi R, Zhao Z, Wang M

pubmed · Jul 2, 2025
To develop an automated grading model for rectocele (RC) based on radiomics and evaluate its efficacy. This study retrospectively analyzed a total of 9,392 magnetic resonance imaging (MRI) images obtained from 222 patients who underwent dynamic magnetic resonance defecography (DMRD) between August 2021 and June 2023. The focus was specifically on the defecation-phase images of the DMRD, as this phase provides critical information for assessing RC. To develop and evaluate the model, the MRI images from all patients were randomly divided into two groups: 70% of the data were allocated to the training cohort to build the model, and the remaining 30% were reserved as a test cohort to evaluate its performance. First, the severity of RC was assessed using the RC MRI grading criteria by two independent radiologists. To extract and select radiomic features, two additional radiologists independently delineated the regions of interest (ROIs). The extracted radiomics features were then reduced in dimension to retain only the most relevant ones, and a machine learning model was developed using a Support Vector Machine (SVM). Finally, the receiver operating characteristic (ROC) curve and area under the curve (AUC) were used to evaluate the classification efficiency of the model. The AUC (macro/micro) of the model using defecation-phase images was 0.794/0.824, and the overall accuracy was 0.754. The radiomics model built on DMRD defecation-phase images is well suited for grading RC and helping clinicians diagnose and treat the disease.
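
A minimal scikit-learn sketch of the pipeline described above (70/30 split, SVM classifier, macro/micro AUC). The feature matrix, the three-grade label scheme, and the preprocessing choices are placeholders and assumptions, not the study's code.

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler, label_binarize
from sklearn.svm import SVC

# Placeholder data standing in for the radiomic feature matrix and RC grades.
rng = np.random.default_rng(0)
X = rng.normal(size=(222, 50))
y = rng.integers(0, 3, size=222)

# 70/30 split, SVM classifier -- mirroring the abstract's setup.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42, stratify=y)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
clf.fit(X_tr, y_tr)
proba = clf.predict_proba(X_te)

# Macro/micro AUC over one-vs-rest binarized grades.
y_bin = label_binarize(y_te, classes=[0, 1, 2])
print("macro AUC:", roc_auc_score(y_bin, proba, average="macro"))
print("micro AUC:", roc_auc_score(y_bin, proba, average="micro"))
```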

A Multi-Centric Anthropomorphic 3D CT Phantom-Based Benchmark Dataset for Harmonization

Mohammadreza Amirian, Michael Bach, Oscar Jimenez-del-Toro, Christoph Aberle, Roger Schaer, Vincent Andrearczyk, Jean-Félix Maestrati, Maria Martin Asiain, Kyriakos Flouris, Markus Obmann, Clarisse Dromain, Benoît Dufour, Pierre-Alexandre Alois Poletti, Hendrik von Tengg-Kobligk, Rolf Hügli, Martin Kretzschmar, Hatem Alkadhi, Ender Konukoglu, Henning Müller, Bram Stieltjes, Adrien Depeursinge

arxiv preprint · Jul 2, 2025
Artificial intelligence (AI) has introduced numerous opportunities for human assistance and task automation in medicine. However, it suffers from poor generalization in the presence of shifts in the data distribution. In the context of AI-based computed tomography (CT) analysis, significant data distribution shifts can be caused by changes in scanner manufacturer, reconstruction technique, or dose. AI harmonization techniques can address this problem by reducing distribution shifts caused by various acquisition settings. This paper presents an open-source benchmark dataset containing CT scans of an anthropomorphic phantom acquired with various scanners and settings, whose purpose is to foster the development of AI harmonization techniques. Using a phantom eliminates variation attributable to inter- and intra-patient differences. The dataset includes 1378 image series acquired with 13 scanners from 4 manufacturers across 8 institutions, using a harmonized protocol as well as several acquisition doses. Additionally, we present a methodology, baseline results, and open-source code to assess image- and feature-level stability and liver tissue classification, promoting the development of AI harmonization strategies.
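
As an illustration of what a feature-level stability check on such a phantom dataset might look like (the paper's actual methodology is in its open-source code; the table and feature name below are hypothetical):

```python
import pandas as pd

# Hypothetical table: one radiomic feature measured on repeat scans per scanner.
df = pd.DataFrame({
    "scanner": ["A", "A", "B", "B", "C", "C"],
    "firstorder_Mean": [42.1, 41.8, 55.3, 54.9, 47.0, 46.5],
})

# Across-scanner coefficient of variation: lower values indicate a feature that
# is more stable (better harmonized) across acquisition settings.
per_scanner = df.groupby("scanner")["firstorder_Mean"].mean()
cv = per_scanner.std() / per_scanner.mean()
print(f"across-scanner CV of firstorder_Mean: {cv:.3f}")
```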

Urethra contours on MRI: multidisciplinary consensus educational atlas and reference standard for artificial intelligence benchmarking

Song, Y., Nguyen, L., Dornisch, A., Baxter, M. T., Barrett, T., Dale, A., Dess, R. T., Harisinghani, M., Kamran, S. C., Liss, M. A., Margolis, D. J., Weinberg, E. P., Woolen, S. A., Seibert, T. M.

medrxiv preprint · Jul 2, 2025
Introduction: The urethra is a recommended avoidance structure for prostate cancer treatment. However, even subspecialist physicians often struggle to accurately identify the urethra on available imaging. Automated segmentation tools show promise, but a lack of reliable ground truth or appropriate evaluation standards has hindered validation and clinical adoption. This study aims to establish a reference-standard dataset with expert consensus contours, define clinically meaningful evaluation metrics, and assess the performance and generalizability of a deep-learning-based segmentation model. Materials and Methods: A multidisciplinary panel of four experienced subspecialists in prostate MRI generated consensus contours of the male urethra for 71 patients across six imaging centers. Four of those cases were previously used in an international study (PURE-MRI), wherein 62 physicians attempted to contour the prostate and urethra on the patient images. Separately, we developed a deep-learning AI model for urethra segmentation using another 151 cases from one center and evaluated it against the consensus reference standard, comparing it to human performance using Dice Score, percent urethra Coverage, and Maximum 2D (axial, in-plane) Hausdorff Distance (HD) from the reference standard. Results: In the PURE-MRI dataset, the AI model outperformed most physicians, achieving a median Dice of 0.41 (vs. 0.33 for physicians), Coverage of 81% (vs. 36%), and Max 2D HD of 1.8 mm (vs. 1.6 mm). In the larger dataset, performance remained consistent, with a Dice of 0.40, Coverage of 89%, and Max 2D HD of 2.0 mm, indicating strong generalizability across a broader patient population and more varied imaging conditions. Conclusion: We established a multidisciplinary consensus benchmark for segmentation of the urethra. The deep-learning model performs comparably to specialist physicians and demonstrates consistent results across multiple institutions. It shows promise as a clinical decision-support tool for accurate and reliable urethra segmentation in prostate cancer radiotherapy planning and studies of dose-toxicity associations.
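
The Max 2D (axial) Hausdorff distance metric can be sketched with SciPy as below; slice handling and spacing conventions are assumptions rather than the study's exact implementation.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def max_2d_hausdorff(pred: np.ndarray, gt: np.ndarray, spacing=(1.0, 1.0)):
    """Maximum over axial slices of the symmetric 2D Hausdorff distance between
    two binary masks of shape (slices, rows, cols)."""
    spacing = np.asarray(spacing)
    worst = 0.0
    for z in range(pred.shape[0]):
        p = np.argwhere(pred[z]) * spacing   # in-plane voxel coordinates in mm
        g = np.argwhere(gt[z]) * spacing
        if len(p) == 0 or len(g) == 0:
            continue                         # skip slices lacking both contours
        hd = max(directed_hausdorff(p, g)[0], directed_hausdorff(g, p)[0])
        worst = max(worst, hd)
    return worst
```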

PanTS: The Pancreatic Tumor Segmentation Dataset

Wenxuan Li, Xinze Zhou, Qi Chen, Tianyu Lin, Pedro R. A. S. Bassi, Szymon Plotka, Jaroslaw B. Cwikla, Xiaoxi Chen, Chen Ye, Zheren Zhu, Kai Ding, Heng Li, Kang Wang, Yang Yang, Yucheng Tang, Daguang Xu, Alan L. Yuille, Zongwei Zhou

arxiv preprint · Jul 2, 2025
PanTS is a large-scale, multi-institutional dataset curated to advance research in pancreatic CT analysis. It contains 36,390 CT scans from 145 medical centers, with expert-validated, voxel-wise annotations of over 993,000 anatomical structures, covering pancreatic tumors; the pancreas head, body, and tail; and 24 surrounding anatomical structures such as vascular/skeletal structures and abdominal/thoracic organs. Each scan includes metadata such as patient age, sex, diagnosis, contrast phase, in-plane spacing, and slice thickness. AI models trained on PanTS achieve significantly better performance in pancreatic tumor detection, localization, and segmentation than those trained on existing public datasets. Our analysis indicates that these gains are directly attributable to the 16x larger scale of tumor annotations and indirectly supported by the 24 additional surrounding anatomical structures. As the largest and most comprehensive resource of its kind, PanTS offers a new benchmark for developing and evaluating AI models in pancreatic CT analysis.

Heterogeneity Habitats-Derived Radiomics of Gd-EOB-DTPA-Enhanced MRI for Predicting Proliferation of Hepatocellular Carcinoma.

Sun S, Yu Y, Xiao S, He Q, Jiang Z, Fan Y

pubmed · Jul 2, 2025
To construct and validate the optimal model for preoperative prediction of proliferative HCC based on habitat-derived radiomics features of Gd-EOB-DTPA-enhanced MRI. A total of 187 patients who underwent Gd-EOB-DTPA-enhanced MRI before curative partial hepatectomy were divided into a training cohort (n=130; 50 proliferative and 80 nonproliferative HCC) and a validation cohort (n=57; 25 proliferative and 32 nonproliferative HCC). Habitat subregions were generated with Gaussian Mixture Model (GMM) clustering of all tumor pixels to identify similar subregions within the tumor. Radiomic features were extracted from each tumor subregion in the arterial phase (AP) and hepatobiliary phase (HBP). Independent-samples t-tests, Pearson correlation coefficients, and the Least Absolute Shrinkage and Selection Operator (LASSO) algorithm were used to select the optimal subregion features. After feature integration and selection, machine-learning classification models were constructed using the scikit-learn library. Receiver operating characteristic (ROC) curves and the DeLong test were used to compare performance in predicting proliferative HCC among these models. The optimal number of clusters was determined to be 3 based on the silhouette coefficient. Twenty, 12, and 23 features were retained from the AP, HBP, and combined AP and HBP habitat (subregions 1, 2, 3) radiomics features, respectively, and three models were constructed with these selected feature sets. ROC analysis and the DeLong test showed that the Naive Bayes model of AP and HBP habitat radiomics (AP-HBP-Hab-Rad) achieved the best performance. Finally, a combined model using the Light Gradient Boosting Machine (LightGBM) algorithm, incorporating AP-HBP-Hab-Rad, age, and alpha-fetoprotein (AFP), was identified as the optimal model for predicting proliferative HCC. For the training and validation cohorts, its accuracy, sensitivity, specificity, and AUC were 0.923, 0.880, 0.950, 0.966 (95% CI: 0.937-0.994) and 0.825, 0.680, 0.937, 0.877 (95% CI: 0.786-0.969), respectively. In the validation cohort, the AUC of the combined model was significantly higher than that of the other models (P<0.01). A combined model including AP-HBP-Hab-Rad, serum AFP, and age, built with the LightGBM algorithm, can satisfactorily predict proliferative HCC preoperatively.
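
A compact sketch of the habitat-generation step as described (GMM clustering of intratumoral voxels, with the cluster count chosen by silhouette coefficient). The voxel features here are random stand-ins for the AP/HBP intensities.

```python
import numpy as np
from sklearn.metrics import silhouette_score
from sklearn.mixture import GaussianMixture

# Hypothetical per-voxel feature pairs (e.g., AP and HBP intensities) for one tumor.
rng = np.random.default_rng(1)
voxels = rng.normal(size=(5000, 2))

# Pick the cluster count by silhouette coefficient, then assign habitat labels.
scores = {}
for k in range(2, 6):
    labels = GaussianMixture(n_components=k, random_state=0).fit_predict(voxels)
    scores[k] = silhouette_score(voxels, labels, sample_size=2000, random_state=0)
best_k = max(scores, key=scores.get)   # the abstract reports an optimum of 3
habitats = GaussianMixture(n_components=best_k, random_state=0).fit_predict(voxels)
```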

Diagnostic performance of artificial intelligence based on contrast-enhanced computed tomography in pancreatic ductal adenocarcinoma: a systematic review and meta-analysis.

Yan G, Chen X, Wang Y

pubmed · Jul 2, 2025
This meta-analysis systematically evaluated the diagnostic performance of artificial intelligence (AI) based on contrast-enhanced computed tomography (CECT) in detecting pancreatic ductal adenocarcinoma (PDAC). Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses for Diagnostic Test Accuracy (PRISMA-DTA) guidelines, a comprehensive literature search was conducted across PubMed, Embase, and Web of Science from inception to March 2025. Bivariate random-effects models pooled sensitivity, specificity, and area under the curve (AUC). Heterogeneity was quantified via I² statistics, with subgroup analyses examining sources of variability, including AI methodologies, model architectures, sample sizes, geographic distributions, control groups, and tumor stages. Nineteen studies involving 5,986 patients in internal validation cohorts and 2,069 patients in external validation cohorts were included. AI models demonstrated robust diagnostic accuracy in internal validation, with pooled sensitivity of 0.94 (95% CI 0.89-0.96), specificity of 0.93 (95% CI 0.90-0.96), and AUC of 0.98 (95% CI 0.96-0.99). External validation revealed moderately reduced sensitivity (0.84; 95% CI 0.78-0.89) and AUC (0.94; 95% CI 0.92-0.96), while specificity remained comparable (0.93; 95% CI 0.87-0.96). Substantial heterogeneity (I² > 85%) was observed, predominantly attributed to methodological variations in AI architectures and disparities in cohort sizes. AI demonstrates excellent diagnostic performance for PDAC on CECT, achieving high sensitivity and specificity across validation scenarios. However, its efficacy varies significantly with clinical context and tumor stage. Therefore, prospective multicenter trials that utilize standardized protocols and diverse cohorts, including early-stage tumors and complex benign conditions, are essential to validate the clinical utility of AI.
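
The pooling step can be illustrated with a univariate DerSimonian-Laird random-effects model on logit-transformed proportions; note this is a simplified stand-in for the bivariate model the review actually used, and the study values below are invented.

```python
import numpy as np

def pool_logit_dl(props, ns):
    """DerSimonian-Laird random-effects pooling of proportions (e.g., per-study
    sensitivities) on the logit scale; returns the pooled estimate and I^2 (%)."""
    p = np.asarray(props, float)
    n = np.asarray(ns, float)
    y = np.log(p / (1 - p))                      # logit-transformed estimates
    v = 1 / (n * p) + 1 / (n * (1 - p))          # approximate within-study variances
    w = 1 / v
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)           # Cochran's Q
    df = len(y) - 1
    tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_star = 1 / (v + tau2)                      # random-effects weights
    y_re = np.sum(w_star * y) / np.sum(w_star)
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return 1 / (1 + np.exp(-y_re)), i2           # back-transform to a proportion

pooled_sens, i2 = pool_logit_dl([0.92, 0.95, 0.89], [200, 150, 300])
```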

Combining multi-parametric MRI radiomics features with tumor abnormal protein to construct a machine learning-based predictive model for prostate cancer.

Zhang C, Wang Z, Shang P, Zhou Y, Zhu J, Xu L, Chen Z, Yu M, Zang Y

pubmed · Jul 2, 2025
This study aims to investigate the diagnostic value of integrating multi-parametric magnetic resonance imaging (mpMRI) radiomic features with tumor abnormal protein (TAP) and clinical characteristics for diagnosing prostate cancer. A cohort of 109 patients who underwent both mpMRI and TAP assessments prior to prostate biopsy was enrolled. Radiomic features were extracted from T2-weighted imaging (T2WI) and apparent diffusion coefficient (ADC) maps. Feature selection was performed using t-tests and Least Absolute Shrinkage and Selection Operator (LASSO) regression, followed by model construction using the random forest algorithm. To further enhance accuracy and predictive performance, the study incorporated clinical factors including age, serum prostate-specific antigen (PSA) levels, and prostate volume. By integrating these clinical indicators with radiomic features, a more comprehensive and precise predictive model was developed. Finally, the model's performance was quantified by calculating accuracy, sensitivity, specificity, precision, recall, F1 score, and the area under the curve (AUC). From the T2WI, dADC (b = 100/1000 s/mm²), and dADC (b = 100/2000 s/mm²) sequences, 8, 10, and 13 radiomic features, respectively, were identified as significantly correlated with prostate cancer. Random forest models constructed from these three feature sets achieved AUCs of 0.83, 0.86, and 0.87, respectively. When all three sets were integrated into a single random forest model, an AUC of 0.84 was obtained. A random forest model constructed on TAP and clinical characteristics achieved an AUC of 0.85. Notably, combining mpMRI radiomic features with TAP and clinical characteristics, or integrating the dADC (b = 100/2000 s/mm²) sequence with TAP and clinical characteristics, improved the AUCs to 0.91 and 0.92, respectively. The proposed model, which integrates radiomic features, TAP, and clinical characteristics using machine learning, demonstrated high predictive efficiency in diagnosing prostate cancer.
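
A scikit-learn sketch of the modeling workflow described above (LASSO selection, then a random forest on retained radiomic plus clinical features); all data, dimensions, and hyperparameters below are hypothetical placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LassoCV
from sklearn.model_selection import cross_val_score

# Placeholder data standing in for radiomic features, clinical variables, and labels.
rng = np.random.default_rng(2)
X_rad = rng.normal(size=(109, 120))              # radiomic features (T2WI + ADC)
X_clin = rng.normal(size=(109, 3))               # age, PSA, prostate volume
y = rng.integers(0, 2, size=109)                 # biopsy-confirmed label

# LASSO-based selection: keep features with nonzero coefficients.
lasso = LassoCV(cv=5).fit(X_rad, y)
kept = np.flatnonzero(lasso.coef_)

# Random forest on the retained radiomic features plus clinical variables.
X = np.hstack([X_rad[:, kept], X_clin])
auc = cross_val_score(RandomForestClassifier(n_estimators=500, random_state=0),
                      X, y, cv=5, scoring="roc_auc")
print("CV AUC:", auc.mean())
```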

Multitask Deep Learning Based on Longitudinal CT Images Facilitates Prediction of Lymph Node Metastasis and Survival in Chemotherapy-Treated Gastric Cancer.

Qiu B, Zheng Y, Liu S, Song R, Wu L, Lu C, Yang X, Wang W, Liu Z, Cui Y

pubmed · Jul 2, 2025
Accurate preoperative assessment of lymph node metastasis (LNM) and overall survival (OS) status is essential for patients with locally advanced gastric cancer receiving neoadjuvant chemotherapy, providing timely guidance for clinical decision-making. However, current approaches to evaluating LNM and OS have limited accuracy. In this study, we used longitudinal CT images from 1,021 patients with locally advanced gastric cancer to develop and validate a multitask deep learning model, named co-attention tri-oriented spatial Mamba (CTSMamba), to simultaneously predict LNM and OS. CTSMamba was trained and validated on 398 patients, and its performance was further validated on 623 patients at two additional centers. Notably, CTSMamba exhibited significantly more robust performance than a clinical model in predicting LNM across all cohorts. Additionally, integrating CTSMamba survival scores with clinical predictors further improved personalized OS prediction. These results support the potential of CTSMamba to accurately predict LNM and OS from longitudinal images, providing clinicians with a tool to inform individualized treatment approaches and optimized prognostic strategies. CTSMamba is a multitask deep learning model trained on longitudinal CT images of neoadjuvant chemotherapy-treated locally advanced gastric cancer that accurately predicts lymph node metastasis and overall survival to inform clinical decision-making. This article is part of a special series: Driving Cancer Discoveries with Computational Research, Data Science, and Machine Learning/AI.
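
As a schematic of the multitask setup (joint LNM classification and survival prediction), here is a minimal PyTorch head and a Cox partial-likelihood loss. The backbone producing the features, the dimensions, and the loss weighting are assumptions; this is not the CTSMamba architecture itself.

```python
import torch
import torch.nn as nn

class MultitaskHead(nn.Module):
    """Shared-feature multitask head: LNM logits plus a survival risk score."""
    def __init__(self, feat_dim: int = 256):
        super().__init__()
        self.lnm = nn.Linear(feat_dim, 2)     # lymph node metastasis logits
        self.risk = nn.Linear(feat_dim, 1)    # overall-survival risk score

    def forward(self, feats):
        return self.lnm(feats), self.risk(feats).squeeze(-1)

def cox_ph_loss(risk, time, event):
    """Negative Cox partial log-likelihood (Breslow ties) for the survival task."""
    order = torch.argsort(time, descending=True)   # risk sets via descending time
    risk, event = risk[order], event[order]
    log_cumsum = torch.logcumsumexp(risk, dim=0)
    return -((risk - log_cumsum) * event).sum() / event.sum().clamp(min=1)

# Joint objective: cross-entropy on LNM plus a weighted Cox loss on survival.
head = MultitaskHead()
logits, risk = head(torch.randn(8, 256))
y_lnm = torch.randint(0, 2, (8,))
time, event = torch.rand(8), torch.randint(0, 2, (8,)).float()
loss = nn.functional.cross_entropy(logits, y_lnm) + 0.5 * cox_ph_loss(risk, time, event)
```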