
Urethra contours on MRI: multidisciplinary consensus educational atlas and reference standard for artificial intelligence benchmarking

Song, Y., Nguyen, L., Dornisch, A., Baxter, M. T., Barrett, T., Dale, A., Dess, R. T., Harisinghani, M., Kamran, S. C., Liss, M. A., Margolis, D. J., Weinberg, E. P., Woolen, S. A., Seibert, T. M.

medRxiv preprint · Jul 2 2025
Introduction: The urethra is a recommended avoidance structure for prostate cancer treatment. However, even subspecialist physicians often struggle to accurately identify the urethra on available imaging. Automated segmentation tools show promise, but a lack of reliable ground truth or appropriate evaluation standards has hindered validation and clinical adoption. This study aims to establish a reference-standard dataset with expert consensus contours, define clinically meaningful evaluation metrics, and assess the performance and generalizability of a deep-learning-based segmentation model.

Materials and Methods: A multidisciplinary panel of four experienced subspecialists in prostate MRI generated consensus contours of the male urethra for 71 patients across six imaging centers. Four of those cases were previously used in an international study (PURE-MRI), wherein 62 physicians attempted to contour the prostate and urethra on the patient images. Separately, we developed a deep-learning AI model for urethra segmentation using another 151 cases from one center, evaluated it against the consensus reference standard, and compared it to human performance using Dice score, percent urethra coverage, and maximum 2D (axial, in-plane) Hausdorff distance (HD) from the reference standard.

Results: In the PURE-MRI dataset, the AI model outperformed most physicians, achieving a median Dice of 0.41 (vs. 0.33 for physicians), coverage of 81% (vs. 36%), and max 2D HD of 1.8 mm (vs. 1.6 mm). In the larger dataset, performance remained consistent, with a Dice of 0.40, coverage of 89%, and max 2D HD of 2.0 mm, indicating strong generalizability across a broader patient population and more varied imaging conditions.

Conclusion: We established a multidisciplinary consensus benchmark for segmentation of the urethra. The deep-learning model performs comparably to specialist physicians and demonstrates consistent results across multiple institutions. It shows promise as a clinical decision-support tool for accurate and reliable urethra segmentation in prostate cancer radiotherapy planning and studies of dose-toxicity associations.
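
For readers who want to reproduce this kind of benchmark, below is a minimal sketch (not the authors' code) of the three reported metrics, assuming binary NumPy masks indexed (z, y, x) and a known isotropic in-plane pixel spacing:

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_score(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(pred, ref).sum()
    return 2.0 * inter / (pred.sum() + ref.sum())

def coverage(pred: np.ndarray, ref: np.ndarray) -> float:
    """Fraction of the reference urethra volume covered by the prediction."""
    return np.logical_and(pred, ref).sum() / ref.sum()

def max_2d_hausdorff(pred: np.ndarray, ref: np.ndarray, spacing_mm: float) -> float:
    """Maximum over axial slices of the symmetric in-plane Hausdorff distance (mm)."""
    worst = 0.0
    for z in range(ref.shape[0]):  # masks indexed (z, y, x)
        p, r = np.argwhere(pred[z]), np.argwhere(ref[z])
        if len(p) == 0 or len(r) == 0:
            continue  # skip slices where either contour is absent
        hd = max(directed_hausdorff(p, r)[0], directed_hausdorff(r, p)[0])
        worst = max(worst, hd * spacing_mm)
    return worst
```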

Ensemble methods and partially-supervised learning for accurate and robust automatic murine organ segmentation.

Daenen LHBA, de Bruijn J, Staut N, Verhaegen F

PubMed · Jul 2 2025
Delineation of multiple organs in murine µCT images is crucial for preclinical studies but requires manual volumetric segmentation, a tedious and time-consuming process prone to inter-observer variability. Automatic deep learning-based segmentation can improve speed and reproducibility. While 2D and 3D deep learning models have been developed for anatomical segmentation, their generalization to external datasets has not been extensively investigated. Furthermore, ensemble learning, which combines the predictions of multiple 2D models, and partially-supervised learning (PSL), which enables training on partially-labeled datasets, have not been explored for preclinical purposes. This study demonstrates the first use of PSL frameworks and the superiority of 3D models in accuracy and generalizability to external datasets. Ensemble methods performed on par with or better than the best individual 2D network, but only 3D models consistently generalized to external datasets (Dice Similarity Coefficient (DSC) > 0.8). PSL frameworks showed promising results across various datasets and organs, but their generalization to external data can be improved for some organs. This work highlights the superiority of 3D models over their 2D and ensemble counterparts in accuracy and generalizability for murine µCT image segmentation. Additionally, a promising PSL framework is presented for leveraging multiple datasets without complete annotations. Our model can increase time-efficiency and improve reproducibility in preclinical radiotherapy workflows by circumventing manual contouring bottlenecks. Moreover, the high segmentation accuracy of 3D models allows monitoring multiple organs over time using repeated µCT imaging, potentially reducing the number of mice sacrificed in studies, in line with the 3R principle, specifically Reduction and Refinement.
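
As a rough illustration of the ensemble idea described above (not the authors' pipeline), the sketch below averages per-pixel probability maps from several 2D models slice by slice before thresholding; the model objects and their predict_proba interface are hypothetical stand-ins:

```python
import numpy as np

def ensemble_segment(volume: np.ndarray, models: list, threshold: float = 0.5) -> np.ndarray:
    """Combine 2D models by mean per-pixel probability, slice by slice."""
    probs = np.zeros(volume.shape, dtype=np.float32)
    for z in range(volume.shape[0]):                        # axial slices
        slice_probs = [m.predict_proba(volume[z]) for m in models]  # each (H, W)
        probs[z] = np.mean(slice_probs, axis=0)             # average the ensemble
    return probs >= threshold                               # binary organ mask
```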

Multitask Deep Learning Based on Longitudinal CT Images Facilitates Prediction of Lymph Node Metastasis and Survival in Chemotherapy-Treated Gastric Cancer.

Qiu B, Zheng Y, Liu S, Song R, Wu L, Lu C, Yang X, Wang W, Liu Z, Cui Y

PubMed · Jul 2 2025
Accurate preoperative assessment of lymph node metastasis (LNM) and overall survival (OS) status is essential for patients with locally advanced gastric cancer receiving neoadjuvant chemotherapy, providing timely guidance for clinical decision-making. However, current approaches to evaluate LNM and OS have limited accuracy. In this study, we used longitudinal CT images from 1,021 patients with locally advanced gastric cancer to develop and validate a multitask deep learning model, named co-attention tri-oriented spatial Mamba (CTSMamba), to simultaneously predict LNM and OS. CTSMamba was trained and validated on 398 patients, and its performance was further validated on 623 patients at two additional centers. Notably, CTSMamba exhibited significantly more robust performance than a clinical model in predicting LNM across all of the cohorts. Additionally, integrating CTSMamba survival scores with clinical predictors further improved personalized OS prediction. These results support the potential of CTSMamba to accurately predict LNM and OS from longitudinal images, providing clinicians with a tool to inform individualized treatment approaches and optimized prognostic strategies. CTSMamba is a multitask deep learning model trained on longitudinal CT images of neoadjuvant chemotherapy-treated locally advanced gastric cancer that accurately predicts lymph node metastasis and overall survival to inform clinical decision-making. This article is part of a special series: Driving Cancer Discoveries with Computational Research, Data Science, and Machine Learning/AI.

Heterogeneity Habitats -Derived Radiomics of Gd-EOB-DTPA Enhanced MRI for Predicting Proliferation of Hepatocellular Carcinoma.

Sun S, Yu Y, Xiao S, He Q, Jiang Z, Fan Y

PubMed · Jul 2 2025
To construct and validate the optimal model for preoperative prediction of proliferative hepatocellular carcinoma (HCC) based on habitat-derived radiomics features of Gd-EOB-DTPA-enhanced MRI. A total of 187 patients who underwent Gd-EOB-DTPA-enhanced MRI before curative partial hepatectomy were divided into a training cohort (n=130; 50 proliferative and 80 nonproliferative HCC) and a validation cohort (n=57; 25 proliferative and 32 nonproliferative HCC). Habitat subregions were generated using Gaussian Mixture Model (GMM) clustering of all tumor pixels to identify similar subregions within the tumor. Radiomic features were extracted from each tumor subregion in the arterial phase (AP) and hepatobiliary phase (HBP). Independent-samples t-tests, Pearson correlation coefficients, and the Least Absolute Shrinkage and Selection Operator (LASSO) algorithm were used to select the optimal subregion features. After feature integration and selection, machine-learning classification models were constructed using the scikit-learn library. Receiver operating characteristic (ROC) curves and the DeLong test were used to compare performance in predicting proliferative HCC among these models. The optimal number of clusters was determined to be 3 based on the silhouette coefficient. Twenty, 12, and 23 features were retained from the AP, HBP, and combined AP and HBP habitat (subregions 1-3) radiomics features, respectively. Three models were constructed from the AP, HBP, and combined AP and HBP habitat radiomics features. ROC analysis and the DeLong test showed that the Naive Bayes model of AP and HBP habitat radiomics (AP-HBP-Hab-Rad) achieved the best performance. Finally, a combined model using the Light Gradient Boosting Machine (LightGBM) algorithm, incorporating AP-HBP-Hab-Rad, age, and alpha-fetoprotein (AFP), was identified as the optimal model for predicting proliferative HCC. For the training and validation cohorts, the accuracy, sensitivity, specificity, and AUC were 0.923, 0.880, 0.950, and 0.966 (95% CI: 0.937-0.994) and 0.825, 0.680, 0.937, and 0.877 (95% CI: 0.786-0.969), respectively. In the validation cohort, the AUC of the combined model was significantly higher than that of the other models (P<0.01). A combined model including AP-HBP-Hab-Rad, serum AFP, and age using the LightGBM algorithm can satisfactorily predict proliferative HCC preoperatively.
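
The habitat-generation step lends itself to a short sketch. Assuming tumor voxel intensities are stacked into an (n_voxels, n_features) array, the following illustrative code (not the authors') clusters voxels with a GMM and picks the cluster count by silhouette coefficient, as described above:

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.metrics import silhouette_score

def habitat_labels(voxel_features: np.ndarray, k_range=range(2, 6)):
    """Cluster tumor voxels into habitats; pick k by silhouette coefficient."""
    best = (None, -1.0, None)  # (k, score, labels)
    for k in k_range:
        labels = GaussianMixture(n_components=k, random_state=0).fit_predict(voxel_features)
        # silhouette_score scales poorly; subsample voxels for large tumors
        score = silhouette_score(voxel_features, labels)
        if score > best[1]:
            best = (k, score, labels)
    return best[0], best[2]  # chosen k and per-voxel habitat labels
```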

Automated grading of rectocele with an MRI radiomics model.

Lai W, Wang S, Li J, Qi R, Zhao Z, Wang M

PubMed · Jul 2 2025
To develop an automated grading model for rectocele (RC) based on radiomics and evaluate its efficacy. This study retrospectively analyzed 9,392 magnetic resonance imaging (MRI) images from 222 patients who underwent dynamic magnetic resonance defecography (DMRD) between August 2021 and June 2023. The focus was specifically on the defecation-phase images of the DMRD, as this phase provides critical information for assessing RC. To develop and evaluate the model, the MRI images from all patients were randomly divided into two groups: 70% of the data were allocated to the training cohort to build the model, and the remaining 30% were reserved as a test cohort to evaluate its performance. First, the severity of RC was assessed using the RC MRI grading criteria by two independent radiologists. To extract and select radiomic features, two additional radiologists independently delineated the regions of interest (ROIs). The extracted radiomic features were then dimensionality-reduced to retain only the most relevant data, and a machine learning model was developed using a Support Vector Machine (SVM). Finally, the receiver operating characteristic (ROC) curve and area under the curve (AUC) were used to evaluate the classification efficiency of the model. The AUC (macro/micro) of the model using defecation-phase images was 0.794/0.824, and the overall accuracy was 0.754. The radiomics model built using DMRD defecation-phase images is well suited for grading RC and can help clinicians diagnose and treat the disease.
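
A minimal sketch of the final modeling step, assuming selected radiomic feature matrices and integer grade labels (synthetic stand-ins are generated here so the snippet runs); this mirrors, but is not, the authors' pipeline:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import label_binarize

rng = np.random.default_rng(0)                      # synthetic stand-in data
X_train, y_train = rng.normal(size=(150, 30)), rng.integers(0, 3, 150)
X_test, y_test = rng.normal(size=(70, 30)), rng.integers(0, 3, 70)

clf = SVC(kernel="rbf", probability=True, random_state=0).fit(X_train, y_train)
proba = clf.predict_proba(X_test)

# Binarize labels so both macro- and micro-averaged AUC are well defined
y_bin = label_binarize(y_test, classes=clf.classes_)
print("AUC macro:", roc_auc_score(y_bin, proba, average="macro"))
print("AUC micro:", roc_auc_score(y_bin, proba, average="micro"))
```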

Multi-scale fusion semantic enhancement network for medical image segmentation.

Zhang Z, Xu C, Li Z, Chen Y, Nie C

PubMed · Jul 2 2025
The application of sophisticated computer vision techniques for medical image segmentation (MIS) plays a vital role in clinical diagnosis and treatment. Although Transformer-based models are effective at capturing global context, they often struggle to model local feature dependencies. To address this problem, we design a Multi-scale Fusion and Semantic Enhancement Network (MFSE-Net) for endoscopic image segmentation, which aims to capture global information while enhancing detailed information. MFSE-Net uses a dual-encoder architecture, with PVTv2 as the primary encoder to capture global features and CNNs as the secondary encoder to capture local details. The primary encoder includes the LGDA (Large-kernel Grouped Deformable Attention) module for filtering noise and enhancing the semantic extraction of the four hierarchical features. The auxiliary encoder leverages the MLCF (Multi-Layered Cross-attention Fusion) module to integrate high-level semantic data from the deep CNN with fine spatial details from the shallow layers, enhancing the precision of boundaries and positioning. On the decoder side, we introduce the PSE (Parallel Semantic Enhancement) module, which embeds the boundary and position information of the secondary encoder into the output features of the backbone network. In the multi-scale decoding process, we also add a SAM (Scale Aware Module) to recover global semantic information and compensate for the loss of boundary details. Extensive experiments show that MFSE-Net substantially outperforms state-of-the-art methods on the renal tumor and polyp datasets.
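
For orientation only, here is a heavily simplified PyTorch skeleton of the dual-encoder data flow described above; the LGDA, MLCF, PSE, and SAM modules are not reproduced, and both branches are toy stand-ins for PVTv2 and the CNN encoder:

```python
import torch
import torch.nn as nn

class DualEncoderSeg(nn.Module):
    """Toy dual-encoder segmenter: global branch + local CNN branch, fused."""
    def __init__(self, channels: int = 64, num_classes: int = 1):
        super().__init__()
        # stand-in for the PVTv2 primary encoder (global context)
        self.global_enc = nn.Sequential(nn.Conv2d(3, channels, 4, stride=4), nn.GELU())
        # stand-in for the CNN secondary encoder (local detail)
        self.local_enc = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(), nn.MaxPool2d(4))
        self.fuse = nn.Conv2d(2 * channels, channels, 1)   # naive channel fusion
        self.head = nn.Sequential(
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
            nn.Conv2d(channels, num_classes, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        g, l = self.global_enc(x), self.local_enc(x)       # both (B, C, H/4, W/4)
        return self.head(self.fuse(torch.cat([g, l], dim=1)))

mask_logits = DualEncoderSeg()(torch.randn(1, 3, 256, 256))  # -> (1, 1, 256, 256)
```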

Multi-modal models using fMRI, urine and serum biomarkers for classification and risk prognosis in diabetic kidney disease.

Shao X, Xu H, Chen L, Bai P, Sun H, Yang Q, Chen R, Lin Q, Wang L, Li Y, Lin Y, Yu P

PubMed · Jul 2 2025
Functional magnetic resonance imaging (fMRI) is a powerful tool for non-invasive evaluation of micro-changes in the kidneys. This study aims to develop classification and prognostic models based on multi-modal data. A total of 172 participants were included, and high-resolution multi-parameter fMRI was employed to obtain T2-weighted imaging (T2WI), blood oxygen level-dependent (BOLD), and diffusion tensor imaging (DTI) sequence images. Based on clinical indicators, fMRI markers, and serum and urine biomarkers (CD300LF, CST4, MMRN2, SERPINA1, l-glutamic acid dimethyl ester, and phosphatidylcholine), machine learning algorithms were applied to establish and validate classification diagnosis models (Models 1-6) and risk-prognostic models (Models A-E). Accuracy, sensitivity, specificity, precision, area under the curve (AUC), and recall were used to evaluate the predictive performance of the models. Six classification models were established. Model 5 (fMRI + clinical indicators) exhibited superior performance, with an accuracy of 0.833 (95% confidence interval [CI]: 0.653-0.944). Notably, the multi-modal model incorporating imaging, serum and urine multi-omics, and clinical indicators (Model 6) demonstrated higher predictive performance, achieving an accuracy of 0.923 (95% CI: 0.749-0.991). Furthermore, five prognostic models at 2-year and 3-year follow-up were established. Model E exhibited superior performance, achieving AUC values of 0.975 at the 2-year follow-up and 0.932 at the 3-year follow-up, and can identify patients at high prognostic risk. In clinical practice, the multi-modal models presented in this study demonstrate potential to enhance clinical decision-making regarding patient classification and prognosis prediction.
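
A hedged sketch of the multi-modal fusion idea behind Model 6: concatenating fMRI markers, serum/urine biomarker levels, and clinical indicators into one feature table before cross-validating a classifier. The paper does not specify its exact learner; logistic regression is used here purely for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def fuse_and_score(fmri_feats, biomarker_feats, clinical_feats, labels):
    """Concatenate modality feature tables and cross-validate a classifier."""
    X = np.hstack([fmri_feats, biomarker_feats, clinical_feats])
    model = LogisticRegression(max_iter=1000)
    return cross_val_score(model, X, labels, scoring="roc_auc", cv=5)
```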

Combining multi-parametric MRI radiomics features with tumor abnormal protein to construct a machine learning-based predictive model for prostate cancer.

Zhang C, Wang Z, Shang P, Zhou Y, Zhu J, Xu L, Chen Z, Yu M, Zang Y

PubMed · Jul 2 2025
This study aims to investigate the diagnostic value of integrating multi-parametric magnetic resonance imaging (mpMRI) radiomic features with tumor abnormal protein (TAP) and clinical characteristics for diagnosing prostate cancer. A cohort of 109 patients who underwent both mpMRI and TAP assessments prior to prostate biopsy was enrolled. Radiomic features were extracted from T2-weighted imaging (T2WI) and apparent diffusion coefficient (ADC) maps. Feature selection was performed using t-tests and Least Absolute Shrinkage and Selection Operator (LASSO) regression, followed by model construction using the random forest algorithm. To further enhance accuracy and predictive performance, the study incorporated clinical factors including age, serum prostate-specific antigen (PSA) levels, and prostate volume. By integrating these clinical indicators with radiomic features, a more comprehensive and precise predictive model was developed. Finally, the model's performance was quantified by calculating accuracy, sensitivity, specificity, precision, recall, F1 score, and the area under the curve (AUC). From the T2WI, dADC (b = 100/1000 s/mm²), and dADC (b = 100/2000 s/mm²) mpMRI sequences, 8, 10, and 13 radiomic features, respectively, were identified as significantly correlated with prostate cancer. Random forest models constructed from these three sets of radiomic features achieved AUCs of 0.83, 0.86, and 0.87, respectively. When all three sets of data were integrated into a single random forest model, an AUC of 0.84 was obtained. Additionally, a random forest model constructed on TAP and clinical characteristics achieved an AUC of 0.85. Notably, combining mpMRI radiomic features with TAP and clinical characteristics, or integrating the dADC (b = 100/2000 s/mm²) sequence with TAP and clinical characteristics, improved the AUCs to 0.91 and 0.92, respectively. The proposed model, which integrates radiomic features, TAP, and clinical characteristics using machine learning, demonstrated high predictive efficiency in diagnosing prostate cancer.
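
The described pipeline (LASSO selection followed by a random forest) can be sketched as follows; synthetic stand-in data are generated so the snippet runs, and this is illustrative rather than the authors' code:

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)                       # synthetic stand-in data
X, y = rng.normal(size=(109, 50)), rng.integers(0, 2, 109)

lasso = LassoCV(cv=5).fit(X, y)                      # LASSO feature selection
keep = np.flatnonzero(lasso.coef_)                   # nonzero-coefficient features
if keep.size == 0:                                   # pure noise may zero out all
    keep = np.arange(X.shape[1])

X_tr, X_te, y_tr, y_te = train_test_split(X[:, keep], y, random_state=0)
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1]))
```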

Clinical validation of AI assisted animal ultrasound models for diagnosis of early liver trauma.

Song Q, He X, Wang Y, Gao H, Tan L, Ma J, Kang L, Han P, Luo Y, Wang K

PubMed · Jul 2 2025
The study aimed to develop an AI-assisted ultrasound model for early liver trauma identification, using data from Bama miniature pigs and patients in Beijing, China. A deep learning model was created and fine-tuned with animal and clinical data, achieving high accuracy. In internal tests, the model outperformed both junior and senior sonographers. External tests showed the model's effectiveness, with a Dice Similarity Coefficient of 0.74, True Positive Rate of 0.80, Positive Predictive Value of 0.74, and 95% Hausdorff distance of 14.84. The model's performance was comparable to that of junior sonographers and slightly lower than that of senior sonographers. This AI model shows promise for liver injury detection, offering a valuable tool with diagnostic capabilities similar to those of less experienced human operators.
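
The 95% Hausdorff distance reported above can be approximated as in the sketch below; note this simple version uses all mask voxels rather than surface voxels only, and distances are in voxel units unless scaled by spacing:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def hd95(pred: np.ndarray, ref: np.ndarray) -> float:
    """95th percentile of symmetric distances between two binary masks (voxels)."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    d_to_ref = distance_transform_edt(~ref)[pred]    # pred voxels -> nearest ref
    d_to_pred = distance_transform_edt(~pred)[ref]   # ref voxels -> nearest pred
    return float(np.percentile(np.hstack([d_to_ref, d_to_pred]), 95))
```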

PanTS: The Pancreatic Tumor Segmentation Dataset

Wenxuan Li, Xinze Zhou, Qi Chen, Tianyu Lin, Pedro R. A. S. Bassi, Szymon Plotka, Jaroslaw B. Cwikla, Xiaoxi Chen, Chen Ye, Zheren Zhu, Kai Ding, Heng Li, Kang Wang, Yang Yang, Yucheng Tang, Daguang Xu, Alan L. Yuille, Zongwei Zhou

arXiv preprint · Jul 2 2025
PanTS is a large-scale, multi-institutional dataset curated to advance research in pancreatic CT analysis. It contains 36,390 CT scans from 145 medical centers, with expert-validated, voxel-wise annotations of over 993,000 anatomical structures, covering pancreatic tumors, pancreas head, body, and tail, and 24 surrounding anatomical structures such as vascular/skeletal structures and abdominal/thoracic organs. Each scan includes metadata such as patient age, sex, diagnosis, contrast phase, in-plane spacing, slice thickness, etc. AI models trained on PanTS achieve significantly better performance in pancreatic tumor detection, localization, and segmentation compared to those trained on existing public datasets. Our analysis indicates that these gains are directly attributable to the 16x larger-scale tumor annotations and indirectly supported by the 24 additional surrounding anatomical structures. As the largest and most comprehensive resource of its kind, PanTS offers a new benchmark for developing and evaluating AI models in pancreatic CT analysis.