Page 26 of 74733 results

Hybrid-View Attention for csPCa Classification in TRUS

Zetian Feng, Juan Fu, Xuebin Zou, Hongsheng Ye, Hong Wu, Jianhua Zhou, Yi Wang

arXiv preprint · Jul 4, 2025
Prostate cancer (PCa) is a leading cause of cancer-related mortality in men, and accurate identification of clinically significant PCa (csPCa) is critical for timely intervention. Transrectal ultrasound (TRUS) is widely used for prostate biopsy; however, its low contrast and anisotropic spatial resolution pose diagnostic challenges. To address these limitations, we propose a novel hybrid-view attention (HVA) network for csPCa classification in 3D TRUS that leverages complementary information from transverse and sagittal views. Our approach integrates a CNN-transformer hybrid architecture, where convolutional layers extract fine-grained local features and transformer-based HVA models global dependencies. Specifically, the HVA comprises intra-view attention to refine features within a single view and cross-view attention to incorporate complementary information across views. Furthermore, a hybrid-view adaptive fusion module dynamically aggregates features along both channel and spatial dimensions, enhancing the overall representation. Experiments are conducted on an in-house dataset containing 590 subjects who underwent prostate biopsy. Comparative and ablation results prove the efficacy of our method. The code is available at https://github.com/mock1ngbrd/HVAN.
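
As a rough illustration of the cross-view attention idea described above, the sketch below lets one view's features act as queries over the other view's features so complementary transverse/sagittal context is mixed in. The class and variable names (CrossViewAttention, transverse, sagittal) are illustrative assumptions, not the authors' released code; see the linked repository for the actual HVAN implementation.

    # Minimal cross-view attention sketch (illustrative, not the authors' code).
    import torch
    import torch.nn as nn

    class CrossViewAttention(nn.Module):
        def __init__(self, dim: int, num_heads: int = 4):
            super().__init__()
            self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
            self.norm = nn.LayerNorm(dim)

        def forward(self, query_view: torch.Tensor, other_view: torch.Tensor) -> torch.Tensor:
            # query_view, other_view: (batch, tokens, dim) flattened feature maps
            attended, _ = self.attn(query_view, other_view, other_view)
            return self.norm(query_view + attended)  # residual connection

    if __name__ == "__main__":
        transverse = torch.randn(2, 64, 128)  # e.g. 64 spatial tokens, 128 channels
        sagittal = torch.randn(2, 64, 128)
        block = CrossViewAttention(dim=128)
        fused = block(transverse, sagittal)
        print(fused.shape)  # torch.Size([2, 64, 128])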

Development of a prediction model by combining tumor diameter and clinical parameters of adrenal incidentaloma.

Iwamoto Y, Kimura T, Morimoto Y, Sugisaki T, Dan K, Iwamoto H, Sanada J, Fushimi Y, Shimoda M, Fujii T, Nakanishi S, Mune T, Kaku K, Kaneto H

PubMed · Jul 3, 2025
When adrenal incidentalomas are detected, diagnostic procedures are complicated by the need for endocrine-stimulating tests and imaging using various modalities to evaluate whether the tumor is a hormone-producing adrenal tumor. This study aimed to develop a machine-learning-based clinical model that combines computed tomography (CT) imaging and clinical parameters for adrenal tumor classification. This was a retrospective cohort study involving 162 patients who underwent hormone testing for adrenal incidentalomas at our institution. Nominal logistic regression analysis was used to identify the predictive factors for hormone-producing adrenal tumors, and three random forest classification models were developed using clinical and imaging parameters. The study included 55 patients with non-functioning adrenal tumors (NFAT), 44 with primary aldosteronism (PA), 22 with mild autonomous cortisol secretion (MACS), 18 with Cushing's syndrome (CS), and 23 with pheochromocytoma (Pheo). A random forest classification model combining the adrenal tumor diameter on CT, early morning hormone measurements, and several clinical parameters was constructed, and showed high diagnostic accuracy for PA, Pheo, and CS (area under the curve: 0.88, 0.85, and 0.80, respectively). However, sufficient diagnostic accuracy has not yet been achieved for MACS. This model provides a noninvasive and efficient tool for adrenal tumor classification, potentially reducing the need for additional hormonal stimulation tests. However, further validation studies are required to confirm the clinical utility of this method.
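
A minimal scikit-learn sketch of the kind of pipeline described: a random forest over CT tumor diameter plus clinical parameters, with one-vs-rest AUCs reported per tumor class. The feature names and data below are hypothetical placeholders, not the study's variables.

    # Illustrative random-forest classifier combining CT tumor diameter with
    # clinical parameters; features and data are synthetic placeholders.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_predict
    from sklearn.metrics import roc_auc_score
    from sklearn.preprocessing import label_binarize

    feature_names = ["tumor_diameter_mm", "age", "morning_cortisol", "morning_acth", "potassium"]
    rng = np.random.default_rng(0)
    X = rng.normal(size=(162, len(feature_names)))   # placeholder feature matrix
    y = rng.integers(0, 5, size=162)                 # 0=NFAT, 1=PA, 2=MACS, 3=CS, 4=Pheo

    clf = RandomForestClassifier(n_estimators=500, random_state=0)
    proba = cross_val_predict(clf, X, y, cv=5, method="predict_proba")

    # One-vs-rest AUC per tumor class, analogous to reporting per-class AUCs.
    y_bin = label_binarize(y, classes=[0, 1, 2, 3, 4])
    for idx, name in enumerate(["NFAT", "PA", "MACS", "CS", "Pheo"]):
        print(name, round(roc_auc_score(y_bin[:, idx], proba[:, idx]), 2))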

Interpretable and generalizable deep learning model for preoperative assessment of microvascular invasion and outcome in hepatocellular carcinoma based on MRI: a multicenter study.

Dong X, Jia X, Zhang W, Zhang J, Xu H, Xu L, Ma C, Hu H, Luo J, Zhang J, Wang Z, Ji W, Yang D, Yang Z

PubMed · Jul 3, 2025
This study aimed to develop an interpretable, domain-generalizable deep learning model for microvascular invasion (MVI) assessment in hepatocellular carcinoma (HCC). Utilizing a retrospective dataset of 546 HCC patients from five centers, we developed and validated a clinical-radiological model and deep learning models for MVI prediction. The models were developed on a dataset of 263 cases consisting of data from three centers, internally validated on a set of 66 patients, and externally tested on two independent sets. An adversarial network-based deep learning (AD-DL) model was developed to learn domain-invariant features from multiple centers within the training set. The area under the receiver operating characteristic curve (AUC) was calculated using pathological MVI status. With the best-performing model, early recurrence-free survival (ERFS) stratification was validated on the external test set by the log-rank test, and the differentially expressed genes (DEGs) associated with MVI status were examined using RNA-sequencing data from the Cancer Imaging Archive. The AD-DL model demonstrated the highest diagnostic performance and generalizability with an AUC of 0.793 in the internal test set, 0.801 in external test set 1, and 0.773 in external test set 2. The model's prediction of MVI status also demonstrated a significant correlation with ERFS (p = 0.048). DEGs associated with MVI status were primarily enriched in metabolic processes, the Wnt signaling pathway, and the epithelial-mesenchymal transition process. The AD-DL model allows preoperative MVI prediction and ERFS stratification in HCC patients, with good generalizability and biological interpretability. The adversarial network-based deep learning model predicts MVI status well in HCC patients and demonstrates good generalizability. By integrating bioinformatics analysis of the model's predictions, it achieves biological interpretability, facilitating its clinical translation. Current MVI assessment models for HCC lack interpretability and generalizability. The adversarial network-based model's performance surpassed that of the clinical-radiological and squeeze-and-excitation network-based models. Biological function analysis was employed to enhance the interpretability and clinical translatability of the adversarial network-based model.
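
Adversarial, domain-invariant training of the kind described is commonly implemented with a gradient-reversal layer feeding a domain (center) classifier. The sketch below shows that general construction and is an assumption about the technique, not the paper's AD-DL code; the stand-in encoder and head names are hypothetical.

    # Gradient-reversal sketch of adversarial, domain-invariant feature learning.
    import torch
    import torch.nn as nn

    class GradReverse(torch.autograd.Function):
        @staticmethod
        def forward(ctx, x, lambd):
            ctx.lambd = lambd
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad_output):
            # Reverse (and scale) the gradient flowing back into the encoder.
            return -ctx.lambd * grad_output, None

    class AdversarialMVIModel(nn.Module):
        def __init__(self, feat_dim=256, num_domains=3):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(512, feat_dim), nn.ReLU())  # stand-in backbone
            self.mvi_head = nn.Linear(feat_dim, 2)               # MVI present / absent
            self.domain_head = nn.Linear(feat_dim, num_domains)  # which center the scan came from

        def forward(self, x, lambd=1.0):
            feats = self.encoder(x)
            mvi_logits = self.mvi_head(feats)
            # Reversed gradients push the encoder toward center-invariant features.
            domain_logits = self.domain_head(GradReverse.apply(feats, lambd))
            return mvi_logits, domain_logits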

Fat-water MRI separation using deep complex convolution network.

Ganeshkumar M, Kandasamy D, Sharma R, Mehndiratta A

PubMed · Jul 3, 2025
Deep complex convolutional networks (DCCNs) utilize complex-valued convolutions and can process complex-valued MRI signals directly without splitting them into two real-valued magnitude and phase components. The performance of DCCN and real-valued U-Net is thoroughly investigated in the physics-informed subject-specific ad-hoc reconstruction method for fat-water separation and is compared against a widely used reference approach. A comprehensive test dataset (n = 33) was used for performance analysis. The 2012 ISMRM fat-water separation workshop dataset, containing 28 batches of multi-echo MRIs with 3-15 echoes from the abdomen, thigh, knee, and phantoms acquired with 1.5 T and 3 T scanners, was used. Additionally, multi-echo MRIs from five MAFLD patients, acquired in our clinical radiology department, were also used. The quantitative results demonstrated that DCCN produced fat-water maps with better normalized RMS error and structural similarity index relative to the reference approach than real-valued U-Nets in the ad-hoc reconstruction method for fat-water separation. The DCCN achieved an overall average SSIM of 0.847 ± 0.069 and 0.861 ± 0.078 in generating fat and water maps, respectively, whereas the U-Net achieved only 0.653 ± 0.166 and 0.729 ± 0.134. The average liver PDFF from DCCN achieved a correlation coefficient R of 0.847 with the reference approach.
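
The usual deep complex convolution construction realizes the complex product (a + ib)(w_r + i w_i) with two real-valued convolutions. The sketch below illustrates that construction under this assumption; it is not the authors' implementation, and the echo counts are placeholders.

    # Minimal complex-valued 2D convolution: real and imaginary outputs are built
    # from two real-valued convolutions, following the usual DCCN construction.
    import torch
    import torch.nn as nn

    class ComplexConv2d(nn.Module):
        def __init__(self, in_ch, out_ch, kernel_size=3, padding=1):
            super().__init__()
            self.conv_r = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding)
            self.conv_i = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding)

        def forward(self, real, imag):
            # real/imag: (batch, channels, H, W) components of the multi-echo signal
            out_real = self.conv_r(real) - self.conv_i(imag)
            out_imag = self.conv_r(imag) + self.conv_i(real)
            return out_real, out_imag

    if __name__ == "__main__":
        echoes_real = torch.randn(1, 6, 128, 128)  # e.g. 6 echoes
        echoes_imag = torch.randn(1, 6, 128, 128)
        layer = ComplexConv2d(6, 32)
        r, i = layer(echoes_real, echoes_imag)
        print(r.shape, i.shape)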

Can Whole-Thyroid-Based CT Radiomics Model Achieve the Performance of Lesion-Based Model in Predicting the Thyroid Nodules Malignancy? - A Comparative Study.

Yuan W, Wu J, Mai W, Li H, Li Z

PubMed · Jul 3, 2025
Machine learning is now extensively implemented in medical imaging for preoperative risk stratification and post-therapeutic outcome assessment, enhancing clinical decision-making. Numerous studies have focused on predicting whether thyroid nodules are benign or malignant using a nodule-based approach, which is time-consuming, inefficient, and overlooks the impact of the peritumoral region. This study aimed to evaluate the effectiveness of using the whole thyroid as the region of interest in differentiating between benign and malignant thyroid nodules and to explore the potential application value of the entire thyroid. This study enrolled 1121 patients with thyroid nodules between February 2017 and May 2023. All participants underwent contrast-enhanced CT scans prior to surgical intervention. Radiomics features were extracted from arterial phase images, and feature dimensionality reduction was performed using the Least Absolute Shrinkage and Selection Operator (LASSO) algorithm. Four machine learning models were trained on the selected features within the training cohort and subsequently evaluated on the independent validation cohort. The diagnostic performance of whole-thyroid versus nodule-based radiomics models was compared through receiver operating characteristic (ROC) curve analysis and area under the curve (AUC) metrics. The nodule-based logistic regression model achieved an AUC of 0.81 in the validation set, with sensitivity, specificity, and accuracy of 78.6%, 69.4%, and 75.6%, respectively. The whole-thyroid-based random forest model attained an AUC of 0.80, with sensitivity, specificity, and accuracy of 90.0%, 51.9%, and 80.1%, respectively. The AUC advantage ratios for the logistic regression (LR), decision tree (DT), random forest (RF), and support vector machine (SVM) models were approximately -2.47%, 0.00%, -4.76%, and -4.94%, respectively. The DeLong test showed no significant differences among the four machine learning models regarding the region of interest defined by either the thyroid primary lesion or the whole thyroid. There was no significant difference in distinguishing between benign and malignant thyroid nodules using either a nodule-based or whole-thyroid-based strategy for ROI outlining. We hypothesize that the whole-thyroid approach provides enhanced diagnostic capability for detecting papillary thyroid carcinomas (PTCs) with ill-defined margins.
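
A schematic scikit-learn version of the workflow described (LASSO-based feature selection followed by training classifiers and comparing validation AUCs), using synthetic placeholder data rather than the study's arterial-phase CT radiomics features.

    # LASSO feature selection + two of the four classifiers, compared by AUC.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LassoCV, LogisticRegression
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    X, y = make_classification(n_samples=1121, n_features=200, n_informative=15, random_state=0)
    X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)

    # LASSO keeps the features with non-zero coefficients.
    lasso = LassoCV(cv=5, random_state=0).fit(X_tr, y_tr)
    keep = np.flatnonzero(lasso.coef_)

    for name, model in [("LR", LogisticRegression(max_iter=1000)),
                        ("RF", RandomForestClassifier(n_estimators=300, random_state=0))]:
        model.fit(X_tr[:, keep], y_tr)
        auc = roc_auc_score(y_va, model.predict_proba(X_va[:, keep])[:, 1])
        print(name, round(auc, 2))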

Radiological and Biological Dictionary of Radiomics Features: Addressing Understandable AI Issues in Personalized Prostate Cancer, Dictionary Version PM1.0.

Salmanpour MR, Amiri S, Gharibi S, Shariftabrizi A, Xu Y, Weeks WB, Rahmim A, Hacihaliloglu I

PubMed · Jul 3, 2025
Artificial intelligence (AI) can advance medical diagnostics, but interpretability limits its clinical use. This work links standardized quantitative radiomics features (RFs) extracted from medical images with clinical frameworks like PI-RADS, ensuring AI models are understandable and aligned with clinical practice. We investigate the connection between visual semantic features defined in PI-RADS and associated risk factors, moving beyond abnormal imaging findings, and establishing a shared framework between medical and AI professionals by creating a standardized radiological/biological RF dictionary. Six interpretable and seven complex classifiers, combined with nine interpretable feature selection algorithms (FSA), were applied to RFs extracted from segmented lesions in T2-weighted imaging (T2WI), diffusion-weighted imaging (DWI), and apparent diffusion coefficient (ADC) multiparametric MRI sequences to predict TCIA-UCLA scores, grouped as low-risk (scores 1-3) and high-risk (scores 4-5). We then utilized the created dictionary to interpret the best predictive models. Combining sequences with FSAs including ANOVA F-test, Correlation Coefficient, and Fisher Score, and utilizing logistic regression, identified key features: the 90th percentile from T2WI (reflecting hypo-intensity related to prostate cancer risk); Variance from T2WI (lesion heterogeneity); shape metrics including Least Axis Length and Surface Area to Volume ratio from ADC (describing lesion shape and compactness); and Run Entropy from ADC (texture consistency). This approach achieved the highest average accuracy of 0.78 ± 0.01, significantly outperforming single-sequence methods (p-value < 0.05). The developed dictionary for Prostate-MRI (PM1.0) serves as a common language and fosters collaboration between clinical professionals and AI developers to advance trustworthy AI solutions that support reliable/interpretable clinical decisions.
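
A hedged sketch of one of the interpretable pipelines mentioned: ANOVA F-test feature selection feeding a logistic-regression classifier for low- vs high-risk prediction. The feature matrix here is a synthetic placeholder standing in for multiparametric MRI radiomics features.

    # Univariate (ANOVA F-test) feature selection + interpretable classifier.
    from sklearn.datasets import make_classification
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import cross_val_score

    X, y = make_classification(n_samples=200, n_features=100, n_informative=10, random_state=0)

    pipe = Pipeline([
        ("scale", StandardScaler()),
        ("select", SelectKBest(f_classif, k=10)),    # ANOVA F-test feature selection
        ("clf", LogisticRegression(max_iter=1000)),  # interpretable classifier
    ])
    scores = cross_val_score(pipe, X, y, cv=5, scoring="accuracy")
    print(scores.mean())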

MRI-based habitat, intra-, and peritumoral machine learning model for perineural invasion prediction in rectal cancer.

Zhong J, Huang T, Jiang R, Zhou Q, Wu G, Zeng Y

PubMed · Jul 3, 2025
This study aimed to analyze preoperative multimodal magnetic resonance images of patients with rectal cancer using habitat-based, intratumoral, peritumoral, and combined radiomics models for non-invasive prediction of perineural invasion (PNI) status. Data were collected from 385 pathologically confirmed rectal cancer cases across two centers. Patients from Center 1 were randomly assigned to training and internal validation groups at an 8:2 ratio; the external validation group comprised patients from Center 2. Tumors were divided into three subregions via K-means clustering. Radiomics features were extracted from intratumoral and peritumoral (3 mm beyond the tumor) regions, as well as subregions, to form a combined dataset based on T2-weighted imaging and diffusion-weighted imaging. The support vector machine algorithm was used to construct seven predictive models. Intratumoral, peritumoral, and subregion features were integrated to generate an additional model, referred to as the Total model. For each radiomics feature, its contribution to prediction outcomes was quantified using Shapley values, providing interpretable evidence to support clinical decision-making. The Total combined model outperformed other predictive models in the training, internal validation, and external validation sets (area under the curve values: 0.912, 0.882, and 0.880, respectively). The integration of intratumoral, peritumoral, and subregion features represents an effective approach for predicting PNI in rectal cancer, providing valuable guidance for rectal cancer treatment, along with enhanced clinical decision-making precision and reliability.
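
The habitat step described above can be illustrated by K-means clustering of voxel-level intensities into three subregions, with per-region radiomics features then classified by an SVM. Everything below uses placeholder data to sketch that idea and is not the study's pipeline.

    # K-means "habitat" subregions + SVM on combined radiomics features.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    # Voxel features inside one tumor mask: columns = (T2WI intensity, DWI intensity)
    voxels = rng.normal(size=(5000, 2))
    habitats = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(voxels)
    print(np.bincount(habitats))  # voxel counts per subregion

    # Downstream, per-subregion/intratumoral/peritumoral radiomics vectors
    # (placeholders here) are concatenated and classified, as in the Total model.
    X = rng.normal(size=(385, 60))    # 385 patients x combined radiomics features
    y = rng.integers(0, 2, size=385)  # PNI status
    svm = SVC(kernel="rbf", probability=True).fit(X, y)
    print(svm.predict_proba(X[:5])[:, 1])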

Multi-modal models using fMRI, urine and serum biomarkers for classification and risk prognosis in diabetic kidney disease.

Shao X, Xu H, Chen L, Bai P, Sun H, Yang Q, Chen R, Lin Q, Wang L, Li Y, Lin Y, Yu P

PubMed · Jul 2, 2025
Functional magnetic resonance imaging (fMRI) is a powerful tool for non-invasive evaluation of micro-changes in the kidneys. This study aims to develop classification and prognostic models based on multi-modal data. A total of 172 participants were included, and high-resolution multi-parameter fMRI technology was employed to obtain T2-weighted imaging (T2WI), blood oxygen level dependent (BOLD), and diffusion tensor imaging (DTI) sequence images. Based on clinical indicators, fMRI markers, and serum and urine biomarkers (CD300LF, CST4, MMRN2, SERPINA1, l-glutamic acid dimethyl ester and phosphatidylcholine), machine learning algorithms were applied to establish and validate classification diagnosis models (Models 1-6) and risk-prognostic models (Models A-E). Additionally, accuracy, sensitivity, specificity, precision, area under the curve (AUC) and recall were used to evaluate the predictive performance of the models. A total of six classification models were established. Model 5 (fMRI + clinical indicators) exhibited superior performance, with an accuracy of 0.833 (95% confidence interval [CI]: 0.653-0.944). Notably, the multi-modal model incorporating image, serum and urine multi-omics and clinical indicators (Model 6) demonstrated higher predictive performance, achieving an accuracy of 0.923 (95% CI: 0.749-0.991). Furthermore, a total of five prognostic models at 2-year and 3-year follow-up were established. Model E exhibited superior performance, achieving AUC values of 0.975 at the 2-year follow-up and 0.932 at the 3-year follow-up, and could also identify patients with a high-risk prognosis. In clinical practice, the multi-modal models presented in this study demonstrate potential to enhance clinical decision-making capabilities regarding patient classification and prognosis prediction.
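
The evaluation metrics listed above (accuracy, sensitivity, specificity, precision, AUC) can all be derived from a confusion matrix and predicted probabilities, as in this small sketch with placeholder labels.

    # Binary-classification metrics from a confusion matrix (placeholder data).
    import numpy as np
    from sklearn.metrics import confusion_matrix, roc_auc_score

    y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
    y_prob = np.array([0.9, 0.2, 0.7, 0.6, 0.4, 0.1, 0.8, 0.3, 0.55, 0.35])
    y_pred = (y_prob >= 0.5).astype(int)

    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    print("accuracy   ", (tp + tn) / (tp + tn + fp + fn))
    print("sensitivity", tp / (tp + fn))  # recall
    print("specificity", tn / (tn + fp))
    print("precision  ", tp / (tp + fp))
    print("AUC        ", roc_auc_score(y_true, y_prob))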

Diagnostic performance of artificial intelligence based on contrast-enhanced computed tomography in pancreatic ductal adenocarcinoma: a systematic review and meta-analysis.

Yan G, Chen X, Wang Y

PubMed · Jul 2, 2025
This meta-analysis systematically evaluated the diagnostic performance of artificial intelligence (AI) based on contrast-enhanced computed tomography (CECT) in detecting pancreatic ductal adenocarcinoma (PDAC). Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses for Diagnostic Test Accuracy (PRISMA-DTA) guidelines, a comprehensive literature search was conducted across PubMed, Embase, and Web of Science from inception to March 2025. Bivariate random-effects models pooled sensitivity, specificity, and area under the curve (AUC). Heterogeneity was quantified via I² statistics, with subgroup analyses examining sources of variability, including AI methodologies, model architectures, sample sizes, geographic distributions, control groups and tumor stages. Nineteen studies involving 5,986 patients in internal validation cohorts and 2,069 patients in external validation cohorts were included. AI models demonstrated robust diagnostic accuracy in internal validation, with pooled sensitivity of 0.94 (95% CI 0.89-0.96), specificity of 0.93 (95% CI 0.90-0.96), and AUC of 0.98 (95% CI 0.96-0.99). External validation revealed moderately reduced sensitivity (0.84; 95% CI 0.78-0.89) and AUC (0.94; 95% CI 0.92-0.96), while specificity remained comparable (0.93; 95% CI 0.87-0.96). Substantial heterogeneity (I² > 85%) was observed, predominantly attributed to methodological variations in AI architectures and disparities in cohort sizes. AI demonstrates excellent diagnostic performance for PDAC on CECT, achieving high sensitivity and specificity across validation scenarios. However, its efficacy varies significantly with clinical context and tumor stage. Therefore, prospective multicenter trials that utilize standardized protocols and diverse cohorts, including early-stage tumors and complex benign conditions, are essential to validate the clinical utility of AI.
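
For intuition only, the sketch below pools logit-transformed sensitivities with a univariate DerSimonian-Laird random-effects model and computes I². The review itself uses a bivariate random-effects model that jointly pools sensitivity and specificity (typically fitted with dedicated meta-analysis software), and the study counts here are made-up placeholders.

    # Univariate DerSimonian-Laird random-effects pooling of logit sensitivities.
    import numpy as np

    tp = np.array([90, 45, 120, 60])  # true positives per study (placeholder)
    fn = np.array([6, 5, 10, 4])      # false negatives per study (placeholder)

    # Logit-transform each study's sensitivity with its within-study variance.
    sens = tp / (tp + fn)
    logit = np.log(sens / (1 - sens))
    var_within = 1 / tp + 1 / fn

    # DerSimonian-Laird estimate of between-study variance tau^2.
    w = 1 / var_within
    fixed = np.sum(w * logit) / np.sum(w)
    Q = np.sum(w * (logit - fixed) ** 2)
    df = len(tp) - 1
    tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

    # Random-effects pooled sensitivity and I^2 heterogeneity.
    w_re = 1 / (var_within + tau2)
    pooled_logit = np.sum(w_re * logit) / np.sum(w_re)
    pooled_sens = 1 / (1 + np.exp(-pooled_logit))
    i2 = max(0.0, (Q - df) / Q) * 100
    print(round(pooled_sens, 3), round(i2, 1))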