Page 267 of 3423416 results

Development and interpretation of a pathomics-based model for the prediction of immune therapy response in colorectal cancer.

Luo Y, Tian Q, Xu L, Zeng D, Zhang H, Zeng T, Tang H, Wang C, Chen Y

PubMed · May 31, 2025
Colorectal cancer (CRC) is the third most common malignancy and the second leading cause of cancer-related deaths worldwide, with a 5-year survival rate below 20%. Immunotherapy, particularly immune checkpoint blockade (ICB)-based therapy, has become an important approach for CRC treatment. However, only specific patient subsets derive significant clinical benefit. Although the TIDE algorithm can predict immunotherapy response, its reliance on transcriptome sequencing data limits clinical applicability. Recent advances in artificial intelligence and computational pathology provide new avenues for medical image analysis. In this study, we classified TCGA-CRC samples into immunotherapy responder and non-responder groups using the TIDE algorithm. A pathomics model based on convolutional neural networks was then constructed to predict immunotherapy response directly from histopathological images. Single-cell analysis revealed that fibroblasts may induce immunotherapy resistance in CRC through the collagen-CD44 and ITGA1 + ITGB1 signaling axes. The pathomics model demonstrated excellent classification performance in the test set, with an AUC of 0.88 at the patch level and 0.85 at the patient level. Key pathomics features were identified through SHAP analysis. This predictive tool provides a novel basis for clinical decision-making in CRC immunotherapy, with the potential to optimize treatment strategies and advance precision medicine.
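The abstract reports AUCs at both the patch and patient level, which implies some rule for rolling many patch predictions up to one score per patient. A minimal sketch, assuming simple mean pooling of patch probabilities (the abstract does not state the aggregation rule, and the function name and sample data are illustrative):

```python
from collections import defaultdict

def patient_level_scores(patch_preds):
    """Aggregate patch-level response probabilities into one score per
    patient by mean pooling (an assumed aggregation rule)."""
    buckets = defaultdict(list)
    for patient_id, prob in patch_preds:
        buckets[patient_id].append(prob)
    return {pid: sum(probs) / len(probs) for pid, probs in buckets.items()}

# Illustrative patch probabilities from two hypothetical slides
patches = [("P1", 0.9), ("P1", 0.7), ("P2", 0.2), ("P2", 0.4)]
scores = patient_level_scores(patches)
```

Patient-level AUC would then be computed on these pooled scores against the TIDE-derived responder labels.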

Diagnostic Accuracy of an Artificial Intelligence-based Platform in Detecting Periapical Radiolucencies on Cone-Beam Computed Tomography Scans of Molars.

Allihaibi M, Koller G, Mannocci F

PubMed · May 31, 2025
This study aimed to evaluate the diagnostic performance of an artificial intelligence (AI)-based platform (Diagnocat) in detecting periapical radiolucencies (PARLs) in cone-beam computed tomography (CBCT) scans of molars. Specifically, we assessed Diagnocat's performance in detecting PARLs in non-root-filled molars and compared its diagnostic performance between preoperative and postoperative scans. This retrospective study analyzed preoperative and postoperative CBCT scans of 134 molars (327 roots). PARLs detected by Diagnocat were compared with assessments independently performed by two experienced endodontists, serving as the reference standard. Diagnostic performance was assessed at both tooth and root levels using sensitivity, specificity, accuracy, positive predictive value (PPV), negative predictive value (NPV), F1 score, and the area under the receiver operating characteristic curve (AUC-ROC). In preoperative scans of non-root-filled molars, Diagnocat demonstrated high sensitivity (teeth: 93.9%, roots: 86.2%), moderate specificity (teeth: 65.2%, roots: 79.9%), accuracy (teeth: 79.1%, roots: 82.6%), PPV (teeth: 71.8%, roots: 75.8%), NPV (teeth: 91.8%, roots: 88.8%), and F1 score (teeth: 81.3%, roots: 80.7%) for PARL detection. The AUC was 0.76 at the tooth level and 0.79 at the root level. Postoperative scans showed significantly lower PPV (teeth: 54.2%; roots: 46.9%) and F1 scores (teeth: 67.2%; roots: 59.2%). Diagnocat shows promise in detecting PARLs in CBCT scans of non-root-filled molars, demonstrating high sensitivity but moderate specificity, highlighting the need for human oversight to prevent overdiagnosis. However, diagnostic performance declined significantly in postoperative scans of root-filled molars. Further research is needed to optimize the platform's performance and support its integration into clinical practice. 
AI-based platforms such as Diagnocat can assist clinicians in detecting PARLs in CBCT scans, enhancing diagnostic efficiency and supporting decision-making. However, human expertise remains essential to minimize the risk of overdiagnosis and avoid unnecessary treatment.
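Every tooth- and root-level figure in the abstract above derives from a 2×2 confusion table against the endodontists' reference standard. A minimal sketch of those derivations (the counts in the example are made up for illustration, not the study's data):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard diagnostic-accuracy metrics from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)          # positive predictive value
    npv = tn / (tn + fn)          # negative predictive value
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    f1 = 2 * ppv * sensitivity / (ppv + sensitivity)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "ppv": ppv, "npv": npv, "accuracy": accuracy, "f1": f1}

m = diagnostic_metrics(tp=90, fp=30, tn=60, fn=10)  # illustrative counts
```

The drop in postoperative PPV and F1 reported above corresponds to a rise in false positives (fp) against roughly stable sensitivity.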

Accelerated proton resonance frequency-based magnetic resonance thermometry by optimized deep learning method.

Xu S, Zong S, Mei CS, Shen G, Zhao Y, Wang H

PubMed · May 31, 2025
Proton resonance frequency (PRF)-based magnetic resonance (MR) thermometry plays a critical role in thermal ablation therapies guided by focused ultrasound (FUS). For clinical applications, accurate and rapid temperature feedback is essential to ensure both the safety and effectiveness of these treatments. This work aims to improve the temporal resolution of dynamic MR temperature map reconstruction using an enhanced deep learning method, thereby supporting the real-time monitoring required for effective FUS treatments. Five classical neural network architectures (cascade net, complex-valued U-Net, shift-window transformer for MRI, real-valued U-Net, and U-Net with residual blocks), together with training-optimization methods, were applied to reconstruct temperature maps from 2-fold and 4-fold undersampled k-space data. The training enhancements included pre-training and training-phase data augmentation, knowledge distillation, and a novel amplitude-phase decoupling loss function. Phantom and ex vivo tissue heating experiments were conducted using a FUS transducer. Ground truth consisted of the complex MR images with accurate temperature changes; datasets were retrospectively undersampled to simulate acceleration. Separate testing datasets were used to evaluate real-time performance and temperature accuracy. Furthermore, the proposed deep learning-based rapid reconstruction approach was validated on a clinical dataset obtained from patients with uterine fibroids, demonstrating its clinical applicability. Acceleration factors of 1.9 and 3.7 were achieved for 2× and 4× k-space undersampling, respectively. The deep learning-based reconstruction using ResUNet with the four optimizations showed superior performance. For 2-fold acceleration, the RMSE of the temperature map patches was 0.89°C for the phantom and 1.15°C for the ex vivo testing dataset. The Dice coefficient for the 43°C isotherm-enclosed regions was 0.81, and Bland-Altman analysis indicated a bias of -0.25°C with limits of agreement of ±2.16°C. In the 4-fold undersampling case, these evaluation metrics showed approximately a 10% reduction in accuracy. Additionally, the Dice coefficients measuring the overlap between the reconstructed temperature maps (using the optimized ResUNet) and the ground truth in regions exceeding the 43°C threshold were 0.77 and 0.74 for the 2× and 4× undersampling scenarios, respectively. This study demonstrates that deep learning-based reconstruction significantly enhances the accuracy and efficiency of MR thermometry, particularly in the context of FUS-based clinical treatments for uterine fibroids. The approach could also be extended to other applications where MRI-guided FUS plays a critical role, such as essential tremor and prostate cancer treatment.
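The two headline metrics above, RMSE of the temperature maps and Dice overlap of the 43°C isotherm regions, are both simple per-pixel computations. A minimal sketch over flattened maps (the temperature values are illustrative; the study computes these over full 2-D patches):

```python
import math

def temperature_rmse(reconstructed, reference):
    """Root-mean-square error (in °C) between two temperature maps,
    given as flat lists of per-pixel temperatures."""
    n = len(reference)
    return math.sqrt(sum((r - t) ** 2 for r, t in zip(reconstructed, reference)) / n)

def isotherm_dice(map_a, map_b, threshold=43.0):
    """Dice overlap of the regions where each map meets or exceeds the
    threshold (here the 43°C isotherm used for thermal-dose assessment)."""
    a = {i for i, t in enumerate(map_a) if t >= threshold}
    b = {i for i, t in enumerate(map_b) if t >= threshold}
    if not a and not b:
        return 1.0  # neither map crosses the threshold anywhere
    return 2 * len(a & b) / (len(a) + len(b))

err = temperature_rmse([43.1, 44.8, 41.9], [43.0, 45.0, 42.0])
```

A Dice of 1.0 means the reconstructed and reference ablation zones coincide exactly; the reported 0.77-0.81 indicates substantial but imperfect overlap.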

Discriminating Clear Cell From Non-Clear Cell Renal Cell Carcinoma: A Machine Learning Approach Using Contrast-enhanced Ultrasound Radiomics.

Liang M, Wu S, Ou B, Wu J, Qiu H, Zhao X, Luo B

PubMed · May 31, 2025
The aim of this investigation was to assess the clinical usefulness of a machine learning model using contrast-enhanced ultrasound (CEUS) radiomics to discriminate clear cell renal cell carcinoma (ccRCC) from non-ccRCC. A total of 292 patients with pathologically confirmed RCC subtypes underwent CEUS (development set, n = 231; validation set, n = 61) in a retrospective study. Radiomics features were derived from CEUS images acquired during the cortical and parenchymal phases. Radiomics models were developed using logistic regression (LR), support vector machine, decision tree, naive Bayes, gradient boosting machine, and random forest. The best-performing model was identified based on the area under the receiver operating characteristic curve (AUC). Relevant clinical CEUS features were identified through univariate and multivariate LR analyses to develop a clinical model. By integrating radiomics and clinical CEUS features, a combined model was established, and a comprehensive evaluation of the models' performance was conducted. After reduction and selection were applied to the 2250 radiomics features, a final set of 8 features was retained. Among the models, the LR model had the highest performance on the validation set and showed good robustness. In both the development and validation sets, the radiomics model (AUC, 0.946 and 0.927) and the combined model (AUC, 0.949 and 0.925) outperformed the clinical model (AUC, 0.851 and 0.768), showing higher AUC values (all p < 0.05). The combined model exhibited favorable calibration and clinical benefit. The combined model integrating clinical CEUS and CEUS radiomics features demonstrated good diagnostic performance in discriminating ccRCC from non-ccRCC.
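Model selection above hinges on the AUC. One standard way to compute it is the rank (Mann-Whitney) identity: the AUC equals the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one. A minimal sketch (labels and scores are illustrative, not the study's data):

```python
def auc_score(labels, scores):
    """AUC via the Mann-Whitney identity: the fraction of (positive,
    negative) pairs in which the positive case scores higher, with ties
    counted as half a win."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

auc = auc_score([1, 1, 0, 0], [0.9, 0.4, 0.5, 0.1])
```

This pairwise form is equivalent to the area under the empirical ROC curve and is what libraries such as scikit-learn compute for binary problems.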

From Guidelines to Intelligence: How AI Refines Thyroid Nodule Biopsy Decisions.

Zeng W, He Y, Xu R, Mai W, Chen Y, Li S, Yi W, Ma L, Xiong R, Liu H

PubMed · May 31, 2025
To evaluate the value of combining the American College of Radiology (ACR) Thyroid Imaging Reporting and Data System (TI-RADS) with the Demetics ultrasound diagnostic system in reducing the rate of fine-needle aspiration (FNA) biopsies for thyroid nodules, a retrospective study analyzed 548 thyroid nodules from 454 patients, all meeting ACR TI-RADS criteria (category ≥3 and diameter ≥10 mm) for FNA. Nodules were reclassified using the combined ACR TI-RADS and Demetics system (De TI-RADS), and biopsy rates were compared. Using ACR TI-RADS alone, the biopsy rate was 70.6% (387/548), with a positive predictive value (PPV) of 52.5% (203/387), an unnecessary biopsy rate of 47.5% (184/387), and a missed diagnosis rate of 11.0% (25/228). Incorporating Demetics reduced the biopsy rate to 48.1% (264/548), the unnecessary biopsy rate to 17.4% (46/264), and the missed diagnosis rate to 4.4% (10/228), while increasing the PPV to 82.6% (218/264). All differences between ACR TI-RADS and De TI-RADS were statistically significant (p < 0.05). The integration of ACR TI-RADS with the Demetics system improves nodule risk assessment by enhancing diagnostic accuracy and efficiency. This approach reduces unnecessary biopsies and missed diagnoses while increasing PPV, offering a more reliable tool for clinicians and patients.
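The headline percentages above follow directly from the counts in parentheses. A minimal sketch reproducing the ACR TI-RADS-alone figures (counts taken from the abstract):

```python
def triage_summary(n_total, n_biopsied, n_malignant_biopsied,
                   n_missed, n_malignant_total):
    """Biopsy-triage metrics from raw counts: biopsy rate, PPV of the
    biopsy recommendation, unnecessary-biopsy rate (benign among
    biopsied), and missed-diagnosis rate (malignancies not biopsied)."""
    return {
        "biopsy_rate": n_biopsied / n_total,
        "ppv": n_malignant_biopsied / n_biopsied,
        "unnecessary_rate": (n_biopsied - n_malignant_biopsied) / n_biopsied,
        "missed_rate": n_missed / n_malignant_total,
    }

# ACR TI-RADS alone, using the counts reported in the abstract
acr = triage_summary(n_total=548, n_biopsied=387, n_malignant_biopsied=203,
                     n_missed=25, n_malignant_total=228)
```

Running the same function on the De TI-RADS counts (264 biopsied, 218 malignant, 10 missed) reproduces the reported 48.1%, 82.6%, 17.4%, and 4.4%.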

Study of AI algorithms on mpMRI and PHI for the diagnosis of clinically significant prostate cancer.

Luo Z, Li J, Wang K, Li S, Qian Y, Xie W, Wu P, Wang X, Han J, Zhu W, Wang H, He Y

PubMed · May 31, 2025
To study the feasibility of combining multiple factors to improve the diagnostic accuracy of clinically significant prostate cancer (csPCa), a retrospective study analyzed age, PSA, PHI, and pathology in 131 patients. Patients with ISUP > 2 were classified as csPCa; the others as non-csPCa. The mpMRI images were processed by an in-house AI algorithm, yielding positive or negative AI results. Four logistic regression models were fitted, with pathological findings as the dependent variable, and the patients' predicted probabilities were used to test the predictive efficacy of the models. The DeLong test was performed to compare differences in the areas under the receiver operating characteristic (ROC) curves (AUCs) between the models. The study included 131 patients: 62 diagnosed with csPCa and 69 non-csPCa. Statistically significant differences were found in age, PSA, PIRADS score, AI results, and PHI values between the two groups (all P ≤ 0.001). The conventional model (R<sup>2</sup> = 0.389), the AI model (R<sup>2</sup> = 0.566), and the PHI model (R<sup>2</sup> = 0.515) were compared to the full model (R<sup>2</sup> = 0.626) with ANOVA and showed statistically significant differences (all P < 0.05). The AUC of the full model (0.921 [95% CI: 0.871-0.972]) was significantly higher than that of the conventional model (P = 0.001), the AI model (P < 0.001), and the PHI model (P = 0.014). Combining factors such as age, PSA, PIRADS score, and PHI with an mpMRI-based AI algorithm improves the diagnostic accuracy for csPCa.
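The full model above combines the clinical variables and the binary AI readout in one logistic regression. A minimal sketch of how such a fitted model turns a patient's values into a csPCa probability (the coefficient values below are placeholders for illustration, not the study's fitted estimates):

```python
import math

def cspca_probability(age, psa, pirads, phi, ai_positive, coef):
    """Predicted csPCa probability from a logistic model combining
    clinical factors with a binary AI result (1 = positive mpMRI AI read)."""
    z = (coef["intercept"]
         + coef["age"] * age
         + coef["psa"] * psa
         + coef["pirads"] * pirads
         + coef["phi"] * phi
         + coef["ai"] * ai_positive)
    return 1 / (1 + math.exp(-z))

# Placeholder coefficients for illustration only
demo_coef = {"intercept": -8.0, "age": 0.03, "psa": 0.05,
             "pirads": 0.9, "phi": 0.04, "ai": 1.2}
p = cspca_probability(age=65, psa=8.0, pirads=4, phi=45, ai_positive=1,
                      coef=demo_coef)
```

The DeLong comparison in the abstract then asks whether ranking patients by these probabilities yields a significantly higher AUC than the reduced models.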

Subclinical atrial fibrillation prediction based on deep learning and strain analysis using echocardiography.

Huang SH, Lin YC, Chen L, Unankard S, Tseng VS, Tsao HM, Tang GJ

PubMed · May 31, 2025
Subclinical atrial fibrillation (SCAF), also known as atrial high-rate episodes (AHREs), refers to asymptomatic heart-rate elevations associated with increased risks of atrial fibrillation and cardiovascular events. Although deep learning (DL) models leveraging echocardiographic images are widely used for cardiac function analysis, their application to AHRE prediction remains unexplored. This study introduces a novel DL-based framework for automatic AHRE detection using echocardiograms. The approach encompasses left atrium (LA) segmentation, LA strain feature extraction, and AHRE classification. Data from 117 patients with cardiac implantable electronic devices undergoing echocardiography were analyzed, with 80% allocated to the development set and 20% to the test set. LA segmentation accuracy was quantified using the Dice coefficient, yielding scores of 0.923 for the LA cavity and 0.741 for the LA wall. For AHRE classification, metrics such as area under the curve (AUC), accuracy, sensitivity, and specificity were employed. A transformer-based model integrating patient characteristics demonstrated robust performance, achieving a mean AUC of 0.815, accuracy of 0.809, sensitivity of 0.800, and specificity of 0.783 for a 24-h AHRE duration threshold. This framework represents a reliable tool for AHRE assessment and holds significant potential for early SCAF detection, enhancing clinical decision-making and patient outcomes.
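Strain features like those extracted from the segmented LA are conventionally expressed as the percent length change of the chamber contour relative to a reference frame. A simplified 1-D sketch (the contour lengths are illustrative; the paper's pipeline derives them from the LA segmentation masks across the cardiac cycle):

```python
def percent_strain(contour_lengths, reference_index=0):
    """Percent strain per frame relative to a reference frame:
    strain(t) = 100 * (L(t) - L0) / L0."""
    l0 = contour_lengths[reference_index]
    return [100 * (length - l0) / l0 for length in contour_lengths]

# Illustrative LA contour lengths (cm) over four echo frames
strain_curve = percent_strain([10.0, 11.0, 12.0, 10.5])
```

The peak of such a curve (here 20% at frame 2) is the kind of scalar strain feature a downstream classifier can consume alongside patient characteristics.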

Machine learning-based hemodynamics quantitative assessment of pulmonary circulation using computed tomographic pulmonary angiography.

Xie H, Zhao X, Zhang N, Liu J, Yang G, Cao Y, Xu J, Xu L, Sun Z, Wen Z, Chai S, Liu D

PubMed · May 30, 2025
Pulmonary hypertension (PH) is a severe disease of the pulmonary circulation. Right heart catheterization (RHC) is the gold-standard procedure for quantitative evaluation of pulmonary hemodynamics, but accurate, noninvasive quantitative evaluation remains challenging given the limitations of currently available assessment methods. Patients who underwent computed tomographic pulmonary angiography (CTPA) and RHC examinations within 2 weeks were included, and the dataset was randomly divided into a training set and a test set at an 8:2 ratio. A radiomic feature model and a two-dimensional (2D) feature model were constructed to quantitatively evaluate pulmonary hemodynamics. Model performance was determined by calculating the mean squared error, the intraclass correlation coefficient (ICC), and the area under the precision-recall curve (AUC-PR), and by performing Bland-Altman analyses. A total of 345 patients were identified: 271 with PH (mean age 50 ± 17 years, 93 men) and 74 without PH (mean age 55 ± 16 years, 26 men). The pulmonary hemodynamic predictions of the radiomic feature model, which integrated 5 2D features and 30 additional radiomic features, were consistent with the results from RHC and outperformed the 2D feature model. The radiomic feature model exhibited moderate to good reproducibility in predicting pulmonary hemodynamic parameters (ICC up to 0.87). In addition, PH could be accurately identified by a classification model (AUC-PR = 0.99). This study provides a noninvasive method for comprehensively and quantitatively evaluating pulmonary hemodynamics from CTPA images, which has the potential to serve as an alternative to RHC, pending further validation.
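Agreement between the model's hemodynamic predictions and RHC above is assessed with Bland-Altman analysis: the bias is the mean of the paired differences, and the 95% limits of agreement are bias ± 1.96 standard deviations of those differences. A minimal sketch (the paired readings are illustrative):

```python
def bland_altman(method_a, method_b):
    """Bland-Altman bias and 95% limits of agreement for paired readings
    from two measurement methods."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    n = len(diffs)
    bias = sum(diffs) / n
    # Sample standard deviation of the differences
    sd = (sum((d - bias) ** 2 for d in diffs) / (n - 1)) ** 0.5
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Illustrative paired mean pulmonary artery pressures (mmHg): model vs. RHC
bias, limits = bland_altman([25.0, 40.0, 33.0], [24.0, 42.0, 33.0])
```

A bias near zero with narrow limits indicates the noninvasive estimate tracks the catheter measurement without systematic offset.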

Multimodal AI framework for lung cancer diagnosis: Integrating CNN and ANN models for imaging and clinical data analysis.

Oncu E, Ciftci F

PubMed · May 30, 2025
Lung cancer remains a leading cause of cancer-related mortality worldwide, emphasizing the critical need for accurate and early diagnostic solutions. This study introduces a novel multimodal artificial intelligence (AI) framework that integrates convolutional neural networks (CNNs) and artificial neural networks (ANNs) to improve lung cancer classification and severity assessment. The CNN model, trained on 1019 preprocessed CT images, classified lung tissue into four histological categories (adenocarcinoma, large cell carcinoma, squamous cell carcinoma, and normal) with a weighted accuracy of 92%. Interpretability is enhanced using Gradient-weighted Class Activation Mapping (Grad-CAM), which highlights the salient image regions influencing the model's predictions. In parallel, an ANN trained on clinical data from 999 patients, spanning 24 key features such as demographic, symptomatic, and genetic factors, achieves 99% accuracy in predicting cancer severity (low, medium, high). SHapley Additive exPlanations (SHAP) are employed to provide both global and local interpretability of the ANN model, enabling transparent decision-making. Both models were rigorously validated using k-fold cross-validation to ensure robustness and reduce overfitting. This hybrid approach effectively combines spatial imaging data and structured clinical information, demonstrating strong predictive performance and offering an interpretable, comprehensive AI-based solution for lung cancer diagnosis and management.
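Both models above were validated with k-fold cross-validation, which partitions the cases into k disjoint folds and rotates which fold is held out. A minimal index-splitting sketch (stdlib only; the shuffling and fold assignment are a generic illustration, not the authors' exact split):

```python
import random

def kfold_indices(n_samples, k, seed=0):
    """Yield (train_idx, val_idx) pairs for k-fold cross-validation:
    each sample appears in exactly one validation fold."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)       # deterministic shuffle
    folds = [idx[i::k] for i in range(k)]  # round-robin fold assignment
    for i in range(k):
        val = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, val

splits = list(kfold_indices(10, 5))
```

Reporting the mean metric across the k validation folds, as the study does, gives a less optimistic estimate than a single train/test split.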

Mammogram mastery: Breast cancer image classification using an ensemble of deep learning with explainable artificial intelligence.

Kumar Mondal P, Jahan MK, Byeon H

PubMed · May 30, 2025
Breast cancer is a serious public health problem and one of the leading causes of cancer-related deaths in women worldwide. Early detection of the disease can significantly increase the chances of survival. However, manual analysis of mammogram images is complex and time-consuming, which can lead to disagreements among experts. Automated diagnostic systems can therefore play a significant role in increasing the accuracy and efficiency of diagnosis. In this study, we present an effective deep learning (DL) method that classifies mammogram images into cancer and noncancer categories using a collected dataset. Our model is based on the pretrained Inception V3 architecture. First, we ran 5-fold cross-validation tests on the fully trained and fine-tuned Inception V3 model. Next, we applied a combined method based on likelihood and mean, in which the fine-tuned Inception V3 model demonstrated superior classification performance. Our DL model achieved 99% accuracy and a 99% F1 score. In addition, interpretable AI techniques were used to enhance the transparency of the classification process. The fine-tuned Inception V3 model demonstrated the highest classification performance, confirming its effectiveness in automatic breast cancer detection. The experimental results clearly indicate that the proposed DL-based method for breast cancer image classification is highly effective, especially for image-based diagnostic applications. This study highlights the substantial potential of AI-based solutions to increase the accuracy and reliability of breast cancer diagnosis.
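"A combined method based on likelihood and mean" suggests the per-model class probabilities are pooled before the final call. A minimal sketch, assuming simple probability averaging across the cross-validation models (one plausible reading of the abstract; the function and class names are illustrative):

```python
def ensemble_predict(prob_vectors, classes=("noncancer", "cancer")):
    """Average class-probability vectors from several models and return
    the class with the highest mean probability, plus the mean vector."""
    n_models = len(prob_vectors)
    mean_probs = [sum(p[i] for p in prob_vectors) / n_models
                  for i in range(len(classes))]
    return classes[mean_probs.index(max(mean_probs))], mean_probs

# Illustrative softmax outputs from three fold models for one mammogram
label, probs = ensemble_predict([[0.2, 0.8], [0.4, 0.6], [0.1, 0.9]])
```

Averaging smooths out individual-fold errors, which is one reason ensembles of fold models often beat any single trained copy.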
