Study of AI algorithms on mpMRI and PHI for the diagnosis of clinically significant prostate cancer.

Luo Z, Li J, Wang K, Li S, Qian Y, Xie W, Wu P, Wang X, Han J, Zhu W, Wang H, He Y

May 31, 2025
To study the feasibility of combining multiple factors to improve the diagnostic accuracy of clinically significant prostate cancer (csPCa). A retrospective study of 131 patients analyzed age, PSA, PHI, and pathology. Patients with ISUP > 2 were classified as csPCa; the others were classified as non-csPCa. The mpMRI images were processed by a homemade AI algorithm, yielding positive or negative AI results. Four logistic regression models were fitted, with pathological findings as the dependent variable, and the predicted probabilities were used to test the prediction efficacy of the models. The DeLong test was performed to compare differences in the areas under the receiver operating characteristic (ROC) curves (AUCs) between the models. The study included 131 patients: 62 were diagnosed with csPCa and 69 were non-csPCa. Statistically significant differences were found in age, PSA, PIRADS score, AI results, and PHI values between the two groups (all P ≤ 0.001). The conventional model (R² = 0.389), the AI model (R² = 0.566), and the PHI model (R² = 0.515) were each compared with the full model (R² = 0.626) by ANOVA, and the differences were statistically significant (all P < 0.05). The AUC of the full model (0.921 [95% CI: 0.871-0.972]) was significantly higher than that of the conventional model (P = 0.001), the AI model (P < 0.001), and the PHI model (P = 0.014). Combining multiple factors (age, PSA, PIRADS score, and PHI) with an mpMRI-based AI algorithm improves the diagnostic accuracy of csPCa.
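The abstract does not include code. A minimal sketch of the model-comparison workflow it describes might look like the following; since scipy and scikit-learn ship no DeLong implementation, a bootstrap comparison stands in for the DeLong test, and the feature groupings and variable names are assumptions inferred from the factors listed above.

```python
# Illustrative sketch (not the authors' code): fit nested logistic models and
# compare their AUCs. A crude bootstrap test substitutes for the DeLong test.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def fit_probs(X, y):
    """Fit a logistic model and return predicted csPCa probabilities."""
    return LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1]

def bootstrap_auc_diff_pvalue(y, p_full, p_other, n_boot=2000, seed=0):
    """Two-sided bootstrap p-value for AUC(full model) - AUC(other model)."""
    y, p_full, p_other = map(np.asarray, (y, p_full, p_other))
    rng = np.random.default_rng(seed)
    observed = roc_auc_score(y, p_full) - roc_auc_score(y, p_other)
    diffs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), len(y))
        if y[idx].min() == y[idx].max():
            continue  # a resample must contain both classes
        diffs.append(roc_auc_score(y[idx], p_full[idx])
                     - roc_auc_score(y[idx], p_other[idx]))
    diffs = np.asarray(diffs)
    # center the bootstrap distribution under H0 (no AUC difference),
    # then count resamples at least as extreme as the observed difference
    return float(np.mean(np.abs(diffs - diffs.mean()) >= abs(observed)))

# Hypothetical usage, with column names invented for illustration:
# p_conv = fit_probs(X[["age", "psa", "pirads"]], y)
# p_full = fit_probs(X[["age", "psa", "pirads", "ai_result", "phi"]], y)
# p_value = bootstrap_auc_diff_pvalue(y, p_full, p_conv)
```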

NeoPred: dual-phase CT AI forecasts pathologic response to neoadjuvant chemo-immunotherapy in NSCLC.

Zheng J, Yan Z, Wang R, Xiao H, Chen Z, Ge X, Li Z, Liu Z, Yu H, Liu H, Wang G, Yu P, Fu J, Zhang G, Zhang J, Liu B, Huang Y, Deng H, Wang C, Fu W, Zhang Y, Wang R, Jiang Y, Lin Y, Huang L, Yang C, Cui F, He J, Liang H

May 31, 2025
Accurate preoperative prediction of major pathological response or pathological complete response after neoadjuvant chemo-immunotherapy remains a critical unmet need in resectable non-small-cell lung cancer (NSCLC). Conventional size-based imaging criteria offer limited reliability, while biopsy confirmation is available only post-surgery. We retrospectively assembled 509 consecutive NSCLC cases from four Chinese thoracic-oncology centers (March 2018 to March 2023) and prospectively enrolled 50 additional patients. Three 3-dimensional convolutional neural networks (pre-treatment CT, pre-surgical CT, dual-phase CT) were developed; the best-performing dual-phase model (NeoPred) optionally integrated clinical variables. Model performance was measured by area under the receiver-operating-characteristic curve (AUC) and compared with nine board-certified radiologists. In an external validation set (n=59), NeoPred achieved an AUC of 0.772 (95% CI: 0.650 to 0.895), sensitivity 0.591, specificity 0.733, and accuracy 0.627; incorporating clinical data increased the AUC to 0.787. In a prospective cohort (n=50), NeoPred reached an AUC of 0.760 (95% CI: 0.628 to 0.891), surpassing the experts' mean AUC of 0.720 (95% CI: 0.574 to 0.865). Model assistance raised the pooled expert AUC to 0.829 (95% CI: 0.707 to 0.951) and accuracy to 0.820. Marked performance persisted within radiological stable-disease subgroups (external AUC 0.742, 95% CI: 0.468 to 1.000; prospective AUC 0.833, 95% CI: 0.497 to 1.000). Combining dual-phase CT and clinical variables, NeoPred reliably and non-invasively predicts pathological response to neoadjuvant chemo-immunotherapy in NSCLC, outperforms unaided expert assessment, and significantly enhances radiologist performance. Further multinational trials are needed to confirm generalizability and support surgical decision-making.
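The NeoPred architecture itself is not specified in the abstract. As a generic stand-in, a dual-phase 3-D CNN classifier can be sketched in PyTorch as below; the layer sizes and the two-channel input (one channel per CT phase) are assumptions, not the published design.

```python
# Minimal sketch of a dual-phase 3-D CNN for pathological-response prediction.
import torch
import torch.nn as nn

class DualPhase3DCNN(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(2, 16, kernel_size=3, padding=1),  # 2 channels = 2 CT phases
            nn.BatchNorm3d(16), nn.ReLU(inplace=True), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.BatchNorm3d(32), nn.ReLU(inplace=True), nn.MaxPool3d(2),
            nn.Conv3d(32, 64, kernel_size=3, padding=1),
            nn.BatchNorm3d(64), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1),  # global pooling over the CT volume
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):  # x: (batch, 2, depth, height, width)
        return self.classifier(self.features(x).flatten(1))

# e.g. logits = DualPhase3DCNN()(torch.randn(1, 2, 64, 128, 128))
```

Clinical variables, when included, would typically be concatenated with the pooled image features before the final linear layer.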

Development and validation of a 3-D deep learning system for diabetic macular oedema classification on optical coherence tomography images.

Zhu H, Ji J, Lin JW, Wang J, Zheng Y, Xie P, Liu C, Ng TK, Huang J, Xiong Y, Wu H, Lin L, Zhang M, Zhang G

May 31, 2025
To develop and validate an automated diabetic macular oedema (DME) classification system based on images from different three-dimensional optical coherence tomography (3-D OCT) devices. A multicentre, platform-based development study using retrospective and cross-sectional data. Data were subjected to a two-level grading system by trained graders and a retina specialist, and categorised into three types: no DME, non-centre-involved DME and centre-involved DME (CI-DME). A 3-D convolutional neural network algorithm was used to develop the DME classification system. The deep learning (DL) performance was compared with that of diabetic retinopathy experts. Data were collected from the Joint Shantou International Eye Center of Shantou University and the Chinese University of Hong Kong, Chaozhou People's Hospital and The Second Affiliated Hospital of Shantou University Medical College from January 2010 to December 2023. 7790 volumes of 7146 eyes from 4254 patients were annotated, of which 6281 images were used as the development set and 1509 images as the external validation set, split by centre. Accuracy, F1-score, sensitivity, specificity, area under the receiver operating characteristic curve (AUROC) and Cohen's kappa were calculated to evaluate the performance of the DL algorithm. In classifying DME versus non-DME, our model achieved AUROCs of 0.990 (95% CI 0.983 to 0.996) and 0.916 (95% CI 0.902 to 0.930) for the hold-out testing dataset and the external validation dataset, respectively. In distinguishing CI-DME from non-centre-involved DME, our model achieved AUROCs of 0.859 (95% CI 0.812 to 0.906) and 0.881 (95% CI 0.859 to 0.902), respectively. In addition, our system showed performance (Cohen's κ: 0.85 and 0.75) comparable to that of the retina experts (Cohen's κ: 0.58-0.92 and 0.70-0.71). Our DL system achieved high accuracy in the multiclass DME classification task on 3-D OCT images and can be applied to population-based DME screening.
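The metric panel reported above can be reproduced directly with scikit-learn. The sketch below uses hypothetical 3-class labels (0 = no DME, 1 = non-centre-involved DME, 2 = CI-DME) and toy probabilities purely for illustration.

```python
# Sketch of the reported evaluation metrics on a 3-class DME task.
import numpy as np
from sklearn.metrics import (accuracy_score, cohen_kappa_score, f1_score,
                             roc_auc_score)

y_true = np.array([0, 1, 2, 2, 0, 1])      # graders' consensus labels (toy data)
y_prob = np.array([[0.8, 0.1, 0.1],        # model class probabilities (toy data)
                   [0.2, 0.6, 0.2],
                   [0.1, 0.2, 0.7],
                   [0.1, 0.3, 0.6],
                   [0.7, 0.2, 0.1],
                   [0.3, 0.5, 0.2]])
y_pred = y_prob.argmax(axis=1)

print("accuracy:", accuracy_score(y_true, y_pred))
print("macro F1:", f1_score(y_true, y_pred, average="macro"))
print("AUROC (one-vs-rest):", roc_auc_score(y_true, y_prob, multi_class="ovr"))
print("Cohen's kappa vs. graders:", cohen_kappa_score(y_true, y_pred))
```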

Mammogram mastery: Breast cancer image classification using an ensemble of deep learning with explainable artificial intelligence.

Kumar Mondal P, Jahan MK, Byeon H

May 30, 2025
Breast cancer is a serious public health problem and one of the leading causes of cancer-related deaths in women worldwide. Early detection of the disease can significantly increase the chances of survival. However, manual analysis of mammogram images is complex and time-consuming, which can lead to disagreements among experts. For this reason, automated diagnostic systems can play a significant role in increasing the accuracy and efficiency of diagnosis. In this study, we present an effective deep learning (DL) method that classifies mammogram images into cancer and noncancer categories using a collected dataset. Our model is based on the pretrained Inception V3 architecture. First, we ran 5-fold cross-validation tests on the fully trained and fine-tuned Inception V3 model. Next, we applied a combined method based on likelihood and mean, in which the fine-tuned Inception V3 model demonstrated superior classification performance. Our DL model achieved 99% accuracy and a 99% F1 score. In addition, interpretable AI techniques were used to enhance the transparency of the classification process. The fine-tuned Inception V3 model demonstrated the highest classification performance, confirming its effectiveness in automatic breast cancer detection. The experimental results clearly indicate that our proposed DL-based method for breast cancer image classification is highly effective, especially for image-based diagnostic applications. This study highlights the substantial potential of AI-based solutions to increase the accuracy and reliability of breast cancer diagnosis.
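A hedged sketch of the fold-ensemble idea follows: fine-tune one Inception V3 per cross-validation fold, then average the predicted probabilities at inference. The abstract's "combined method based on likelihood and mean" is paraphrased here as a simple mean of softmax outputs, which may differ from the authors' exact rule.

```python
# Sketch: Inception V3 transfer learning plus fold-ensemble averaging.
import torch
import torch.nn as nn
from torchvision.models import Inception_V3_Weights, inception_v3

def make_model(n_classes: int = 2) -> nn.Module:
    model = inception_v3(weights=Inception_V3_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, n_classes)  # replace the head
    # the auxiliary classifier is only active in training mode
    return model.eval()

@torch.no_grad()
def ensemble_predict(models, x):
    """x: (batch, 3, 299, 299) mammogram crops; returns mean class probabilities."""
    probs = torch.stack([torch.softmax(m(x), dim=1) for m in models])
    return probs.mean(dim=0)

# models = [make_model() for _ in range(5)]  # one fine-tuned model per fold
# mean_probs = ensemble_predict(models, torch.randn(4, 3, 299, 299))
```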

Diagnostic Efficiency of an Artificial Intelligence-Based Technology in Dental Radiography.

Obrubov AA, Solovykh EA, Nadtochiy AG

May 30, 2025
We present the results of the development of the Dentomo artificial intelligence model, which is based on two neural networks. The model includes a database and a knowledge base harmonized with SNOMED CT, which allow it to process and interpret cone beam computed tomography (CBCT) scans of the dental system: in particular, identifying and classifying teeth and identifying CT signs of pathology and previous treatments. From these data, the artificial intelligence can draw conclusions, generate medical reports, systematize the data, and learn from the results. The diagnostic effectiveness of Dentomo was evaluated. The first results of the study demonstrated that the model is a valuable tool for analyzing CBCT scans in clinical practice and optimizing the dentist's workflow.

Bidirectional Projection-Based Multi-Modal Fusion Transformer for Early Detection of Cerebral Palsy in Infants.

Qi K, Huang T, Jin C, Yang Y, Ying S, Sun J, Yang J

May 30, 2025
Periventricular white matter injury (PWMI) is the most frequent magnetic resonance imaging (MRI) finding in infants with cerebral palsy (CP). We aim to detect CP and identify subtle, sparse PWMI lesions in infants under two years of age with immature brain structures. Based on the observation that the responsible lesions are located within five target regions, we first construct a multi-modal dataset of 243 cases with mask annotations of the five target regions for delineating anatomical structures on T1-weighted imaging (T1WI) images, masks for lesions on T2-weighted imaging (T2WI) images, and categories (CP or non-CP). We then develop a bidirectional projection-based multi-modal fusion transformer (BiP-MFT), incorporating a Bidirectional Projection Fusion Module (BPFM) to integrate the features of the five target regions on T1WI images with the lesion features on T2WI images. Our BiP-MFT achieves a subject-level classification accuracy of 0.90, specificity of 0.87, and sensitivity of 0.94. It surpasses the best results of nine comparative methods, with improvements of 0.10, 0.08, and 0.09 in classification accuracy, specificity, and sensitivity, respectively. Our BPFM outperforms eight compared feature fusion strategies using Transformer and U-Net backbones on our dataset. Ablation studies on the dataset annotations and model components confirm the effectiveness of our annotation method and the rationale of the model design. The proposed dataset and codes are available at https://github.com/Kai-Qi/BiP-MFT.
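The authors' exact BPFM implementation lives in the repository linked above; as a generic illustration of bidirectional cross-modal fusion, the stand-in module below lets T1WI region tokens and T2WI lesion tokens attend to each other in both directions before classification. All dimensions are assumptions.

```python
# Generic bidirectional cross-attention fusion between two MRI modalities.
import torch
import torch.nn as nn

class BidirectionalFusion(nn.Module):
    def __init__(self, dim: int = 128, heads: int = 4):
        super().__init__()
        self.t1_to_t2 = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.t2_to_t1 = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.classifier = nn.Linear(2 * dim, 2)  # CP vs. non-CP

    def forward(self, t1_tokens, t2_tokens):
        # each direction lets one modality query features of the other
        f1, _ = self.t1_to_t2(t1_tokens, t2_tokens, t2_tokens)
        f2, _ = self.t2_to_t1(t2_tokens, t1_tokens, t1_tokens)
        fused = torch.cat([f1.mean(dim=1), f2.mean(dim=1)], dim=-1)
        return self.classifier(fused)

# e.g. 5 region tokens from T1WI, 20 lesion tokens from T2WI:
# logits = BidirectionalFusion()(torch.randn(2, 5, 128), torch.randn(2, 20, 128))
```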

Deep learning without borders: recent advances in ultrasound image classification for liver diseases diagnosis.

Yousefzamani M, Babapour Mofrad F

May 30, 2025
Liver diseases are among the top global health burdens. Noninvasive, patient-friendly diagnostics have become increasingly important, and among them ultrasound is the most widely used. Deep learning, in particular convolutional neural networks (CNNs), has transformed liver disease classification by automating the analysis of images that are difficult to interpret. This review summarizes the progress made in deep learning techniques for the classification of liver diseases from ultrasound imaging. It evaluates various models, from CNNs to hybrids such as CNN-Transformer architectures, for detecting fatty liver, fibrosis, and liver cancer, among other conditions. Several challenges in generalizing data and models across different clinical environments are also discussed. Deep learning holds great promise for the automatic diagnosis of liver diseases, and most models have achieved high accuracy in clinical studies. Despite this promise, challenges related to generalization remain. Future hardware developments and access to high-quality clinical data should further improve the performance of these models and secure their role in the diagnosis of liver diseases.
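For readers unfamiliar with the CNN-Transformer hybrid pattern the review mentions, a minimal sketch follows: a small convolutional backbone extracts local texture features from an ultrasound frame, and a Transformer encoder models their global relationships. This is a generic pattern, not any specific model from the reviewed literature.

```python
# Generic CNN-Transformer hybrid for single-frame ultrasound classification.
import torch
import torch.nn as nn

class CNNTransformerClassifier(nn.Module):
    def __init__(self, n_classes: int = 4, dim: int = 64):
        super().__init__()
        self.backbone = nn.Sequential(            # local feature extraction
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, dim, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, n_classes)

    def forward(self, x):                          # x: (batch, 1, H, W)
        feats = self.backbone(x)                   # (batch, dim, H/4, W/4)
        tokens = feats.flatten(2).transpose(1, 2)  # spatial cells as tokens
        return self.head(self.encoder(tokens).mean(dim=1))

# e.g. logits = CNNTransformerClassifier()(torch.randn(2, 1, 224, 224))
```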

Artificial Intelligence for Assessment of Digital Mammography Positioning Reveals Persistent Challenges.

Margolies LR, Spear GG, Payne JI, Iles SE, Abdolell M

May 30, 2025
Mammographic breast cancer detection depends on high-quality positioning, which is traditionally assessed and monitored subjectively. This study used artificial intelligence (AI) to evaluate positioning on digital screening mammograms in order to identify and quantify unmet mammography positioning quality (MPQ) criteria. Data were collected within an IRB-approved collaboration; in total, 126 367 digital mammography studies (553 339 images) were processed. Unmet MPQ criteria, including exaggeration, portion cutoff, posterior tissue missing, nipple not in profile, too high on the image receptor, inadequate pectoralis length, sagging, and posterior nipple line (PNL) length difference, were evaluated using MPQ AI algorithms. The occurrence and rank order of unmet MPQ criteria were compared between the health systems. Altogether, 163 759 and 219 785 unmet MPQ criteria were identified at the two health systems, respectively. Neither the rank order nor the probability distribution of the unmet MPQ criteria differed significantly between health systems (P = .844 and P = .92, respectively). The 3 most common unmet MPQ criteria were short PNL length on the craniocaudal (CC) view, inadequate pectoralis muscle, and excessive exaggeration on the CC view. The percentages of unmet positioning criteria out of the total potential unmet positioning criteria at health system 1 and health system 2 were 8.4% (163 759/1 949 922) and 7.3% (219 785/3 030 129), respectively. AI identified a similar distribution of unmet MPQ criteria in the daily work of the 2 health systems. Knowledge of the currently most common unmet MPQ criteria can facilitate improvement of mammography quality through tailored education strategies.
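To make one of the listed criteria concrete, the sketch below computes the PNL length difference between CC and MLO views from landmark coordinates. The landmark detector, the helper names, and the 1 cm tolerance (a conventional positioning guideline) are all assumptions, not details from the abstract.

```python
# Hypothetical check of the posterior nipple line (PNL) length-difference
# criterion, given landmarks (in cm-calibrated image coordinates) from an
# assumed upstream detector.
import math

def pnl_length_cc(nipple, posterior_point):
    """CC view: nipple to the posterior edge (or pectoralis, if visualized)."""
    return math.dist(nipple, posterior_point)

def pnl_length_mlo(nipple, pec_a, pec_b):
    """MLO view: perpendicular distance from the nipple to the pectoralis line."""
    (x1, y1), (x2, y2), (x0, y0) = pec_a, pec_b, nipple
    return abs((y2 - y1) * x0 - (x2 - x1) * y0 + x2 * y1 - x1 * y2) \
        / math.hypot(x2 - x1, y2 - y1)

def pnl_criterion_met(cc_len_cm, mlo_len_cm, tolerance_cm=1.0):
    """Met when the CC-view PNL is within ~1 cm of the MLO-view PNL."""
    return (mlo_len_cm - cc_len_cm) <= tolerance_cm
```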

Machine learning-based hemodynamics quantitative assessment of pulmonary circulation using computed tomographic pulmonary angiography.

Xie H, Zhao X, Zhang N, Liu J, Yang G, Cao Y, Xu J, Xu L, Sun Z, Wen Z, Chai S, Liu D

May 30, 2025
Pulmonary hypertension (PH) is a severe pulmonary circulation disease. Right heart catheterization (RHC) is the gold-standard procedure for quantitative evaluation of pulmonary hemodynamics, but accurate, noninvasive quantitative evaluation remains challenging given the limitations of currently available assessment methods. Patients who underwent computed tomographic pulmonary angiography (CTPA) and RHC examinations within 2 weeks of each other were included. The dataset was randomly divided into a training set and a test set at an 8:2 ratio. A radiomic feature model and a two-dimensional (2D) feature model were constructed to quantitatively evaluate pulmonary hemodynamics. The performance of the models was determined by calculating the mean squared error, the intraclass correlation coefficient (ICC), and the area under the precision-recall curve (AUC-PR), and by performing Bland-Altman analyses. 345 patients were identified: 271 with PH (mean age 50 ± 17 years, 93 men) and 74 without PH (mean age 55 ± 16 years, 26 men). The pulmonary hemodynamic predictions of the radiomic feature model, which integrated 5 2D features and 30 radiomic features, were consistent with the RHC results and outperformed those of the 2D feature model. The radiomic feature model exhibited moderate to good reproducibility in predicting pulmonary hemodynamic parameters (ICC up to 0.87). In addition, PH could be accurately identified with a classification model (AUC-PR = 0.99). This study provides a noninvasive method for comprehensively and quantitatively evaluating pulmonary hemodynamics from CTPA images, which has the potential to serve as an alternative to RHC, pending further validation.
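The agreement and classification metrics named above are straightforward to compute; a sketch follows, on hypothetical paired model predictions and RHC reference values. The ICC(2,1) form (two-way random effects, single measure) is an assumption, since the abstract does not specify which ICC variant was used.

```python
# Sketch of the reported agreement/classification metrics.
import numpy as np
from sklearn.metrics import average_precision_score  # area under the PR curve

def bland_altman(pred, ref):
    """Mean bias and 95% limits of agreement between prediction and RHC."""
    diff = np.asarray(pred, float) - np.asarray(ref, float)
    bias, sd = diff.mean(), diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

def icc_2_1(pred, ref):
    """ICC(2,1): two-way random effects, absolute agreement, single measure."""
    x = np.column_stack([pred, ref]).astype(float)
    n, k = x.shape
    grand = x.mean()
    ms_rows = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    ms_cols = n * ((x.mean(axis=0) - grand) ** 2).sum() / (k - 1)
    ss_err = ((x - x.mean(axis=1, keepdims=True)
                 - x.mean(axis=0, keepdims=True) + grand) ** 2).sum()
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err
                                 + k * (ms_cols - ms_err) / n)

# PH-vs-non-PH classification, from predicted probabilities:
# auc_pr = average_precision_score(y_true, y_prob)
```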