Comparative analysis of diagnostic performance in mammography: A reader study on the impact of AI assistance.

Ramli Hamid MT, Ab Mumin N, Abdul Hamid S, Mohd Ariffin N, Mat Nor K, Saib E, Mohamed NA

PubMed · Jan 1 2025
This study evaluates the impact of artificial intelligence (AI) assistance on the diagnostic performance of radiologists with varying levels of experience in interpreting mammograms in a Malaysian tertiary referral center, particularly in women with dense breasts. This retrospective study included 434 digital mammograms interpreted by two general radiologists (12 and 6 years of experience) and two trainees (2 years of experience). Diagnostic performance was assessed with and without AI assistance (Lunit INSIGHT MMG), using sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and area under the receiver operating characteristic curve (AUC). Inter-reader agreement was measured using kappa statistics. AI assistance significantly improved the diagnostic performance of all reader groups across all metrics (p < 0.05). The senior radiologist consistently achieved the highest sensitivity (86.5% without AI, 88.0% with AI) and specificity (60.5% without AI, 59.2% with AI). The junior radiologist demonstrated the highest PPV (56.9% without AI, 74.6% with AI) and NPV (90.3% without AI, 92.2% with AI). The trainees showed the lowest performance, but AI significantly enhanced their accuracy. AI assistance was particularly beneficial in interpreting mammograms of women with dense breasts. AI assistance significantly enhances the diagnostic accuracy and consistency of radiologists in mammogram interpretation, with notable benefits for less experienced readers. These findings support the integration of AI into clinical practice, particularly in resource-limited settings where access to specialized breast radiologists is constrained.
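
For readers who want to reproduce this kind of evaluation, the reported metrics follow directly from each reader's confusion matrix and graded scores. A minimal Python sketch is below; all counts, scores, and labels are hypothetical placeholders, not data from the study.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, cohen_kappa_score

# Hypothetical per-reader counts (not from the study): true/false positives/negatives
tp, fp, tn, fn = 90, 120, 180, 14

sensitivity = tp / (tp + fn)          # recall on cancer cases
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)                  # positive predictive value
npv = tn / (tn + fn)                  # negative predictive value
print(f"Sens {sensitivity:.3f}, Spec {specificity:.3f}, PPV {ppv:.3f}, NPV {npv:.3f}")

# AUC requires graded scores (e.g. BI-RADS category or AI probability), not binary calls
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])            # hypothetical ground truth
scores = np.array([0.9, 0.2, 0.7, 0.4, 0.3, 0.1, 0.8, 0.6])  # hypothetical reader/AI scores
print("AUC:", roc_auc_score(y_true, scores))

# Inter-reader agreement between two readers' binary calls
reader_a = np.array([1, 0, 1, 1, 0, 0, 1, 1])
reader_b = np.array([1, 0, 1, 0, 0, 0, 1, 1])
print("Cohen's kappa:", cohen_kappa_score(reader_a, reader_b))
```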

OA-HybridCNN (OHC): An advanced deep learning fusion model for enhanced diagnostic accuracy in knee osteoarthritis imaging.

Liao Y, Yang G, Pan W, Lu Y

PubMed · Jan 1 2025
Knee osteoarthritis (KOA) is a leading cause of disability globally. Early and accurate diagnosis is paramount in preventing its progression and improving patients' quality of life. However, the inconsistency in radiologists' expertise and the onset of visual fatigue during prolonged image analysis often compromise diagnostic accuracy, highlighting the need for automated diagnostic solutions. In this study, we present an advanced deep learning model, OA-HybridCNN (OHC), which integrates ResNet and DenseNet architectures. This integration effectively addresses the gradient vanishing issue in DenseNet and augments prediction accuracy. To evaluate its performance, we conducted a thorough comparison with other deep learning models using five-fold cross-validation and external tests. The OHC model outperformed its counterparts across all performance metrics. In external testing, OHC exhibited an accuracy of 91.77%, precision of 92.34%, and recall of 91.36%. During the five-fold cross-validation, its average AUC and ACC were 86.34% and 87.42%, respectively. Deep learning, particularly exemplified by the OHC model, has greatly improved the efficiency and accuracy of KOA imaging diagnosis. The adoption of such technologies not only alleviates the burden on radiologists but also significantly enhances diagnostic precision.
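
The abstract does not disclose the exact OHC layer layout, but the core idea it names, combining DenseNet-style feature concatenation with ResNet-style shortcuts, can be sketched as follows. This is an illustrative block only; channel counts, depth, and the five-class output head are assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class HybridBlock(nn.Module):
    """Toy block mixing DenseNet-style concatenation with a ResNet-style shortcut.
    Illustrative only; not the published OHC architecture."""
    def __init__(self, in_ch: int, growth: int = 32):
        super().__init__()
        self.dense_branch = nn.Sequential(
            nn.BatchNorm2d(in_ch), nn.ReLU(inplace=True),
            nn.Conv2d(in_ch, growth, kernel_size=3, padding=1, bias=False),
        )
        # 1x1 conv projects the concatenated features back to in_ch so the
        # residual addition is dimensionally valid
        self.project = nn.Conv2d(in_ch + growth, in_ch, kernel_size=1, bias=False)

    def forward(self, x):
        dense = torch.cat([x, self.dense_branch(x)], dim=1)   # dense connectivity
        return x + self.project(dense)                        # residual shortcut

# Hypothetical usage on a single-channel knee radiograph with a 5-class head
model = nn.Sequential(
    nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3),
    HybridBlock(64), HybridBlock(64),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 5),
)
print(model(torch.randn(2, 1, 224, 224)).shape)  # torch.Size([2, 5])
```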

Convolutional neural network using magnetic resonance brain imaging to predict outcome from tuberculosis meningitis.

Dong THK, Canas LS, Donovan J, Beasley D, Thuong-Thuong NT, Phu NH, Ha NT, Ourselin S, Razavi R, Thwaites GE, Modat M

PubMed · Jan 1 2025
Tuberculous meningitis (TBM) leads to high mortality, especially amongst individuals with HIV. Predicting the incidence of disease-related complications is challenging, for which purpose the value of brain magnetic resonance imaging (MRI) has not been well investigated. We used a convolutional neural network (CNN) to explore the complementary contribution of brain MRI to the conventional prognostic determinants. We pooled data from two randomised controlled trials of HIV-positive and HIV-negative adults with clinical TBM in Vietnam to predict the occurrence of death or new neurological complications in the first two months after the subject's first MRI session. We developed and compared three models: a logistic regression with clinical, demographic and laboratory data as reference, a CNN that utilised only T1-weighted MRI volumes, and a model that fused all available information. All models were fine-tuned using two repetitions of 5-fold cross-validation. The final evaluation was based on a random 70/30 training/test split, stratified by the outcome and HIV status. Based on the selected model, we explored the interpretability maps derived from the models. A total of 215 patients were included, with an event prevalence of 22.3%. On the test set our non-imaging model had a higher AUC (71.2% ± 1.1%) than the imaging-only model (67.3% ± 2.6%). The fused model was superior to both, with an average AUC of 77.3% ± 4.0% in the test set. The non-imaging variables were more informative in the HIV-positive group, while the imaging features were more predictive in the HIV-negative group. All three models performed better in the HIV-negative cohort. The interpretability maps show the model's focus on the lateral fissures, the corpus callosum, the midbrain, and peri-ventricular tissues. Imaging information can provide added value to predict unwanted outcomes of TBM. However, to confirm this finding, a larger dataset is needed.
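
As a rough illustration of the fused model described above, one common way to combine imaging and non-imaging inputs is late fusion: a small 3D CNN encodes the T1-weighted volume, an MLP encodes the clinical variables, and the embeddings are concatenated before the outcome head. The sketch below is an assumption about the general pattern, not the paper's actual architecture or layer sizes.

```python
import torch
import torch.nn as nn

class FusionNet(nn.Module):
    """Illustrative late-fusion model: 3D CNN for the T1 volume, MLP for
    clinical/laboratory variables, concatenated for a binary outcome logit."""
    def __init__(self, n_clinical: int = 10):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv3d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),           # -> 16-dim image embedding
        )
        self.tabular = nn.Sequential(nn.Linear(n_clinical, 16), nn.ReLU())
        self.head = nn.Linear(16 + 16, 1)                    # fused outcome logit

    def forward(self, volume, clinical):
        z = torch.cat([self.cnn(volume), self.tabular(clinical)], dim=1)
        return self.head(z)

# Hypothetical shapes: batch of 2 T1 volumes plus 10 clinical variables each
logits = FusionNet()(torch.randn(2, 1, 64, 64, 64), torch.randn(2, 10))
print(logits.shape)  # torch.Size([2, 1])
```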

Application research of artificial intelligence software in the analysis of thyroid nodule ultrasound image characteristics.

Xu C, Wang Z, Zhou J, Hu F, Wang Y, Xu Z, Cai Y

PubMed · Jan 1 2025
Thyroid nodule, as a common clinical endocrine disease, has become increasingly prevalent worldwide. Ultrasound, as the premier method of thyroid imaging, plays an important role in accurately diagnosing and managing thyroid nodules. However, there is a high degree of inter- and intra-observer variability in image interpretation due to the differing knowledge and experience of sonographers, who face heavy ultrasound examination workloads every day. Artificial intelligence based on computer-aided diagnosis technology may improve the accuracy and time efficiency of thyroid nodule diagnosis. This study introduced an artificial intelligence software called SW-TH01/II to evaluate ultrasound image characteristics of thyroid nodules, including echogenicity, shape, border, margin, and calcification. We included 225 ultrasound images from each of two hospitals in Shanghai. The sonographers and the software analyzed the characteristics of the same group of images. We analyzed the consistency of the two sets of results and used the sonographers' results as the gold standard to evaluate the accuracy of SW-TH01/II. A total of 449 images were included in the statistical analysis. For the seven indicators, the proportions of agreement between SW-TH01/II and the sonographers' analysis results were all greater than 0.8. For echogenicity (with very hypoechoic), aspect ratio, and margin, the kappa coefficients between the two methods were above 0.75 (P < 0.001). The kappa coefficients for echogenicity (echotexture and echogenicity level), border, and calcification between the two methods were above 0.6 (P < 0.001). The median times taken by the software and the sonographers to interpret an image were 3 (2, 3) seconds and 26.5 (21.17, 34.33) seconds, respectively, and the difference was statistically significant (z = -18.36, P < 0.001). SW-TH01/II has a high degree of accuracy and offers substantial time-efficiency benefits in judging the characteristics of thyroid nodules. It can provide more objective results and improve the efficiency of ultrasound examination. SW-TH01/II can be used to assist sonographers in characterizing thyroid nodule ultrasound images.
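
As an illustration of the kind of comparison reported above (agreement on categorical features and a rank-based test on interpretation times), a brief Python sketch follows. The data are hypothetical, and the choice of the Mann-Whitney U test is an assumption; the abstract reports only a z statistic.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical per-image interpretation times in seconds (not the study's raw data)
rng = np.random.default_rng(0)
software_times = rng.normal(3, 0.5, size=100).clip(1)
reader_times = rng.normal(27, 6, size=100).clip(5)

u_stat, p_value = mannwhitneyu(software_times, reader_times, alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.3g}")

# Simple per-feature agreement proportion between software and sonographer labels
software_labels = np.array(["hypoechoic", "isoechoic", "hypoechoic", "hyperechoic"])
reader_labels = np.array(["hypoechoic", "hypoechoic", "hypoechoic", "hyperechoic"])
print("Agreement:", np.mean(software_labels == reader_labels))
```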

Application of artificial intelligence in X-ray imaging analysis for knee arthroplasty: A systematic review.

Zhang Z, Hui X, Tao H, Fu Z, Cai Z, Zhou S, Yang K

PubMed · Jan 1 2025
Artificial intelligence (AI) is a promising and powerful technology with increasing use in orthopedics. The global morbidity of knee arthroplasty is expanding. This study investigated the use of AI algorithms to review radiographs of knee arthroplasty. The Ovid-Embase, Web of Science, Cochrane Library, PubMed, China National Knowledge Infrastructure (CNKI), WeiPu (VIP), WanFang, and China Biology Medicine (CBM) databases were systematically screened from inception to March 2024 (PROSPERO study protocol registration: CRD42024507549). The Quality Assessment of Diagnostic Accuracy Studies (QUADAS) tool was used to assess the risk of bias. A total of 21 studies were included in the analysis. Of these, 10 studies identified and classified implant brands, 6 measured implant size and component alignment, 3 detected implant loosening, and 2 diagnosed prosthetic joint infections (PJI). For classifying and identifying implant brands, 5 studies demonstrated near-perfect prediction with an area under the curve (AUC) ranging from 0.98 to 1.0, and 10 achieved accuracy (ACC) between 96% and 100%. Regarding implant measurement, one study showed an AUC of 0.62, and two others exhibited over 80% ACC in determining component sizes. Moreover, AI showed good to excellent reliability across all angles in three separate studies (intraclass correlation coefficient > 0.78). In predicting PJI, one study achieved an AUC of 0.91 with a corresponding ACC of 90.5%, while another reported a positive predictive value ranging from 75% to 85%. For detecting implant loosening, the AUC was at least 0.976, with ACC ranging from 85.8% to 97.5%. These studies show that AI is promising in recognizing implants in knee arthroplasty. Future research should follow a rigorous approach to AI development, with comprehensive and transparent reporting of methods and the creation of open-source software programs and commercial tools that can provide clinicians with objective clinical decisions.

Intelligent and precise auxiliary diagnosis of breast tumors using deep learning and radiomics.

Wang T, Zang B, Kong C, Li Y, Yang X, Yu Y

PubMed · Jan 1 2025
Breast cancer is the most common malignant tumor among women worldwide, and early diagnosis is crucial for reducing mortality rates. Traditional diagnostic methods have significant limitations in terms of accuracy and consistency. Imaging is a common technique for diagnosing and predicting breast cancer, but human error remains a concern. Increasingly, artificial intelligence (AI) is being employed to assist physicians in reducing diagnostic errors. We developed an intelligent diagnostic model combining deep learning and radiomics to enhance breast tumor diagnosis. The model integrates MobileNet with ResNeXt-inspired depthwise separable and grouped convolutions, improving feature processing and efficiency while reducing parameters. Using the Al-Dhabyani and TCIA breast ultrasound datasets, we validated the model internally and externally, comparing it to VGG16, ResNet, AlexNet, and MobileNet. Results: The internal validation set achieved an accuracy of 83.84% with an AUC of 0.92, outperforming the other models. The external validation set showed an accuracy of 69.44% with an AUC of 0.75, demonstrating high robustness and generalizability. Conclusions: We developed an intelligent diagnostic model using deep learning and radiomics to improve breast tumor diagnosis. The model combines MobileNet with ResNeXt-inspired depthwise separable and grouped convolutions, enhancing feature processing and efficiency while reducing parameters. It was validated internally and externally using the Al-Dhabyani and TCIA breast ultrasound datasets and compared with VGG16, ResNet, AlexNet, and MobileNet.
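
The two convolution types named in the abstract, MobileNet-style depthwise separable convolutions and ResNeXt-style grouped convolutions, can be illustrated with a short PyTorch block. This is a sketch of the generic building blocks, not the paper's model; channel counts and the number of groups are assumptions.

```python
import torch
import torch.nn as nn

class SeparableGroupedBlock(nn.Module):
    """Sketch: depthwise separable convolution (depthwise + pointwise) followed
    by a grouped convolution. Channel counts and grouping are illustrative."""
    def __init__(self, in_ch: int = 64, out_ch: int = 64, groups: int = 8):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)  # completes the separable conv
        self.grouped = nn.Conv2d(out_ch, out_ch, 3, padding=1, groups=groups, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.act(self.pointwise(self.depthwise(x)))
        return self.act(self.bn(self.grouped(x)))

block = SeparableGroupedBlock()
print(block(torch.randn(1, 64, 56, 56)).shape)  # torch.Size([1, 64, 56, 56])
```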

Radiomics of Dynamic Contrast-Enhanced MRI for Predicting Radiation-Induced Hepatic Toxicity After Intensity Modulated Radiotherapy for Hepatocellular Carcinoma: A Machine Learning Predictive Model Based on the SHAP Methodology.

Liu F, Chen L, Wu Q, Li L, Li J, Su T, Li J, Liang S, Qing L

PubMed · Jan 1 2025
To develop an interpretable machine learning (ML) model using dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) radiomic data, dosimetric parameters, and clinical data for predicting radiation-induced hepatic toxicity (RIHT) in patients with hepatocellular carcinoma (HCC) following intensity-modulated radiation therapy (IMRT). A retrospective analysis of 150 HCC patients was performed, with a 7:3 ratio used to divide the data into training and validation cohorts. Radiomic features from the original MRI sequences and Delta-radiomic features were extracted. Seven ML models based on radiomics were developed: logistic regression (LR), random forest (RF), support vector machine (SVM), eXtreme Gradient Boosting (XGBoost), adaptive boosting (AdaBoost), decision tree (DT), and artificial neural network (ANN). The predictive performance of the models was evaluated using receiver operating characteristic (ROC) curve analysis and calibration curves. Shapley additive explanations (SHAP) were employed to interpret the contribution of each variable and its risk threshold. Original radiomic features and Delta-radiomic features were extracted from DCE-MRI images and filtered to generate Radiomics-scores and Delta-Radiomics-scores. These were then combined with independent risk factors (Body Mass Index (BMI), V5, and pre-Child-Pugh score (pre-CP)) identified through univariate and multivariate logistic regression and Spearman correlation analysis to construct the ML models. In the training cohort, the AUC values were 0.8651 for LR, 0.7004 for RF, 0.6349 for SVM, 0.6706 for XGBoost, 0.7341 for AdaBoost, 0.6806 for DT, and 0.6786 for ANN. The corresponding accuracies were 84.4%, 65.6%, 75.0%, 65.6%, 71.9%, 68.8%, and 71.9%, respectively. The validation cohort further confirmed the superiority of the LR model, which was selected as the optimal model. SHAP analysis revealed that Delta-radiomics made a substantial positive contribution to the model. The interpretable ML model based on radiomics provides a non-invasive tool for predicting RIHT in patients with HCC, demonstrating satisfactory discriminative performance.
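
A compact sketch of the winning pipeline as described (logistic regression on radiomics scores plus clinical variables, evaluated with AUC and explained with SHAP) is shown below. The feature table is randomly generated for illustration; feature names mirror the variables listed in the abstract, and the exact preprocessing is an assumption.

```python
import numpy as np
import shap
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Hypothetical feature table: radiomics score, delta-radiomics score, BMI, V5, pre-CP
rng = np.random.default_rng(42)
X = rng.normal(size=(150, 5))
y = rng.integers(0, 2, size=150)
feature_names = ["rad_score", "delta_rad_score", "BMI", "V5", "pre_CP"]

# 7:3 training/validation split, as in the abstract
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("Validation AUC:", roc_auc_score(y_va, model.predict_proba(X_va)[:, 1]))

# SHAP values for a linear model: per-feature contribution to each prediction
explainer = shap.LinearExplainer(model, X_tr)
shap_values = explainer.shap_values(X_va)
print("Mean |SHAP| per feature:",
      dict(zip(feature_names, np.abs(shap_values).mean(axis=0))))
```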

A novel spectral transformation technique based on special functions for improved chest X-ray image classification.

Aljohani A

PubMed · Jan 1 2025
Chest X-ray image classification plays an important role in medical diagnostics. Machine learning algorithms have enhanced the performance of these classification methods by introducing advanced techniques. Such algorithms often require converting a medical image to another space in which the original data are reduced to a set of important values or moments. We developed a mechanism that converts a given medical image to a spectral space whose basis is composed of special functions. In this study, we propose a chest X-ray image classification method based on spectral coefficients. The spectral coefficients are based on an orthogonal system of smooth Legendre-type polynomials. We developed the mathematical theory to calculate spectral moments in the Legendre polynomial space and used these moments to train traditional classifiers such as SVM and random forest for the classification task. The procedure was applied to a recent data set of X-ray images comprising three classes of patients: normal, COVID-19-infected, and pneumonia. The moments designed in this study, when used with SVM or random forest, improve their ability to classify a given X-ray image with high accuracy. A parametric study of the proposed approach is presented, and the performance of the spectral moments is evaluated with the support vector machine and random forest algorithms. The efficiency and accuracy of the proposed method are presented in detail. All simulations were performed in MATLAB and Python: image pre-processing and spectral moment generation were performed in MATLAB, and the classifiers were implemented in Python. We observed that the proposed approach works well and provides satisfactory results (0.975 accuracy); however, further studies are required to establish a more accurate and faster version of this approach.
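
The key step, projecting an image onto a Legendre polynomial basis and using the resulting spectral moments as classifier features, can be sketched briefly in Python with NumPy and scikit-learn. The moment order, normalization, and the toy random images are assumptions; the paper's full pipeline (and its MATLAB preprocessing) is not reproduced here.

```python
import numpy as np
from numpy.polynomial import legendre
from sklearn.svm import SVC

def legendre_moments(img: np.ndarray, order: int = 8) -> np.ndarray:
    """Project an image onto a 2-D Legendre polynomial basis and return the
    spectral coefficients (moments). Simplified sketch of the idea in the paper."""
    h, w = img.shape
    x = np.linspace(-1, 1, w)
    y = np.linspace(-1, 1, h)
    Px = legendre.legvander(x, order)   # P_0..P_order evaluated at each column coordinate
    Py = legendre.legvander(y, order)   # P_0..P_order evaluated at each row coordinate
    # moment[m, n] = sum_{i,j} P_m(y_i) * img[i, j] * P_n(x_j)
    return (Py.T @ img @ Px).ravel()

# Hypothetical tiny training run on random "X-ray" images with 3 classes
rng = np.random.default_rng(0)
images = rng.random((30, 64, 64))
labels = rng.integers(0, 3, size=30)    # 0 = normal, 1 = COVID-19, 2 = pneumonia
features = np.stack([legendre_moments(im) for im in images])

clf = SVC(kernel="rbf").fit(features, labels)
print("Training accuracy:", clf.score(features, labels))
```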

Improving lung cancer diagnosis and survival prediction with deep learning and CT imaging.

Wang X, Sharpnack J, Lee TCM

PubMed · Jan 1 2025
Lung cancer is a major cause of cancer-related deaths, and early diagnosis and treatment are crucial for improving patients' survival outcomes. In this paper, we propose to employ convolutional neural networks to model the non-linear relationship between the risk of lung cancer and the lungs' morphology revealed in CT images. We apply a mini-batched loss that extends the Cox proportional hazards model to handle the non-convexity induced by neural networks and enables training on large data sets. Additionally, we propose to combine the mini-batched loss with binary cross-entropy to predict both lung cancer occurrence and the risk of mortality. Simulation results demonstrate the effectiveness of the mini-batched loss with and without the censoring mechanism, as well as its combination with binary cross-entropy. We evaluate our approach on the National Lung Screening Trial data set with several 3D convolutional neural network architectures, achieving high AUC and C-index scores for lung cancer classification and survival prediction. These results, obtained from simulations and real data experiments, highlight the potential of our approach to improve the diagnosis and treatment of lung cancer.
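
A minimal sketch of a mini-batch Cox partial-likelihood loss combined with binary cross-entropy is given below, assuming PyTorch. This is a standard formulation of the idea described in the abstract, not necessarily the authors' exact loss; the batch data are hypothetical.

```python
import torch
import torch.nn.functional as F

def cox_partial_likelihood_loss(risk, time, event):
    """Negative Cox partial log-likelihood over a mini-batch.
    risk: predicted log-risk scores; time: follow-up times; event: 1 if death observed.
    Standard mini-batch approximation of the Cox model."""
    order = torch.argsort(time, descending=True)      # descending time defines the risk sets
    risk, event = risk[order], event[order]
    log_cumsum = torch.logcumsumexp(risk, dim=0)      # log of each risk-set denominator
    return -((risk - log_cumsum) * event).sum() / event.sum().clamp(min=1)

# Hypothetical batch: CNN risk scores plus a separate cancer-occurrence head
risk_scores = torch.randn(16, requires_grad=True)
times = torch.rand(16) * 6.0                          # e.g. years of follow-up
events = torch.randint(0, 2, (16,)).float()           # 1 = death observed, 0 = censored

cancer_logits = torch.randn(16, requires_grad=True)
cancer_labels = torch.randint(0, 2, (16,)).float()

loss = cox_partial_likelihood_loss(risk_scores, times, events) \
     + F.binary_cross_entropy_with_logits(cancer_logits, cancer_labels)
loss.backward()
print(float(loss))
```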

Radiomics and Deep Learning as Important Techniques of Artificial Intelligence - Diagnosing Perspectives in Cytokeratin 19 Positive Hepatocellular Carcinoma.

Wang F, Yan C, Huang X, He J, Yang M, Xian D

PubMed · Jan 1 2025
Currently, there are inconsistencies among studies on preoperative prediction of Cytokeratin 19 (CK19) expression in HCC using traditional imaging, radiomics, and deep learning. We aimed to systematically analyze and compare the performance of non-invasive methods for predicting CK19-positive HCC, thereby providing insights for the stratified management of HCC patients. A comprehensive literature search was conducted in PubMed, EMBASE, Web of Science, and the Cochrane Library from inception to February 2025. Two investigators independently screened and extracted data based on inclusion and exclusion criteria. Eligible studies were included, and key findings were summarized in tables to provide a clear overview. Ultimately, 22 studies involving 3395 HCC patients were included: 72.7% (16/22) focused on traditional imaging, 36.4% (8/22) on radiomics, 9.1% (2/22) on deep learning, and 54.5% (12/22) on combined models. Magnetic resonance imaging was the most commonly used imaging modality (19/22), and over half of the studies (12/22) were published between 2022 and 2025. Moreover, 27.3% (6/22) were multicenter studies, 36.4% (8/22) included a validation set, and only 13.6% (3/22) were prospective. The area under the curve (AUC) of models using clinical and traditional imaging ranged from 0.560 to 0.917, that of radiomics models from 0.648 to 0.951, and that of deep learning models from 0.718 to 0.820. Notably, the AUC of combined models integrating clinical, imaging, radiomics, and deep learning information ranged from 0.614 to 0.995. Nevertheless, multicenter external data were limited, with only 13.6% (3/22) of studies incorporating external validation. The combined model integrating traditional imaging, radiomics, and deep learning achieves excellent potential and performance for predicting CK19 in HCC. Based on current limitations, future research should focus on building an easy-to-use dynamic online tool, combining multicenter multimodal imaging and advanced deep learning approaches to enhance the accuracy and robustness of model predictions.