A hybrid AI method for lung cancer classification using explainable AI techniques.

Shivwanshi RR, Nirala NS

PubMed · May 8, 2025
The use of artificial intelligence (AI) methods for the analysis of computed tomography (CT) images has greatly contributed to the development of effective computer-assisted diagnosis (CAD) systems for lung cancer (LC). However, complex structures, multiple radiographic interrelations, and the dynamic locations of abnormalities within lung CT images make it difficult to extract the relevant information needed to build LC CAD systems. This paper addresses these problems by presenting a hybrid method for LC malignancy classification, which may help researchers and experts engineer the model's performance by observing how the model makes decisions. The proposed methodology, named IncCat-LCC: Explainer (InceptionNet CatBoost LC Classification: Explainer), consists of feature extraction (FE) using handcrafted radiomic feature (HcRdF) extraction, InceptionNet CNN feature (INCF) extraction, and Vision Transformer feature (ViTF) extraction, followed by XGBoost (XGB)-based feature selection and GPU-based CatBoost (CB) classification. The proposed framework achieves the highest performance scores for lung nodule multiclass malignancy classification, with accuracy, precision, recall, F1 score, specificity, and area under the ROC curve of 96.74%, 93.68%, 96.74%, 95.19%, 98.47%, and 99.76%, respectively, for the highly normal class. Examining the explainable artificial intelligence (XAI) explanations helps readers understand the model's performance and the statistical outcomes of the evaluation parameters. The work presented in this article may improve existing LC CAD systems and help identify, through XAI, the factors contributing to enhanced performance and reliability.
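
The final stage of the pipeline described above is tree-based feature selection feeding a gradient-boosted classifier. Below is a minimal sketch of that stage, assuming the handcrafted, InceptionNet, and ViT features have already been extracted and stacked into one matrix; the data, feature count, and cutoff are illustrative placeholders, not the authors' code.

```python
# Sketch: XGBoost-based feature ranking followed by CatBoost classification
# on fused (radiomic + CNN + ViT) features. Synthetic data stand in for the
# real feature matrix.
import numpy as np
from xgboost import XGBClassifier
from catboost import CatBoostClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 900))      # stand-in for fused HcRdF + INCF + ViTF features
y = rng.integers(0, 3, size=400)     # stand-in multiclass malignancy labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Rank features with XGBoost and keep the top k for the final classifier.
selector = XGBClassifier(n_estimators=200, max_depth=4, eval_metric="mlogloss")
selector.fit(X_tr, y_tr)
top_k = np.argsort(selector.feature_importances_)[::-1][:128]  # illustrative cutoff

# Gradient-boosted classification on the selected features (task_type="GPU" if available).
clf = CatBoostClassifier(iterations=500, depth=6, verbose=0)
clf.fit(X_tr[:, top_k], y_tr)
print("test accuracy:", clf.score(X_te[:, top_k], y_te))
```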

A diffusion-stimulated CT-US registration model with self-supervised learning and synthetic-to-real domain adaptation.

Li S, Jia B, Huang W, Zhang X, Zhou W, Wang C, Teng G

PubMed · May 8, 2025
In abdominal interventional procedures, achieving precise registration of 2D ultrasound (US) frames with 3D computed tomography (CT) scans presents a significant challenge. Traditional tracking methods often rely on high-precision sensors, which can be prohibitively expensive. Furthermore, the clinical need for real-time registration with a broad capture range frequently exceeds the performance of standard image-based optimization techniques. Current automatic registration methods that utilize deep learning are either heavily reliant on manual annotations for training or struggle to effectively bridge the gap between different imaging domains. To address these challenges, we propose a novel diffusion-stimulated CT-US registration model. This model harnesses the physical diffusion properties of US to generate synthetic US images from preoperative CT data. Additionally, we introduce a synthetic-to-real domain adaptation strategy using a diffusion model to mitigate the discrepancies between real and synthetic US images. A dual-stream self-supervised regression neural network, trained on these synthetic images, is then used to estimate the pose within the CT space. The effectiveness of our proposed approach is verified through validation using US and CT scans from a dual-modality human abdominal phantom. The results of our experiments confirm that our method can accurately initialize the US image pose within an acceptable range of error and subsequently refine it to achieve precise alignment. This enables real-time, tracker-independent, and robust rigid registration of CT and US images.
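
The key learning component described above is a dual-stream regression network trained on synthetic US images whose poses are known by construction. The sketch below shows what such a network could look like in PyTorch; the architecture, feature sizes, and training step are assumptions for illustration, not the authors' implementation.

```python
# Sketch of a dual-stream pose regressor: one encoder per modality, fused
# features, and a head predicting a 6-DoF rigid pose (3 rotations + 3
# translations). Synthetic US poses are known, enabling self-supervision.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )

    def forward(self, x):
        return self.net(x)

class DualStreamPoseRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.us_enc = Encoder()   # stream for (synthetic or real) US frames
        self.ct_enc = Encoder()   # stream for CT-derived reference slices
        self.head = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 6))

    def forward(self, us, ct):
        return self.head(torch.cat([self.us_enc(us), self.ct_enc(ct)], dim=1))

model = DualStreamPoseRegressor()
us = torch.randn(2, 1, 128, 128)   # stand-in US frames
ct = torch.randn(2, 1, 128, 128)   # stand-in CT slices
pose_gt = torch.randn(2, 6)        # poses known for synthetic US (self-supervision)
loss = nn.functional.mse_loss(model(us, ct), pose_gt)
loss.backward()
```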

Machine learning model for diagnosing salivary gland adenoid cystic carcinoma based on clinical and ultrasound features.

Su HZ, Li ZY, Hong LC, Wu YH, Zhang F, Zhang ZB, Zhang XD

PubMed · May 8, 2025
To develop and validate machine learning (ML) models for diagnosing adenoid cystic carcinoma (ACC) of the salivary glands based on clinical and ultrasound features. A total of 365 patients with ACC or non-ACC lesions of the salivary glands treated at two centers were enrolled and divided into training, internal validation, and external validation cohorts. The synthetic minority oversampling technique was used to address class imbalance. Least absolute shrinkage and selection operator (LASSO) regression identified the optimal features, which were subsequently used to construct predictive models with five ML algorithms. Model performance was evaluated across a comprehensive array of metrics, most prominently the area under the receiver operating characteristic curve (AUC). LASSO regression identified six key features (sex, pain symptoms, number, cystic areas, rat tail sign, and polar vessel), which were used to develop the five ML models. Among these, the support vector machine (SVM) model demonstrated superior performance, achieving the highest AUCs of 0.899 and 0.913, accuracies of 90.54% and 91.53%, and F1 scores of 0.774 and 0.783 in the internal and external validation cohorts, respectively. Decision curve analysis further revealed that the SVM model offered greater clinical utility than the other models. The ML model based on clinical and US features provides an accurate and noninvasive method for distinguishing ACC from non-ACC. This machine learning model, constructed from clinical and ultrasound characteristics, serves as a valuable tool for identifying salivary gland adenoid cystic carcinoma. Rat tail sign and polar vessel on US predict adenoid cystic carcinoma (ACC). Machine learning models based on clinical and US features can identify ACC. The support vector machine model performed robustly and accurately.
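
The workflow above combines oversampling, LASSO-style feature selection, and an SVM evaluated by AUC. A minimal sketch of that chain follows; the data, feature count, and hyperparameters are illustrative assumptions, not the study's dataset or settings.

```python
# Sketch: SMOTE oversampling, L1-penalised feature selection, and an SVM
# scored by AUC, on synthetic stand-in clinical/US features.
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(365, 12))            # stand-in clinical + US features
y = (rng.random(365) < 0.2).astype(int)   # imbalanced ACC vs non-ACC labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
X_res, y_res = SMOTE(random_state=0).fit_resample(X_tr, y_tr)  # balance classes

# L1-penalised logistic regression as a LASSO-style selector; keep six features,
# mirroring the number reported in the abstract.
selector = SelectFromModel(
    LogisticRegression(penalty="l1", solver="liblinear", C=0.5),
    max_features=6, threshold=-np.inf,
)
model = Pipeline([("select", selector), ("svm", SVC(kernel="rbf", probability=True))])
model.fit(X_res, y_res)
print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```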

Cross-Institutional Evaluation of Large Language Models for Radiology Diagnosis Extraction: A Prompt-Engineering Perspective.

Moassefi M, Houshmand S, Faghani S, Chang PD, Sun SH, Khosravi B, Triphati AG, Rasool G, Bhatia NK, Folio L, Andriole KP, Gichoya JW, Erickson BJ

PubMed · May 8, 2025
The rapid evolution of large language models (LLMs) offers promising opportunities for radiology report annotation, aiding in determining the presence of specific findings. This study evaluates the effectiveness of a human-optimized prompt for labeling radiology reports across multiple institutions using LLMs. Six distinct institutions each collected 500 radiology reports: 100 in each of five categories. A standardized Python script was distributed to participating sites, allowing the use of one common, locally executed LLM with a standard human-optimized prompt. The script ran the LLM's analysis for each report and compared predictions to reference labels provided by local investigators. Model performance was measured by accuracy, and results were aggregated centrally. The human-optimized prompt demonstrated high consistency across sites and pathologies. Preliminary analysis indicates significant agreement between the LLM's outputs and the investigator-provided references across multiple institutions. At one site, eight LLMs were systematically compared, with Llama 3.1 70b achieving the highest performance in accurately identifying the specified findings. Comparable performance with Llama 3.1 70b was observed at two additional centers, demonstrating the model's robust adaptability to variations in report structures and institutional practices. Our findings illustrate the potential of optimized prompt engineering for leveraging LLMs in cross-institutional radiology report labeling. The approach is straightforward while maintaining high accuracy and adaptability. Future work will explore model robustness to diverse report structures and further refine prompts to improve generalizability.
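
The study distributes a standardized script that applies one fixed prompt per report and scores predictions against local reference labels. The sketch below shows what such a loop could look like; `run_local_llm`, the prompt text, and the CSV column names are hypothetical placeholders, not the study's actual script or prompt.

```python
# Sketch of a cross-site labeling loop: a fixed human-optimized prompt is
# applied to each report by a locally executed LLM, and predictions are
# compared with investigator-provided reference labels.
import csv

PROMPT_TEMPLATE = (
    "You are labeling radiology reports. Answer with exactly one word, "
    "'present' or 'absent', for the finding: {finding}.\n\nReport:\n{report}"
)

def run_local_llm(prompt: str) -> str:
    # Hypothetical stand-in for the site-specific local inference call
    # (e.g., a locally deployed Llama 3.1 70b).
    raise NotImplementedError("site-specific local LLM call goes here")

def label_reports(path: str, finding: str) -> float:
    """Return labeling accuracy against a CSV with columns
    'report_text' and 'reference_label' (hypothetical schema)."""
    correct = total = 0
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            prompt = PROMPT_TEMPLATE.format(finding=finding, report=row["report_text"])
            prediction = run_local_llm(prompt).strip().lower()
            correct += int(prediction == row["reference_label"].strip().lower())
            total += 1
    return correct / total if total else 0.0
```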

Medical machine learning operations: a framework to facilitate clinical AI development and deployment in radiology.

de Almeida JG, Messiou C, Withey SJ, Matos C, Koh DM, Papanikolaou N

PubMed · May 8, 2025
The integration of machine-learning technologies into radiology practice has the potential to significantly enhance diagnostic workflows and patient care. However, the successful deployment and maintenance of medical machine-learning (MedML) systems in radiology requires robust operational frameworks. Medical machine-learning operations (MedMLOps) offer a structured approach ensuring persistent MedML reliability, safety, and clinical relevance. MedML systems are increasingly employed to analyse sensitive clinical and radiological data, which continuously changes due to advancements in data acquisition and model development. These systems can alleviate the workload of radiologists by streamlining diagnostic tasks, such as image interpretation and triage. MedMLOps ensures that such systems stay accurate and dependable by facilitating continuous performance monitoring, systematic validation, and simplified model maintenance, all of which are critical to maintaining trust in machine-learning-driven diagnostics. Furthermore, MedMLOps aligns with established principles of patient data protection and regulatory compliance, including recent developments in the European Union, emphasising transparency, documentation, and safe model retraining. This enables radiologists to implement modern machine-learning tools with control and oversight at the forefront, ensuring reliable model performance within the dynamic context of clinical practice. MedMLOps empowers radiologists to deliver consistent, high-quality care with confidence, ensuring that MedML systems stay aligned with evolving medical standards and patient needs. MedMLOps can assist multiple stakeholders in radiology by ensuring models are available, continuously monitored, and easy to use and maintain, while preserving patient privacy. MedMLOps can better serve patients by facilitating the clinical implementation of cutting-edge MedML, and clinicians by ensuring that MedML models are utilised only when they are performing as expected. KEY POINTS: Question MedML applications are becoming increasingly adopted in clinics, but the necessary infrastructure to sustain these applications is currently not well-defined. Findings Adapting machine learning operations concepts enhances MedML ecosystems by improving interoperability, automating monitoring/validation, and reducing deployment burdens on clinicians and medical informaticians. Clinical relevance Implementing these solutions enables faster and safer adoption of advanced MedML models, ensuring consistent performance while reducing workload for clinicians and benefiting patient care through streamlined diagnostic workflows.
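
Continuous performance monitoring is one of the concrete operations the framework calls for. A minimal sketch of such a monitor follows; the window size, threshold, and alerting hook are illustrative assumptions, not part of any specific MedMLOps tool.

```python
# Sketch: rolling performance monitoring for a deployed MedML model. Recent
# predictions are compared with clinician-validated labels, and a review/
# retraining flag is raised when rolling accuracy drops below a threshold.
from collections import deque

class PerformanceMonitor:
    def __init__(self, window: int = 100, min_accuracy: float = 0.90):
        self.window = deque(maxlen=window)
        self.min_accuracy = min_accuracy

    def record(self, prediction, reference) -> None:
        # Called once per case for which a reference label becomes available.
        self.window.append(int(prediction == reference))

    def needs_review(self) -> bool:
        if len(self.window) < self.window.maxlen:
            return False  # not enough recent cases to judge
        return sum(self.window) / len(self.window) < self.min_accuracy

monitor = PerformanceMonitor(window=50, min_accuracy=0.90)
# monitor.record(model_output, radiologist_label)   # per validated case
# if monitor.needs_review(): trigger validation / retraining workflow
```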

Predicting treatment response to systemic therapy in advanced gallbladder cancer using multiphase enhanced CT images.

Wu J, Zheng Z, Li J, Shen X, Huang B

PubMed · May 8, 2025
Accurate estimation of treatment response can help clinicians identify patients who would potentially benefit from systemic therapy. This study aimed to develop and externally validate a model for predicting treatment response to systemic therapy in advanced gallbladder cancer (GBC). We recruited 399 eligible GBC patients across four institutions. Multivariable logistic regression analysis was performed to identify independent clinical factors related to therapeutic efficacy. A deep learning (DL) radiomics signature was developed for predicting treatment response using multiphase enhanced CT images. The DL radiomic-clinical (DLRSC) model was then built by combining the DL signature with the significant clinical factors, and its predictive performance was evaluated using the area under the curve (AUC). Gradient-weighted class activation mapping analysis was performed to help clinicians better understand the predictive results. Furthermore, patients were stratified into low- and high-score groups by the DLRSC model, and progression-free survival (PFS) and overall survival (OS) were compared between the two groups. Multivariable analysis revealed that tumor size was a significant predictor of efficacy. The DLRSC model showed strong predictive performance, with AUCs of 0.86 (95% CI, 0.82-0.89) and 0.84 (95% CI, 0.80-0.87) in the internal and external test datasets, respectively, along with good discrimination, calibration, and clinical utility. Moreover, Kaplan-Meier survival analysis revealed that patients in the low-score group, predicted by the DLRSC model to be insensitive to systemic therapy, had worse PFS and OS. The DLRSC model allows treatment response to be predicted in advanced GBC patients receiving systemic therapy, and the associated survival benefit was also assessed. Question No effective tools exist in clinical practice for identifying patients who would potentially benefit from systemic therapy. Findings Our combined model allows treatment response to systemic therapy in advanced gallbladder cancer to be predicted. Clinical relevance With the help of this model, clinicians could inform patients of the risk of potentially ineffective treatment. Such a strategy can reduce unnecessary adverse events and help reallocate societal healthcare resources.
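
Two steps above lend themselves to a short sketch: fusing the DL signature with clinical factors into one score, and stratifying survival by that score. The example below uses synthetic data, a plain logistic combiner, and the lifelines package for Kaplan-Meier curves; all of these are assumptions for illustration, not the authors' implementation.

```python
# Sketch: combine a DL radiomics signature with a clinical factor into a
# single predicted score, evaluate by AUC, and stratify survival by score.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(0)
n = 300
dl_signature = rng.normal(size=n)                    # output of the DL radiomics model
tumor_size = rng.normal(loc=3.0, scale=1.0, size=n)  # significant clinical factor
response = (rng.random(n) < 0.4).astype(int)         # stand-in treatment response labels

X = np.column_stack([dl_signature, tumor_size])
clf = LogisticRegression().fit(X, response)
score = clf.predict_proba(X)[:, 1]
print("AUC:", roc_auc_score(response, score))

# Stratify into low-/high-score groups and fit Kaplan-Meier curves for PFS.
high = score >= np.median(score)
pfs_months = rng.exponential(scale=10, size=n)   # synthetic follow-up times
progressed = rng.random(n) < 0.7                 # synthetic event indicator
km_high = KaplanMeierFitter().fit(pfs_months[high], progressed[high], label="high score")
km_low = KaplanMeierFitter().fit(pfs_months[~high], progressed[~high], label="low score")
print(km_high.median_survival_time_, km_low.median_survival_time_)
```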

An automated hip fracture detection and classification system on pelvic radiographs and comparison with 35 clinicians.

Yilmaz A, Gem K, Kalebasi M, Varol R, Gencoglan ZO, Samoylenko Y, Tosyali HK, Okcu G, Uvet H

PubMed · May 8, 2025
Accurate diagnosis of orthopedic injuries, especially pelvic and hip fractures, is vital in trauma management. While pelvic radiographs (PXRs) are widely used, misdiagnosis is common. This study proposes an automated system that uses convolutional neural networks (CNNs) to detect potential fracture areas and predict fracture conditions, aiming to outperform traditional object detection-based systems. We developed two deep learning models for hip fracture detection and prediction, trained on PXRs from three hospitals. The first model utilized automated hip area detection, cropping, and classification of the resulting patches. The images were preprocessed using the Contrast Limited Adaptive Histogram Equalization (CLAHE) algorithm. The YOLOv5 architecture was employed for the object detection model, while three different pre-trained deep neural network (DNN) architectures were used for classification, applying transfer learning. Their performance was evaluated on a test dataset and compared with that of 35 clinicians. YOLOv5 achieved 92.66% accuracy on regular images and 88.89% on CLAHE-enhanced images. The classifier models, MobileNetV2, Xception, and InceptionResNetV2, achieved accuracies between 94.66% and 97.67%. In contrast, the clinicians demonstrated a mean accuracy of 84.53% and longer prediction durations. The DNN models showed significantly better accuracy and speed than the human evaluators (p < 0.0005, p < 0.01). These DNN models show promising utility in trauma diagnosis due to their high accuracy and speed. Integrating such systems into clinical practice may enhance the diagnostic efficiency of PXRs.
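
The preprocessing and classification stages above (CLAHE enhancement, cropping the detected hip region, transfer learning on the crop) can be sketched briefly. The example below uses OpenCV and Keras as assumed tooling; the bounding box, input size, and head are placeholders, and the crop would come from the YOLOv5 detector in the real pipeline.

```python
# Sketch: CLAHE-enhance a pelvic radiograph, crop a detected hip region, and
# build a transfer-learning classifier for the resulting patches.
import cv2
import numpy as np
import tensorflow as tf

def preprocess_and_crop(path: str, box: tuple[int, int, int, int]) -> np.ndarray:
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(gray)
    x1, y1, x2, y2 = box                         # hypothetical detector output
    patch = cv2.resize(enhanced[y1:y2, x1:x2], (224, 224))
    return np.repeat(patch[..., None], 3, axis=-1) / 255.0  # 3-channel input

# Transfer learning: frozen ImageNet backbone plus a small classification head.
base = tf.keras.applications.MobileNetV2(include_top=False, weights="imagenet",
                                          input_shape=(224, 224, 3), pooling="avg")
base.trainable = False
model = tf.keras.Sequential([base, tf.keras.layers.Dense(2, activation="softmax")])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```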

Construction of a risk prediction model for sentinel lymph node metastasis in breast cancer patients based on machine learning algorithms.

Yang Q, Liu C, Wang Y, Dong G, Sun J

PubMed · May 8, 2025
The aim of this study was to develop and validate a machine learning (ML)-based prediction model for sentinel lymph node metastasis in breast cancer, in order to identify patients at high risk of sentinel lymph node metastasis. In this machine learning study, we retrospectively collected 225 female breast cancer patients who underwent sentinel lymph node biopsy (SLNB). Feature screening was performed using logistic regression analysis. Subsequently, five ML algorithms, namely LOGIT, LASSO, XGBoost, random forest, and GBM, were employed to train and develop the models. In addition, model interpretation was performed with Shapley Additive Explanations (SHAP) analysis to clarify the importance of each feature and the model's decision basis. Combined univariate and multivariate logistic regression analysis identified Multifocal, LVI, Maximum Diameter, Shape US, and Maximum Cortical Thickness as significant predictors. We then leveraged machine learning algorithms, particularly the random forest model, to develop a predictive model for sentinel lymph node metastasis in breast cancer. Finally, the SHAP method identified Maximum Diameter and Maximum Cortical Thickness as the primary decision factors influencing the ML model's predictions. With the integration of pathological and imaging characteristics, ML algorithms can accurately predict sentinel lymph node metastasis in breast cancer patients. The random forest model showed the best performance. Incorporating these models into the clinic can help clinicians identify patients at risk of sentinel lymph node metastasis of breast cancer and make more reasonable treatment decisions.
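
The random forest plus SHAP interpretation step described above can be sketched as follows. Feature names mirror the predictors reported in the abstract, but the data are synthetic placeholders and the model settings are assumptions.

```python
# Sketch: random forest classifier with SHAP-based feature attribution.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
features = ["Multifocal", "LVI", "Maximum Diameter", "Shape US",
            "Maximum Cortical Thickness"]
X = pd.DataFrame(rng.normal(size=(225, len(features))), columns=features)
y = (rng.random(225) < 0.3).astype(int)   # stand-in SLN metastasis labels

rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)

# SHAP values quantify each feature's contribution to individual predictions.
explainer = shap.TreeExplainer(rf)
sv = explainer.shap_values(X)
# Depending on the SHAP version, sv is a per-class list or a 3-D array.
sv_pos = sv[1] if isinstance(sv, list) else sv[..., 1]   # positive-class attributions
mean_abs = np.abs(sv_pos).mean(axis=0)
for name, val in sorted(zip(features, mean_abs), key=lambda t: -t[1]):
    print(f"{name}: {val:.3f}")
```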

Quantitative analysis and clinical determinants of orthodontically induced root resorption using automated tooth segmentation from CBCT imaging.

Lin J, Zheng Q, Wu Y, Zhou M, Chen J, Wang X, Kang T, Zhang W, Chen X

PubMed · May 8, 2025
Orthodontically induced root resorption (OIRR) is difficult to assess accurately using traditional 2D imaging due to distortion and low sensitivity. While CBCT offers more precise 3D evaluation, manual segmentation remains labor-intensive and prone to variability. Recent advances in deep learning enable automatic, accurate tooth segmentation from CBCT images. This study applies deep learning and CBCT technology to quantify OIRR and analyze its risk factors, aiming to improve assessment accuracy, efficiency, and clinical decision-making. CBCT scans of 108 orthodontic patients were retrospectively analyzed to assess OIRR using deep learning-based tooth segmentation and volumetric analysis. Statistical analysis was performed using linear regression to evaluate the influence of patient-related factors, with p < 0.05 considered statistically significant. Root volume decreased significantly after orthodontic treatment (p < 0.001). Age, gender, open (deep) bite, severe crowding, and other factors significantly influenced root resorption rates at different tooth positions. Multivariable regression analysis showed that these factors can predict root resorption, explaining 3% to 15.4% of the variance. This study applied a deep learning model to accurately assess root volume changes from CBCT, revealing significant root volume reduction after orthodontic treatment. Underage patients experienced less root resorption, and factors such as anterior open bite and deep overbite influenced resorption in specific teeth, whereas skeletal pattern, overjet, and underbite were not significant predictors.
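
The quantification and regression steps above reduce to two simple operations: converting a segmented tooth mask to a volume, and regressing the resorption rate on patient factors. The sketch below illustrates both with synthetic masks and covariates; voxel sizes, factor names, and values are placeholders, not the study's data.

```python
# Sketch: root volume from a binary segmentation mask, percentage resorption
# between pre-/post-treatment scans, and a linear regression on patient factors.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def root_volume_mm3(mask: np.ndarray, voxel_size_mm: tuple[float, float, float]) -> float:
    """Volume of a binary tooth/root segmentation mask in cubic millimetres."""
    return float(mask.sum()) * float(np.prod(voxel_size_mm))

# Example masks standing in for the automated pre-/post-treatment segmentations.
pre = np.zeros((64, 64, 64), dtype=bool); pre[20:40, 20:40, 20:44] = True
post = np.zeros_like(pre); post[20:40, 20:40, 20:42] = True
resorption_pct = 100 * (1 - root_volume_mm3(post, (0.3, 0.3, 0.3))
                        / root_volume_mm3(pre, (0.3, 0.3, 0.3)))
print(f"resorption: {resorption_pct:.1f}%")

# Linear regression of resorption rate on patient-level factors (synthetic data).
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "resorption_pct": rng.normal(5, 2, 108),
    "age": rng.integers(12, 40, 108),
    "open_bite": rng.integers(0, 2, 108),
    "severe_crowding": rng.integers(0, 2, 108),
})
ols = sm.OLS(df["resorption_pct"],
             sm.add_constant(df[["age", "open_bite", "severe_crowding"]])).fit()
print(ols.summary())
```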

nnU-Net-based high-resolution CT features quantification for interstitial lung diseases.

Lin Q, Zhang Z, Xiong X, Chen X, Ma T, Chen Y, Li T, Long Z, Luo Q, Sun Y, Jiang L, He W, Deng Y

PubMed · May 8, 2025
To develop a new high-resolution CT (HRCT) abnormality quantification tool (CVILDES) for interstitial lung diseases (ILDs) based on the nnU-Net network structure, and to determine whether the quantitative parameters derived from this new software offer a reliable and precise assessment in a clinical setting, in line with expert visual evaluation. HRCT scans from 83 cases of ILD and 20 cases of other diffuse lung diseases were labeled section by section by multiple radiologists and used as training data for a deep learning model based on nnU-Net, employing a supervised learning approach. For clinical validation, a cohort including 51 cases of interstitial pneumonia with autoimmune features (IPAF) and 14 cases of idiopathic pulmonary fibrosis (IPF) had CT parenchymal patterns evaluated quantitatively with CVILDES and by visual evaluation. We then assessed the correlation between the two methodologies for ILD feature quantification. Furthermore, the correlation between the quantitative results derived from the two methods and pulmonary function parameters (DLCO%, FVC%, and FEV%) was compared. All CT data were successfully quantified using CVILDES. CVILDES-quantified results (total ILD extent, ground-glass opacity, consolidation, reticular pattern, and honeycombing) showed a strong correlation with visual evaluation and were numerically close to the visual evaluation results (r = 0.64-0.89, p < 0.0001), particularly for the extent of fibrosis (r = 0.82, p < 0.0001). As judged by correlation with pulmonary function parameters, CVILDES quantification was comparable or even superior to visual evaluation. nnU-Net-based CVILDES was comparable to visual evaluation for quantification of ILD abnormalities. Question Visual assessment of ILD on HRCT is time-consuming and exhibits poor inter-observer agreement, making it challenging to accurately evaluate therapeutic efficacy. Findings The nnU-Net-based computer vision ILD evaluation system (CVILDES) accurately segmented and quantified the HRCT features of ILD, with results comparable to visual evaluation. Clinical relevance This study developed a new tool with the potential to be applied in the quantitative assessment of ILD.
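
The quantification step above amounts to computing the percentage extent of each segmented pattern within the lung and correlating it with visual scores. A minimal sketch follows; the label values, case counts, and data are illustrative assumptions, not CVILDES outputs.

```python
# Sketch: per-pattern percentage extent from an nnU-Net-style label map, and
# correlation of automated extents with radiologists' visual scores.
import numpy as np
from scipy.stats import spearmanr

PATTERNS = {1: "ground-glass opacity", 2: "consolidation",
            3: "reticular pattern", 4: "honeycombing"}   # hypothetical label values

def pattern_extents(label_map: np.ndarray, lung_mask: np.ndarray) -> dict[str, float]:
    """Percentage of the lung volume occupied by each segmented pattern."""
    lung_voxels = lung_mask.sum()
    return {name: 100.0 * np.logical_and(label_map == lab, lung_mask).sum() / lung_voxels
            for lab, name in PATTERNS.items()}

# Correlate automated extents with visual scores across cases (synthetic data).
rng = np.random.default_rng(0)
auto_extent = rng.uniform(0, 40, 65)               # stand-in automated measurements
visual_score = auto_extent + rng.normal(0, 5, 65)  # stand-in visual scores
rho, p = spearmanr(auto_extent, visual_score)
print(f"Spearman r = {rho:.2f}, p = {p:.1e}")
```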