Page 164 of 2442432 results

Enhancing Lung Cancer Diagnosis: An Optimization-Driven Deep Learning Approach with CT Imaging.

Lakshminarasimha K, Priyeshkumar AT, Karthikeyan M, Sakthivel R

pubmed logopapers Jun 23 2025
Lung cancer (LC) remains a leading cause of mortality worldwide, affecting individuals across all genders and age groups. Early and accurate diagnosis is critical for effective treatment and improved survival rates. Computed Tomography (CT) imaging is widely used for LC detection and classification. However, manual identification can be time-consuming and error-prone due to the visual similarities among various LC types. Deep learning (DL) has shown significant promise in medical image analysis. Although numerous studies have investigated LC detection using deep learning techniques, the effective extraction of highly correlated features remains a significant challenge, thereby limiting diagnostic accuracy. Furthermore, most existing models encounter substantial computational complexity and find it difficult to efficiently handle the high-dimensional nature of CT images. This study introduces an optimized CBAM-EfficientNet model to enhance feature extraction and improve LC classification. EfficientNet is utilized to reduce computational complexity, while the Convolutional Block Attention Module (CBAM) emphasizes essential spatial and channel features. Additionally, optimization algorithms including Gray Wolf Optimization (GWO), Whale Optimization (WO), and the Bat Algorithm (BA) are applied to fine-tune hyperparameters and boost predictive accuracy. The proposed model, integrated with different optimization strategies, is evaluated on two benchmark datasets. The GWO-based CBAM-EfficientNet achieves outstanding classification accuracies of 99.81% and 99.25% on the Lung-PET-CT-Dx and LIDC-IDRI datasets, respectively. Following GWO, the BA-based CBAM-EfficientNet achieves 99.44% and 98.75% accuracy on the same datasets. Comparative analysis highlights the superiority of the proposed model over existing approaches, demonstrating strong potential for reliable and automated LC diagnosis. 
Its lightweight architecture also supports real-time implementation, offering valuable assistance to radiologists in high-demand clinical environments.
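The GWO-based hyperparameter tuning described above can be sketched in plain NumPy. This is a minimal illustration of the grey-wolf position-update rule applied to a search over (log learning rate, dropout), with a toy quadratic standing in for the real validation loss; `gwo_minimize`, the bounds, and the population settings are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def gwo_minimize(objective, bounds, n_wolves=12, n_iters=50, seed=0):
    """Grey Wolf Optimizer: minimize `objective` over the box `bounds`."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    dim = len(bounds)
    wolves = rng.uniform(lo, hi, size=(n_wolves, dim))
    for it in range(n_iters):
        fitness = np.array([objective(w) for w in wolves])
        leaders = wolves[np.argsort(fitness)[:3]]  # alpha, beta, delta wolves
        a = 2 - 2 * it / n_iters                   # control parameter decays 2 -> 0
        for i in range(n_wolves):
            new_pos = np.zeros(dim)
            for leader in leaders:                 # average the three leader pulls
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                D = np.abs(C * leader - wolves[i])
                new_pos += (leader - A * D) / 3
            wolves[i] = np.clip(new_pos, lo, hi)
    fitness = np.array([objective(w) for w in wolves])
    best = wolves[np.argmin(fitness)]
    return best, fitness.min()

# Toy surrogate for validation loss over (log10 learning rate, dropout);
# the true optimum of this stand-in objective sits at (-3, 0.3).
val_loss = lambda p: (p[0] + 3) ** 2 + (p[1] - 0.3) ** 2
best, loss = gwo_minimize(val_loss, bounds=[(-5, -1), (0.0, 0.6)])
print(best, loss)
```

In the real pipeline the objective would train and validate a CBAM-EfficientNet per candidate setting, which is far costlier per evaluation but follows the same update loop.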

Cost-effectiveness of a novel AI technology to quantify coronary inflammation and cardiovascular risk in patients undergoing routine coronary computed tomography angiography.

Tsiachristas A, Chan K, Wahome E, Kearns B, Patel P, Lyasheva M, Syed N, Fry S, Halborg T, West H, Nicol E, Adlam D, Modi B, Kardos A, Greenwood JP, Sabharwal N, De Maria GL, Munir S, McAlindon E, Sohan Y, Tomlins P, Siddique M, Shirodaria C, Blankstein R, Desai M, Neubauer S, Channon KM, Deanfield J, Akehurst R, Antoniades C

pubmed logopapers Jun 23 2025
Coronary computed tomography angiography (CCTA) is a first-line investigation for chest pain in patients with suspected obstructive coronary artery disease (CAD). However, many acute cardiac events occur in the absence of obstructive CAD. We assessed the lifetime cost-effectiveness of integrating a novel artificial intelligence-enhanced image analysis algorithm (AI-Risk) that stratifies the risk of cardiac events by quantifying coronary inflammation, combined with the extent of coronary artery plaque and clinical risk factors, by analysing images from routine CCTA. A hybrid decision-tree with population cohort Markov model was developed from 3393 consecutive patients who underwent routine CCTA for suspected obstructive CAD and were followed up for major adverse cardiac events over a median (interquartile range) of 7.7 (6.4-9.1) years. In a prospective real-world evaluation survey of 744 consecutive patients undergoing CCTA for chest pain investigation, the availability of AI-Risk assessment led to treatment initiation or intensification in 45% of patients. In a further prospective study of 1214 consecutive patients with extensive guideline-recommended cardiovascular risk profiling, AI-Risk stratification led to treatment initiation or intensification in 39% of patients beyond the current clinical guideline recommendations. Treatment guided by AI-Risk modelled over a lifetime horizon could lead to fewer cardiac events (relative reductions of 11%, 4%, 4%, and 12% for myocardial infarction, ischaemic stroke, heart failure, and cardiac death, respectively). Implementing AI-Risk classification in routine interpretation of CCTA is highly likely to be cost-effective (incremental cost-effectiveness ratio £1371-3244), both in scenarios of current guideline compliance and when applied only to patients without obstructive CAD. Compared with standard care, the addition of AI-Risk assessment in routine CCTA interpretation is cost-effective, refining risk-guided medical management.
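The cohort Markov model at the core of such an analysis can be illustrated with a minimal sketch: a three-state annual-cycle model whose discounted QALYs and costs yield an incremental cost-effectiveness ratio (ICER). All transition probabilities, utilities, and costs below are made-up placeholders for illustration, not the study's calibrated values.

```python
import numpy as np

def cohort_run(P, qaly, cost, years=40, discount=0.035):
    """Propagate a cohort through a Markov model, returning
    discounted total QALYs and costs per patient."""
    state = np.array([1.0, 0.0, 0.0])        # everyone starts event-free
    total_q = total_c = 0.0
    for t in range(years):
        d = 1 / (1 + discount) ** t          # annual discounting
        total_q += d * state @ qaly
        total_c += d * state @ cost
        state = state @ P                    # one annual transition cycle
    return total_q, total_c

# Hypothetical annual transitions between event-free, post-event, and dead.
P_std = np.array([[0.96, 0.03, 0.01],
                  [0.00, 0.93, 0.07],
                  [0.00, 0.00, 1.00]])
P_ai  = np.array([[0.97, 0.02, 0.01],        # fewer events under AI-guided care
                  [0.00, 0.94, 0.06],
                  [0.00, 0.00, 1.00]])
qaly = np.array([0.85, 0.70, 0.0])           # utility weights per state
cost_std = np.array([200.0, 2500.0, 0.0])
cost_ai  = np.array([260.0, 2500.0, 0.0])    # adds the AI-Risk analysis cost

q0, c0 = cohort_run(P_std, qaly, cost_std)
q1, c1 = cohort_run(P_ai, qaly, cost_ai)
icer = (c1 - c0) / (q1 - q0)                 # incremental cost per QALY gained
print(round(icer))
```

The published model adds a decision tree in front of the Markov cohort to route patients by AI-Risk result; the sketch only shows the cohort-propagation and ICER arithmetic.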

GPT-4o and Specialized AI in Breast Ultrasound Imaging: A Comparative Study on Accuracy, Agreement, Limitations, and Diagnostic Potential.

Sanli DET, Sanli AN, Buyukdereli Atadag Y, Kurt A, Esmerer E

pubmed logopapers Jun 23 2025
This study aimed to evaluate the ability of ChatGPT and Breast Ultrasound Helper, a special ChatGPT-based subprogram trained on ultrasound image analysis, to analyze and differentiate benign and malignant breast lesions on ultrasound images. Ultrasound images of histopathologically confirmed breast cancer and fibroadenoma patients were read by GPT-4o (the latest ChatGPT version) and by Breast Ultrasound Helper (BUH), a tool from the "Explore" section of ChatGPT. Both were prompted in English using ACR BI-RADS Breast Ultrasound Lexicon criteria: lesion shape, orientation, margin, internal echo pattern, echogenicity, posterior acoustic features, microcalcifications or hyperechoic foci, perilesional hyperechoic rim, edema or architectural distortion, lesion size, and BI-RADS category. Two experienced radiologists evaluated the images and the programs' responses in consensus. The outputs, BI-RADS category agreement, and benign/malignant discrimination were statistically compared. A total of 232 ultrasound images were analyzed, of which 133 (57.3%) were malignant and 99 (42.7%) benign. In comparative analysis, BUH showed superior performance overall, with higher kappa values and statistically significant results across multiple features (P < .001). However, the overall level of agreement with the radiologists' consensus for all features was similar for BUH (κ: 0.387-0.755) and GPT-4o (κ: 0.317-0.803). On the other hand, BI-RADS category agreement was slightly higher for GPT-4o than for BUH (69.4% versus 65.9%), but BUH was slightly more successful in distinguishing benign from malignant lesions (65.9% versus 67.7%). Although both AI tools show moderate-to-good performance in ultrasound image analysis, their limited concordance with radiologists' evaluations and BI-RADS categorization suggests that their clinical application in breast ultrasound interpretation remains premature and unreliable.
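Since agreement in this study is summarized with kappa values, a small self-contained sketch of Cohen's kappa may be useful; the example labels below are invented for illustration and are not taken from the study data.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters,
    (observed agreement - expected agreement) / (1 - expected agreement)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    labels = set(ca) | set(cb)
    expected = sum(ca[l] * cb[l] for l in labels) / n ** 2  # chance agreement
    return (observed - expected) / (1 - expected)

model = ["malignant", "benign", "malignant", "malignant", "benign", "benign"]
rads  = ["malignant", "benign", "benign", "malignant", "benign", "malignant"]
print(round(cohens_kappa(model, rads), 3))  # 0.333
```

Values around 0.3-0.75, as reported for GPT-4o and BUH, correspond to fair-to-substantial but clearly imperfect agreement on common interpretation scales.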

Enabling Early Identification of Malignant Vertebral Compression Fractures via 2.5D Convolutional Neural Network Model with CT Image Analysis.

Huang C, Li E, Hu J, Huang Y, Wu Y, Wu B, Tang J, Yang L

pubmed logopapers Jun 23 2025
This study employed a retrospective data analysis approach combined with model development and validation. It introduces a 2.5D convolutional neural network (CNN) model leveraging CT imaging to facilitate the early detection of malignant vertebral compression fractures (MVCFs), potentially reducing reliance on invasive biopsies. Vertebral histopathological biopsy is recognized as the gold standard for differentiating between osteoporotic and malignant vertebral compression fractures (VCFs). Nevertheless, its application is restricted by its invasive nature and high cost, highlighting the necessity for alternative methods to identify MVCFs. The clinical, imaging, and pathological data of patients who underwent vertebral augmentation and biopsy at Institution 1 and Institution 2 were collected and analyzed. Based on the vertebral CT images of these patients, 2D, 2.5D, and 3D CNN models were developed to distinguish patients with osteoporotic vertebral compression fractures (OVCFs) from those with MVCFs. To verify the clinical application value of the CNN model, two rounds of reader studies were performed. The 2.5D CNN model performed well, and its performance in identifying MVCF patients was significantly superior to that of the 2D and 3D CNN models. In the training dataset, the area under the receiver operating characteristic curve (AUC) of the 2.5D CNN model was 0.996, with an F1 score of 0.915. In the external test cohort, the AUC was 0.815, with an F1 score of 0.714. Clinicians' ability to identify MVCF patients was also enhanced by the 2.5D CNN model: with its assistance, senior clinicians achieved an AUC of 0.882 (F1 score 0.774) and junior clinicians an AUC of 0.784 (F1 score 0.667). The development of our 2.5D CNN model marks a significant step towards non-invasive identification of MVCF patients.
The 2.5D CNN model may be a potential model to assist clinicians in better identifying MVCF patients.
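The 2.5D idea, feeding a 2D CNN a center slice plus its axial neighbors as extra channels so that local 3D context is preserved at 2D cost, can be sketched as follows; the function name, clamping policy, and neighbor count are illustrative assumptions, not the authors' code.

```python
import numpy as np

def make_25d_input(volume, center_idx, n_neighbors=1):
    """Stack a center slice with its axial neighbors as channels,
    giving 2D-CNN-sized inputs that retain local 3D context."""
    depth = volume.shape[0]
    idxs = [min(max(center_idx + k, 0), depth - 1)       # clamp at volume borders
            for k in range(-n_neighbors, n_neighbors + 1)]
    return np.stack([volume[i] for i in idxs], axis=0)   # (channels, H, W)

ct = np.random.rand(40, 64, 64)          # toy CT volume: 40 slices of 64x64
x = make_25d_input(ct, center_idx=20)
print(x.shape)  # (3, 64, 64)
```

A 2D backbone then consumes `x` like a multi-channel image, which is one plausible reason a 2.5D model can outperform a full 3D CNN on limited medical data: far fewer parameters for nearly the same local context.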

Comparative Analysis of Multimodal Large Language Models GPT-4o and o1 vs Clinicians in Clinical Case Challenge Questions

Jung, J., Kim, H., Bae, S., Park, J. Y.

medrxiv logopreprint Jun 23 2025
Background: Generative Pre-trained Transformer 4 (GPT-4) has demonstrated strong performance in standardized medical examinations but has limitations in real-world clinical settings. The newly released multimodal GPT-4o model, which integrates text and image inputs to enhance diagnostic capabilities, and the multimodal o1 model, which incorporates advanced reasoning, may address these limitations. Objective: This study aimed to compare the performance of GPT-4o and o1 against clinicians in real-world clinical case challenges. Methods: This retrospective, cross-sectional study used Medscape case challenge questions from May 2011 to June 2024 (n = 1,426). Each case included text and images of patient history, physical examination findings, diagnostic test results, and imaging studies. Clinicians were required to choose one answer from among multiple options, with the most frequent response defined as the clinicians' decision. Data-based decisions were made using GPT models (3.5 Turbo, 4 Turbo, 4 Omni, and o1) to interpret the text and images, followed by a process to provide a formatted answer. We compared the performance of the clinicians and GPT models using mixed-effects logistic regression analysis. Results: Of the 1,426 questions, clinicians achieved an overall accuracy of 85.0%, whereas GPT-4o and o1 demonstrated higher accuracies of 88.4% and 94.3% (mean difference 3.4%; P = .005 and mean difference 9.3%; P < .001), respectively. In the multimodal performance analysis, which included cases involving images (n = 917), GPT-4o achieved an accuracy of 88.3%, and o1 achieved 93.9%, both significantly outperforming clinicians (mean difference 4.2%; P = .005 and mean difference 9.8%; P < .001).
o1 showed the highest accuracy across all question categories, achieving 92.6% in diagnosis (mean difference 14.5%; P < .001), 97.0% in disease characteristics (mean difference 7.2%; P < .001), 92.6% in examination (mean difference 7.3%; P = .002), and 94.8% in treatment (mean difference 4.3%; P = .005), consistently outperforming clinicians. In terms of medical specialty, o1 achieved 93.6% accuracy in internal medicine (mean difference 10.3%; P < .001), 96.6% in major surgery (mean difference 9.2%; P = .030), 97.3% in psychiatry (mean difference 10.6%; P = .030), and 95.4% in minor specialties (mean difference 10.0%; P < .001), significantly surpassing clinicians. Across five trials, GPT-4o and o1 provided the correct answer 5/5 times in 86.2% and 90.7% of the cases, respectively. Conclusions: The GPT-4o and o1 models achieved higher accuracy than clinicians in clinical case challenge questions, particularly in disease diagnosis. GPT-4o and o1 could serve as valuable tools to assist healthcare professionals in clinical settings.

Stacking Ensemble Learning-based Models Enabling Accurate Diagnosis of Cardiac Amyloidosis using SPECT/CT: An International and Multicentre Study

Mo, Q., Cui, J., Jia, S., Zhang, Y., Xiao, Y., Liu, C., Zhou, C., Spielvogel, C. P., Calabretta, R., Zhou, W., Cao, K., Hacker, M., Li, X., Zhao, M.

medrxiv logopreprint Jun 23 2025
PURPOSE: Cardiac amyloidosis (CA), a life-threatening infiltrative cardiomyopathy, can be non-invasively diagnosed using [99mTc]Tc-bisphosphonate SPECT/CT. However, subjective visual interpretation risks diagnostic inaccuracies. We developed and validated a machine learning (ML) framework leveraging SPECT/CT radiomics to automate CA detection. METHODS: This retrospective multicenter study analyzed 290 patients with suspected CA who underwent [99mTc]Tc-PYP or [99mTc]Tc-DPD SPECT/CT. Radiomic features were extracted from co-registered SPECT and CT images, harmonized via intra-class correlation and Pearson correlation filtering, and optimized through LASSO regression. A stacking ensemble model incorporating support vector machine (SVM), random forest (RF), gradient boosting decision tree (GBDT), and adaptive boosting (AdaBoost) classifiers was constructed. The model was validated using an internal validation set (n = 54) and two external test sets (n = 54 and n = 58). Model performance was evaluated using the area under the receiver operating characteristic curve (AUC), calibration, and decision curve analysis (DCA). Feature importance was interpreted using SHapley Additive exPlanations (SHAP) values. RESULTS: Of 290 patients, 117 (40.3%) had CA. The stacking radiomics model attained AUCs of 0.871, 0.824, and 0.839 in the validation, test 1, and test 2 cohorts, respectively, significantly outperforming the clinical model (AUC 0.546 in the validation set, P < 0.05). DCA demonstrated superior net benefit over the clinical model across relevant thresholds, and SHAP analysis highlighted wavelet-transformed first-order and texture features as key predictors. CONCLUSION: A stacking ML model with SPECT/CT radiomics improves CA diagnosis, showing strong generalizability across varied imaging protocols and populations and highlighting its potential as a decision-support tool.
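The Pearson correlation-filtering step in the feature-harmonization pipeline above can be sketched as a greedy filter that keeps a feature only if its absolute correlation with every already-kept feature stays below a threshold; the threshold, feature names, and synthetic data below are illustrative assumptions, not the study's settings.

```python
import numpy as np

def drop_correlated(X, names, threshold=0.9):
    """Greedily drop features whose absolute Pearson correlation
    with an already-kept feature exceeds `threshold`."""
    corr = np.abs(np.corrcoef(X, rowvar=False))   # feature-by-feature |r|
    keep = []
    for j in range(X.shape[1]):
        if all(corr[j, k] <= threshold for k in keep):
            keep.append(j)
    return X[:, keep], [names[j] for j in keep]

rng = np.random.default_rng(1)
base = rng.normal(size=(100, 1))
X = np.hstack([base,
               base * 2 + rng.normal(scale=0.01, size=(100, 1)),  # near-duplicate
               rng.normal(size=(100, 1))])                        # independent
Xf, kept = drop_correlated(X, ["f0", "f1", "f2"])
print(kept)  # f1 is a near-duplicate of f0 and is dropped
```

In the published pipeline this filter runs after intra-class-correlation screening and before LASSO, so the stacking ensemble sees a compact, largely non-redundant feature set.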

Chest X-ray Foundation Model with Global and Local Representations Integration.

Yang Z, Xu X, Zhang J, Wang G, Kalra MK, Yan P

pubmed logopapers Jun 23 2025
Chest X-ray (CXR) is the most frequently ordered imaging test, supporting diverse clinical tasks from thoracic disease detection to postoperative monitoring. However, task-specific classification models are limited in scope, require costly labeled data, and lack generalizability to out-of-distribution datasets. To address these challenges, we introduce CheXFound, a self-supervised vision foundation model that learns robust CXR representations and generalizes effectively across a wide range of downstream tasks. We pretrained CheXFound on a curated CXR-987K dataset, comprising approximately 987K unique CXRs from 12 publicly available sources. We propose a Global and Local Representations Integration (GLoRI) head for downstream adaptations, which incorporates fine- and coarse-grained disease-specific local features with global image features for enhanced multilabel classification performance. Our experimental results showed that CheXFound outperformed state-of-the-art models in classifying 40 disease findings across different prevalence levels on the CXR-LT 24 dataset and exhibited superior label efficiency on downstream tasks with limited training data. Additionally, CheXFound achieved significant improvements on downstream tasks with out-of-distribution datasets, including opportunistic cardiovascular disease risk estimation, mortality prediction, malpositioned tube detection, and anatomical structure segmentation. These results demonstrate CheXFound's strong generalization capabilities, which will enable diverse downstream adaptations with improved label efficiency in future applications. The project source code is publicly available at https://github.com/RPIDIAL/CheXFound.
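A GLoRI-like fusion of local and global features can be sketched in plain NumPy: per-finding query vectors attend over patch features, and each pooled local feature is concatenated with the global mean feature to produce one logit per finding. The shapes, names, and linear output head below are assumptions for illustration, not the released CheXFound code.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def glori_like_head(patch_feats, class_queries, w_out):
    """Per-class attention pools local patch features, which are then
    concatenated with the global (mean) feature for multilabel logits."""
    global_feat = patch_feats.mean(axis=0)                  # (d,)
    attn = softmax(class_queries @ patch_feats.T
                   / np.sqrt(patch_feats.shape[1]))         # (n_classes, n_patches)
    local = attn @ patch_feats                              # (n_classes, d)
    fused = np.hstack([local,
                       np.tile(global_feat, (len(class_queries), 1))])
    return (fused * w_out).sum(axis=1)                      # one logit per class

rng = np.random.default_rng(0)
patches = rng.normal(size=(196, 32))       # 14x14 patch grid, 32-dim features
queries = rng.normal(size=(40, 32))        # one learned query per finding
w = rng.normal(size=(40, 64))              # toy per-class linear head
logits = glori_like_head(patches, queries, w)
print(logits.shape)  # (40,)
```

The design intuition is that rare, spatially small findings benefit from the attention-pooled local pathway, while the global feature carries overall image context.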

STACT-Time: Spatio-Temporal Cross Attention for Cine Thyroid Ultrasound Time Series Classification

Irsyad Adam, Tengyue Zhang, Shrayes Raman, Zhuyu Qiu, Brandon Taraku, Hexiang Feng, Sile Wang, Ashwath Radhachandran, Shreeram Athreya, Vedrana Ivezic, Peipei Ping, Corey Arnold, William Speier

arxiv logopreprint Jun 22 2025
Thyroid cancer is among the most common cancers in the United States. Thyroid nodules are frequently detected through ultrasound (US) imaging, and some require further evaluation via fine-needle aspiration (FNA) biopsy. Despite its effectiveness, FNA often leads to unnecessary biopsies of benign nodules, causing patient discomfort and anxiety. To address this, the American College of Radiology Thyroid Imaging Reporting and Data System (TI-RADS) has been developed to reduce benign biopsies. However, such systems are limited by interobserver variability. Recent deep learning approaches have sought to improve risk stratification, but they often fail to utilize the rich temporal and spatial context provided by US cine clips, which contain dynamic global information and surrounding structural changes across various views. In this work, we propose the Spatio-Temporal Cross Attention for Cine Thyroid Ultrasound Time Series Classification (STACT-Time) model, a novel representation learning framework that integrates imaging features from US cine clips with features from segmentation masks automatically generated by a pretrained model. By leveraging self-attention and cross-attention mechanisms, our model captures the rich temporal and spatial context of US cine clips while enhancing feature representation through segmentation-guided learning. Our model improves malignancy prediction compared to state-of-the-art models, achieving a cross-validation precision of 0.91 (±0.02) and an F1 score of 0.89 (±0.02). By reducing unnecessary biopsies of benign nodules while maintaining high sensitivity for malignancy detection, our model has the potential to enhance clinical decision-making and improve patient outcomes.
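The cross-attention at the heart of such segmentation-guided fusion, cine-frame features querying segmentation-mask features, can be sketched as standard scaled dot-product attention; the dimensions, feature sources, and residual fusion below are illustrative assumptions, not the authors' exact architecture.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys, values):
    """Scaled dot-product cross-attention: cine-frame features query
    segmentation-mask features for spatially relevant context."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)   # (T_img, T_seg) attention scores
    return softmax(scores) @ values          # (T_img, d) context vectors

rng = np.random.default_rng(42)
img_feats = rng.normal(size=(16, 64))   # 16 cine frames, 64-dim embeddings
seg_feats = rng.normal(size=(16, 64))   # matching segmentation-mask embeddings
fused = img_feats + cross_attention(img_feats, seg_feats, seg_feats)  # residual
print(fused.shape)  # (16, 64)
```

Self-attention over the fused frame sequence would then model temporal dynamics before a classification head predicts malignancy.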

From "time is brain" to "time is collaterals": updates on the role of cerebral collateral circulation in stroke.

Marilena M, Romana PF, Guido A, Gianluca R, Sebastiano F, Enrico P, Sabrina A

pubmed logopapers Jun 22 2025
Acute ischemic stroke (AIS) remains the leading cause of mortality and disability worldwide. While revascularization therapies, such as intravenous thrombolysis (IVT) and endovascular thrombectomy (EVT), have significantly improved outcomes, their success is strongly influenced by the status of cerebral collateral circulation. Collateral vessels sustain cerebral perfusion during vascular occlusion, limiting infarct growth and extending therapeutic windows. Despite this recognized importance, standardized methods for assessing collateral status and integrating it into treatment strategies are still evolving. This narrative review synthesizes current evidence on the role of collateral circulation in AIS, focusing on its impact on infarct dynamics, treatment efficacy, and functional recovery. We highlight findings from major clinical trials, including MR CLEAN, DAWN, DEFUSE-3, and SWIFT PRIME, which consistently demonstrate that robust collateral networks are associated with improved outcomes and expanded eligibility for reperfusion therapies. Advances in neuroimaging, such as multiphase CTA and perfusion MRI, alongside emerging AI-driven automated collateral grading, are reshaping patient selection and clinical decision-making. We also discuss novel therapeutic strategies aimed at enhancing collateral flow, such as vasodilators, neuroprotective agents, statins, and stem cell therapies. Despite growing evidence supporting collateral-based treatment approaches, real-time clinical implementation remains limited by challenges in standardization and access. Cerebral collateral circulation is a critical determinant of stroke prognosis and treatment response. Incorporating collateral assessment into acute stroke workflows, supported by advanced imaging, artificial intelligence, and personalized medicine, offers a promising pathway to optimize outcomes.
As the field moves beyond a strict "time is brain" model, the emerging paradigm of "time is collaterals" may better reflect the dynamic interplay between perfusion, tissue viability, and therapeutic opportunity in AIS management.

CT Radiomics-Based Explainable Machine Learning Model for Accurate Differentiation of Malignant and Benign Endometrial Tumors: A Two-Center Study

Tingrui Zhang, Honglin Wu, Zekun Jiang, Yingying Wang, Rui Ye, Huiming Ni, Chang Liu, Jin Cao, Xuan Sun, Rong Shao, Xiaorong Wei, Yingchun Sun

arxiv logopreprint Jun 22 2025
This study aimed to develop and validate a CT radiomics-based explainable machine learning model for differentiating malignant from benign endometrial tumors. A total of 83 patients from two centers, including 46 with malignant and 37 with benign conditions, were included, with data split into a training set (n=59) and a testing set (n=24). The regions of interest (ROIs) were manually segmented from pre-surgical CT scans, and 1132 radiomic features were extracted using Pyradiomics. Six explainable machine learning algorithms were implemented to determine the optimal radiomics pipeline. The diagnostic performance of the radiomic model was evaluated using sensitivity, specificity, accuracy, precision, F1 score, confusion matrices, and ROC curves. To enhance clinical understanding and usability, we separately implemented SHAP analysis and feature mapping visualization, and evaluated the calibration curve and decision curve. Comparing the six modeling strategies, the Random Forest model emerged as the optimal choice for diagnosing EC, with a training AUC of 1.00 and a testing AUC of 0.96. SHAP identified the most important radiomic features, revealing that all selected features were significantly associated with EC (P < 0.05). Radiomics feature maps also provide a feasible assessment tool for clinical applications. DCA indicated a higher net benefit for our model compared with the "All" and "None" strategies, suggesting its clinical utility in identifying high-risk cases and reducing unnecessary interventions. In conclusion, the CT radiomics-based explainable machine learning model achieved high diagnostic performance and could serve as an intelligent auxiliary tool for the diagnosis of endometrial cancer.
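The decision curve analysis (DCA) used in studies like this one compares a model's net benefit at each decision threshold against the "treat-all" and "treat-none" strategies. A minimal sketch, with invented labels and predicted probabilities rather than the study's data:

```python
def net_benefit(y_true, y_prob, threshold):
    """Decision-curve net benefit of acting on predictions at one
    probability threshold pt: NB = TP/n - FP/n * pt / (1 - pt)."""
    n = len(y_true)
    tp = sum(1 for y, p in zip(y_true, y_prob) if p >= threshold and y == 1)
    fp = sum(1 for y, p in zip(y_true, y_prob) if p >= threshold and y == 0)
    return tp / n - fp / n * threshold / (1 - threshold)

# Toy cohort: 3 malignant (1) and 5 benign (0) cases with model probabilities.
y    = [1, 1, 1, 0, 0, 0, 0, 0]
prob = [0.9, 0.8, 0.4, 0.3, 0.2, 0.6, 0.1, 0.05]

pt = 0.5
nb_model = net_benefit(y, prob, pt)
prevalence = sum(y) / len(y)
nb_all = prevalence - (1 - prevalence) * pt / (1 - pt)   # treat-all baseline
print(round(nb_model, 3), round(nb_all, 3))  # 0.125 -0.25
```

A model whose curve sits above both baselines across clinically relevant thresholds, as reported here, offers positive net benefit over indiscriminately treating or ignoring all patients.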
