Page 103 of 1111106 results

CT-based quantification of intratumoral heterogeneity for predicting distant metastasis in retroperitoneal sarcoma.

Xu J, Miao JG, Wang CX, Zhu YP, Liu K, Qin SY, Chen HS, Lang N

pubmed · logopapers · May 9, 2025
Retroperitoneal sarcoma (RPS) is highly heterogeneous, leading to different risks of distant metastasis (DM) among patients with the same clinical stage. This study aims to develop a quantitative method for assessing intratumoral heterogeneity (ITH) using preoperative contrast-enhanced CT (CECT) scans and evaluate its ability to predict DM risk. We conducted a retrospective analysis of 274 RPS patients who underwent complete surgical resection and were monitored for ≥ 36 months at two centers. Conventional radiomics (C-radiomics), ITH radiomics, and deep-learning (DL) features were extracted from the preoperative CECT scans and used to develop single-modality models. Clinical indicators and high-throughput CECT features were integrated to develop a combined model for predicting DM. Model performance was evaluated using the area under the receiver operating characteristic curve (AUC) and Harrell's concordance index (C-index). Distant metastasis-free survival (DMFS) was also predicted to further assess survival benefits. The ITH model demonstrated satisfactory predictive capability for DM in the internal and external validation cohorts (AUC: 0.735, 0.765; C-index: 0.691, 0.729). The combined model, which integrated clinicoradiological variables, the ITH-score, and the DL-score, achieved the best predictive performance in the internal and external validation cohorts (AUC: 0.864, 0.801; C-index: 0.770, 0.752) and successfully stratified patients into high- and low-risk groups for DM (p < 0.05). The combined model demonstrated promising potential for accurately predicting DM risk and stratifying DMFS risk in RPS patients undergoing complete surgical resection, providing a valuable tool for guiding treatment decisions and follow-up strategies.
The intratumoral heterogeneity analysis facilitates the identification of high-risk retroperitoneal sarcoma patients prone to distant metastasis and poor prognoses, enabling the selection of candidates for more aggressive surgical and post-surgical interventions. Preoperative identification of retroperitoneal sarcoma (RPS) with a high potential for distant metastasis (DM) is crucial for targeted interventional strategies. Quantitative assessment of intratumoral heterogeneity achieved reasonable performance for predicting DM. The integrated model combining clinicoradiological variables, ITH radiomics, and deep-learning features effectively predicted distant metastasis-free survival.
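Harrell's C-index used above to evaluate the survival models has a simple definition: among comparable patient pairs, it is the fraction where the higher predicted risk goes with the earlier event. A minimal numpy sketch (not the authors' code), with toy inputs:

```python
import numpy as np

def harrell_c_index(time, event, risk):
    """Harrell's C-index for right-censored survival data.

    A pair (i, j) is comparable when the subject with the shorter
    follow-up time actually experienced the event; the pair is
    concordant when that subject also has the higher risk score.
    Ties in risk count as 0.5.
    """
    time, event, risk = map(np.asarray, (time, event, risk))
    concordant, comparable = 0.0, 0
    n = len(time)
    for i in range(n):
        for j in range(n):
            if event[i] and time[i] < time[j]:  # i is the earlier event
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1
                elif risk[i] == risk[j]:
                    concordant += 0.5
    return concordant / comparable

# Toy example: risk ordering matches event ordering perfectly.
c = harrell_c_index(time=[2, 4, 6, 8], event=[1, 1, 1, 0], risk=[0.9, 0.7, 0.4, 0.1])
print(c)  # 1.0
```

A C-index of 0.5 corresponds to random risk ordering, which is why the reported values around 0.75 indicate useful but imperfect discrimination.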

Comparison between multimodal foundation models and radiologists for the diagnosis of challenging neuroradiology cases with text and images.

Le Guellec B, Bruge C, Chalhoub N, Chaton V, De Sousa E, Gaillandre Y, Hanafi R, Masy M, Vannod-Michel Q, Hamroun A, Kuchcinski G

pubmed · logopapers · May 9, 2025
The purpose of this study was to compare the ability of two multimodal models (GPT-4o and Gemini 1.5 Pro) with that of radiologists to generate differential diagnoses from textual context alone, key images alone, or a combination of both using complex neuroradiology cases. This retrospective study included neuroradiology cases from the "Diagnosis Please" series published in the Radiology journal between January 2008 and September 2024. The two multimodal models were asked to provide three differential diagnoses from textual context alone, key images alone, or the complete case. Six board-certified neuroradiologists solved the cases in the same setting, randomly assigned to two groups: context alone first and images alone first. Three radiologists solved the cases without, and then with, the assistance of Gemini 1.5 Pro. An independent radiologist evaluated the quality of the image descriptions provided by GPT-4o and Gemini for each case. Differences in correct answers between multimodal models and radiologists were analyzed using the McNemar test. GPT-4o and Gemini 1.5 Pro outperformed radiologists using clinical context alone (mean accuracy, 34.0 % [18/53] and 44.7 % [23.7/53] vs. 16.4 % [8.7/53]; both P < 0.01). Radiologists outperformed GPT-4o and Gemini 1.5 Pro using images alone (mean accuracy, 42.1 % [22.3/53] vs. 3.8 % [2/53], and 7.5 % [4/53]; both P < 0.01) and the complete cases (48.0 % [25.6/53] vs. 34.0 % [18/53], and 38.7 % [20.3/53]; both P < 0.001). While radiologists improved their accuracy when combining multimodal information (from 42.1 % [22.3/53] for images alone to 50.3 % [26.7/53] for complete cases; P < 0.01), GPT-4o and Gemini 1.5 Pro did not benefit from the multimodal context (from 34.0 % [18/53] for text alone to 35.2 % [18.7/53] for complete cases for GPT-4o; P = 0.48, and from 44.7 % [23.7/53] to 42.8 % [22.7/53] for Gemini 1.5 Pro; P = 0.54). 
Radiologists benefited significantly from the suggestions of Gemini 1.5 Pro, increasing their accuracy from 47.2 % [25/53] to 56.0 % [27/53] (P < 0.01). Both GPT-4o and Gemini 1.5 Pro correctly identified the imaging modality in 53/53 (100 %) and 51/53 (96.2 %) cases, respectively, but frequently failed to identify key imaging findings (43/53 cases [81.1 %] with incorrect identification of key imaging findings for GPT-4o and 50/53 [94.3 %] for Gemini 1.5 Pro). Radiologists show a specific ability to benefit from the integration of textual and visual information, whereas multimodal models mostly rely on the clinical context to suggest diagnoses.
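The McNemar test used above for paired accuracy comparisons depends only on the discordant cases, i.e. those one method got right and the other wrong. A stdlib-only sketch of the continuity-corrected statistic (for one degree of freedom, the chi-square survival function reduces to erfc(sqrt(x/2))); the counts below are illustrative, not taken from the study:

```python
from math import erfc, sqrt

def mcnemar_p(b, c):
    """Continuity-corrected McNemar test.

    b: cases correct under method A but wrong under method B
    c: cases wrong under method A but correct under method B
    Only these discordant counts matter; concordant cases cancel out.
    """
    stat = (abs(b - c) - 1) ** 2 / (b + c)
    # For df = 1, the chi-square survival function is erfc(sqrt(x / 2)).
    return stat, erfc(sqrt(stat / 2))

# e.g. 20 cases flipped one way and 5 the other
stat, p = mcnemar_p(20, 5)
print(round(stat, 2), p < 0.05)  # 7.84 True
```

When the discordant counts are very small, an exact binomial version of the test is preferred over this chi-square approximation.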

Neural Network-based Automated Classification of 18F-FDG PET/CT Lesions and Prognosis Prediction in Nasopharyngeal Carcinoma Without Distant Metastasis.

Lv Y, Zheng D, Wang R, Zhou Z, Gao Z, Lan X, Qin C

pubmed · logopapers · May 9, 2025
To evaluate the diagnostic performance of the PET Assisted Reporting System (PARS) in nasopharyngeal carcinoma (NPC) patients without distant metastasis, and to investigate the prognostic significance of metabolic parameters. Eighty-three NPC patients who underwent pretreatment 18F-FDG PET/CT were retrospectively collected. First, the sensitivity, specificity, and accuracy of PARS for diagnosing malignant lesions were calculated, using histopathology as the gold standard. Next, metabolic parameters of the primary tumor were derived using both PARS and manual segmentation, and the differences and consistency between the two methods were analyzed. Finally, the prognostic value of the PET metabolic parameters was evaluated for progression-free survival (PFS) and overall survival (OS). PARS demonstrated high patient-based accuracy (97.2%), sensitivity (88.9%), and specificity (97.4%), and lesion-based values of 96.7%, 84.0%, and 96.9%, respectively. Manual segmentation yielded higher metabolic tumor volume (MTV) and total lesion glycolysis (TLG) than PARS, but metabolic parameters from the two methods were highly correlated and consistent. ROC analysis showed that the metabolic parameters differed in prognostic performance but generally predicted 3-year PFS and OS well. MTV and age were independent prognostic factors, and Cox proportional-hazards models combining them showed significant predictive improvements. Kaplan-Meier analysis confirmed a better prognosis in the low-risk group defined by the combined indicators (χ² = 42.25, P < 0.001; χ² = 20.44, P < 0.001). This preliminary validation of PARS in NPC patients without distant metastasis shows high diagnostic sensitivity and accuracy for lesion identification and classification, with metabolic parameters that correlate well with manual segmentation. MTV reflects prognosis, and combining it with age enhances prognostic prediction and risk stratification.
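MTV and TLG above are standard volumetric PET metrics. A sketch of how they are computed from a segmented SUV volume with numpy, assuming a fixed SUV threshold of 2.5 (a common convention; the study's actual segmentation settings are not given here):

```python
import numpy as np

def mtv_tlg(suv, voxel_volume_ml, threshold=2.5):
    """Metabolic tumor volume (ml) and total lesion glycolysis.

    MTV = number of voxels at or above the SUV threshold * voxel volume.
    TLG = MTV * mean SUV inside that thresholded region.
    """
    mask = suv >= threshold
    mtv = mask.sum() * voxel_volume_ml
    mean_suv = suv[mask].mean() if mask.any() else 0.0
    return mtv, mtv * mean_suv

# Toy 2x2x2 SUV volume with 0.5 ml voxels
suv = np.array([[[1.0, 3.0], [4.0, 0.5]],
                [[2.6, 1.2], [5.0, 2.4]]])
mtv, tlg = mtv_tlg(suv, voxel_volume_ml=0.5)
print(mtv, round(tlg, 2))  # 2.0 7.3
```

This is why manual segmentation producing larger regions directly yields higher MTV and TLG, as reported above.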

Application of Artificial Intelligence in Cardio-Oncology Imaging for Cancer Therapy-Related Cardiovascular Toxicity: Systematic Review.

Mushcab H, Al Ramis M, AlRujaib A, Eskandarani R, Sunbul T, AlOtaibi A, Obaidan M, Al Harbi R, Aljabri D

pubmed · logopapers · May 9, 2025
Artificial intelligence (AI) is a revolutionary tool yet to be fully integrated into several health care sectors, including medical imaging. AI can transform how medical imaging is conducted and interpreted, especially in cardio-oncology. This study aims to systematically review the available literature on the use of AI in cardio-oncology imaging to predict cardiotoxicity and to describe the improvements to different imaging modalities that could be achieved if AI were successfully deployed in routine practice. We conducted a database search in PubMed, Ovid MEDLINE, Cochrane Library, CINAHL, and Google Scholar from inception to 2023, using the AI research assistant tool Elicit, for original studies reporting AI outcomes in adult patients diagnosed with any cancer and undergoing cardiotoxicity assessment. Outcomes included incidence of cardiotoxicity, left ventricular ejection fraction, risk factors associated with cardiotoxicity, heart failure, myocardial dysfunction, signs of cancer therapy-related cardiovascular toxicity, echocardiography, and cardiac magnetic resonance imaging. Descriptive information about each study was recorded, including imaging technique, AI model, outcomes, and limitations. The systematic search yielded 7 studies conducted between 2018 and 2023, which are included in this review. Most of these studies were conducted in the United States (71%), included patients with breast cancer (86%), and used magnetic resonance imaging as the imaging modality (57%). Quality assessment of the studies averaged 86% compliance across all sections of the assessment tool. In conclusion, this systematic review demonstrates the potential of AI to enhance cardio-oncology imaging for predicting cardiotoxicity in patients with cancer. Our findings suggest that AI can improve the accuracy and efficiency of cardiotoxicity assessments. 
However, further research through larger, multicenter trials is needed to validate these applications and refine AI technologies for routine use, paving the way for improved patient outcomes in cancer survivors at risk of cardiotoxicity.

Adapting a Segmentation Foundation Model for Medical Image Classification

Pengfei Gu, Haoteng Tang, Islam A. Ebeid, Jose A. Nunez, Fabian Vazquez, Diego Adame, Marcus Zhan, Huimin Li, Bin Fu, Danny Z. Chen

arxiv · logopreprint · May 9, 2025
Recent advancements in foundation models, such as the Segment Anything Model (SAM), have shown strong performance in various vision tasks, particularly image segmentation, due to their impressive zero-shot segmentation capabilities. However, effectively adapting such models for medical image classification is still a less explored topic. In this paper, we introduce a new framework to adapt SAM for medical image classification. First, we utilize the SAM image encoder as a feature extractor to capture segmentation-based features that convey important spatial and contextual details of the image, while freezing its weights to avoid unnecessary overhead during training. Next, we propose a novel Spatially Localized Channel Attention (SLCA) mechanism to compute spatially localized attention weights for the feature maps. The features extracted from SAM's image encoder are processed through SLCA to compute attention weights, which are then integrated into deep learning classification models to enhance their focus on spatially relevant or meaningful regions of the image, thus improving classification performance. Experimental results on three public medical image classification datasets demonstrate the effectiveness and data-efficiency of our approach.
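The SLCA mechanism is new in this paper, and its exact formulation is not given in the abstract. The toy numpy sketch below shows one plausible reading of "spatially localized channel attention", pooling each spatial window, softmaxing over channels, and reweighting that window's features; it is an illustrative assumption, not the authors' module:

```python
import numpy as np

def spatially_localized_channel_attention(feat, window=2):
    """Toy spatially localized channel attention.

    For each non-overlapping spatial window: average-pool the features,
    apply a softmax over channels, and reweight the window's features
    by those channel weights.

    feat: (C, H, W) feature map; H and W must be divisible by `window`.
    """
    c, h, w = feat.shape
    out = np.empty_like(feat, dtype=float)
    for i in range(0, h, window):
        for j in range(0, w, window):
            patch = feat[:, i:i + window, j:j + window]
            pooled = patch.mean(axis=(1, 2))        # (C,) per-window descriptor
            e = np.exp(pooled - pooled.max())
            weights = e / e.sum()                   # channel softmax, sums to 1
            out[:, i:i + window, j:j + window] = patch * weights[:, None, None]
    return out

feat = np.random.default_rng(0).normal(size=(4, 4, 4))
att = spatially_localized_channel_attention(feat)
print(att.shape)  # (4, 4, 4)
```

Because the weights are computed per window rather than globally, different image regions can emphasize different channels, which matches the paper's stated goal of focusing on spatially relevant regions.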

Predicting Knee Osteoarthritis Severity from Radiographic Predictors: Data from the Osteoarthritis Initiative.

Nurmirinta TAT, Turunen MJ, Tohka J, Mononen ME, Liukkonen MK

pubmed · logopapers · May 9, 2025
In knee osteoarthritis (KOA) treatment, preventive measures that reduce onset risk are a key factor. Among individuals with radiographically healthy knees, however, future knee joint integrity and condition cannot be predicted by clinically applicable methods. We investigated whether knee joint morphology derived from widely accessible and cost-effective radiographs could help predict future knee joint integrity and condition, combining morphology with known risk predictors such as age, height, and weight. Baseline data served as predictors, and the maximal severity of KOA after 8 years served as the target variable. The three KOA categories in this study, based on Kellgren-Lawrence grading, were healthy, moderate, and severe. We employed a two-stage machine learning model built from two random forest algorithms and trained three models: the subject demographics (SD) model used only demographic predictors; the image model used only knee joint morphology from radiographs; the merged model used the combined predictors. The training data comprised an 8-year follow-up of 1222 knees from 683 individuals. The SD model obtained a weighted F1 score (WF1) of 77.2% and a balanced accuracy (BA) of 65.6%. The image model performed lowest, with a WF1 of 76.5% and a BA of 63.8%. The top-performing merged model achieved a WF1 of 78.3% and a BA of 68.2%. Our two-stage prediction model provided improved results on these performance metrics, suggesting potential for application in clinical settings.
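The two-stage design above can be sketched as a cascade: a first random forest flags any osteoarthritis, and a second, trained only on osteoarthritic knees, grades moderate vs. severe. The data, features, and staging split below are synthetic assumptions for illustration, not the paper's setup:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
# Synthetic predictors standing in for e.g. age, weight, and morphology features.
X = rng.normal(size=(300, 4))
y = rng.integers(0, 3, size=300)  # 0 healthy, 1 moderate, 2 severe

# Stage 1: healthy vs. any osteoarthritis.
stage1 = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, (y > 0).astype(int))
# Stage 2: moderate vs. severe, trained only on osteoarthritic knees.
oa = y > 0
stage2 = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[oa], y[oa])

def predict_two_stage(Xnew):
    pred = np.zeros(len(Xnew), dtype=int)       # default: healthy
    flagged = stage1.predict(Xnew) == 1
    if flagged.any():
        pred[flagged] = stage2.predict(Xnew[flagged])
    return pred

print(predict_two_stage(X[:10]))
```

One design motivation for such cascades is class imbalance: the second model sees only the minority (diseased) cases and can grade them without the healthy majority dominating its splits.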

Shortcut learning leads to sex bias in deep learning models for photoacoustic tomography.

Knopp M, Bender CJ, Holzwarth N, Li Y, Kempf J, Caranovic M, Knieling F, Lang W, Rother U, Seitel A, Maier-Hein L, Dreher KK

pubmed · logopapers · May 9, 2025
Shortcut learning has been identified as a source of algorithmic unfairness in medical imaging artificial intelligence (AI), but its impact on photoacoustic tomography (PAT), particularly concerning sex bias, remains underexplored. This study investigates this issue using peripheral artery disease (PAD) diagnosis as a specific clinical application. To examine the potential for sex bias due to shortcut learning in convolutional neural networks (CNNs) and assess how such biases might affect diagnostic predictions, we created training and test datasets with varying PAD prevalence between sexes. Using these datasets, we explored (1) whether CNNs can classify the sex from imaging data, (2) how sex-specific prevalence shifts impact PAD diagnosis performance and underdiagnosis disparity between sexes, and (3) how similarly CNNs encode sex and PAD features. Our study with 147 individuals demonstrates that CNNs can classify the sex from calf muscle PAT images, achieving an AUROC of 0.75. For PAD diagnosis, models trained on data with imbalanced sex-specific disease prevalence experienced significant performance drops (up to 0.21 AUROC) when applied to balanced test sets. Additionally, greater imbalances in sex-specific prevalence within the training data exacerbated underdiagnosis disparities between sexes. Finally, we identify evidence of shortcut learning by demonstrating the effective reuse of learned feature representations between PAD diagnosis and sex classification tasks. CNN-based models trained on PAT data may engage in shortcut learning by leveraging sex-related features, leading to biased and unreliable diagnostic predictions. Addressing demographic-specific prevalence imbalances and preventing shortcut learning is critical for developing models in the medical field that are both accurate and equitable across diverse patient populations.
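The prevalence-shift effect described above is easy to reproduce in a toy setting: give a classifier a spurious "sex" feature that tracks the disease label in training but not in a balanced test set, and accuracy drops. A hedged sketch on synthetic tabular data (logistic regression standing in for the CNNs, no PAT images involved):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_split(n, corr):
    """Disease label plus a weak genuine feature and a 'sex' feature
    that matches the label with probability `corr` (the shortcut)."""
    y = rng.integers(0, 2, n)
    signal = y + rng.normal(0, 2.0, n)          # weak true signal
    sex = np.where(rng.random(n) < corr, y, 1 - y)
    return np.column_stack([signal, sex]), y

X_tr, y_tr = make_split(2000, corr=0.95)   # imbalanced: sex predicts disease
X_te, y_te = make_split(2000, corr=0.50)   # balanced: sex is uninformative

clf = LogisticRegression().fit(X_tr, y_tr)
print(f"train-like acc: {clf.score(X_tr, y_tr):.2f}, "
      f"balanced-test acc: {clf.score(X_te, y_te):.2f}")
```

The model leans on the shortcut feature because it is the easier predictor during training, exactly the failure mode the study documents for sex-related features in PAT.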

APD-FFNet: A Novel Explainable Deep Feature Fusion Network for Automated Periodontitis Diagnosis on Dental Panoramic Radiography.

Resul ES, Senirkentli GB, Bostanci E, Oduncuoglu BF

pubmed · logopapers · May 9, 2025
This study introduces APD-FFNet, a novel, explainable deep learning architecture for automated periodontitis diagnosis using panoramic radiographs. A total of 337 panoramic radiographs, annotated by a periodontist, served as the dataset. APD-FFNet combines custom convolutional and transformer-based layers within a deep feature fusion framework that captures both local and global contextual features. Performance was evaluated using accuracy, the F1 score, the area under the receiver operating characteristic curve, the Jaccard similarity coefficient, and the Matthews correlation coefficient. McNemar's test confirmed statistical significance, and SHapley Additive exPlanations provided interpretability insights. APD-FFNet achieved 94% accuracy, a 93.88% F1 score, 93.47% area under the receiver operating characteristic curve, 88.47% Jaccard similarity coefficient, and 88.46% Matthews correlation coefficient, surpassing comparable approaches. McNemar's test validated these findings (p < 0.05). Explanations generated by SHapley Additive exPlanations highlighted important regions in each radiograph, supporting clinical applicability. By merging convolutional and transformer-based layers, APD-FFNet establishes a new benchmark in automated, interpretable periodontitis diagnosis, with low hyperparameter sensitivity facilitating its integration into regular dental practice. Its adaptable design suggests broader relevance to other medical imaging domains. This is the first feature fusion method specifically devised for periodontitis diagnosis, supported by an expert-curated dataset and advanced explainable artificial intelligence. Its robust accuracy, low hyperparameter sensitivity, and transparent outputs set a new standard for automated periodontal analysis.
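Deep feature fusion as described above typically concatenates feature vectors from the parallel branches before a classifier head. A minimal numpy sketch with random stand-ins for the convolutional (local) and transformer (global) features; APD-FFNet's actual fusion details are not given in the abstract, and per-branch L2 normalization is an assumption here:

```python
import numpy as np

def fuse_features(local_feats, global_feats):
    """L2-normalize each branch, then concatenate along the feature axis.

    Normalizing per branch keeps one branch from dominating the fused
    representation purely through scale differences.
    """
    def l2norm(x):
        return x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-8)
    return np.concatenate([l2norm(local_feats), l2norm(global_feats)], axis=1)

# Stand-ins: 8 radiographs, 256-d convolutional and 192-d transformer features.
cnn_feats = np.random.default_rng(1).normal(size=(8, 256))
vit_feats = np.random.default_rng(2).normal(size=(8, 192))
fused = fuse_features(cnn_feats, vit_feats)
print(fused.shape)  # (8, 448)
```

The fused vector would then feed a classifier head, where tools like SHAP can attribute predictions back to individual fused features.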

Robust Computation of Subcortical Functional Connectivity Guided by Quantitative Susceptibility Mapping: An Application in Parkinson's Disease Diagnosis.

Qin J, Wu H, Wu C, Guo T, Zhou C, Duanmu X, Tan S, Wen J, Zheng Q, Yuan W, Zhu Z, Chen J, Wu J, He C, Ma Y, Liu C, Xu X, Guan X, Zhang M

pubmed · logopapers · May 8, 2025
Previous resting state functional MRI (rs-fMRI) analyses of the basal ganglia in Parkinson's disease have relied heavily on T1-weighted imaging (T1WI) atlases. However, subcortical structures are characterized by subtle contrast differences, making their accurate delineation on T1WI challenging. In this study, we aimed to introduce and validate a method that incorporates quantitative susceptibility mapping (QSM) into the rs-fMRI analytical pipeline to achieve precise subcortical nuclei segmentation and improve the stability of resting state functional connectivity (RSFC) measurements in Parkinson's disease. A total of 321 participants (148 patients with Parkinson's disease and 173 normal controls) were enrolled. We performed cross-modal registration at the individual level from rs-fMRI to QSM (FUNC2QSM) and to T1WI (FUNC2T1). The consistency and accuracy of RSFC measurements under the two registration approaches were assessed with the intraclass correlation coefficient and mutual information. Bootstrap analysis was performed to validate the stability of the RSFC differences between Parkinson's disease and normal controls. RSFC-based machine learning models were constructed for Parkinson's disease classification, using optimized hyperparameters (RandomizedSearchCV with 5-fold cross-validation). The consistency of RSFC measurements between the two registration methods was poor, whereas the QSM-guided approach showed better mutual information values, suggesting higher registration accuracy. The disruptions of RSFC identified with the QSM-guided approach were more stable and reliable, as confirmed by bootstrap analysis. In the classification models, the QSM-guided method consistently outperformed the T1WI-guided method, achieving higher test-set ROC-AUC values (FUNC2QSM: 0.87-0.90, FUNC2T1: 0.67-0.70). 
The QSM-guided approach effectively enhanced the accuracy of subcortical segmentation and the stability of RSFC measurement, thus facilitating future biomarker development in Parkinson's disease.
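Whatever the registration path, the RSFC computation itself reduces to the pairwise Pearson correlation of ROI-averaged BOLD time series. A numpy sketch of that step (the QSM-guided segmentation and registration are outside its scope):

```python
import numpy as np

def resting_state_fc(timeseries):
    """Functional connectivity as the Pearson correlation matrix.

    timeseries: (n_rois, n_timepoints) array of ROI-averaged signals.
    Returns an (n_rois, n_rois) symmetric matrix with unit diagonal.
    """
    return np.corrcoef(timeseries)

rng = np.random.default_rng(3)
ts = rng.normal(size=(5, 200))          # e.g. 5 subcortical ROIs, 200 volumes
fc = resting_state_fc(ts)
print(fc.shape, np.allclose(np.diag(fc), 1.0))  # (5, 5) True
```

The paper's point is that the quality of this matrix hinges on the ROI masks feeding it: misregistered subcortical masks average in signal from neighboring tissue, destabilizing the correlations.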

A hybrid AI method for lung cancer classification using explainable AI techniques.

Shivwanshi RR, Nirala NS

pubmed · logopapers · May 8, 2025
The use of artificial intelligence (AI) methods for the analysis of computed tomography (CT) images has greatly contributed to the development of effective computer-assisted diagnosis (CAD) systems for lung cancer (LC). However, complex structures, multiple radiographic interrelations, and the dynamic locations of abnormalities within lung CT images make it difficult to extract the relevant information needed to build LC CAD systems. This paper addresses these problems with a hybrid method for LC malignancy classification that lets researchers and experts tune the model's performance by observing how it makes decisions. The proposed methodology, named IncCat-LCC: Explainer (Inception Net CatBoost LC Classification: Explainer), consists of handcrafted radiomic feature (HcRdF) extraction, InceptionNet CNN feature (INCF) extraction, Vision Transformer feature (ViTF) extraction, XGBoost (XGB)-based feature selection, and GPU-based CatBoost (CB) classification. The proposed framework achieves the highest performance scores for lung nodule multiclass malignancy classification, with accuracy, precision, recall, F1 score, specificity, and area under the ROC curve of 96.74 %, 93.68 %, 96.74 %, 95.19 %, 98.47 %, and 99.76 %, respectively, for classifying the normal class. The explainable artificial intelligence (XAI) explanations help readers understand the model's performance and the statistical outcomes of the evaluation parameters. The work presented in this article may improve existing LC CAD systems and help assess the important parameters, using XAI to recognize the factors contributing to enhanced performance and reliability.
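The feature selection step above keeps the features a boosted model ranks as most important before passing them to the final classifier. A sketch of that pattern using scikit-learn's gradient boosting as a stand-in for XGBoost (the paper's exact XGBoost/CatBoost settings are not given here, and the data below are synthetic):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 20))
# Make features 0 and 1 genuinely informative; the rest are noise.
y = (X[:, 0] + X[:, 1] + rng.normal(0, 0.5, 400) > 0).astype(int)

gb = GradientBoostingClassifier(n_estimators=50, random_state=0).fit(X, y)
top_k = 5
selected = np.argsort(gb.feature_importances_)[::-1][:top_k]
print(sorted(selected.tolist()))

# The reduced matrix would feed the downstream (e.g. CatBoost) classifier.
X_selected = X[:, selected]
```

Concatenated radiomic, CNN, and transformer features can number in the thousands, so pruning to the top importances both speeds up and regularizes the final classifier.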