Joint resting state and structural networks characterize pediatric bipolar patients compared to healthy controls: a multimodal fusion approach.

Yi X, Ma M, Wang X, Zhang J, Wu F, Huang H, Xiao Q, Xie A, Liu P, Grecucci A

PubMed · May 15, 2025
Pediatric bipolar disorder (PBD) is a highly debilitating condition, characterized by alternating episodes of mania and depression with intervening periods of remission. Limited information is available about the functional and structural abnormalities in PBD, particularly when comparing type I with type II subtypes. Resting-state brain activity and structural grey matter, assessed through MRI, may provide insight into the neurobiological biomarkers of this disorder. In this study, resting-state Regional Homogeneity (ReHo) and grey matter concentration (GMC) data from 58 PBD patients and 21 healthy controls (HC) matched for age, gender, education, and IQ were analyzed with a data-fusion unsupervised machine learning approach known as transposed Independent Vector Analysis. Two networks significantly differed between PBD and HC. The first network included fronto-medial regions, such as the medial and superior frontal gyri and the cingulate, and displayed higher ReHo and GMC values in PBD compared to HC. The second network included temporo-posterior regions, as well as the insula, the caudate, and the precuneus, and displayed lower ReHo and GMC values in PBD compared to HC. Additionally, two networks differed between type I and type II PBD: an occipito-cerebellar network with increased ReHo and GMC in type I compared to type II, and a fronto-parietal network with decreased ReHo and GMC in type I compared to type II. Of note, the first network positively correlated with depression scores. These findings shed new light on the functional and structural abnormalities displayed by pediatric bipolar patients.
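For illustration, the sketch below shows a simplified joint-decomposition fusion of ReHo and GMC maps using scikit-learn's FastICA as a stand-in for the transposed Independent Vector Analysis actually used in the study; all shapes, variable names, and the two-sample t-test on component loadings are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np
from scipy import stats
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)

# Illustrative shapes: 79 subjects (58 PBD + 21 HC), 5000 voxels per modality.
n_subjects, n_voxels = 79, 5000
reho = rng.standard_normal((n_subjects, n_voxels))   # resting-state ReHo maps (stand-in data)
gmc = rng.standard_normal((n_subjects, n_voxels))    # grey matter concentration maps (stand-in data)

def zscore(x):
    return (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-8)

# Z-score each modality, then stack side by side so both share one subject-loading matrix.
joint = np.hstack([zscore(reho), zscore(gmc)])        # shape: (subjects, 2 * voxels)

ica = FastICA(n_components=10, random_state=0, max_iter=1000)
subject_loadings = ica.fit_transform(joint)           # (subjects, components)
sources = ica.components_                             # (components, 2 * voxels)

# Split each joint source back into its ReHo and GMC spatial maps.
reho_maps, gmc_maps = sources[:, :n_voxels], sources[:, n_voxels:]

# Group comparison: test each component's loading between PBD and HC.
labels = np.array([1] * 58 + [0] * 21)                # 1 = PBD, 0 = HC (illustrative ordering)
t, p = stats.ttest_ind(subject_loadings[labels == 1], subject_loadings[labels == 0], axis=0)
print(p.round(3))
```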

Measuring the severity of knee osteoarthritis with an aberration-free fast line scanning Raman imaging system.

Jiao C, Ye J, Liao J, Li J, Liang J, He S

PubMed · May 15, 2025
Osteoarthritis (OA) is a major cause of disability worldwide, with symptoms like joint pain, limited functionality, and decreased quality of life, potentially leading to deformity and irreversible damage. Chemical changes in joint tissues precede imaging alterations, making early diagnosis challenging for conventional methods like X-rays. Although Raman imaging provides detailed chemical information, it is time-consuming. This paper aims to achieve rapid osteoarthritis diagnosis and grading using a self-developed Raman imaging system combined with deep learning denoising and acceleration algorithms. Our self-developed aberration-corrected line-scanning confocal Raman imaging device acquires a line of Raman spectra (hundreds of points) per scan using a galvanometer or displacement stage, achieving spatial and spectral resolutions of 2 μm and 0.2 nm, respectively. Deep learning algorithms enhance the imaging speed by over 4 times through effective spectrum denoising and signal-to-noise ratio (SNR) improvement. By leveraging the denoising capabilities of deep learning, we are able to acquire high-quality Raman spectral data with a reduced integration time, thereby accelerating the imaging process. Experiments on the tibial plateau of osteoarthritis patients compared three excitation wavelengths (532, 671, and 785 nm), with 671 nm chosen for optimal SNR and minimal fluorescence. Machine learning algorithms achieved 98% accuracy in distinguishing articular from calcified cartilage and 97% accuracy in differentiating osteoarthritis grades I to IV. Our fast Raman imaging system, combining an aberration-corrected line-scanning confocal Raman imager with deep learning denoising, offers improved imaging speed and enhanced spectral and spatial resolutions. It enables rapid, label-free detection of osteoarthritis severity and can identify early compositional changes before clinical imaging, allowing precise grading and tailored treatment, thus advancing orthopedic diagnostics and improving patient outcomes.
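The abstract does not specify the denoising network, so the following is a minimal sketch of one plausible approach: a 1D convolutional denoising autoencoder in PyTorch trained on paired short-integration (noisy) and long-integration (clean) spectra. The architecture, layer sizes, and training data are assumptions.

```python
import torch
import torch.nn as nn

class SpectrumDenoiser(nn.Module):
    """Generic 1D convolutional denoising autoencoder for Raman spectra."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Conv1d(32, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(16, 1, kernel_size=7, padding=3),
        )

    def forward(self, x):          # x: (batch, 1, n_wavenumbers)
        return self.decoder(self.encoder(x))

model = SpectrumDenoiser()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Illustrative training pair: noisy short-integration spectra vs. clean long-integration targets.
noisy = torch.randn(8, 1, 1024)
clean = torch.randn(8, 1, 1024)

for _ in range(5):                 # a few toy iterations
    optimizer.zero_grad()
    loss = loss_fn(model(noisy), clean)
    loss.backward()
    optimizer.step()
```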

Deep Learning-Based Chronic Obstructive Pulmonary Disease Exacerbation Prediction Using Flow-Volume and Volume-Time Curve Imaging: Retrospective Cohort Study.

Jeon ET, Park H, Lee JK, Heo EY, Lee CH, Kim DK, Kim DH, Lee HW

PubMed · May 15, 2025
Chronic obstructive pulmonary disease (COPD) is a common and progressive respiratory condition characterized by persistent airflow limitation and symptoms such as dyspnea, cough, and sputum production. Acute exacerbations (AE) of COPD (AE-COPD) are key determinants of disease progression; yet, existing predictive models relying mainly on spirometric measurements, such as forced expiratory volume in 1 second, reflect only a fraction of the physiological information embedded in respiratory function tests. Recent advances in artificial intelligence (AI) have enabled more sophisticated analyses of full spirometric curves, including flow-volume loops and volume-time curves, facilitating the identification of complex patterns associated with increased exacerbation risk. This study aimed to determine whether a predictive model that integrates clinical data and spirometry images with the use of AI improves accuracy in predicting moderate-to-severe and severe AE-COPD events compared to a clinical-only model. A retrospective cohort study was conducted using COPD registry data from 2 teaching hospitals from January 2004 to December 2020. The study included a total of 10,492 COPD cases, divided into a development cohort (6870 cases) and an external validation cohort (3622 cases). The AI-enhanced model (AI-PFT-Clin) used a combination of clinical variables (eg, history of AE-COPD, dyspnea, and inhaled treatments) and spirometry image data (flow-volume loop and volume-time curves). In contrast, the Clin model used only clinical variables. The primary outcomes were moderate-to-severe and severe AE-COPD events within a year of spirometry. In the external validation cohort, the AI-PFT-Clin model outperformed the Clin model, showing an area under the receiver operating characteristic curve of 0.755 versus 0.730 (P<.05) for moderate-to-severe AE-COPD and 0.713 versus 0.675 (P<.05) for severe AE-COPD. The AI-PFT-Clin model demonstrated reliable predictive capability across subgroups, including younger patients and those without previous exacerbations. Higher AI-PFT-Clin scores correlated with elevated AE-COPD risk (adjusted hazard ratio for Q4 vs Q1: 4.21, P<.001), with sustained predictive stability over a 10-year follow-up period. The AI-PFT-Clin model, by integrating clinical data with spirometry images, offers enhanced predictive accuracy for AE-COPD events compared to a clinical-only approach. This AI-based framework facilitates the early identification of high-risk individuals through the detection of physiological abnormalities not captured by conventional metrics. The model's robust performance and long-term predictive stability suggest its potential utility in proactive COPD management and personalized intervention planning. These findings highlight the promise of incorporating advanced AI techniques into routine COPD management, particularly in populations traditionally seen as lower risk, supporting improved management of COPD through tailored patient care.
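As a rough sketch of the general idea behind a model such as AI-PFT-Clin, the code below fuses a CNN branch for spirometry curve images (flow-volume and volume-time plots rendered as images) with a small tabular branch for clinical variables. The backbone, layer sizes, and clinical variable count are assumptions; the abstract does not describe the actual architecture.

```python
import torch
import torch.nn as nn
from torchvision import models

class SpiroClinModel(nn.Module):
    """Image branch (spirometry curve plots) + tabular branch (clinical variables)."""
    def __init__(self, n_clinical: int):
        super().__init__()
        backbone = models.resnet18(weights=None)        # pretrained weights could be loaded in practice
        backbone.fc = nn.Identity()                      # expose the 512-d image embedding
        self.image_branch = backbone
        self.clin_branch = nn.Sequential(nn.Linear(n_clinical, 32), nn.ReLU())
        self.head = nn.Sequential(nn.Linear(512 + 32, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, image, clinical):
        z = torch.cat([self.image_branch(image), self.clin_branch(clinical)], dim=1)
        return self.head(z)                              # logit for exacerbation within 1 year

model = SpiroClinModel(n_clinical=10)
logits = model(torch.randn(4, 3, 224, 224), torch.randn(4, 10))
probs = torch.sigmoid(logits)
```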

Interobserver agreement between artificial intelligence models in the thyroid imaging and reporting data system (TIRADS) assessment of thyroid nodules.

Leoncini A, Trimboli P

PubMed · May 15, 2025
As ultrasound (US) is the most accurate tool for assessing the thyroid nodule (TN) risk of malignancy (RoM), international societies have published various Thyroid Imaging and Reporting Data Systems (TIRADSs). With the recent advent of artificial intelligence (AI), clinicians and researchers should ask how AI interprets the terminology of the TIRADSs and whether different AIs agree in the risk assessment of TNs. The study aim was to analyze the interobserver agreement (IOA) between AIs in assessing the RoM of TNs across various TIRADS categories, using a case series created by combining TIRADS descriptors. ChatGPT, Google Gemini, and Claude were compared. ACR-TIRADS, EU-TIRADS, and K-TIRADS were employed to evaluate the AI assessments. Multiple written scenarios for the three TIRADSs were created, the cases were evaluated by the three AIs, and their assessments were analyzed and compared. The IOA was estimated by comparing kappa (κ) values. Ninety scenarios were created. With ACR-TIRADS, the IOA analysis gave κ = 0.58 between ChatGPT and Gemini, 0.53 between ChatGPT and Claude, and 0.90 between Gemini and Claude. With EU-TIRADS, κ was 0.73 between ChatGPT and Gemini, 0.62 between ChatGPT and Claude, and 0.72 between Gemini and Claude. With K-TIRADS, κ was 0.88 between ChatGPT and Gemini, 0.70 between ChatGPT and Claude, and 0.61 between Gemini and Claude. This study found non-negligible variability among the three AIs. Clinicians and patients should be aware of these findings.
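Pairwise interobserver agreement of this kind can be computed with Cohen's kappa; a minimal sketch with illustrative (made-up) TIRADS category assignments is shown below. The study's exact kappa variant (weighted or unweighted) is not stated in the abstract, so plain unweighted kappa is used here.

```python
from itertools import combinations
from sklearn.metrics import cohen_kappa_score

# Illustrative ACR-TIRADS categories (TR1-TR5) assigned by each model to the same written scenarios.
ratings = {
    "ChatGPT": [3, 4, 5, 2, 4, 3, 5, 1, 4, 2],
    "Gemini":  [3, 4, 4, 2, 4, 3, 5, 1, 5, 2],
    "Claude":  [3, 5, 4, 2, 4, 3, 5, 1, 5, 2],
}

# Pairwise interobserver agreement between the three models.
for a, b in combinations(ratings, 2):
    kappa = cohen_kappa_score(ratings[a], ratings[b])
    print(f"{a} vs {b}: kappa = {kappa:.2f}")
```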

Predicting Immunotherapy Response in Unresectable Hepatocellular Carcinoma: A Comparative Study of Large Language Models and Human Experts.

Xu J, Wang J, Li J, Zhu Z, Fu X, Cai W, Song R, Wang T, Li H

PubMed · May 15, 2025
Hepatocellular carcinoma (HCC) is an aggressive cancer with limited biomarkers for predicting immunotherapy response. Recent advancements in large language models (LLMs) like GPT-4, GPT-4o, and Gemini offer the potential to enhance clinical decision-making through multimodal data analysis. However, their effectiveness in predicting immunotherapy response, especially compared to human experts, remains unclear. This study assessed the performance of GPT-4, GPT-4o, and Gemini in predicting immunotherapy response in unresectable HCC, compared to radiologists and oncologists of varying expertise. A retrospective analysis of 186 patients with unresectable HCC utilized multimodal data (clinical data and CT images). LLMs were evaluated with zero-shot prompting and two combination strategies: the 'voting method' and the 'OR rule method', the latter intended to improve sensitivity. Performance metrics included accuracy, sensitivity, area under the curve (AUC), and agreement across LLMs and physicians. GPT-4o, using the 'OR rule method', achieved 65% accuracy and 47% sensitivity, comparable to intermediate physicians but lower than senior physicians (accuracy: 72%, p = 0.045; sensitivity: 70%, p < 0.0001). Gemini-GPT, combining GPT-4, GPT-4o, and Gemini, achieved an AUC of 0.69, similar to senior physicians (AUC: 0.72, p = 0.35), with 68% accuracy, outperforming junior and intermediate physicians while remaining comparable to senior physicians (p = 0.78). However, its sensitivity (58%) was lower than that of senior physicians (p = 0.0097). LLMs demonstrated higher inter-model agreement (κ = 0.59-0.70) than inter-physician agreement, especially among junior physicians (κ = 0.15). This study highlights the potential of LLMs, particularly Gemini-GPT, as valuable tools for predicting immunotherapy response in HCC.
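A minimal sketch of the two combination strategies named in the abstract, the 'voting method' (majority vote) and the 'OR rule method' (any positive prediction counts, which boosts sensitivity), using made-up binary predictions:

```python
import numpy as np

# Illustrative binary predictions (1 = predicted responder) from three LLMs for six patients.
gpt4   = np.array([1, 0, 0, 1, 0, 1])
gpt4o  = np.array([1, 0, 1, 1, 0, 0])
gemini = np.array([0, 0, 1, 1, 0, 1])
preds = np.vstack([gpt4, gpt4o, gemini])

# 'Voting method': majority vote across the three models.
voting = (preds.sum(axis=0) >= 2).astype(int)

# 'OR rule method': call a responder if any single model predicts response.
or_rule = preds.any(axis=0).astype(int)

print("voting :", voting)
print("OR rule:", or_rule)
```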

Comparison of lumbar disc degeneration grading between deep learning model SpineNet and radiologist: a longitudinal study with a 14-year follow-up.

Murto N, Lund T, Kautiainen H, Luoma K, Kerttula L

PubMed · May 15, 2025
To assess the agreement between lumbar disc degeneration (DD) grading by the convolutional neural network model SpineNet and a radiologist's visual grading. In a 14-year follow-up MRI study involving 19 male volunteers, lumbar DD was assessed by SpineNet and two radiologists using the Pfirrmann classification at baseline (age 37) and after 14 years (age 51). Pfirrmann summary scores (PSS) were calculated by summing the individual disc grades. The agreement between the first radiologist and SpineNet was analyzed, with the second radiologist's grading used for inter-observer agreement. Significant differences were observed in the Pfirrmann grades and PSS assigned by the radiologist and SpineNet at both time points. SpineNet assigned Pfirrmann grade 1 to several discs and grade 5 to more discs than the radiologists did. The concordance correlation coefficients (CCC) of PSS between the radiologist and SpineNet were 0.54 (95% CI: 0.28 to 0.79) at baseline and 0.54 (0.27 to 0.80) at follow-up, with average kappa (κ) values of 0.74 (0.68 to 0.81) at baseline and 0.68 (0.58 to 0.77) at follow-up. The CCC of PSS between the radiologists was 0.83 (0.69 to 0.97) at baseline and 0.78 (0.61 to 0.95) at follow-up, with κ values ranging from 0.73 to 0.96. We found fair to substantial agreement in DD grading between SpineNet and the radiologist, albeit with notable discrepancies. These findings indicate that AI-based systems like SpineNet hold promise as complementary tools in radiological evaluation, including in longitudinal studies, but emphasize the need for ongoing refinement of AI algorithms.
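The concordance correlation coefficient used here is Lin's CCC; a minimal sketch of its computation on illustrative Pfirrmann summary scores (not the study data) follows.

```python
import numpy as np

def concordance_correlation(x, y):
    """Lin's concordance correlation coefficient between two score vectors."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()                      # population variances
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (vx + vy + (mx - my) ** 2)

# Illustrative Pfirrmann summary scores (sum of five lumbar disc grades, 1-5 each) for 19 volunteers.
rng = np.random.default_rng(1)
radiologist = rng.integers(10, 21, size=19)
spinenet = np.clip(radiologist + rng.integers(-3, 4, size=19), 5, 25)
print(f"CCC = {concordance_correlation(radiologist, spinenet):.2f}")
```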

Recent advancements in personalized management of prostate cancer biochemical recurrence after radical prostatectomy.

Falkenbach F, Ekrutt J, Maurer T

PubMed · May 15, 2025
Biochemical recurrence (BCR) after radical prostatectomy has heterogeneous prognostic implications, and recent advancements in imaging and biomarkers hold high potential for personalizing care. Prostate-specific membrane antigen (PSMA) PET/CT has revolutionized BCR management in prostate cancer by detecting microscopic lesions earlier than conventional staging, leading to improved cancer control outcomes and changes in treatment plans in approximately two-thirds of cases. Salvage radiotherapy, often combined with androgen deprivation therapy, remains the standard treatment for high-risk BCR after prostatectomy, with PSMA-PET/CT guiding treatment adjustments, such as the radiation field, and improving progression-free survival. Advancements in biomarkers, genomic classifiers, and artificial intelligence-based models have enhanced risk stratification and personalized treatment planning, resulting in both treatment intensification and de-escalation. While conventional risk grouping based on Gleason score and PSA level and kinetics remains the foundation of BCR management, PSMA-PET/CT, novel biomarkers, and artificial intelligence may enable more personalized treatment strategies.

MRI-derived deep learning models for predicting 1p/19q codeletion status in glioma patients: a systematic review and meta-analysis of diagnostic test accuracy studies.

Ahmadzadeh AM, Broomand Lomer N, Ashoobi MA, Elyassirad D, Gheiji B, Vatanparast M, Rostami A, Abouei Mehrizi MA, Tabari A, Bathla G, Faghani S

PubMed · May 15, 2025
We conducted a systematic review and meta-analysis to evaluate the performance of magnetic resonance imaging (MRI)-derived deep learning (DL) models in predicting 1p/19q codeletion status in glioma patients. The literature search was performed in four databases: PubMed, Web of Science, Embase, and Scopus. We included studies that evaluated the performance of end-to-end DL models in predicting the status of glioma 1p/19q codeletion. The quality of the included studies was assessed with the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tool and the METhodological RadiomICs Score (METRICS). We calculated pooled diagnostic estimates, and heterogeneity was evaluated using I². Subgroup and sensitivity analyses were conducted to explore sources of heterogeneity. Publication bias was evaluated by Deeks' funnel plots. Twenty studies were included in the systematic review; only two were of low quality. A meta-analysis of ten studies demonstrated a pooled sensitivity of 0.77 (95% CI: 0.63-0.87), a specificity of 0.85 (95% CI: 0.74-0.92), a positive diagnostic likelihood ratio (DLR) of 5.34 (95% CI: 2.88-9.89), a negative DLR of 0.26 (95% CI: 0.16-0.45), a diagnostic odds ratio of 20.24 (95% CI: 8.19-50.02), and an area under the curve of 0.89 (95% CI: 0.86-0.91). The subgroup analysis identified a significant difference between groups depending on the segmentation method used. DL models can predict glioma 1p/19q codeletion status with high accuracy and may enhance non-invasive tumor characterization and aid in the selection of optimal therapeutic strategies.
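For reference, the positive and negative diagnostic likelihood ratios and the diagnostic odds ratio follow directly from sensitivity and specificity; the snippet below shows the relationship using the pooled point estimates. The small differences from the reported values arise because the meta-analysis pools studies in a bivariate model rather than plugging pooled estimates into these formulas.

```python
# Relationship between pooled sensitivity/specificity and the diagnostic summary statistics.
sens, spec = 0.77, 0.85          # pooled estimates reported in the meta-analysis

dlr_pos = sens / (1 - spec)      # positive diagnostic likelihood ratio
dlr_neg = (1 - sens) / spec      # negative diagnostic likelihood ratio
dor = dlr_pos / dlr_neg          # diagnostic odds ratio

print(f"DLR+ = {dlr_pos:.2f}, DLR- = {dlr_neg:.2f}, DOR = {dor:.2f}")
# Point estimates computed this way (about 5.13, 0.27, and 18.9) are close to,
# but not identical with, the reported pooled values of 5.34, 0.26, and 20.24.
```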

A computed tomography-based radiomics prediction model for BRAF mutation status in colorectal cancer.

Zhou B, Tan H, Wang Y, Huang B, Wang Z, Zhang S, Zhu X, Wang Z, Zhou J, Cao Y

PubMed · May 15, 2025
The aim of this study was to develop and validate a CT venous-phase radiomics model to predict BRAF gene mutation status in preoperative colorectal cancer patients. In this study, 301 patients with pathologically confirmed colorectal cancer were retrospectively enrolled, comprising 225 from Centre I (73 mutant and 152 wild-type) and 76 from Centre II (36 mutant and 40 wild-type). The Centre I cohort was randomly divided into a training set (n = 158) and an internal validation set (n = 67) in a 7:3 ratio, while Centre II served as an independent external validation set (n = 76). The whole-tumor region of interest was segmented and radiomics features were extracted. To explore whether including peritumoral tissue could improve performance, the tumor contour was additionally expanded by 3 mm. Finally, a t-test, Pearson correlation, and LASSO regression were used to screen out features strongly associated with BRAF mutations. Based on these features, six classifiers were constructed: Support Vector Machine (SVM), Decision Tree (DT), Random Forest (RF), Logistic Regression (LR), K-Nearest Neighbors (KNN), and Extreme Gradient Boosting (XGBoost). Model performance and clinical utility were evaluated using receiver operating characteristic (ROC) curves, decision curve analysis, accuracy, sensitivity, and specificity. Gender was an independent predictor of BRAF mutations. The unexpanded RF model, constructed using 11 radiomics features, demonstrated the best predictive performance. For the training cohort, it achieved an AUC of 0.814 (95% CI 0.732-0.895), an accuracy of 0.810, and a sensitivity of 0.620. For the internal validation cohort, it achieved an AUC of 0.798 (95% CI 0.690-0.907), an accuracy of 0.761, and a sensitivity of 0.609. For the external validation cohort, it achieved an AUC of 0.737 (95% CI 0.616-0.847), an accuracy of 0.658, and a sensitivity of 0.667. A machine learning model based on CT radiomics can effectively predict BRAF mutations in patients with colorectal cancer, and the unexpanded RF model demonstrated optimal predictive performance.
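A minimal sketch of the general pipeline described here, LASSO-based feature selection followed by a random forest classifier, using synthetic stand-in data rather than the study's radiomics features; the scaling choices, hyperparameters, and split are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LassoCV
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for venous-phase CT radiomics features (rows = patients).
rng = np.random.default_rng(0)
X = rng.standard_normal((225, 300))
y = rng.integers(0, 2, size=225)          # 1 = BRAF mutant, 0 = wild-type (illustrative labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
scaler = StandardScaler().fit(X_train)
X_train_s, X_test_s = scaler.transform(X_train), scaler.transform(X_test)

# LASSO keeps features with non-zero coefficients, as in the study's selection step.
lasso = LassoCV(cv=5, random_state=0).fit(X_train_s, y_train)
selected = np.flatnonzero(lasso.coef_)
if selected.size == 0:                    # fall back if LASSO zeroes everything on toy data
    selected = np.arange(X.shape[1])

# Random forest on the retained features.
rf = RandomForestClassifier(n_estimators=500, random_state=0)
rf.fit(X_train_s[:, selected], y_train)
auc = roc_auc_score(y_test, rf.predict_proba(X_test_s[:, selected])[:, 1])
print(f"{selected.size} features kept, AUC = {auc:.2f}")
```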

Deep learning MRI-based radiomic models for predicting recurrence in locally advanced nasopharyngeal carcinoma after neoadjuvant chemoradiotherapy: a multi-center study.

Hu C, Xu C, Chen J, Huang Y, Meng Q, Lin Z, Huang X, Chen L

PubMed · May 15, 2025
Local recurrence and distant metastasis are common manifestations of locoregionally advanced nasopharyngeal carcinoma (LA-NPC) after neoadjuvant chemoradiotherapy (NACT). This study aimed to validate the clinical value of deep learning-based MRI radiomic models for predicting recurrence in LA-NPC patients. A total of 328 NPC patients from four hospitals were retrospectively included and randomly divided into training (n = 229) and validation (n = 99) cohorts. A total of 975 traditional radiomic features and 1000 deep radiomic features were extracted from contrast-enhanced T1-weighted (T1WI + C) and T2-weighted (T2WI) sequences, respectively. Least absolute shrinkage and selection operator (LASSO) regression was applied for feature selection. Five machine learning classifiers were used to develop three models for LA-NPC recurrence prediction in the training cohort: Model I, traditional radiomic features; Model II, Model I combined with the deep radiomic features; and Model III, Model II combined with clinical features. The predictive performance of these models was evaluated by receiver operating characteristic (ROC) curve analysis, area under the curve (AUC), accuracy, sensitivity, and specificity in both cohorts. The clinical characteristics of the two cohorts showed no significant differences. Fifteen radiomic features and 6 deep radiomic features were selected from T1WI + C, and 9 radiomic features and 6 deep radiomic features from T2WI. For T2WI, Model II based on random forest (RF) (AUC = 0.87) performed best among the models in the validation cohort. A traditional radiomic model combined with deep radiomic features shows excellent predictive performance and could assist clinicians in predicting treatment response for LA-NPC patients after NACT.
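The 'deep radiomic features' referred to here are typically embeddings from a pretrained CNN applied to tumor regions; a minimal sketch of such feature extraction is shown below. The backbone choice, patch preprocessing, and embedding size are assumptions, since the abstract does not specify them.

```python
import torch
import torch.nn as nn
from torchvision import models

# Deep radiomic features: embeddings from a CNN applied to tumor-centred MRI patches.
backbone = models.resnet18(weights=None)  # in practice an ImageNet-pretrained backbone would be loaded
backbone.fc = nn.Identity()               # 512-d embedding instead of classification logits
backbone.eval()

# Illustrative batch of T1WI+C tumor patches resampled to 224x224 and replicated to 3 channels.
patches = torch.randn(16, 3, 224, 224)
with torch.no_grad():
    deep_features = backbone(patches)     # shape: (16, 512)

# These embeddings would then be concatenated with the hand-crafted radiomic features
# and passed through LASSO selection before training the classifiers.
print(deep_features.shape)
```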