
Predicting Knee Osteoarthritis Severity from Radiographic Predictors: Data from the Osteoarthritis Initiative.

Nurmirinta TAT, Turunen MJ, Tohka J, Mononen ME, Liukkonen MK

PubMed · May 9, 2025
In knee osteoarthritis (KOA) treatment, preventive measures that reduce the risk of onset are a key factor. Among individuals with radiographically healthy knees, however, future knee joint integrity and condition cannot be predicted by clinically applicable methods. We investigated whether knee joint morphology derived from widely accessible and cost-effective radiographs could help predict future knee joint integrity and condition. We combined knee joint morphology with known risk predictors such as age, height, and weight. Baseline data served as predictors, and the maximal severity of KOA after 8 years served as the target variable. The three KOA categories in this study were based on Kellgren-Lawrence grading: healthy, moderate, and severe. We employed a two-stage machine learning model built on two random forest algorithms and trained three variants: the subject demographics (SD) model used only SD; the image model used only knee joint morphology from radiographs; the merged model used the combined predictors. The training data comprised an 8-year follow-up of 1222 knees from 683 individuals. The SD model obtained a weighted F1 score (WF1) of 77.2% and a balanced accuracy (BA) of 65.6%. The image model performed lowest, with a WF1 of 76.5% and a BA of 63.8%. The top-performing merged model achieved a WF1 of 78.3% and a BA of 68.2%. Our two-stage prediction model improved on the single-source models across performance metrics, suggesting potential for application in clinical settings.
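The abstract does not spell out how the two random forests are staged; a plausible reading is a first forest that separates healthy from osteoarthritic knees and a second that grades moderate versus severe among the knees flagged as osteoarthritic. The sketch below illustrates that assumed design with scikit-learn; the class and all names are hypothetical.

```python
# Hypothetical two-stage random-forest pipeline for KL-based KOA categories.
# Labels: 0 = healthy, 1 = moderate, 2 = severe.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

class TwoStageKOAClassifier:
    def __init__(self, **rf_kwargs):
        self.stage1 = RandomForestClassifier(**rf_kwargs)  # healthy vs. any OA
        self.stage2 = RandomForestClassifier(**rf_kwargs)  # moderate vs. severe

    def fit(self, X, y):
        self.stage1.fit(X, (y > 0).astype(int))
        oa = y > 0
        self.stage2.fit(X[oa], (y[oa] == 2).astype(int))
        return self

    def predict(self, X):
        pred = self.stage1.predict(X)
        oa_idx = np.flatnonzero(pred == 1)
        if oa_idx.size:
            pred[oa_idx] = 1 + self.stage2.predict(X[oa_idx])  # 1 or 2
        return pred
```

Splitting the three-way task this way lets each forest learn a simpler decision boundary than a single multiclass classifier would face.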

KEVS: enhancing segmentation of visceral adipose tissue in pre-cystectomy CT with Gaussian kernel density estimation.

Boucher T, Tetlow N, Fung A, Dewar A, Arina P, Kerneis S, Whittle J, Mazomenos EB

PubMed · May 9, 2025
The distribution of visceral adipose tissue (VAT) in cystectomy patients is indicative of the incidence of postoperative complications. Existing VAT segmentation methods for computed tomography (CT) that employ intensity thresholding suffer from inter-observer variability. Moreover, the difficulty of creating ground-truth masks limits the development of deep learning (DL) models for this task. This paper introduces a novel method for VAT prediction in pre-cystectomy CT that is fully automated and does not require ground-truth VAT masks for training, overcoming the aforementioned limitations. We introduce the kernel density-enhanced VAT segmentator (KEVS), which combines a DL semantic segmentation model for multi-body feature prediction with Gaussian kernel density estimation analysis of the predicted subcutaneous adipose tissue to achieve accurate, scan-specific predictions of VAT in the abdominal cavity. Uniquely for a DL pipeline, KEVS does not require ground-truth VAT masks. We verify the ability of KEVS to accurately segment abdominal organs in unseen CT data and compare KEVS VAT segmentation predictions to existing state-of-the-art (SOTA) approaches on a dataset of 20 pre-cystectomy CT scans with expert ground-truth annotations, collected from University College London Hospital (UCLH-Cyst). Evaluated on UCLH-Cyst, KEVS shows a 4.80% and 6.02% improvement in Dice coefficient over the second-best DL and thresholding-based VAT segmentation techniques, respectively. This research introduces KEVS, an automated SOTA method for the prediction of VAT in pre-cystectomy CT that eliminates inter-observer variability and is trained entirely on open-source CT datasets that do not contain ground-truth VAT masks.
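A minimal sketch of the core KDE step, assuming the upstream segmentation network already supplies subcutaneous-fat and abdominal-cavity masks: fit a Gaussian kernel density estimate to the Hounsfield-unit values of predicted subcutaneous fat, then keep cavity voxels whose density under that scan-specific fat model is high. The function name and the cutoff rule are illustrative, not the paper's.

```python
# Sketch: scan-specific VAT segmentation via Gaussian KDE of predicted SAT.
import numpy as np
from scipy.stats import gaussian_kde

def segment_vat(hu_volume, sat_mask, cavity_mask, density_cutoff=0.05):
    """hu_volume: CT volume in HU; sat_mask/cavity_mask: boolean arrays."""
    kde = gaussian_kde(hu_volume[sat_mask])       # model of this scan's fat HUs
    scores = kde(hu_volume[cavity_mask])          # density of each cavity voxel
    vat_mask = np.zeros_like(sat_mask, dtype=bool)
    vat_mask[cavity_mask] = scores > density_cutoff * scores.max()
    return vat_mask
```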

Computationally enabled polychromatic polarized imaging enables mapping of matrix architectures that promote pancreatic ductal adenocarcinoma dissemination.

Qian G, Zhang H, Liu Y, Shribak M, Eliceiri KW, Provenzano PP

PubMed · May 9, 2025
Pancreatic ductal adenocarcinoma (PDA) is an extremely metastatic and lethal disease. In PDA, extracellular matrix (ECM) architectures known as Tumor-Associated Collagen Signatures (TACS) regulate invasion and metastatic spread in both early dissemination and late-stage disease. As such, TACS has been suggested as a biomarker to aid pathologic assessment. Despite its significance, however, current approaches to quantitatively capturing these ECM patterns require advanced optical systems with signal-processing analysis. Here we present an expansion of polychromatic polarized microscopy (PPM), with inherent angular information, coupled to machine learning and computational pixel-wise analysis of TACS. Using this platform, we accurately capture TACS architectures in H&E-stained histology sections directly through PPM contrast. Moreover, PPM facilitated identification of transitions to dissemination architectures, i.e., transitions from sequestration through expansion to dissemination, both from PanINs and throughout PDA. Lastly, PPM evaluation of architectures in liver metastases, the most common metastatic site for PDA, demonstrates TACS-mediated focal and local invasion and identifies unique patterns anchoring aligned fibers into normal-adjacent tumor, suggesting that these patterns may be precursors to metastatic expansion and local spread from micrometastatic lesions. Combined, these findings demonstrate that PPM coupled to computational platforms is a powerful tool for analyzing ECM architecture that can be employed to advance cancer microenvironment studies and provide clinically relevant diagnostic information.
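As an illustration only (the abstract gives no implementation detail), pixel-wise TACS scoring could take the form below: orientation and retardance maps derived from PPM's angular contrast are flattened into per-pixel feature vectors and labeled by a trained classifier. Every name and the two-feature design are hypothetical.

```python
# Hypothetical pixel-wise TACS classification from precomputed PPM feature maps.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def classify_tacs_pixels(orientation_map, retardance_map, trained_clf):
    """Return a per-pixel TACS label map from two 2D PPM-derived feature maps."""
    h, w = orientation_map.shape
    feats = np.stack([orientation_map.ravel(), retardance_map.ravel()], axis=1)
    return trained_clf.predict(feats).reshape(h, w)
```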

Deep compressed multichannel adaptive optics scanning light ophthalmoscope.

Park J, Hagan K, DuBose TB, Maldonado RS, McNabb RP, Dubra A, Izatt JA, Farsiu S

PubMed · May 9, 2025
Adaptive optics scanning light ophthalmoscopy (AOSLO) reveals individual retinal cells and their function, microvasculature, and micropathologies in vivo. Compared with single-channel offset-pinhole and two-channel split-detector nonconfocal AOSLO designs, a recent generation of multidetector and (multi-)offset-aperture AOSLO modalities provides multidirectional imaging capabilities and has been demonstrated to deliver critical information about retinal microstructures. However, adding detection channels requires expensive optical components and/or critically increases imaging time. To address this issue, we present an innovative combination of machine learning and optics as an integrated technology to compressively capture 12 nonconfocal-channel AOSLO images simultaneously. Imaging of healthy participants and diseased subjects with the proposed deep compressed multichannel AOSLO showed enhanced visualization of rods, cones, and mural cells with over an order-of-magnitude improvement in imaging speed compared with conventional offset-aperture imaging. To facilitate adaptation and integration with other in vivo microscopy systems, we have made the optical design, acquisition, and computational reconstruction code open source.
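A conceptual sketch of the reconstruction side, not the authors' released pipeline: a small convolutional decoder maps a compressively captured measurement with a few physical channels to the 12 recovered nonconfocal channels. Channel counts and layer sizes here are assumptions.

```python
# Assumed learned decoder: few measured channels in, 12 nonconfocal channels out.
import torch
import torch.nn as nn

class CompressedAOSLODecoder(nn.Module):
    def __init__(self, measured_channels=3, recovered_channels=12):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(measured_channels, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, recovered_channels, 3, padding=1),
        )

    def forward(self, x):          # x: (batch, measured_channels, H, W)
        return self.net(x)         # (batch, 12, H, W)
```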

Comparison between multimodal foundation models and radiologists for the diagnosis of challenging neuroradiology cases with text and images.

Le Guellec B, Bruge C, Chalhoub N, Chaton V, De Sousa E, Gaillandre Y, Hanafi R, Masy M, Vannod-Michel Q, Hamroun A, Kuchcinski G

PubMed · May 9, 2025
The purpose of this study was to compare the ability of two multimodal models (GPT-4o and Gemini 1.5 Pro) with that of radiologists to generate differential diagnoses from textual context alone, key images alone, or a combination of both, using complex neuroradiology cases. This retrospective study included neuroradiology cases from the "Diagnosis Please" series published in the journal Radiology between January 2008 and September 2024. The two multimodal models were asked to provide three differential diagnoses from textual context alone, key images alone, or the complete case. Six board-certified neuroradiologists solved the cases in the same setting, randomly assigned to two groups: context alone first or images alone first. Three radiologists solved the cases without, and then with, the assistance of Gemini 1.5 Pro. An independent radiologist evaluated the quality of the image descriptions provided by GPT-4o and Gemini 1.5 Pro for each case. Differences in correct answers between multimodal models and radiologists were analyzed using the McNemar test. GPT-4o and Gemini 1.5 Pro outperformed radiologists using clinical context alone (mean accuracy, 34.0% [18/53] and 44.7% [23.7/53] vs. 16.4% [8.7/53]; both P < 0.01). Radiologists outperformed GPT-4o and Gemini 1.5 Pro using images alone (mean accuracy, 42.1% [22.3/53] vs. 3.8% [2/53] and 7.5% [4/53]; both P < 0.01) and the complete cases (48.0% [25.6/53] vs. 34.0% [18/53] and 38.7% [20.3/53]; both P < 0.001). While radiologists improved their accuracy when combining multimodal information (from 42.1% [22.3/53] for images alone to 50.3% [26.7/53] for complete cases; P < 0.01), GPT-4o and Gemini 1.5 Pro did not benefit from the multimodal context (from 34.0% [18/53] for text alone to 35.2% [18.7/53] for complete cases for GPT-4o, P = 0.48, and from 44.7% [23.7/53] to 42.8% [22.7/53] for Gemini 1.5 Pro, P = 0.54). Radiologists benefited significantly from the suggestions of Gemini 1.5 Pro, increasing their accuracy from 47.2% [25/53] to 56.0% [27/53] (P < 0.01). Both GPT-4o and Gemini 1.5 Pro correctly identified the imaging modality in 53/53 (100%) and 51/53 (96.2%) cases, respectively, but frequently failed to identify key imaging findings (incorrect identification in 43/53 cases [81.1%] for GPT-4o and 50/53 [94.3%] for Gemini 1.5 Pro). Radiologists show a specific ability to benefit from the integration of textual and visual information, whereas multimodal models mostly rely on clinical context to suggest diagnoses.
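The paired accuracy comparisons above are exactly what McNemar's test handles: it considers only the cases where one reader is right and the other wrong. A sketch, assuming boolean correctness vectors over the 53 cases and using statsmodels:

```python
# McNemar's test on case-level correctness of a model vs. a radiologist.
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

def compare_readers(correct_a, correct_b):
    a, b = np.asarray(correct_a, bool), np.asarray(correct_b, bool)
    table = [[np.sum(a & b),  np.sum(a & ~b)],
             [np.sum(~a & b), np.sum(~a & ~b)]]
    return mcnemar(table, exact=True).pvalue  # exact test suits n = 53
```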

Dynamic AI Ultrasound-Assisted Diagnosis System to Reduce Unnecessary Fine Needle Aspiration of Thyroid Nodules.

Li F, Tao S, Ji M, Liu L, Qin Z, Yang X, Wu R, Zhan J

PubMed · May 9, 2025
This study aims to compare the diagnostic efficiency of the American College of Radiology Thyroid Imaging, Reporting, and Data System (ACR TI-RADS), fine-needle aspiration (FNA) cytopathology alone, and a dynamic artificial intelligence (AI) diagnostic system. A total of 1035 patients from three hospitals were included in the study: 590 in the retrospective dataset and 445 in the prospective dataset. The diagnostic accuracy of the dynamic AI system for thyroid nodules was evaluated against the gold standard of postoperative pathology. Sensitivity, specificity, the ROC curve, and diagnostic agreement (κ) relative to the gold standard were analyzed for the AI system and FNA. The dynamic AI diagnostic system showed good diagnostic stability across ages, sexes, and nodule sizes. Its diagnostic AUC showed a significant improvement over ACR TI-RADS, from 0.89 to 0.93. Compared with FNA cytopathology, the diagnostic efficacy of the dynamic AI system showed no statistically significant difference in either the retrospective or the prospective cohort. The dynamic AI diagnostic system enhances the accuracy of ACR TI-RADS-based diagnoses and has the potential to replace biopsies, reducing the need for invasive procedures.
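A sketch of the reported evaluation, sensitivity, specificity, AUC, and κ agreement against the postoperative-pathology gold standard; the variable names and the 0.5 operating threshold are illustrative assumptions:

```python
# Diagnostic metrics vs. a binary gold standard (NumPy array inputs assumed).
from sklearn.metrics import roc_auc_score, cohen_kappa_score, confusion_matrix

def evaluate(y_true, y_prob, threshold=0.5):
    y_pred = (y_prob >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "auc": roc_auc_score(y_true, y_prob),      # threshold-free
        "kappa": cohen_kappa_score(y_true, y_pred),
    }
```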

Resting-state functional MRI metrics to detect freezing of gait in Parkinson's disease: a machine learning approach.

Vicidomini C, Fontanella F, D'Alessandro T, Roviello GN, De Stefano C, Stocchi F, Quarantelli M, De Pandis MF

PubMed · May 9, 2025
Among the symptoms that can occur in Parkinson's disease (PD), Freezing of Gait (FOG) is a disabling phenomenon affecting a large proportion of patients, and it remains incompletely understood. Accurate classification of FOG in PD is crucial for tailoring effective interventions and for a better understanding of its underlying mechanisms. In the present work, we applied four Machine Learning (ML) classifiers (Decision Tree - DT, Random Forest - RF, Multilayer Perceptron - MLP, Logistic Regression - LOG) to four different metrics derived from resting-state functional Magnetic Resonance Imaging (rs-fMRI) data to assess their accuracy in automatically classifying PD patients by the presence or absence of FOG. To validate our approach, we applied the same methodologies to distinguish PD patients from a group of healthy subjects (HS). The performance of the four ML algorithms was validated by repeated k-fold cross-validation on randomly selected independent training and validation subsets. The results showed that when discriminating PD from HS, the best performance was achieved using RF applied to fractional Amplitude of Low-Frequency Fluctuations (fALFF) data (AUC 96.8 ± 2%). Similarly, when discriminating PD-FOG from PD-nFOG, the RF algorithm was again the best performer on all four metrics, with AUCs above 90%. Finally, to probe how these black-box classifiers made their choices, we extracted feature-importance scores for the best-performing method(s) and discussed them in light of the results obtained to date in rs-fMRI studies on FOG in PD and, more generally, in PD. In summary, the regions most frequently selected when differentiating both PD from HS and PD-FOG from PD-nFOG were mainly relevant to the extrapyramidal system, as well as the visual and default mode networks. In addition, the salience network and the supplementary motor area played an additional major role in differentiating PD-FOG from PD-nFOG patients.
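The validation scheme, four classifiers scored by repeated k-fold cross-validation, is compact to reproduce in scikit-learn. Fold counts, hyperparameters, and the AUC scoring below are assumptions rather than the paper's exact settings:

```python
# Benchmark DT/RF/MLP/LOG on one rs-fMRI feature matrix with repeated k-fold CV.
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

CLASSIFIERS = {
    "DT": DecisionTreeClassifier(),
    "RF": RandomForestClassifier(),
    "MLP": MLPClassifier(max_iter=1000),
    "LOG": LogisticRegression(max_iter=1000),
}

def benchmark(X, y, n_splits=5, n_repeats=10):
    cv = RepeatedStratifiedKFold(n_splits=n_splits, n_repeats=n_repeats,
                                 random_state=0)
    return {name: cross_val_score(clf, X, y, cv=cv, scoring="roc_auc").mean()
            for name, clf in CLASSIFIERS.items()}
```

For RF, the per-feature importance scores used in the interpretability step are then available via `RandomForestClassifier.feature_importances_`.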

CT-based quantification of intratumoral heterogeneity for predicting distant metastasis in retroperitoneal sarcoma.

Xu J, Miao JG, Wang CX, Zhu YP, Liu K, Qin SY, Chen HS, Lang N

PubMed · May 9, 2025
Retroperitoneal sarcoma (RPS) is highly heterogeneous, leading to different risks of distant metastasis (DM) among patients with the same clinical stage. This study aims to develop a quantitative method for assessing intratumoral heterogeneity (ITH) using preoperative contrast-enhanced CT (CECT) scans and to evaluate its ability to predict DM risk. We conducted a retrospective analysis of 274 RPS patients who underwent complete surgical resection and were monitored for ≥ 36 months at two centers. Conventional radiomics (C-radiomics), ITH radiomics, and deep-learning (DL) features were extracted from the preoperative CECT scans and used to develop single-modality models. Clinical indicators and high-throughput CECT features were integrated to develop a combined model for predicting DM. Model performance was evaluated using the receiver operating characteristic (ROC) curve and Harrell's concordance index (C-index). Distant metastasis-free survival (DMFS) was also predicted to further assess survival benefits. The ITH model demonstrated satisfactory predictive capability for DM in the internal and external validation cohorts (AUC: 0.735, 0.765; C-index: 0.691, 0.729). The combined model, integrating clinicoradiological variables, the ITH score, and the DL score, achieved the best predictive performance in the internal and external validation cohorts (AUC: 0.864, 0.801; C-index: 0.770, 0.752) and successfully stratified patients into high- and low-risk groups for DM (p < 0.05). The combined model shows promise for accurately predicting DM risk and stratifying DMFS risk in RPS patients undergoing complete surgical resection, providing a valuable tool for guiding treatment decisions and follow-up strategies. Intratumoral heterogeneity analysis facilitates the identification of high-risk RPS patients prone to distant metastasis and poor prognoses, enabling the selection of candidates for more aggressive surgical and post-surgical interventions. Preoperative identification of RPS with high potential for DM is crucial for targeted interventional strategies. Quantitative assessment of ITH achieved reasonable performance for predicting DM, and the integrated model combining clinicoradiological variables, ITH radiomics, and DL features effectively predicted distant metastasis-free survival.
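A minimal sketch of the survival side of the evaluation, Harrell's C-index for a risk score plus a median-split comparison of DMFS by log-rank test, using the lifelines package; all inputs and the median cutoff are illustrative:

```python
# C-index and high-/low-risk DMFS comparison for a combined risk score.
import numpy as np
from lifelines.utils import concordance_index
from lifelines.statistics import logrank_test

def evaluate_dmfs(risk_score, time_months, event_dm):
    # Negate the score: concordance_index expects higher = longer survival.
    c_index = concordance_index(time_months, -risk_score, event_dm)
    high = risk_score >= np.median(risk_score)
    lr = logrank_test(time_months[high], time_months[~high],
                      event_observed_A=event_dm[high],
                      event_observed_B=event_dm[~high])
    return c_index, lr.p_value
```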

Deep learning for Parkinson's disease classification using multimodal and multi-sequences PET/MR images.

Chang Y, Liu J, Sun S, Chen T, Wang R

PubMed · May 9, 2025
We aimed to use deep learning (DL) techniques to accurately differentiate Parkinson's disease (PD) from multiple system atrophy (MSA), which share similar clinical presentations. This retrospective analysis included 206 patients who underwent PET/MR imaging at the Chinese PLA General Hospital and had been clinically diagnosed with either PD or MSA; an additional 38 healthy volunteers served as normal controls (NC). All subjects were randomly assigned to the training and test sets at a ratio of 7:3. The input to the model consists of 10 two-dimensional (2D) slices in axial, coronal, and sagittal planes from multi-modal images. A modified 18-layer residual network (ResNet18) was trained on different modal images to classify PD, MSA, and NC. Four-fold cross-validation was applied in the training set. Performance evaluations included accuracy, precision, recall, F1 score, the receiver operating characteristic (ROC) curve, and the area under the ROC curve (AUC). Six single-modal models and seven multi-modal models were trained and tested. The PET models outperformed the MRI models. The ¹¹C-methyl-N-2β-carbomethoxy-3β-(4-fluorophenyl)-tropane (¹¹C-CFT)-Apparent Diffusion Coefficient (ADC) model showed the best classification, with 0.97 accuracy, 0.93 precision, 0.95 recall, 0.92 F1, and 0.96 AUC. In the test set, the accuracy, precision, recall, and F1 score of the CFT-ADC model were 0.70, 0.73, 0.93, and 0.82, respectively. The proposed DL method shows potential as a high-performance assisting tool for the accurate diagnosis of PD and MSA. A multi-modal, multi-sequence model could further enhance the ability to classify PD.
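A hedged sketch of the kind of network described: torchvision's ResNet18 with its first convolution widened to accept stacked 2D slices from two modalities (e.g. ¹¹C-CFT PET plus ADC maps) and a three-class head for PD, MSA, and NC. The input channel count is an assumption; the abstract does not state the exact modifications.

```python
# Assumed multi-modal ResNet18: stacked 2D slices in, three diagnostic classes out.
import torch
import torch.nn as nn
from torchvision.models import resnet18

def build_model(in_channels=20, num_classes=3):
    model = resnet18(weights=None)                 # train from scratch
    model.conv1 = nn.Conv2d(in_channels, 64, kernel_size=7,
                            stride=2, padding=3, bias=False)
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

logits = build_model()(torch.randn(4, 20, 224, 224))  # shape: (4, 3)
```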