
Error correcting 2D-3D cascaded network for myocardial infarct scar segmentation on late gadolinium enhancement cardiac magnetic resonance images.

Schwab M, Pamminger M, Kremser C, Obmann D, Haltmeier M, Mayr A

pubmed logopapersMay 10 2025
Late gadolinium enhancement (LGE) cardiac magnetic resonance (CMR) imaging is considered the in vivo reference standard for assessing infarct size (IS) and microvascular obstruction (MVO) in ST-elevation myocardial infarction (STEMI) patients. However, the exact quantification of these markers of myocardial infarct severity remains challenging and very time-consuming. As LGE distribution patterns can be quite complex and hard to delineate from the blood pool or epicardial fat, automatic segmentation of LGE CMR images is difficult. In this work, we propose a cascaded framework of two-dimensional and three-dimensional convolutional neural networks (CNNs) that calculates the extent of myocardial infarction in a fully automated way. By artificially generating segmentation errors characteristic of 2D CNNs during training of the cascaded framework, we enforce the detection and correction of 2D segmentation errors and thereby improve the segmentation accuracy of the entire method. The proposed method was trained and evaluated on two publicly available datasets. In comparative experiments, we show that our framework outperforms state-of-the-art reference methods in segmentation of myocardial infarction. Furthermore, extensive ablation studies demonstrate the advantages of the proposed error-correcting cascaded method. The code of this project is publicly available at https://github.com/matthi99/EcorC.git.
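The error-injection idea can be sketched as follows. This is a minimal, hypothetical illustration in which the "characteristic 2D error" is simulated as a random one-step dilation or erosion applied per slice; the authors' actual error-generation scheme lives in their repository and may differ.

```python
# Hypothetical sketch: perturb 2D slices of a 3D label volume so a downstream
# correction network can learn to detect and fix characteristic 2D errors.
# The perturbation rule (random dilation/erosion) is an assumption.
import random

def dilate(mask):
    """One-step 4-neighbour dilation of a binary mask (list of lists of 0/1)."""
    h, w = len(mask), len(mask[0])
    out = [row[:] for row in mask]
    for i in range(h):
        for j in range(w):
            if mask[i][j]:
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < h and 0 <= nj < w:
                        out[ni][nj] = 1
    return out

def erode(mask):
    """One-step 4-neighbour erosion: keep a pixel only if all neighbours are set."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            if mask[i][j] and all(
                0 <= i + di < h and 0 <= j + dj < w and mask[i + di][j + dj]
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
            ):
                out[i][j] = 1
    return out

def inject_2d_errors(volume, p=0.5, rng=random):
    """Perturb each 2D slice of a 3D label volume with probability p."""
    return [(dilate(s) if rng.random() < 0.5 else erode(s)) if rng.random() < p else s
            for s in volume]
```

Training the 3D correction stage on pairs of (perturbed, clean) volumes is then what forces it to learn error detection rather than plain segmentation.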

CirnetamorNet: An ultrasonic temperature measurement network for microwave hyperthermia based on deep learning.

Cui F, Du Y, Qin L, Li B, Li C, Meng X

pubmed logopapersMay 9 2025
Microwave thermotherapy is a promising approach for cancer treatment, but accurate noninvasive temperature monitoring remains challenging. This study aims to achieve accurate temperature prediction during microwave thermotherapy by efficiently integrating multi-feature data, thereby improving the accuracy and reliability of noninvasive thermometry techniques. We propose an enhanced recurrent neural network architecture, CirnetamorNet. An experimental data acquisition system was developed, using a material that simulates the characteristics of human tissue to construct a body phantom. Ultrasonic image data were collected at different temperatures, and five parameters highly correlated with temperature were extracted from the gray-scale covariance matrix and the homodyned-K distribution. Using the multi-feature data as input and temperature prediction as output, the CirnetamorNet model was constructed with a multi-head attention mechanism. Model performance was evaluated by analyzing training loss, prediction mean square error, and accuracy, and ablation experiments were performed to evaluate the contribution of each module. Compared with common models, CirnetamorNet performs well, with a training loss as low as 1.4589 and a mean square error of only 0.1856. Its temperature prediction accuracy of 0.3°C exceeds that of many advanced models. Ablation experiments show that removing any key module degrades performance, demonstrating that the collaboration of all modules is essential to the model's performance. The proposed CirnetamorNet model exhibits exceptional performance in noninvasive thermometry for microwave thermotherapy. It offers a novel approach to multi-feature data fusion in the medical field and holds significant practical application value.
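The reported evaluation metrics follow standard definitions; a minimal stdlib sketch, assuming "temperature prediction accuracy of 0.3°C" means the fraction of predictions falling within ±0.3°C of the true temperature (our reading, not stated verbatim in the abstract):

```python
# Illustrative thermometry metrics: mean square error over all predictions, and
# accuracy as the fraction of predictions within a tolerance of the truth.
def mse(y_true, y_pred):
    """Mean square error between true and predicted temperatures."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def within_tolerance(y_true, y_pred, tol=0.3):
    """Fraction of temperature predictions within +/- tol degrees C of truth."""
    return sum(abs(t - p) <= tol for t, p in zip(y_true, y_pred)) / len(y_true)
```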

Neural Network-based Automated Classification of 18F-FDG PET/CT Lesions and Prognosis Prediction in Nasopharyngeal Carcinoma Without Distant Metastasis.

Lv Y, Zheng D, Wang R, Zhou Z, Gao Z, Lan X, Qin C

pubmed logopapersMay 9 2025
To evaluate the diagnostic performance of the PET Assisted Reporting System (PARS) in nasopharyngeal carcinoma (NPC) patients without distant metastasis, and to investigate the prognostic significance of metabolic parameters. Eighty-three NPC patients who underwent pretreatment 18F-FDG PET/CT were retrospectively collected. First, the sensitivity, specificity, and accuracy of PARS for diagnosing malignant lesions were calculated, using histopathology as the gold standard. Next, metabolic parameters of the primary tumor were derived using both PARS and manual segmentation, and the differences and consistency between the two methods were analyzed. Finally, the prognostic value of the PET metabolic parameters was evaluated for progression-free survival (PFS) and overall survival (OS). PARS demonstrated high patient-based accuracy (97.2%), sensitivity (88.9%), and specificity (97.4%), and lesion-based values of 96.7%, 84.0%, and 96.9%, respectively. Manual segmentation yielded higher metabolic tumor volume (MTV) and total lesion glycolysis (TLG) than PARS. Metabolic parameters from both methods were highly correlated and consistent. ROC analysis showed that the metabolic parameters differed in prognostic value but generally performed well in predicting 3-year PFS and OS. MTV and age were independent prognostic factors, and Cox proportional-hazards models combining them showed significant predictive improvement. Kaplan-Meier analysis confirmed better prognosis in the low-risk group based on the combined indicators (χ² = 42.25, P < 0.001; χ² = 20.44, P < 0.001). Preliminary validation of PARS in NPC patients without distant metastasis shows high diagnostic sensitivity and accuracy for lesion identification and classification, and its metabolic parameters correlate well with manual segmentation. MTV reflects prognosis, and combining it with age enhances prognostic prediction and risk stratification.
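The correlation analysis between PARS-derived and manually derived metabolic parameters (e.g. MTV from the two methods) could, for instance, use Pearson correlation on the paired measurements; a stdlib sketch, illustrative rather than the authors' statistical code:

```python
# Pearson correlation between two sets of paired measurements, e.g. MTV values
# obtained by automated vs. manual segmentation (illustrative use).
import math

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)
```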

Adherence to SVS Abdominal Aortic Aneurysm Guidelines Among Patients Detected by AI-Based Algorithm.

Wilson EM, Yao K, Kostiuk V, Bader J, Loh S, Mojibian H, Fischer U, Ochoa Chaar CI, Aboian E

pubmed logopapersMay 9 2025
This study evaluates adherence to the latest Society for Vascular Surgery (SVS) guidelines on imaging surveillance, physician evaluation, and surgical intervention for abdominal aortic aneurysm (AAA). AI-based natural language processing, applied retrospectively, identified AAA patients from imaging scans at a tertiary care center between January and March of 2019 and of 2021, excluding the pandemic period. Retrospective chart review assessed demographics, comorbidities, imaging, and follow-up adherence. Statistical significance was set at p<0.05. Among 479 identified patients, 279 remained in the final cohort after exclusion of deceased patients. Imaging surveillance adherence was 67.7% (189/279), with males comprising 72.5% (137/189) (Figure 1). The mean age of adherent patients was 73.9 years (SD ±9.5) vs. 75.2 years (SD ±10.8) for non-adherent patients (Table 1). Adherent females were significantly younger than non-adherent females (76.7 vs. 81.1 years; p=0.003), with no significant age difference in adherent males. Adherent patients were more likely to be evaluated by a vascular provider within six months (p<0.001), but aneurysm size did not affect imaging adherence: 3.0-4.0 cm (p=0.24), 4.0-5.0 cm (p=0.88), >5.0 cm (p=0.29). Based on SVS surgical criteria, 18 males (AAA >5.5 cm) and 17 females (AAA >5.0 cm) qualified for intervention, and repair rates increased in 2021. Thirty-four males (20 in 2019 vs. 14 in 2021) and seven females (2021 only) received surgical intervention below the threshold for repair. Despite consistent SVS guidelines, adherence remains moderate. AI-based detection and follow-up algorithms may enhance adherence and long-term AAA patient outcomes; however, further research is needed to assess the specific impact of AI.
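The size thresholds this abstract uses for surgical qualification (AAA >5.5 cm in males, >5.0 cm in females) can be written as a simple check; the function and field names here are our own illustrative choices, not part of any published tool:

```python
# Hypothetical helper encoding the repair thresholds quoted in the abstract:
# repair indicated for AAA diameter > 5.5 cm (males) or > 5.0 cm (females).
def meets_svs_repair_threshold(diameter_cm, sex):
    """Return True if the aneurysm diameter exceeds the sex-specific threshold."""
    threshold = 5.5 if sex == "M" else 5.0
    return diameter_cm > threshold
```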

Predicting Knee Osteoarthritis Severity from Radiographic Predictors: Data from the Osteoarthritis Initiative.

Nurmirinta TAT, Turunen MJ, Tohka J, Mononen ME, Liukkonen MK

pubmed logopapersMay 9 2025
In knee osteoarthritis (KOA) treatment, preventive measures that reduce the risk of onset are a key factor. Among individuals with radiographically healthy knees, however, future knee joint integrity and condition cannot be predicted by clinically applicable methods. We investigated whether knee joint morphology derived from widely accessible and cost-effective radiographs could help predict future knee joint integrity and condition. We combined knee joint morphology with known risk predictors such as age, height, and weight. Baseline data were used as predictors, and the maximal severity of KOA after 8 years served as the target variable. The three KOA categories in this study were based on Kellgren-Lawrence grading: healthy, moderate, and severe. We employed a two-stage machine learning model built on two random forest algorithms and trained three models: the subject demographics (SD) model used only SD; the image model used only knee joint morphology from radiographs; the merged model used the combined predictors. The training data comprised an 8-year follow-up of 1222 knees from 683 individuals. The SD model obtained a weighted F1 score (WF1) of 77.2% and a balanced accuracy (BA) of 65.6%. The image model's performance metrics were the lowest, with a WF1 of 76.5% and a BA of 63.8%. The top-performing merged model achieved a WF1 of 78.3% and a BA of 68.2%. Our two-stage prediction model provided improved results on these performance metrics, suggesting potential for application in clinical settings.
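The reported weighted F1 (WF1) and balanced accuracy (BA) follow standard multi-class definitions; a stdlib sketch of both (not the authors' evaluation code):

```python
# Standard multi-class metrics: weighted F1 averages per-class F1 by class
# support; balanced accuracy averages per-class recall.
from collections import Counter

def _per_class_counts(y_true, y_pred, cls):
    tp = sum(t == cls and p == cls for t, p in zip(y_true, y_pred))
    fp = sum(t != cls and p == cls for t, p in zip(y_true, y_pred))
    fn = sum(t == cls and p != cls for t, p in zip(y_true, y_pred))
    return tp, fp, fn

def weighted_f1(y_true, y_pred):
    """F1 per class, weighted by the number of true instances of each class."""
    support = Counter(y_true)
    total = len(y_true)
    score = 0.0
    for cls, n in support.items():
        tp, fp, fn = _per_class_counts(y_true, y_pred, cls)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        score += n / total * f1
    return score

def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recalls, robust to class imbalance."""
    support = Counter(y_true)
    recalls = []
    for cls in support:
        tp, _, fn = _per_class_counts(y_true, y_pred, cls)
        recalls.append(tp / (tp + fn))
    return sum(recalls) / len(recalls)
```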

KEVS: enhancing segmentation of visceral adipose tissue in pre-cystectomy CT with Gaussian kernel density estimation.

Boucher T, Tetlow N, Fung A, Dewar A, Arina P, Kerneis S, Whittle J, Mazomenos EB

pubmed logopapersMay 9 2025
The distribution of visceral adipose tissue (VAT) in cystectomy patients is indicative of the incidence of postoperative complications. Existing VAT segmentation methods for computed tomography (CT) that employ intensity thresholding have limitations relating to inter-observer variability. Moreover, the difficulty of creating ground-truth masks limits the development of deep learning (DL) models for this task. This paper introduces a novel method for VAT prediction in pre-cystectomy CT that is fully automated and does not require ground-truth VAT masks for training, overcoming the aforementioned limitations. We introduce the kernel density-enhanced VAT segmentator (KEVS), combining a DL semantic segmentation model for multi-body feature prediction with Gaussian kernel density estimation analysis of predicted subcutaneous adipose tissue to achieve accurate scan-specific predictions of VAT in the abdominal cavity. Uniquely for a DL pipeline, KEVS does not require ground-truth VAT masks. We verify the ability of KEVS to accurately segment abdominal organs in unseen CT data and compare KEVS VAT segmentation predictions to existing state-of-the-art (SOTA) approaches in a dataset of 20 pre-cystectomy CT scans collected from University College London Hospital (UCLH-Cyst), with expert ground-truth annotations. KEVS shows a 4.80% and 6.02% improvement in Dice coefficient over the second-best DL and thresholding-based VAT segmentation techniques, respectively, when evaluated on UCLH-Cyst. This research introduces KEVS, an automated, SOTA method for the prediction of VAT in pre-cystectomy CT which eliminates inter-observer variability and is trained entirely on open-source CT datasets that do not contain ground-truth VAT masks.
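The core KDE idea — deriving a scan-specific adipose intensity range from a Gaussian kernel density estimate over intensities of the predicted subcutaneous fat — can be sketched as follows. The bandwidth and the density-fraction cutoff are illustrative assumptions, not the published KEVS algorithm:

```python
# Sketch: fit a 1D Gaussian KDE to subcutaneous-fat intensities, then keep the
# intensity range whose density exceeds a fraction of the peak. Cutoff rule and
# bandwidth are assumptions for illustration.
import math

def gaussian_kde(samples, bandwidth):
    """Return a callable 1D Gaussian kernel density estimate."""
    def density(x):
        n = len(samples)
        return sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2) for s in samples) / (
            n * bandwidth * math.sqrt(2 * math.pi))
    return density

def adipose_range(sat_intensities, bandwidth=5.0, frac=0.1, grid=None):
    """Return (lo, hi): grid values whose KDE density >= frac * peak density."""
    if grid is None:
        grid = range(int(min(sat_intensities)) - 20, int(max(sat_intensities)) + 21)
    density = gaussian_kde(sat_intensities, bandwidth)
    d = [density(x) for x in grid]
    peak = max(d)
    kept = [x for x, y in zip(grid, d) if y >= frac * peak]
    return min(kept), max(kept)
```

A range derived this way adapts to each scan's contrast and noise, which is the motivation for a scan-specific rather than fixed threshold.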

Comparison between multimodal foundation models and radiologists for the diagnosis of challenging neuroradiology cases with text and images.

Le Guellec B, Bruge C, Chalhoub N, Chaton V, De Sousa E, Gaillandre Y, Hanafi R, Masy M, Vannod-Michel Q, Hamroun A, Kuchcinski G

pubmed logopapersMay 9 2025
The purpose of this study was to compare the ability of two multimodal models (GPT-4o and Gemini 1.5 Pro) with that of radiologists to generate differential diagnoses from textual context alone, key images alone, or a combination of both, using complex neuroradiology cases. This retrospective study included neuroradiology cases from the "Diagnosis Please" series published in the Radiology journal between January 2008 and September 2024. The two multimodal models were asked to provide three differential diagnoses from textual context alone, key images alone, or the complete case. Six board-certified neuroradiologists solved the cases in the same setting, randomly assigned to two groups: context alone first and images alone first. Three radiologists solved the cases without, and then with, the assistance of Gemini 1.5 Pro. An independent radiologist evaluated the quality of the image descriptions provided by GPT-4o and Gemini 1.5 Pro for each case. Differences in correct answers between multimodal models and radiologists were analyzed using the McNemar test. GPT-4o and Gemini 1.5 Pro outperformed radiologists using clinical context alone (mean accuracy, 34.0% [18/53] and 44.7% [23.7/53] vs. 16.4% [8.7/53]; both P < 0.01). Radiologists outperformed GPT-4o and Gemini 1.5 Pro using images alone (mean accuracy, 42.1% [22.3/53] vs. 3.8% [2/53] and 7.5% [4/53]; both P < 0.01) and on the complete cases (48.0% [25.6/53] vs. 34.0% [18/53] and 38.7% [20.3/53]; both P < 0.001). While radiologists improved their accuracy when combining multimodal information (from 42.1% [22.3/53] for images alone to 50.3% [26.7/53] for complete cases; P < 0.01), GPT-4o and Gemini 1.5 Pro did not benefit from the multimodal context (from 34.0% [18/53] for text alone to 35.2% [18.7/53] for complete cases for GPT-4o, P = 0.48; and from 44.7% [23.7/53] to 42.8% [22.7/53] for Gemini 1.5 Pro, P = 0.54). Radiologists benefited significantly from the suggestions of Gemini 1.5 Pro, increasing their accuracy from 47.2% [25/53] to 56.0% [27/53] (P < 0.01). Both GPT-4o and Gemini 1.5 Pro correctly identified the imaging modality in 53/53 (100%) and 51/53 (96.2%) cases, respectively, but frequently failed to identify key imaging findings (incorrect identification in 43/53 cases [81.1%] for GPT-4o and 50/53 [94.3%] for Gemini 1.5 Pro). Radiologists show a specific ability to benefit from the integration of textual and visual information, whereas multimodal models mostly rely on the clinical context to suggest diagnoses.
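The McNemar test used here for paired model-vs-radiologist comparisons operates only on the discordant pairs (cases one reader got right and the other got wrong); a minimal stdlib sketch of the chi-square version with continuity correction:

```python
# McNemar chi-square statistic with Edwards' continuity correction.
# b = cases correct for reader A but not B; c = correct for B but not A.
def mcnemar_chi2(b, c):
    """Return the continuity-corrected chi-square statistic (1 df)."""
    if b + c == 0:
        return 0.0
    return (abs(b - c) - 1) ** 2 / (b + c)
```

Values above 3.84 correspond to P < 0.05 on the chi-square distribution with one degree of freedom.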

Resting-state functional MRI metrics to detect freezing of gait in Parkinson's disease: a machine learning approach.

Vicidomini C, Fontanella F, D'Alessandro T, Roviello GN, De Stefano C, Stocchi F, Quarantelli M, De Pandis MF

pubmed logopapersMay 9 2025
Among the symptoms that can occur in Parkinson's disease (PD), Freezing of Gait (FOG) is a disabling phenomenon affecting a large proportion of patients, and it remains not fully understood. Accurate classification of FOG in PD is crucial for tailoring effective interventions and is necessary for a better understanding of its underlying mechanisms. In the present work, we applied four Machine Learning (ML) classifiers (Decision Tree - DT, Random Forest - RF, Multilayer Perceptron - MLP, Logistic Regression - LOG) to four different metrics derived from resting-state functional Magnetic Resonance Imaging (rs-fMRI) data processing to assess their accuracy in automatically classifying PD patients based on the presence or absence of FOG. To validate our approach, we applied the same methodologies to distinguish PD patients from a group of Healthy Subjects (HS). The performance of the four ML algorithms was validated by repeated k-fold cross-validation on randomly selected independent training and validation subsets. The results showed that when discriminating PD from HS, the best performance was achieved using RF applied to fractional Amplitude of Low-Frequency Fluctuations (fALFF) data (AUC 96.8 ± 2%). Similarly, when discriminating PD-FOG from PD-nFOG, the RF algorithm was again the best performer on all four metrics, with AUCs above 90%. Finally, to unpack how the AI system's black-box choices were made, we extracted feature importance scores for the best-performing method(s) and discussed them in light of the results obtained to date in rs-fMRI studies on FOG in PD and, more generally, in PD. In summary, the regions most frequently selected when differentiating both PD from HS and PD-FOG from PD-nFOG patients were mainly relevant to the extrapyramidal system, as well as the visual and default mode networks. In addition, the salience network and the supplementary motor area played a major additional role in differentiating PD-FOG from PD-nFOG patients.
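AUC, the headline metric in this study, equals the Mann-Whitney U statistic normalized by the number of positive-negative pairs; a stdlib sketch (ties counted as half):

```python
# ROC AUC via the pairwise (Mann-Whitney) formulation: the probability that a
# randomly chosen positive case is scored higher than a randomly chosen negative.
def roc_auc(labels, scores):
    """labels: 0/1 class labels; scores: classifier scores, higher = more positive."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```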

CT-based quantification of intratumoral heterogeneity for predicting distant metastasis in retroperitoneal sarcoma.

Xu J, Miao JG, Wang CX, Zhu YP, Liu K, Qin SY, Chen HS, Lang N

pubmed logopapersMay 9 2025
Retroperitoneal sarcoma (RPS) is highly heterogeneous, leading to different risks of distant metastasis (DM) among patients with the same clinical stage. This study aims to develop a quantitative method for assessing intratumoral heterogeneity (ITH) using preoperative contrast-enhanced CT (CECT) scans and to evaluate its ability to predict DM risk. We conducted a retrospective analysis of 274 RPS patients who underwent complete surgical resection and were monitored for ≥ 36 months at two centers. Conventional radiomics (C-radiomics), ITH radiomics, and deep-learning (DL) features were extracted from the preoperative CECT scans and used to develop single-modality models. Clinical indicators and high-throughput CECT features were then integrated to develop a combined model for predicting DM. The performance of the models was evaluated by measuring the receiver operating characteristic curve and Harrell's concordance index (C-index). Distant metastasis-free survival (DMFS) was also predicted to further assess survival benefits. The ITH model demonstrated satisfactory predictive capability for DM in the internal and external validation cohorts (AUC: 0.735, 0.765; C-index: 0.691, 0.729). The combined model, which integrated clinicoradiological variables, the ITH-score, and the DL-score, achieved the best predictive performance in the internal and external validation cohorts (AUC: 0.864, 0.801; C-index: 0.770, 0.752) and successfully stratified patients into high- and low-risk groups for DM (p < 0.05). The combined model demonstrated promising potential for accurately predicting DM risk and stratifying DMFS risk in RPS patients undergoing complete surgical resection, providing a valuable tool for guiding treatment decisions and follow-up strategies. Intratumoral heterogeneity analysis facilitates the identification of high-risk retroperitoneal sarcoma patients prone to distant metastasis and poor prognosis, enabling the selection of candidates for more aggressive surgical and post-surgical interventions. Preoperative identification of RPS with high potential for DM is crucial for targeted interventional strategies. Quantitative assessment of intratumoral heterogeneity achieved reasonable performance for predicting DM. The integrated model combining clinicoradiological variables, ITH radiomics, and deep-learning features effectively predicted distant metastasis-free survival.
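Harrell's C-index, reported alongside AUC above, measures how often the predicted risk ordering matches the observed event ordering among comparable patient pairs; a stdlib sketch handling right-censoring (illustrative, not the authors' implementation):

```python
# Harrell's concordance index: among pairs where one patient's event is observed
# strictly before the other's follow-up time, count the fraction where the
# earlier event also received the higher predicted risk (risk ties = 0.5).
def c_index(times, events, risks):
    """times: follow-up times; events: 1 if event observed, 0 if censored;
    risks: predicted risk scores, higher = worse prognosis."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # pair is comparable only if i's event occurred before j's time
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / comparable
```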

Deep learning for Parkinson's disease classification using multimodal and multi-sequences PET/MR images.

Chang Y, Liu J, Sun S, Chen T, Wang R

pubmed logopapersMay 9 2025
We aimed to use deep learning (DL) techniques to accurately differentiate Parkinson's disease (PD) from multiple system atrophy (MSA), which share similar clinical presentations. In this retrospective analysis, 206 patients who underwent PET/MR imaging at the Chinese PLA General Hospital and had been clinically diagnosed with either PD or MSA were included; an additional 38 healthy volunteers served as normal controls (NC). All subjects were randomly assigned to the training and test sets at a ratio of 7:3. The input to the model consists of 10 two-dimensional (2D) slices in the axial, coronal, and sagittal planes from multi-modal images. A modified Residual Block Network with 18 layers (ResNet18) was trained on different image modalities to classify PD, MSA, and NC. A four-fold cross-validation method was applied to the training set. Performance was evaluated by accuracy, precision, recall, F1 score, receiver operating characteristic (ROC) analysis, and area under the ROC curve (AUC). Six single-modal models and seven multi-modal models were trained and tested. The PET models outperformed the MRI models. The ¹¹C-methyl-N-2β-carbomethoxy-3β-(4-fluorophenyl)-tropane (¹¹C-CFT)-apparent diffusion coefficient (ADC) model showed the best classification, achieving 0.97 accuracy, 0.93 precision, 0.95 recall, 0.92 F1 score, and 0.96 AUC. In the test set, the accuracy, precision, recall, and F1 score of the CFT-ADC model were 0.70, 0.73, 0.93, and 0.82, respectively. The proposed DL method shows potential as a high-performance assisting tool for the accurate diagnosis of PD and MSA. A multi-modal and multi-sequence model could further enhance the ability to classify PD.
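Extracting the axial, coronal, and sagittal 2D slices that form the model input can be sketched for a volume stored as nested lists; the volume[z][y][x] indexing convention is an assumption for illustration:

```python
# Orthogonal slice extraction from a 3D volume indexed volume[z][y][x]
# (z = axial position, y = coronal position, x = sagittal position).
def axial_slice(volume, z):
    """Fixed z: a full (y, x) plane."""
    return volume[z]

def coronal_slice(volume, y):
    """Fixed y: one row from every axial plane, giving a (z, x) plane."""
    return [plane[y] for plane in volume]

def sagittal_slice(volume, x):
    """Fixed x: one column from every row of every plane, giving a (z, y) plane."""
    return [[row[x] for row in plane] for plane in volume]
```

A pipeline like the one described would sample 10 such slices per plane and stack them channel-wise per modality before feeding the 2D network.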