Page 11 of 25248 results

Generation of synthetic CT from MRI for MRI-based attenuation correction of brain PET images using radiomics and machine learning.

Hoseinipourasl A, Hossein-Zadeh GA, Sheikhzadeh P, Arabalibeik H, Alavijeh SK, Zaidi H, Ay MR

PubMed | May 12, 2025
Accurate quantitative PET imaging in neurological studies requires proper attenuation correction. MRI-guided attenuation correction in PET/MRI remains challenging owing to the lack of a direct relationship between MRI intensities and linear attenuation coefficients. This study aims to generate accurate patient-specific synthetic CT volumes, attenuation maps, and attenuation correction factor (ACF) sinograms with continuous values using a combination of machine learning algorithms, image processing techniques, and voxel-based radiomics feature extraction. Brain MR images of ten healthy volunteers were acquired using IR-pointwise encoding time reduction with radial acquisition (IR-PETRA) and VIBE-Dixon techniques. Synthetic CT (SCT) images, attenuation maps, and ACFs were generated from the radiomics-based and image-processing-based feature maps of the MR images using LightGBM, a fast and accurate machine learning algorithm. Additionally, ultra-low-dose CT images of the same volunteers were acquired and served as the standard of reference for evaluation. The SCT images, attenuation maps, and ACF sinograms were assessed using qualitative and quantitative evaluation metrics and compared against their corresponding reference images, attenuation maps, and ACF sinograms. The voxel-wise and volume-wise comparison between synthetic and reference CT images yielded an average mean absolute error of 60.75 ± 8.8 HU, an average structural similarity index of 0.88 ± 0.02, and an average peak signal-to-noise ratio of 32.83 ± 2.74 dB. Additionally, comparing MRI-based attenuation maps and ACF sinograms with their CT-based counterparts revealed average normalized mean absolute errors of 1.48% and 1.33%, respectively. Quantitative assessments indicated high correlation and similarity between LightGBM-synthesized and reference CT images.
Moreover, the cross-validation results confirmed that accurate SCT images, MRI-based attenuation maps, and ACF sinograms can be produced. These findings may encourage the adoption of MRI-based attenuation correction on PET/MRI and dedicated brain PET scanners, with low computational time even on CPU-based processors.

Fully volumetric body composition analysis for prognostic overall survival stratification in melanoma patients.

Borys K, Lodde G, Livingstone E, Weishaupt C, Römer C, Künnemann MD, Helfen A, Zimmer L, Galetzka W, Haubold J, Friedrich CM, Umutlu L, Heindel W, Schadendorf D, Hosch R, Nensa F

PubMed | May 12, 2025
Accurate assessment of expected survival in melanoma patients is crucial for treatment decisions. This study explores deep learning (DL)-based body composition analysis to predict overall survival (OS) using baseline computed tomography (CT) scans and to identify fully volumetric, prognostic body composition features. A deep learning network segmented baseline abdomen and thorax CTs from a cohort of 495 patients. The Sarcopenia Index (SI), Myosteatosis Fat Index (MFI), and Visceral Fat Index (VFI) were derived and statistically assessed as prognostic factors for OS. External validation was performed with 428 patients. SI was significantly associated with OS in both CT regions: abdomen (P ≤ 0.0001, HR: 0.36) and thorax (P ≤ 0.0001, HR: 0.27), with lower SI associated with prolonged survival. MFI was also associated with OS on abdomen (P ≤ 0.0001, HR: 1.16) and thorax CTs (P ≤ 0.0001, HR: 1.08), with higher MFI linked to worse outcomes. Lastly, VFI was associated with OS on abdomen CTs (P ≤ 0.001, HR: 1.90), with higher VFI linked to poor outcomes. External validation replicated these results. SI, MFI, and VFI showed substantial potential as prognostic factors for OS in malignant melanoma patients. The approach leverages existing CT scans without additional procedural or financial burden, highlighting the seamless integration of DL-based body composition analysis into standard oncologic staging routines.
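The hazard ratios quoted above come from time-to-event modeling. A minimal sketch of fitting a single-covariate Cox proportional-hazards model by maximizing the partial likelihood is below; the covariate, the effect size, and the absence of censoring are all assumptions for illustration, not details from the study.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
n = 400
x = rng.normal(size=n)                    # hypothetical standardized body-composition index
true_beta = -1.0                          # assumed protective effect: HR = exp(-1) ~ 0.37
t = rng.exponential(scale=1.0 / np.exp(true_beta * x))  # event times under proportional hazards
# (censoring omitted for simplicity)

def neg_partial_loglik(beta):
    order = np.argsort(t)                 # events in time order; risk set = all later subjects
    xs = x[order]
    exps = np.exp(beta * xs)
    risk = np.cumsum(exps[::-1])[::-1]    # suffix sums give each event's risk-set total
    return -np.sum(beta * xs - np.log(risk))

res = minimize_scalar(neg_partial_loglik, bounds=(-5.0, 5.0), method="bounded")
hr = float(np.exp(res.x))                 # hazard ratio per one-SD increase in the index
print(f"estimated hazard ratio per SD: {hr:.2f}")
```

An HR below 1 (as for SI here) means higher covariate values reduce the instantaneous event rate; HRs above 1 (as for MFI and VFI) mean the opposite.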

Identification of HER2-over-expression, HER2-low-expression, and HER2-zero-expression statuses in breast cancer based on ¹⁸F-FDG PET/CT radiomics.

Hou X, Chen K, Luo H, Xu W, Li X

PubMed | May 12, 2025
According to the updated classification system, human epidermal growth factor receptor 2 (HER2) expression statuses are divided into three groups: HER2-over-expression, HER2-low-expression, and HER2-zero-expression; HER2-negative expression was reclassified into HER2-low-expression and HER2-zero-expression. This study aimed to identify the three HER2 expression statuses of breast cancer (BC) patients using PET/CT radiomics and clinicopathological characteristics. A total of 315 BC patients who met the inclusion and exclusion criteria at two institutions were retrospectively included. The patients in institution 1 were divided into a training set and an independent validation set at a ratio of 7:3, and institution 2 served as the external validation set. According to the results of pathological examination, all BC patients were classified as HER2-over-expression, HER2-low-expression, or HER2-zero-expression. First, PET/CT radiomic features and clinicopathological features were extracted for each patient. Second, multiple methods were used for feature screening and selection. Then, four machine learning classifiers, including logistic regression (LR), k-nearest neighbor (KNN), support vector machine (SVM), and random forest (RF), were constructed to identify HER2-over-expression vs. others, HER2-low-expression vs. others, and HER2-zero-expression vs. others. The receiver operating characteristic (ROC) curve was plotted to measure each model's predictive power. After feature screening, 8, 10, and 2 radiomics features plus 2 clinicopathological features were selected to construct the three prediction models. For HER2-over-expression vs. others, the RF model outperformed the other models, with AUC values of 0.843 (95% CI: 0.774-0.897), 0.785 (95% CI: 0.665-0.877), and 0.788 (95% CI: 0.708-0.868) in the training, independent validation, and external validation sets, respectively. For HER2-low-expression vs. others, the LR model outperformed the other models, with AUC values of 0.783 (95% CI: 0.708-0.846), 0.756 (95% CI: 0.634-0.854), and 0.779 (95% CI: 0.698-0.860). In contrast, the KNN model was optimal for distinguishing HER2-zero-expression from others, with AUC values of 0.929 (95% CI: 0.890-0.958), 0.847 (95% CI: 0.764-0.910), and 0.835 (95% CI: 0.762-0.908). PET/CT radiomics models integrated with clinicopathological characteristics can non-invasively predict the HER2 status of BC patients.
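The one-vs-others evaluation pattern used above (fit a classifier, report AUC with a 95% CI) can be sketched with synthetic data. The features, labels, classifier settings, and bootstrap-over-the-test-set CI method below are illustrative assumptions, not the study's actual pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 10))   # hypothetical radiomics + clinicopathological feature table
# Hypothetical binary one-vs-others label driven by two informative features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0.0, 1.0, size=300) > 0).astype(int)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xtr, ytr)
prob = clf.predict_proba(Xte)[:, 1]
auc = roc_auc_score(yte, prob)

# 95% bootstrap CI by resampling the test set
scores = []
for _ in range(500):
    s = rng.choice(len(yte), size=len(yte), replace=True)
    if len(np.unique(yte[s])) == 2:          # skip degenerate single-class resamples
        scores.append(roc_auc_score(yte[s], prob[s]))
lo, hi = np.percentile(scores, [2.5, 97.5])
print(f"AUC {auc:.3f} (95% CI {lo:.3f}-{hi:.3f})")
```

Running the same loop once per status (over vs. others, low vs. others, zero vs. others) yields the three models the abstract describes.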

Effect of Deep Learning-Based Image Reconstruction on Lesion Conspicuity of Liver Metastases in Pre- and Post-contrast Enhanced Computed Tomography.

Ichikawa Y, Hasegawa D, Domae K, Nagata M, Sakuma H

PubMed | May 12, 2025
The purpose of this study was to investigate the utility of deep learning image reconstruction at medium and high intensity levels (DLIR-M and DLIR-H, respectively) for better delineation of liver metastases on pre- and post-contrast CT, compared with conventional hybrid iterative reconstruction (IR). Forty-one patients with liver metastases who underwent abdominal CT were studied. The raw data were reconstructed with three algorithms: hybrid IR (ASiR-V 50%), DLIR-M (TrueFidelity-M), and DLIR-H (TrueFidelity-H). Three experienced radiologists independently rated the lesion conspicuity of liver metastases on a qualitative 5-point scale (1 = very poor; 5 = excellent). For each patient, the observers also selected the pre- and post-contrast image series they considered most preferable for assessing liver metastases. For pre-contrast CT, lesion conspicuity scores for DLIR-H and DLIR-M were significantly higher than those for hybrid IR for two of the three observers, with no significant difference for the third. For post-contrast CT, lesion conspicuity scores for DLIR-H were significantly higher than those for DLIR-M for two of the three observers (Observer 1: DLIR-H, 4.3 ± 0.8 vs. DLIR-M, 3.9 ± 0.9, p = 0.0006; Observer 3: DLIR-H, 4.6 ± 0.6 vs. DLIR-M, 4.3 ± 0.6, p = 0.0013). For post-contrast CT, all observers most often selected DLIR-H as the best reconstruction for diagnosing liver metastases. For pre-contrast CT, however, the preferred reconstruction varied among the three observers, and DLIR was not necessarily preferred over hybrid IR.
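The paired per-patient score comparisons above call for a paired test on ordinal data. The abstract does not name its statistical test, so the Wilcoxon signed-rank test below is an assumed choice, applied to fabricated illustrative scores rather than the study's data.

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(4)
# Hypothetical paired 5-point conspicuity scores for 41 patients (values fabricated)
dlir_m = rng.integers(2, 5, size=41).astype(float)       # DLIR-M scores in {2, 3, 4}
dlir_h = dlir_m + rng.choice([0.0, 1.0, 1.0], size=41)   # DLIR-H rated the same or one point higher

stat, p = wilcoxon(dlir_h, dlir_m)   # paired nonparametric test suited to ordinal ratings
print(f"Wilcoxon signed-rank p = {p:.2e}")
```

A paired test is appropriate here because each patient contributes one score per reconstruction, so differences are computed within patients.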

Two-Stage Automatic Liver Classification System Based on Deep Learning Approach Using CT Images.

Kılıç R, Yalçın A, Alper F, Oral EA, Ozbek IY

PubMed | May 12, 2025
Alveolar echinococcosis (AE) is a parasitic disease caused by Echinococcus multilocularis, for which early detection is crucial for effective treatment. This study introduces a novel method for the early diagnosis of liver diseases that differentiates between tumor, AE, and healthy cases using non-contrast CT images, which are widely accessible and eliminate the risks associated with contrast agents. The proposed approach integrates automatic liver region detection based on RCNN followed by a CNN-based classification framework. A dataset comprising over 27,000 thorax-abdominal images from 233 patients, including 8206 images with liver tissue, was constructed and used to evaluate the proposed method. The experimental results demonstrate the importance of the two-stage classification approach: the 2-class problem (healthy vs. non-healthy) yielded an accuracy of 0.936 (95% CI: 0.925-0.947), and the 3-class problem (AE, tumor, and healthy) an accuracy of 0.863 (95% CI: 0.847-0.879). These results highlight the potential of the proposed framework as a fully automatic approach to liver classification without the use of contrast agents. Furthermore, the framework demonstrates competitive performance compared with other state-of-the-art techniques, suggesting its applicability in clinical practice.
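Accuracy with a 95% CI, as reported above, is commonly computed with the Wilson score interval for a proportion. A small sketch follows; the test-set size n = 8206 is an assumption for illustration (the paper does not state which n underlies its CI), so the interval will not match the reported one exactly.

```python
import math

def wilson_ci(acc, n, z=1.96):
    """Wilson score interval for a proportion such as classification accuracy."""
    denom = 1.0 + z * z / n
    center = (acc + z * z / (2 * n)) / denom
    half = z * math.sqrt(acc * (1 - acc) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

# Illustration with the reported 2-class accuracy and an assumed test size
lo, hi = wilson_ci(0.936, 8206)
print(f"0.936 (95% CI: {lo:.3f}-{hi:.3f})")
```

The interval narrows as n grows, which is why CI width is a rough fingerprint of the evaluation set size.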

Biological markers and psychosocial factors predict chronic pain conditions.

Fillingim M, Tanguay-Sabourin C, Parisien M, Zare A, Guglietti GV, Norman J, Petre B, Bortsov A, Ware M, Perez J, Roy M, Diatchenko L, Vachon-Presseau E

PubMed | May 12, 2025
Chronic pain is a multifactorial condition presenting significant diagnostic and prognostic challenges. Biomarkers for the classification and prediction of chronic pain are therefore critically needed. In this multidataset study of over 523,000 participants, we applied machine learning to multidimensional biological data from the UK Biobank to identify biomarkers for 35 medical conditions associated with pain (for example, rheumatoid arthritis and gout) or self-reported chronic pain (for example, back pain and knee pain). Biomarkers derived from blood immunoassays, brain and bone imaging, and genetics were effective in predicting medical conditions associated with chronic pain (area under the curve (AUC) 0.62-0.87) but not self-reported pain (AUC 0.50-0.62). Notably, all biomarkers worked in synergy with psychosocial factors, accurately predicting both medical conditions (AUC 0.69-0.91) and self-reported pain (AUC 0.71-0.92). These findings underscore the necessity of a holistic approach to biomarker development to enhance clinical utility.

Preoperative prediction of malignant transformation in sinonasal inverted papilloma: a novel MRI-based deep learning approach.

Ding C, Wen B, Han Q, Hu N, Kang Y, Wang Y, Wang C, Zhang L, Xian J

PubMed | May 12, 2025
To develop a novel MRI-based deep learning (DL) diagnostic model, utilizing multicenter large-sample data, for the preoperative differentiation of sinonasal inverted papilloma (SIP) from SIP-transformed squamous cell carcinoma (SIP-SCC). This study included 568 patients from four centers with confirmed SIP (n = 421) and SIP-SCC (n = 147). Deep learning models were built using T1WI, T2WI, and CE-T1WI. A combined model was constructed by integrating these features through an attention mechanism. The diagnostic performance of radiologists, both with and without the model's assistance, was compared. Model performance was evaluated through receiver operating characteristic (ROC) analysis, calibration curves, and decision curve analysis (DCA). The combined model demonstrated superior performance in differentiating SIP from SIP-SCC, achieving AUCs of 0.954, 0.897, and 0.859 in the training, internal validation, and external validation cohorts, respectively. It showed optimal accuracy, stability, and clinical benefit, as confirmed by Brier scores and calibration curves. The diagnostic performance of radiologists, especially less experienced ones, improved significantly with model assistance. The MRI-based deep learning model enhances preoperative prediction of malignant transformation of sinonasal inverted papilloma. By facilitating earlier diagnosis and prompting timely pathological examination or surgical intervention, this approach holds the potential to improve patient prognosis.
Question: SIP is prone to local malignant transformation, which leads to poor prognosis; current diagnostic methods are invasive and inaccurate, so effective preoperative differentiation is needed.
Findings: The MRI-based deep learning model accurately diagnoses malignant transformation of SIP, enabling junior radiologists to achieve greater clinical benefit with the model's assistance.
Clinical relevance: A novel MRI-based deep learning model enhances preoperative diagnosis of malignant transformation in sinonasal inverted papilloma, providing a non-invasive tool for personalized treatment planning.
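Decision curve analysis, mentioned above, compares a model's net benefit against the "treat all" and "treat none" strategies across thresholds. A minimal sketch of the net-benefit computation on fabricated predictions (the labels, probabilities, and threshold below are illustrative assumptions):

```python
import numpy as np

def net_benefit(y_true, p, threshold):
    """Net benefit at a probability threshold, the quantity plotted in decision curve analysis."""
    treat = p >= threshold
    n = len(y_true)
    tp = np.sum(treat & (y_true == 1))   # correctly flagged malignant transformations
    fp = np.sum(treat & (y_true == 0))   # benign cases flagged unnecessarily
    return tp / n - (fp / n) * threshold / (1 - threshold)

rng = np.random.default_rng(5)
y = rng.integers(0, 2, size=500)                                        # hypothetical SIP-SCC labels
p = np.clip(0.6 * y + 0.2 + rng.normal(0.0, 0.15, size=500), 0.01, 0.99)  # an informative toy model
nb_model = net_benefit(y, p, 0.25)
nb_all = net_benefit(y, np.ones(500), 0.25)   # "intervene on everyone" reference strategy
print(f"model: {nb_model:.3f}, treat-all: {nb_all:.3f}")
```

A model is clinically useful at a threshold only if its net benefit exceeds both reference strategies there, which is what the abstract's DCA claim asserts.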

Automatic CTA analysis for blood vessels and aneurysm features extraction in EVAR planning.

Robbi E, Ravanelli D, Allievi S, Raunig I, Bonvini S, Passerini A, Trianni A

PubMed | May 12, 2025
Endovascular Aneurysm Repair (EVAR) is a minimally invasive procedure crucial for treating abdominal aortic aneurysms (AAA), where precise pre-operative planning is essential. Current clinical methods rely on manual measurements, which are time-consuming and prone to errors. Although AI solutions are increasingly being developed to automate aspects of these processes, most existing approaches primarily focus on computing volumes and diameters, falling short of delivering a fully automated pre-operative analysis. This work presents BRAVE (Blood Vessels Recognition and Aneurysms Visualization Enhancement), the first comprehensive AI-driven solution for vascular segmentation and AAA analysis using pre-operative CTA scans. BRAVE offers exhaustive segmentation, identifying both the primary abdominal aorta and secondary vessels, often overlooked by existing methods, providing a complete view of the vascular structure. The pipeline performs advanced volumetric analysis of the aneurysm sac, quantifying thrombotic tissue and calcifications, and automatically identifies the proximal and distal sealing zones, critical for successful EVAR procedures. BRAVE enables fully automated processing, reducing manual intervention and improving clinical workflow efficiency. Trained on a multi-center open-access dataset, it demonstrates generalizability across different CTA protocols and patient populations, ensuring robustness in diverse clinical settings. This solution saves time, ensures precision, and standardizes the process, enhancing vascular surgeons' decision-making.
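Vascular segmentation pipelines like the one described above are typically evaluated with overlap metrics such as the Dice coefficient. The function and toy masks below are illustrative, not part of BRAVE.

```python
import numpy as np

def dice(pred, gt):
    """Dice overlap between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

# Toy 3D masks standing in for a vessel segmentation and its ground truth
gt = np.zeros((4, 4, 4), dtype=bool)
gt[1:3, 1:3, 1:3] = True                 # 8 ground-truth voxels
pred = gt.copy()
pred[1, 1, 1] = False                    # one missed voxel
pred[0, 0, 0] = True                     # one false-positive voxel
print(dice(pred, gt))                    # 2*7 / (8+8) = 0.875
```

Derived quantities such as thrombus and calcification volumes then reduce to voxel counts within the predicted masks, scaled by the scan's voxel spacing.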

Enhancing noninvasive pancreatic cystic neoplasm diagnosis with multimodal machine learning.

Huang W, Xu Y, Li Z, Li J, Chen Q, Huang Q, Wu Y, Chen H

PubMed | May 12, 2025
Pancreatic cystic neoplasms (PCNs) are a complex group of lesions with a spectrum of malignancy. Accurate differentiation of PCN types is crucial for patient management, as misdiagnosis can result in unnecessary surgery or treatment delays, affecting quality of life. The need to improve patient outcomes and reduce the impact of these conditions underscores the significance of developing a non-invasive, accurate diagnostic model. We developed a machine learning model capable of accurately identifying different types of PCNs in a non-invasive manner, using a dataset comprising 449 MRI and 568 CT scans from adult patients spanning 2009 to 2022. Our multimodal machine learning algorithm, which integrates both clinical and imaging data, significantly outperforms single-source algorithms: it achieved state-of-the-art performance in classifying PCN types, with an average accuracy of 91.2%, precision of 91.7%, sensitivity of 88.9%, and specificity of 96.5%. Remarkably, for patients with mucinous cystic neoplasms (MCNs), the model achieved 100% prediction accuracy regardless of whether they underwent MRI or CT imaging. These results indicate that our non-invasive multimodal model offers strong support for the early screening of MCNs and represents a significant advancement in PCN diagnosis, with the potential to improve clinical practice and patient outcomes. We also achieved the best results on an additional pancreatic cancer dataset, further demonstrating the generality of our model.

Application of improved graph convolutional network for cortical surface parcellation.

Tan J, Ren X, Chen Y, Yuan X, Chang F, Yang R, Ma C, Chen X, Tian M, Chen W, Wang Z

PubMed | May 12, 2025
Accurate cortical surface parcellation is essential for elucidating brain organizational principles, functional mechanisms, and the neural substrates underlying higher cognitive and emotional processes. However, the cortical surface is a highly folded, complex geometry, and large regional variations make the analysis of surface data challenging. Current methods rely on geometric simplification, such as spherical expansion, in which spherical mapping and registration take hours; this popular but costly process does not take full advantage of the inherent structural information. In this study, we propose an Attention-guided Deep Graph Convolutional network (ADGCN) for end-to-end parcellation on primitive cortical surface manifolds. ADGCN consists of deep graph convolutional layers in a symmetric U-shaped structure, which lets it propagate detailed information from the original brain map and learn the complex graph structure, enhancing the network's feature extraction capability. Moreover, we introduce a Squeeze-and-Excitation (SE) module, which enables the network to better capture key features and suppress unimportant ones, significantly improving parcellation performance at little computational cost. We evaluated the model on a public dataset of 100 manually labeled brain surfaces. Compared with other methods, the proposed network achieves a Dice coefficient of 88.53% and an accuracy of 90.27%. The network segments the cortex directly in the original domain and offers high efficiency, simple operation, and strong interpretability. This approach facilitates the investigation of cortical changes during development, aging, and disease progression, with the potential to enhance the accuracy of neurological disease diagnosis and the objectivity of treatment efficacy evaluation.
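The two building blocks named above can be sketched in NumPy: a generic graph-convolution layer with symmetric normalization (Kipf-Welling style, assumed here since ADGCN's exact layer is not specified in the abstract) followed by an SE channel-gating step. All sizes and weights below are arbitrary toy values.

```python
import numpy as np

rng = np.random.default_rng(3)

def gcn_layer(A, H, W):
    """Graph convolution with symmetric normalization and ReLU: relu(D^-1/2 Â D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])                            # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))    # degree normalization
    return np.maximum(d_inv_sqrt @ A_hat @ d_inv_sqrt @ H @ W, 0.0)

def se_module(H, W1, W2):
    """Squeeze-and-Excitation over channels: global mean pool, small MLP, sigmoid gate."""
    z = H.mean(axis=0)                                         # squeeze: one value per channel
    s = 1.0 / (1.0 + np.exp(-(np.maximum(z @ W1, 0.0) @ W2)))  # excitation weights in (0, 1)
    return H * s                                               # channel-wise recalibration

# Toy surface graph: 6 vertices, 4 input features, 8 hidden channels (sizes are arbitrary)
A = (rng.random((6, 6)) < 0.4).astype(float)
A = np.triu(A, 1)
A = A + A.T                                                    # symmetric adjacency, zero diagonal
H = rng.normal(size=(6, 4))
H1 = gcn_layer(A, H, 0.5 * rng.normal(size=(4, 8)))
out = se_module(H1, 0.5 * rng.normal(size=(8, 4)), 0.5 * rng.normal(size=(4, 8)))
print(out.shape)
```

Stacking such layers in a U-shaped encoder-decoder, with per-vertex softmax outputs over parcellation labels, gives the general shape of the architecture the abstract describes.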