Page 60 of 78779 results

Exploring <i>SLC25A42</i> as a Radiogenomic Marker from the Perioperative Stage to Chemotherapy in Hepatitis-Related Hepatocellular Carcinoma.

Dou L, Jiang J, Yao H, Zhang B, Wang X

pubmed logopapersJun 2 2025
<b><i>Background:</i></b> The molecular mechanisms that drive hepatocellular carcinoma (HCC) and predict chemotherapy sensitivity remain unclear; therefore, identifying key biomarkers is essential for the early diagnosis and treatment of HCC. <b><i>Method:</i></b> We collected and processed computed tomography (CT) and clinical data from 116 patients with autoimmune hepatitis (AIH) and HCC who presented to our hospital's Liver Cancer Center. We then identified and extracted important characteristic features from patient images and correlated them with mitochondria-related genes using machine learning techniques such as multihead attention networks, lasso regression, principal component analysis (PCA), and support vector machines (SVM). These genes were integrated into radiomics signature models to explore their role in disease progression. We further correlated these results with clinical variables to screen for driver genes and to evaluate the ability of key genes to predict chemotherapy sensitivity in liver cancer (LC) patients. Finally, qPCR was used to validate the expression of this gene in patient samples. <b><i>Results:</i></b> Our study utilized attention networks to identify disease regions in medical images with 97% accuracy and an AUC of 94%. We extracted 942 imaging features, identifying five key features through lasso regression that accurately differentiate AIH from HCC. Transcriptome analysis revealed 132 upregulated and 101 downregulated genes in AIH, with 45 significant genes identified by XGBoost. In HCC analysis, PCA and random forest highlighted 11 key features. Among mitochondrial genes, <i>SLC25A42</i> correlated positively with normal tissue imaging features but negatively with cancerous tissues and was identified as a driver gene. Low expression of <i>SLC25A42</i> was associated with chemotherapy sensitivity in HCC patients.
<b><i>Conclusions:</i></b> Machine learning modeling combined with genomic profiling provides a promising approach to identify the driver gene <i>SLC25A42</i> in LC, which may help improve diagnostic accuracy and the prediction of chemotherapy sensitivity for this disease.
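The lasso step described above, selecting a handful of discriminative features from hundreds of radiomic candidates, can be sketched in a few lines. This is a minimal NumPy illustration of L1-penalized feature selection via coordinate descent, not the study's code; the feature matrix, labels, and penalty value are invented.

```python
import numpy as np

def lasso_cd(X, y, alpha=0.1, iters=200):
    """Minimal coordinate-descent lasso (illustrative, not the paper's pipeline)."""
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(iters):
        for j in range(p):
            r = y - X @ w + X[:, j] * w[j]        # partial residual with feature j removed
            rho = X[:, j] @ r / n                 # correlation of feature j with residual
            z = (X[:, j] @ X[:, j]) / n
            w[j] = np.sign(rho) * max(abs(rho) - alpha, 0.0) / z  # soft-thresholding
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(116, 30))                    # 116 patients x 30 candidate features
y = 2.0 * X[:, 0] - 1.5 * X[:, 4] + rng.normal(scale=0.1, size=116)

w = lasso_cd(X, y, alpha=0.1)
selected = np.flatnonzero(np.abs(w) > 1e-6)       # features surviving the L1 penalty
print(selected)
```

With a sufficiently large penalty, only the features that genuinely drive the label keep nonzero weights, which is the mechanism behind reducing 942 imaging features to a small key subset.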

Synthetic Ultrasound Image Generation for Breast Cancer Diagnosis Using cVAE-WGAN Models: An Approach Based on Generative Artificial Intelligence

Mondillo, G., Masino, M., Colosimo, S., Perrotta, A., Frattolillo, V., Abbate, F. G.

medrxiv logopreprintJun 2 2025
The scarcity and imbalance of medical image datasets hinder the development of robust computer-aided diagnosis (CAD) systems for breast cancer. This study explores the application of advanced generative models, based on generative artificial intelligence (GenAI), for the synthesis of digital breast ultrasound images. Using a hybrid Conditional Variational Autoencoder-Wasserstein Generative Adversarial Network (CVAE-WGAN) architecture, we developed a system to generate high-quality synthetic images conditioned on the class (malignant vs. normal/benign). These synthetic images, generated from the low-resolution BreastMNIST dataset and filtered for quality, were systematically integrated with real training data at different mixing ratios (W). The performance of a CNN classifier trained on these mixed datasets was evaluated against a baseline model trained only on real data balanced with SMOTE. The optimal integration (mixing weight W=0.25) produced a significant performance increase on the real test set: +8.17% in macro-average F1-score and +4.58% in accuracy compared to using real data alone. Analysis confirmed the originality of the generated samples. This approach offers a promising solution for overcoming data limitations in image-based breast cancer diagnostics, potentially improving the capabilities of CAD systems.
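The dataset-integration step described above, combining real and synthetic images at a mixing weight W, can be sketched as follows. This is a hedged NumPy illustration; the array shapes stand in for BreastMNIST images and the helper name is hypothetical, not from the study.

```python
import numpy as np

def mix_datasets(real, synthetic, w, seed=0):
    """Build a training set in which a fraction w of samples is synthetic."""
    rng = np.random.default_rng(seed)
    # choose n_syn so that n_syn / (len(real) + n_syn) == w
    n_syn = int(round(w / (1.0 - w) * len(real)))
    idx = rng.choice(len(synthetic), size=n_syn, replace=False)
    return np.concatenate([real, synthetic[idx]])

real = np.zeros((300, 28, 28))       # stand-ins for real low-resolution images
synthetic = np.ones((500, 28, 28))   # stand-ins for generated (CVAE-WGAN) images
mixed = mix_datasets(real, synthetic, w=0.25)
print(len(mixed))                    # 300 real + 100 synthetic
```

At W=0.25 this yields one synthetic sample for every three real ones; the classifier is then trained on the mixed set and evaluated on real data only, as in the study.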

Performance of GPT-4 Turbo and GPT-4o in Korean Society of Radiology In-Training Examinations.

Choi A, Kim HG, Choi MH, Ramasamy SK, Kim Y, Jung SE

pubmed logopapersJun 1 2025
Despite the potential of large language models for radiology training, their ability to handle image-based radiological questions remains poorly understood. This study aimed to evaluate the performance of GPT-4 Turbo and GPT-4o in radiology resident examinations, to analyze differences across question types, and to compare their results with those of residents at different levels. A total of 776 multiple-choice questions from the Korean Society of Radiology In-Training Examinations were used, forming two question sets: one originally written in Korean and the other translated into English. We evaluated the performance of GPT-4 Turbo (gpt-4-turbo-2024-04-09) and GPT-4o (gpt-4o-2024-11-20) on these questions with the temperature set to zero, determining accuracy based on the majority vote from five independent trials. We analyzed the results by question type (text-only vs. image-based) and benchmarked them against nationwide radiology residents' performance. The impact of the input language (Korean or English) on model performance was also examined. GPT-4o outperformed GPT-4 Turbo for both image-based (48.2% vs. 41.8%, <i>P</i> = 0.002) and text-only questions (77.9% vs. 69.0%, <i>P</i> = 0.031). On image-based questions, GPT-4 Turbo and GPT-4o showed comparable performance to that of 1st-year residents (41.8% and 48.2%, respectively, vs. 43.3%, <i>P</i> = 0.608 and 0.079, respectively) but lower performance than that of 2nd- to 4th-year residents (vs. 56.0%-63.9%, all <i>P</i> ≤ 0.005). For text-only questions, GPT-4 Turbo and GPT-4o performed better than residents across all years (69.0% and 77.9%, respectively, vs. 44.7%-57.5%, all <i>P</i> ≤ 0.039). Performance on the English- and Korean-version questions showed no significant differences for either model (all <i>P</i> ≥ 0.275). GPT-4o outperformed GPT-4 Turbo on both question types.
On image-based questions, both models' performance matched that of 1st-year residents but was lower than that of higher-year residents. Both models demonstrated superior performance compared to residents for text-only questions. The models showed consistent performances across English and Korean inputs.
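The majority-vote scoring described above, where each question is answered in five independent trials and the modal choice counts as the model's final answer, can be sketched as follows. The question IDs, answer letters, and key are made up for illustration.

```python
from collections import Counter

def majority_vote(trial_answers):
    """Pick the most common answer across trials (ties broken by first seen)."""
    return Counter(trial_answers).most_common(1)[0][0]

trials = {                     # hypothetical answers from five trials per question
    "Q1": ["B", "B", "C", "B", "B"],
    "Q2": ["A", "D", "A", "A", "D"],
    "Q3": ["C", "C", "C", "C", "C"],
}
answer_key = {"Q1": "B", "Q2": "A", "Q3": "D"}

final = {q: majority_vote(a) for q, a in trials.items()}
accuracy = sum(final[q] == answer_key[q] for q in final) / len(final)
print(final, accuracy)
```

Aggregating over repeated trials at temperature zero smooths out residual nondeterminism in the model's responses before accuracy is computed.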

Enhancing radiomics features via a large language model for classifying benign and malignant breast tumors in mammography.

Ra S, Kim J, Na I, Ko ES, Park H

pubmed logopapersJun 1 2025
Radiomics is widely used to assist in clinical decision-making, disease diagnosis, and treatment planning for various target organs, including the breast. Recent advances in large language models (LLMs) have helped enhance radiomics analysis. Herein, we sought to improve radiomics analysis by incorporating LLM-learned clinical knowledge, to classify benign and malignant tumors in breast mammography. We extracted radiomics features from the mammograms based on the region of interest and retained the features related to the target task. Using prompt engineering, we devised an input sequence that reflected the selected features and the target task. The input sequence was fed to the chosen LLM (LLaMA variant), which was fine-tuned using low-rank adaptation to enhance radiomics features. This was then evaluated on two mammogram datasets (VinDr-Mammo and INbreast) against conventional baselines. The enhanced radiomics-based method performed better than baselines using conventional radiomics features tested on two mammogram datasets, achieving accuracies of 0.671 for the VinDr-Mammo dataset and 0.839 for the INbreast dataset. Conventional radiomics models require retraining from scratch for an unseen dataset using a new set of features. In contrast, the model developed in this study effectively reused the common features between the training and unseen datasets by explicitly linking feature names with feature values, leading to extensible learning across datasets. Our method performed better than the baseline method in this retraining setting using an unseen dataset. Our method, one of the first to incorporate LLM into radiomics, has the potential to improve radiomics analysis.
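The prompt-engineering step described above, explicitly linking radiomic feature names with their values in the input sequence fed to the LLM, might look roughly like the sketch below. The feature names follow common radiomics naming conventions, but the exact wording, values, and task instruction are hypothetical, not the study's actual template.

```python
def build_prompt(features, task="Classify the breast tumor as benign or malignant."):
    """Serialize named feature values into a text sequence for an LLM."""
    lines = [f"{name} = {value:.3f}" for name, value in features.items()]
    return task + "\nRadiomics features:\n" + "\n".join(lines)

features = {                              # illustrative selected features
    "original_shape_Sphericity": 0.812,
    "original_firstorder_Entropy": 4.271,
    "original_glcm_Contrast": 12.905,
}
prompt = build_prompt(features)
print(prompt)
```

Because feature names are carried in the text rather than fixed input positions, features shared between the training set and an unseen dataset line up automatically, which is what enables the cross-dataset reuse described above.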

Radiomics across modalities: a comprehensive review of neurodegenerative diseases.

Inglese M, Conti A, Toschi N

pubmed logopapersJun 1 2025
Radiomics allows extraction from medical images of quantitative features that are able to reveal tissue patterns that are generally invisible to human observers. Despite the challenges in visually interpreting radiomic features and the computational resources required to generate them, they hold significant value in downstream automated processing. For instance, in statistical or machine learning frameworks, radiomic features enhance sensitivity and specificity, making them indispensable for tasks such as diagnosis, prognosis, prediction, monitoring, image-guided interventions, and evaluating therapeutic responses. This review explores the application of radiomics in neurodegenerative diseases, with a focus on Alzheimer's disease, Parkinson's disease, Huntington's disease, and multiple sclerosis. While radiomics literature often focuses on magnetic resonance imaging (MRI) and computed tomography (CT), this review also covers its broader application in nuclear medicine, with use cases of positron emission tomography (PET) and single-photon emission computed tomography (SPECT) radiomics. Additionally, we review integrated radiomics, where features from multiple imaging modalities are fused to improve model performance. This review also highlights the growing integration of radiomics with artificial intelligence and the need for feature standardisation and reproducibility to facilitate its translation into clinical practice.
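To make the notion of "quantitative features invisible to human observers" concrete, here is a minimal NumPy sketch of first-order radiomic feature extraction from a region of interest. Real pipelines (e.g., PyRadiomics under the IBSI definitions) compute many more feature classes; the ROI here is a synthetic stand-in.

```python
import numpy as np

def first_order_features(roi):
    """Compute a few first-order intensity statistics over an ROI."""
    x = roi.ravel().astype(float)
    mu, sd = x.mean(), x.std()
    return {
        "mean": mu,
        "variance": sd ** 2,
        "skewness": ((x - mu) ** 3).mean() / sd ** 3,  # asymmetry of intensities
        "kurtosis": ((x - mu) ** 4).mean() / sd ** 4,  # heaviness of the tails
        "energy": (x ** 2).sum(),
    }

rng = np.random.default_rng(0)
roi = rng.normal(loc=100.0, scale=5.0, size=(32, 32))  # stand-in image patch
feats = first_order_features(roi)
print({k: round(v, 2) for k, v in feats.items()})
```

Features like these feed the statistical and machine learning frameworks discussed above, where their combination, rather than any single value, carries the diagnostic signal.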

An explainable adaptive channel weighting-based deep convolutional neural network for classifying renal disorders in computed tomography images.

Loganathan G, Palanivelan M

pubmed logopapersJun 1 2025
Renal disorders are a significant public health concern and a cause of mortality related to renal failure. Manual diagnosis is subjective, labor-intensive, and depends on the expertise of nephrologists in renal anatomy. To improve workflow efficiency and enhance diagnostic accuracy, we propose an automated deep learning model, called EACWNet, which incorporates an adaptive channel weighting-based deep convolutional neural network and explainable artificial intelligence. The proposed model categorizes renal computed tomography images into various classes, such as cyst, normal, tumor, and stone. The adaptive channel weighting module utilizes both global and local contextual insights to refine the final feature-map channel weights by integrating a scale-adaptive channel attention module into the higher convolutional blocks of the VGG-19 backbone employed in the proposed method. The efficacy of the EACWNet model has been assessed on a publicly available renal CT image dataset, attaining an accuracy of 98.87% and demonstrating a 1.75% improvement over the backbone model. However, the model exhibits class-wise precision variation, achieving higher precision for cyst, normal, and tumor cases but lower precision for the stone class due to its inherent variability and heterogeneity. Furthermore, the model predictions have been subjected to additional analysis using explainable artificial intelligence methods such as local interpretable model-agnostic explanations (LIME) to better visualize and understand the model predictions.
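The general idea behind channel weighting, pooling each feature-map channel to a scalar, passing the result through a small gating network, and rescaling the channels, can be sketched in squeeze-and-excitation style. This NumPy illustration shows the mechanism only; the paper's scale-adaptive module and its VGG-19 integration are not reproduced here.

```python
import numpy as np

def channel_attention(fmap, w1, w2):
    """fmap: (C, H, W) feature map; w1, w2: weights of the small gating MLP."""
    squeeze = fmap.mean(axis=(1, 2))                 # global average pool -> (C,)
    hidden = np.maximum(w1 @ squeeze, 0.0)           # ReLU bottleneck
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))      # per-channel sigmoid in (0, 1)
    return fmap * gate[:, None, None]                # reweight each channel

rng = np.random.default_rng(0)
C = 8
fmap = rng.normal(size=(C, 4, 4))                    # stand-in convolutional activations
w1 = rng.normal(size=(C // 2, C))                    # bottleneck reduces C -> C/2
w2 = rng.normal(size=(C, C // 2))                    # expansion restores C/2 -> C
out = channel_attention(fmap, w1, w2)
print(out.shape)
```

Because the gate lies in (0, 1), informative channels are preserved while uninformative ones are attenuated, which is the refinement of channel weights the abstract describes.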

Tailoring ventilation and respiratory management in pediatric critical care: optimizing care with precision medicine.

Beauchamp FO, Thériault J, Sauthier M

pubmed logopapersJun 1 2025
Critically ill children admitted to the intensive care unit frequently need respiratory care to support lung function. Mechanical ventilation is a complex field with multiple parameters to set. The development of precision medicine will allow clinicians to personalize respiratory care and improve patient outcomes. Lung and diaphragmatic ultrasound, electrical impedance tomography, neurally adjusted ventilatory assist ventilation, and the use of monitoring data in machine learning models are increasingly used to tailor care. Each modality offers insights into different aspects of the patient's respiratory system function and enables the adjustment of treatment to better support the patient's physiology. Precision medicine in respiratory care has been associated with decreased ventilation time, increased extubation and ventilation-weaning success, and an increased ability to identify phenotypes to guide treatment and predict outcomes. This review focuses on the use of precision medicine in the setting of pediatric acute respiratory distress syndrome, asthma, bronchiolitis, extubation readiness trials and ventilation weaning, ventilator-associated pneumonia, and other respiratory tract infections. Precision medicine is revolutionizing respiratory care and will decrease complications associated with ventilation. More research is needed to standardize its use and better evaluate its impact on patient outcomes.

Toward Noninvasive High-Resolution In Vivo pH Mapping in Brain Tumors by <sup>31</sup>P-Informed deepCEST MRI.

Schüre JR, Rajput J, Shrestha M, Deichmann R, Hattingen E, Maier A, Nagel AM, Dörfler A, Steidl E, Zaiss M

pubmed logopapersJun 1 2025
The intracellular pH (pH<sub>i</sub>) is critical for understanding various pathologies, including brain tumors. While conventional pH<sub>i</sub> measurement through <sup>31</sup>P-MRS suffers from low spatial resolution and long scan times, <sup>1</sup>H-based APT-CEST imaging offers higher resolution with shorter scan times. This study aims to directly predict <sup>31</sup>P-pH<sub>i</sub> maps from CEST data using a fully connected neural network. Fifteen tumor patients were scanned on a 3-T Siemens PRISMA scanner and received <sup>1</sup>H-based CEST and T1 measurements, as well as <sup>31</sup>P-MRS. A neural network was trained voxel-wise on CEST and T1 data to predict <sup>31</sup>P-pH<sub>i</sub> values, using data from 11 patients for training and 4 for testing. The predicted pH<sub>i</sub> maps were additionally down-sampled to the original <sup>31</sup>P-pH<sub>i</sub> resolution to allow calculation of the RMSE and correlation analysis, while the higher-resolution predictions were compared with conventional CEST metrics. The results demonstrated a general correspondence between the predicted deepCEST pH<sub>i</sub> maps and the measured <sup>31</sup>P-pH<sub>i</sub> in test patients. However, slight discrepancies were also observed, with an RMSE of 0.04 pH units in tumor regions. High-resolution predictions revealed tumor heterogeneity and features not visible in conventional CEST data, suggesting that the model captures unique pH information and is not simply a T1 segmentation. The deepCEST pH<sub>i</sub> network exposes the pH sensitivity hidden in APT-CEST data and offers pH<sub>i</sub> maps with higher spatial resolution in shorter scan times compared with <sup>31</sup>P-MRS. Although this approach is constrained by the limitations of the acquired data, it can be extended with additional CEST features in future studies, thereby offering a promising approach for 3D pH imaging in a clinical environment.
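The evaluation step described above, block-averaging the high-resolution predicted pH map down to the coarse <sup>31</sup>P resolution before computing the RMSE against the measured map, can be sketched as follows. Map sizes, pH values, and noise levels are invented for illustration.

```python
import numpy as np

def downsample(pred, factor):
    """Block-average a 2-D map by an integer factor along each axis."""
    h, w = pred.shape
    return pred.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

rng = np.random.default_rng(0)
measured = 7.0 + 0.05 * rng.standard_normal((8, 8))   # stand-in coarse 31P-pHi map
pred_hi = np.repeat(np.repeat(measured, 4, 0), 4, 1)  # 32x32 prediction matching it
pred_hi += 0.04 * rng.standard_normal(pred_hi.shape)  # voxel-wise prediction error

rmse = np.sqrt(((downsample(pred_hi, 4) - measured) ** 2).mean())
print(round(rmse, 3))
```

Averaging 16 high-resolution voxels into each coarse voxel suppresses uncorrelated prediction noise, so the RMSE reflects systematic disagreement with the <sup>31</sup>P measurement rather than per-voxel jitter.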

Broadening the Net: Overcoming Challenges and Embracing Novel Technologies in Lung Cancer Screening.

Czerlanis CM, Singh N, Fintelmann FJ, Damaraju V, Chang AEB, White M, Hanna N

pubmed logopapersJun 1 2025
Lung cancer is one of the leading causes of cancer-related mortality worldwide, with most cases diagnosed at advanced stages where curative treatment options are limited. Low-dose computed tomography (LDCT) for lung cancer screening (LCS) of individuals selected based on age and smoking history has shown a significant reduction in lung cancer-specific mortality. The number needed to screen to prevent one death from lung cancer is lower than that for breast cancer, cervical cancer, and colorectal cancer. Despite the substantial impact on reducing lung cancer-related mortality and proof that LCS with LDCT is effective, uptake of LCS has been low and LCS eligibility criteria remain imperfect. While LCS programs have historically faced patient recruitment challenges, research suggests that there are novel opportunities to both identify and improve screening for at-risk populations. In this review, we discuss the global obstacles to implementing LCS programs and strategies to overcome barriers in resource-limited settings. We explore successful approaches to promote LCS through robust engagement with community partners. Finally, we examine opportunities to enhance LCS in at-risk populations not captured by current eligibility criteria, including never smokers and individuals with a family history of lung cancer, with a focus on early detection through novel artificial intelligence technologies.

Artificial intelligence in pediatric osteopenia diagnosis: evaluating deep network classification and model interpretability using wrist X-rays.

Harris CE, Liu L, Almeida L, Kassick C, Makrogiannis S

pubmed logopapersJun 1 2025
Osteopenia is a bone disorder that causes low bone density and affects millions of people worldwide. Diagnosis of this condition is commonly achieved through clinical assessment of bone mineral density (BMD). State-of-the-art machine learning (ML) techniques, such as convolutional neural networks (CNNs) and transformer models, have gained increasing popularity in medicine. In this work, we employ six deep networks for osteopenia vs. healthy bone classification using X-ray imaging from the pediatric wrist dataset GRAZPEDWRI-DX. We apply two explainable AI techniques to analyze and interpret visual explanations for network decisions. Experimental results show that deep networks are able to effectively learn osteopenic and healthy bone features, achieving high classification accuracy. Among the six evaluated networks, DenseNet201 with transfer learning yielded the top classification accuracy of 95.2%. Furthermore, visual explanations of CNN decisions provide valuable insight into the black-box inner workings and present interpretable results. Our evaluation of deep network classification results highlights their capability to accurately differentiate between osteopenic and healthy bones in pediatric wrist X-rays. The combination of high classification accuracy and interpretable visual explanations underscores the promise of incorporating machine learning techniques into clinical workflows for the early and accurate diagnosis of osteopenia.
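The kind of visual explanation discussed above can be illustrated with a class-activation-style heatmap: the final convolutional feature maps are weighted by the class weights, summed across channels, and rectified into a coarse saliency map over the input. This NumPy sketch shows only the mechanism; the activations and weights here are synthetic stand-ins, not DenseNet201 outputs, and the study's two XAI methods are not reproduced.

```python
import numpy as np

def cam_heatmap(fmaps, class_weights):
    """fmaps: (C, H, W) last-conv activations; class_weights: (C,) for one class."""
    cam = np.tensordot(class_weights, fmaps, axes=1)  # weighted sum over channels
    cam = np.maximum(cam, 0.0)                        # keep positive evidence only
    return cam / cam.max() if cam.max() > 0 else cam  # normalize to [0, 1]

rng = np.random.default_rng(0)
fmaps = np.abs(rng.normal(size=(16, 7, 7)))           # stand-in conv activations
weights = rng.normal(size=16)                         # stand-in class weights
heat = cam_heatmap(fmaps, weights)
print(heat.shape)
```

Upsampled and overlaid on the wrist X-ray, such a map indicates which regions contributed most to the osteopenia decision, which is what makes the classifier's output interpretable to a clinician.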