
Preoperative blood and CT-image nutritional indicators in short-term outcomes and machine learning survival framework of intrahepatic cholangiocarcinoma.

Wang M, Xie X, Lin J, Shen Z, Zou E, Wang Y, Liang X, Chen G, Yu H

PubMed · Jun 1, 2025
Intrahepatic cholangiocarcinoma (iCCA) is an aggressive malignancy with limited treatment options and poor prognosis. Preoperative assessment of nutritional status is crucial for predicting patient outcomes. This study aimed to compare the predictive capabilities of preoperative blood-based nutritional indicators, such as albumin-bilirubin (ALBI), controlling nutritional status (CONUT), and prognostic nutritional index (PNI), with CT-imaging nutritional indicators, such as skeletal muscle index (SMI), visceral adipose tissue (VAT), subcutaneous adipose tissue (SAT), and visceral-to-subcutaneous adipose tissue ratio (VSR), in iCCA patients undergoing curative hepatectomy. A total of 290 iCCA patients from two centers were studied. Preoperative blood and CT-imaging nutritional indicators were evaluated. Short-term outcomes, including complications, early recurrence (ER), and very early recurrence (VER), were assessed, along with overall survival (OS) as the long-term outcome. Six machine learning (ML) models, including Gradient Boosting (GB) survival analysis, were developed to predict OS. Preoperative blood nutritional indicators were significantly associated with postoperative complications, whereas CT-imaging nutritional indicators showed no significant associations with short-term outcomes. None of the preoperative nutritional indicators was effective in predicting early tumor recurrence. For long-term outcomes, ALBI, CONUT, PNI, SMI, and VSR were significantly associated with OS. The six ML survival models demonstrated strong and stable performance, with the GB model showing the best predictive performance (C-index: 0.755 in the training cohort, 0.714 in the validation cohort). Time-dependent ROC, calibration, and decision curve analyses confirmed its clinical value. In summary, preoperative ALBI, CONUT, and PNI scores correlated significantly with complications but not with ER; the four imaging nutritional indicators were ineffective for evaluating short-term outcomes; and six ML models based on nutritional and clinicopathological variables were developed to predict iCCA prognosis.
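A gradient-boosting survival model evaluated by C-index, as described in this abstract, can be sketched with scikit-survival. This is a minimal illustration only: the feature columns, hyperparameters, and simulated follow-up data below are hypothetical, not the authors' setup.

```python
import numpy as np
from sksurv.ensemble import GradientBoostingSurvivalAnalysis
from sksurv.metrics import concordance_index_censored
from sksurv.util import Surv

# Hypothetical feature matrix: e.g., ALBI, CONUT, PNI, SMI, VSR, age, stage
rng = np.random.default_rng(0)
X = rng.random((290, 7))
event = rng.random(290) > 0.4                  # True if death was observed
time = rng.exponential(36.0, size=290)         # follow-up in months (simulated)
y = Surv.from_arrays(event=event, time=time)   # structured array required by scikit-survival

model = GradientBoostingSurvivalAnalysis(n_estimators=200, learning_rate=0.05,
                                         random_state=0)
model.fit(X, y)

risk = model.predict(X)                        # higher score = higher predicted risk
cindex = concordance_index_censored(event, time, risk)[0]
print(f"C-index: {cindex:.3f}")
```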

DKCN-Net: Deep kronecker convolutional neural network-based lung disease detection with federated learning.

Meda A, Nelson L, Jagdish M

PubMed · Jun 1, 2025
In the healthcare field, deep learning (DL)-based lung disease detection techniques are widely used. However, achieving high stability while preserving privacy remains a challenge. To address this, this research employs Federated Learning (FL), enabling doctors to train models without sharing patient data with unauthorized parties, thereby preserving privacy in local models. The study introduces the Deep Kronecker Convolutional Neural Network (DKCN-Net) for lung disease detection. Input Computed Tomography (CT) images are sourced from the LIDC-IDRI database and denoised using an Adaptive Gaussian Filter (AGF). Lung lobe and nodule segmentation are then performed using Deep Fuzzy Clustering (DFC) and a 3-Dimensional Fully Convolutional Neural Network (3D-FCN). During feature extraction, statistical, Convolutional Neural Network (CNN), and Gray-Level Co-Occurrence Matrix (GLCM) features are obtained. Lung diseases are then detected using DKCN-Net, which combines the Deep Kronecker Neural Network (DKN) with a Parallel Convolutional Neural Network (PCNN). DKCN-Net achieves an accuracy of 92.18 %, a loss of 7.82 %, a Mean Squared Error (MSE) of 0.858, a True Positive Rate (TPR) of 92.99 %, and a True Negative Rate (TNR) of 92.19 %, with a processing time of 50 s per timestamp.
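The core FL aggregation step is typically a FedAvg-style weighted average of locally trained parameters; the abstract does not specify the exact protocol, so the following is only a sketch with simulated per-client weights.

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Weighted average of per-client model parameters (FedAvg aggregation).

    client_weights: list of per-client parameter lists (one ndarray per layer)
    client_sizes:   number of local training samples per client
    """
    total = sum(client_sizes)
    n_layers = len(client_weights[0])
    return [
        sum(w[layer] * (n / total) for w, n in zip(client_weights, client_sizes))
        for layer in range(n_layers)
    ]

# Three hospitals with different amounts of local CT data (simulated parameters)
rng = np.random.default_rng(0)
clients = [[rng.random((4, 4)), rng.random(4)] for _ in range(3)]
global_weights = fed_avg(clients, client_sizes=[120, 300, 80])
```

The server redistributes `global_weights` to the clients for the next local training round; raw patient images never leave the local site.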

Regional Cerebral Atrophy Contributes to Personalized Survival Prediction in Amyotrophic Lateral Sclerosis: A Multicentre, Machine Learning, Deformation-Based Morphometry Study.

Lajoie I, Kalra S, Dadar M

PubMed · Jun 1, 2025
Accurate personalized survival prediction in amyotrophic lateral sclerosis is essential for effective patient care planning. This study investigates whether grey and white matter changes measured by magnetic resonance imaging can improve individual survival predictions. We analyzed data from 178 patients with amyotrophic lateral sclerosis and 166 healthy controls in the Canadian Amyotrophic Lateral Sclerosis Neuroimaging Consortium study. A voxel-wise linear mixed-effects model assessed disease-related and survival-related atrophy detected through deformation-based morphometry, controlling for age, sex, and scanner variations. Additional linear mixed-effects models explored associations between regional imaging and clinical measurements, and their associations with time to the composite outcome of death, tracheostomy, or permanent assisted ventilation. We evaluated whether incorporating imaging features alongside clinical data could improve the performance of an individual survival distribution model. Deformation-based morphometry uncovered distinct voxel-wise atrophy patterns linked to disease progression and survival, with many of these regional atrophies significantly associated with clinical manifestations of the disease. By integrating regional imaging features with clinical data, we observed a substantial enhancement in the performance of survival models across key metrics. Our analysis identified specific brain regions, such as the corpus callosum, rostral middle frontal gyrus, and thalamus, where atrophy predicted an increased risk of mortality. This study suggests that brain atrophy patterns measured by deformation-based morphometry provide valuable insights beyond clinical assessments for prognosis. It offers a more comprehensive approach to prognosis and highlights brain regions involved in disease progression and survival, potentially leading to a better understanding of amyotrophic lateral sclerosis. ANN NEUROL 2025;97:1144-1157.
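To illustrate the kind of comparison the study describes (clinical-only versus clinical-plus-imaging survival prediction), a Cox model can stand in for the individual survival distribution model; column names and data below are hypothetical.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 178
df = pd.DataFrame({
    "time": rng.exponential(30.0, n),              # months to death/tracheostomy/PAV
    "event": rng.random(n) > 0.3,                  # composite outcome observed
    "age": rng.normal(60, 10, n),
    "alsfrs_slope": rng.normal(-0.8, 0.4, n),      # clinical progression measure
    "thalamus_dbm": rng.normal(0, 1, n),           # deformation-based atrophy features
    "corpus_callosum_dbm": rng.normal(0, 1, n),
})

clinical = CoxPHFitter().fit(df[["time", "event", "age", "alsfrs_slope"]],
                             duration_col="time", event_col="event")
combined = CoxPHFitter().fit(df, duration_col="time", event_col="event")

# On real data, a gain here would mirror the paper's finding that imaging adds value
print(clinical.concordance_index_, combined.concordance_index_)
```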

Retaking assessment system based on the inspiratory state of chest X-ray image.

Matsubara N, Teramoto A, Takei M, Kitoh Y, Kawakami S

PubMed · Jun 1, 2025
When taking chest X-rays, the patient is encouraged to take a maximum inspiration, and the radiological technologist captures the image at the appropriate moment. If the image is not taken at maximum inspiration, it must be retaken; however, judgments of whether a retake is necessary vary between operators. We therefore considered that this variation might be reduced by developing a retaking assessment system that evaluates whether a retake is necessary using a convolutional neural network (CNN). Training the CNN requires input chest X-ray images and corresponding labels indicating whether a retake is necessary. However, a static chest X-ray image alone cannot distinguish whether inspiration was sufficient (no retake needed) or insufficient (retake required). We therefore generated input images and labels from dynamic digital radiography (DDR) and used them for training. Verification using 18 dynamic chest X-ray cases (5400 images) and 48 actual chest X-ray cases (96 images) showed that the VGG16-based architecture achieved an assessment accuracy of 82.3% even on actual chest X-ray images. If the proposed method were used in hospitals, it could reduce the variability in judgment between operators.
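A minimal sketch of a VGG16-based binary retake classifier, assuming ImageNet pretraining and single-channel radiographs replicated to three channels; the paper's exact classification head and training configuration are not given, so everything beyond the VGG16 backbone is an assumption.

```python
import tensorflow as tf

# ImageNet-pretrained VGG16 backbone, frozen for initial training
base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False  # optionally fine-tune the top blocks later

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(retake required)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# train_ds / val_ds: DDR-derived frames labeled sufficient vs. insufficient inspiration
# model.fit(train_ds, validation_data=val_ds, epochs=20)
```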

Diagnosis of Thyroid Nodule Malignancy Using Peritumoral Region and Artificial Intelligence: Results of Hand-Crafted, Deep Radiomics Features and Radiologists' Assessment in Multicenter Cohorts.

Abbasian Ardakani A, Mohammadi A, Yeong CH, Ng WL, Ng AH, Tangaraju KN, Behestani S, Mirza-Aghazadeh-Attari M, Suresh R, Acharya UR

PubMed · Jun 1, 2025
To develop, test, and externally validate a hybrid artificial intelligence (AI) model based on hand-crafted and deep radiomics features extracted from B-mode ultrasound images for differentiating benign and malignant thyroid nodules, compared against senior and junior radiologists. A total of 1602 thyroid nodules from four centers across two countries (Iran and Malaysia) were included for the development and validation of the AI models. From each original and expanded contour, the latter including the peritumoral region, 2060 hand-crafted and 1024 deep radiomics features were extracted to assess the contribution of the peritumoral region to the AI diagnostic profile. The performance of four algorithms, namely support vector machines with linear (SVM_lin) and radial basis function (SVM_RBF) kernels, logistic regression, and K-nearest neighbors, was evaluated. The diagnostic performance of the proposed AI model was compared with that of two radiologists applying the American Thyroid Association (ATA) and Thyroid Imaging Reporting & Data System (TI-RADS™) guidelines to demonstrate the model's applicability in clinical routine. Thirty-five hand-crafted and 36 deep radiomics features were retained for model development. In the training step, SVM_RBF and SVM_lin performed best when rectangular contours 40% larger than the original contours were used for both hand-crafted and deep features. Ensemble learning with SVM_RBF and SVM_lin obtained AUCs of 0.954, 0.949, 0.932, and 0.921 in the internal and external validations of the Iran cohort and Malaysia cohorts 1 and 2, respectively, outperforming both radiologists. The proposed AI model trained on the nodule plus the peritumoral region performed optimally in external validations and outperformed the radiologists using the ATA and TI-RADS guidelines.
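One plausible reading of "ensemble learning with SVM_RBF and SVM_lin" is soft voting over the two kernels' probability outputs, sketched below with scikit-learn; the paper's exact ensembling scheme is not specified, and the data here are simulated stand-ins for the 71 selected radiomics features.

```python
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical stand-in for the 35 hand-crafted + 36 deep selected features
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 71))
y = rng.integers(0, 2, size=400)          # benign = 0 / malignant = 1

svm_lin = make_pipeline(StandardScaler(), SVC(kernel="linear", probability=True))
svm_rbf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))

ensemble = VotingClassifier(
    estimators=[("svm_lin", svm_lin), ("svm_rbf", svm_rbf)],
    voting="soft",                        # average predicted malignancy probabilities
).fit(X[:300], y[:300])

auc = roc_auc_score(y[300:], ensemble.predict_proba(X[300:])[:, 1])
```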

Changes of Pericoronary Adipose Tissue in Stable Heart Transplantation Recipients and Comparison with Controls.

Yang J, Chen L, Yu J, Chen J, Shi J, Dong N, Yu F, Shi H

PubMed · Jun 1, 2025
Pericoronary adipose tissue (PCAT) is a key cardiovascular risk biomarker, yet its temporal changes after heart transplantation (HT) and how it compares with controls remain unclear. This study investigates the temporal changes of PCAT in stable HT recipients and compares them with controls. We analyzed 159 stable HT recipients alongside two control groups, both matched to the subgroup of HT recipients without coronary artery stenosis. Group 1 consisted of 60 individuals matched for age, sex, and body mass index (BMI), with no history of hypertension, diabetes, hyperlipidemia, or smoking. Group 2 included 56 individuals additionally matched for hypertension, diabetes, hyperlipidemia, and smoking history. PCAT volume and the fat attenuation index (FAI) were measured using AI-based software. Temporal changes in PCAT were assessed at multiple time points in HT recipients, and PCAT in the subgroup without coronary stenosis was compared with controls. Stable HT recipients exhibited a progressive decrease in FAI and an increase in PCAT volume over time, particularly in the first five years post-HT. Similar trends were observed in the subgroup without coronary stenosis. Compared with controls, PCAT FAI was significantly higher in the HT subgroup during the first five years post-HT (P < 0.001). After five years, differences persisted but diminished, with no statistically significant difference in the PCAT of the left anterior descending artery (LAD) (P > 0.05). A negative correlation was observed between FAI and PCAT volume post-HT (r = -0.75 to -0.53). PCAT volume and FAI undergo temporal changes in stable HT recipients, especially during the first five years post-HT. Even in HT recipients without coronary stenosis, PCAT FAI differs from controls, indicating distinct changes in this cohort.
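The reported FAI-volume relationship is a per-vessel Pearson correlation; a toy computation with simulated values, purely to show how such coefficients are obtained (the generated data are not the study's measurements).

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
pcat_volume = rng.normal(1500, 300, 159)               # per-patient PCAT volume (mm^3)
fai = -0.01 * pcat_volume + rng.normal(-70, 3, 159)    # FAI in HU, inversely related

r, p = pearsonr(fai, pcat_volume)
print(f"r = {r:.2f}, p = {p:.3g}")                     # expect a strong negative r
```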

Eliminating the second CT scan of dual-tracer total-body PET/CT via deep learning-based image synthesis and registration.

Lin Y, Wang K, Zheng Z, Yu H, Chen S, Tang W, He Y, Gao H, Yang R, Xie Y, Yang J, Hou X, Wang S, Shi H

PubMed · Jun 1, 2025
This study aims to develop and validate a deep learning framework that eliminates the second CT scan in dual-tracer total-body PET/CT imaging. We retrospectively included three cohorts totaling 247 patients who underwent dual-tracer total-body PET/CT imaging on two separate days (time interval: 1-11 days). Of these, 167 underwent [⁶⁸Ga]Ga-DOTATATE/[¹⁸F]FDG, 50 underwent [⁶⁸Ga]Ga-PSMA-11/[¹⁸F]FDG, and 30 underwent [⁶⁸Ga]Ga-FAPI-04/[¹⁸F]FDG. A deep learning framework was developed that integrates a registration generative adversarial network (RegGAN) with non-rigid registration techniques. This approach transforms the attenuation-correction CT (ACCT) images from the first scan into pseudo-ACCT images for the second scan, which are then used for attenuation and scatter correction (ASC) of the second tracer's PET images. Additionally, the derived registration transform facilitates dual-tracer image fusion and analysis. The deep learning-based ASC PET images were evaluated using quantitative metrics, including mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and structural similarity index measure (SSIM), across the whole body and in specific regions. Furthermore, the quantitative accuracy of the PET images was assessed by calculating the standardized uptake value (SUV) bias in normal organs and lesions. The MAE for whole-body pseudo-ACCT images ranged from 97.64 to 112.59 HU across the four tracers. The deep learning-based ASC PET images demonstrated high similarity to the ground-truth PET images: the MAE of SUV for whole-body PET images was 0.06 for [⁶⁸Ga]Ga-DOTATATE, 0.08 for [⁶⁸Ga]Ga-PSMA-11, 0.06 for [⁶⁸Ga]Ga-FAPI-04, and 0.05 for [¹⁸F]FDG. Additionally, the median absolute percent deviation of SUV was less than 2.6% for all normal organs, while the mean absolute percent deviation of SUV was less than 3.6% for lesions across the four tracers. The proposed deep learning framework, combining RegGAN and non-rigid registration, shows promise for reducing the CT radiation dose of dual-tracer total-body PET/CT imaging, with successful validation across multiple tracers.
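The reported image-similarity metrics (MAE in HU, PSNR, SSIM) can be computed with NumPy and scikit-image; a sketch on simulated volumes, with array shapes and noise levels chosen arbitrarily for illustration.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
gt_ct = rng.uniform(-1000, 1000, (64, 128, 128))     # ground-truth ACCT volume (HU)
pseudo_ct = gt_ct + rng.normal(0, 100, gt_ct.shape)  # synthesized pseudo-ACCT (HU)

mae = np.mean(np.abs(pseudo_ct - gt_ct))             # in HU, cf. the 97.64-112.59 HU range
psnr = peak_signal_noise_ratio(gt_ct, pseudo_ct, data_range=2000)
ssim = structural_similarity(gt_ct, pseudo_ct, data_range=2000)
print(f"MAE={mae:.2f} HU, PSNR={psnr:.2f} dB, SSIM={ssim:.3f}")
```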

Developing approaches to incorporate donor-lung computed tomography images into machine learning models to predict severe primary graft dysfunction after lung transplantation.

Ma W, Oh I, Luo Y, Kumar S, Gupta A, Lai AM, Puri V, Kreisel D, Gelman AE, Nava R, Witt CA, Byers DE, Halverson L, Vazquez-Guillamet R, Payne PRO, Sotiras A, Lu H, Niazi K, Gurcan MN, Hachem RR, Michelson AP

PubMed · Jun 1, 2025
Primary graft dysfunction (PGD) is a common complication after lung transplantation associated with poor outcomes. Although risk factors have been identified, the complex interactions between clinical variables affecting PGD risk are not well understood, which can complicate decisions about donor-lung acceptance. Previously, we developed a machine learning model to predict grade 3 PGD using donor and recipient electronic health record data, but it lacked granular information from donor-lung computed tomography (CT) scans, which are routinely assessed during offer review. In this study, we used a gated approach to determine optimal methods for analyzing donor-lung CT scans among patients receiving first-time, bilateral lung transplants at a single center over 10 years. We assessed 4 computer vision approaches and fused the best with electronic health record data at 3 points in the machine learning process. A total of 160 patients had donor-lung CT scans for analysis. The best imaging-only approach employed a 3D ResNet model, yielding median (interquartile range) areas under the receiver operating characteristic and precision-recall curves of 0.63 (0.49-0.72) and 0.48 (0.35-0.6), respectively. Combining imaging with clinical data using late fusion provided the highest performance, with median areas under the receiver operating characteristic and precision-recall curves of 0.74 (0.59-0.85) and 0.61 (0.47-0.72), respectively.
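Late fusion here means training the imaging and clinical models separately and combining their outputs afterwards; the sketch below uses an illustrative logistic-regression combiner over hypothetical out-of-fold probabilities, since the paper's exact fusion mechanism is not detailed in the abstract.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Out-of-fold predicted probabilities from the two unimodal models (simulated)
p_image = rng.random(160)                   # P(grade 3 PGD) from the 3D ResNet CT model
p_clinical = rng.random(160)                # P(grade 3 PGD) from the EHR model
y = (rng.random(160) < 0.25).astype(int)    # observed grade 3 PGD labels

# Late fusion: a small combiner model over the two unimodal probability outputs
stack = np.column_stack([p_image, p_clinical])
fusion = LogisticRegression().fit(stack, y)
p_fused = fusion.predict_proba(stack)[:, 1]
```

Because the two models are trained independently, late fusion also tolerates missing modalities: a case without a usable CT can fall back to the clinical model alone.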

Robust whole-body PET image denoising using 3D diffusion models: evaluation across various scanners, tracers, and dose levels.

Yu B, Ozdemir S, Dong Y, Shao W, Pan T, Shi K, Gong K

PubMed · Jun 1, 2025
Whole-body PET imaging plays an essential role in cancer diagnosis and treatment but suffers from low image quality. Traditional deep learning-based denoising methods work well for a specific acquisition protocol but are less effective at handling diverse PET protocols. In this study, we proposed and validated a 3D Denoising Diffusion Probabilistic Model (3D DDPM) as a robust and universal solution for whole-body PET image denoising. The 3D DDPM gradually injects noise into the images during the forward diffusion phase, allowing the model to learn to reconstruct clean data during the reverse diffusion process. A 3D convolutional network was trained using high-quality data from the Biograph Vision Quadra PET/CT scanner to generate the score function, enabling the model to capture accurate PET distribution information from the total-body datasets. The trained 3D DDPM was evaluated on datasets from four scanners, four tracer types, and six dose levels, representing a broad spectrum of clinical scenarios. It consistently outperformed 2D DDPM, 3D UNet, and 3D GAN baselines, demonstrating superior denoising performance across all tested conditions. Additionally, the model's uncertainty maps exhibited lower variance, reflecting higher confidence in its outputs. The proposed 3D DDPM can effectively handle variations in dose level, scanner, and tracer, establishing it as a promising foundational model for PET image denoising. The trained model can be used off the shelf by researchers as a whole-body PET image denoising solution. The code and model are available at https://github.com/Miche11eU/PET-Image-Denoising-Using-3D-Diffusion-Model.
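The forward diffusion phase described above follows the standard DDPM closed form, x_t = sqrt(ᾱ_t)·x_0 + sqrt(1-ᾱ_t)·ε; the sketch below uses the common linear beta schedule, which is an assumption rather than the paper's exact settings.

```python
import torch

# Standard linear beta schedule over T timesteps (common DDPM default)
T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alpha_bars = torch.cumprod(1.0 - betas, dim=0)   # cumulative product ᾱ_t

def q_sample(x0, t):
    """Draw x_t ~ q(x_t | x_0) for a 3D PET patch x0 at integer timestep t."""
    eps = torch.randn_like(x0)
    ab = alpha_bars[t]
    return ab.sqrt() * x0 + (1 - ab).sqrt() * eps, eps

x0 = torch.randn(1, 1, 32, 64, 64)   # hypothetical 3D PET patch (B, C, D, H, W)
x_t, eps = q_sample(x0, t=500)       # the 3D network is trained to predict eps from x_t
```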

Influence of prior probability information on large language model performance in radiological diagnosis.

Fukushima T, Kurokawa R, Hagiwara A, Sonoda Y, Asari Y, Kurokawa M, Kanzawa J, Gonoi W, Abe O

PubMed · Jun 1, 2025
Large language models (LLMs) show promise in radiological diagnosis, but their performance may be affected by the context in which cases are presented. Our purpose was to investigate how providing information about prior probabilities influences the diagnostic performance of an LLM on radiological quiz cases. We analyzed 322 consecutive cases from Radiology's "Diagnosis Please" quiz using Claude 3.5 Sonnet under three conditions: without context (Condition 1), with the cases identified as quiz cases (Condition 2), and with the cases presented as primary care cases (Condition 3). Diagnostic accuracy was compared using McNemar's test. Overall accuracy significantly improved in Condition 2 compared with Condition 1 (70.2% vs. 64.9%, p = 0.029), whereas it significantly decreased in Condition 3 compared with Condition 1 (59.9% vs. 64.9%, p = 0.027). Providing information that may influence prior probabilities thus significantly affects the diagnostic performance of the LLM on radiological cases. This suggests that LLMs may incorporate Bayesian-like principles, adjusting the weighting of their diagnostic responses based on prior information, and highlights the potential for optimizing LLM performance in clinical settings by supplying relevant contextual information.
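McNemar's test compares paired correct/incorrect outcomes over the same cases, using only the discordant pairs; a sketch with illustrative counts chosen to be consistent with the reported accuracies, not the paper's actual contingency table.

```python
from statsmodels.stats.contingency_tables import mcnemar

# Paired 2x2 table over the same 322 cases:
# rows = Condition 1 (correct / incorrect), cols = Condition 2 (correct / incorrect)
table = [[190, 19],   # correct in both | correct only in Condition 1
         [36, 77]]    # correct only in Condition 2 | incorrect in both

result = mcnemar(table, exact=True)   # exact binomial test on the 19 vs. 36 discordant pairs
print(f"statistic={result.statistic}, p-value={result.pvalue:.3f}")
```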
