
Farrow L, Anderson L, Zhong M

PubMed · Jun 1 2025
This study set out to test the efficacy of different techniques used to manage class imbalance, a type of data bias, in the application of a large language model (LLM) to predict patient selection for total knee arthroplasty (TKA). The study utilised data from the Artificial Intelligence to Revolutionise the Patient Care Pathway in Hip and Knee Arthroplasty (ARCHERY) project (ISRCTN18398037), comprising the pre-operative radiology reports of patients referred to secondary care for knee-related complaints from within the North of Scotland. A clinically based LLM (GatorTron) was trained to predict selection for TKA. Three methods for managing class imbalance were assessed: a standard model, class weighting, and majority class undersampling. A total of 7707 individual knee radiology reports were included (dated from 2015 to 2022). The mean text length was 74 words (range 26-275). Only 910/7707 (11.8%) patients underwent TKA surgery (the designated 'minority class'). Class weighting performed better for minority class discrimination and calibration than the other two techniques (recall 0.61/AUROC 0.73 for class weighting compared with 0.54/0.70 and 0.59/0.72 for the standard model and majority class undersampling, respectively). There was also significant data loss with majority class undersampling compared with class weighting. Class weighting therefore appears to provide the optimal method of training an LLM to perform analytical tasks on free-text clinical information in the face of significant data bias ('class imbalance'). Such knowledge is an important consideration in the development of high-performance clinical AI models within Trauma and Orthopaedics.
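As an illustration of the class-weighting approach that performed best here, the sketch below applies inverse-frequency class weights to the loss when fine-tuning a transformer text classifier. It is a minimal, hypothetical example: the checkpoint name, tokenizer settings, and training loop are placeholders, not the ARCHERY pipeline.

```python
# Illustrative sketch (not the ARCHERY code): class-weighted fine-tuning of a
# transformer text classifier for an imbalanced TKA-selection task.
import torch
from torch.nn import CrossEntropyLoss
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "some-clinical-bert-checkpoint"  # placeholder for a GatorTron-style model
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

# Inverse-frequency class weights: ~11.8% minority class (TKA), as in the study.
n_total, n_minority = 7707, 910
weights = torch.tensor([n_total / (n_total - n_minority), n_total / n_minority])
loss_fn = CrossEntropyLoss(weight=weights)    # up-weights the rare TKA class

def training_step(texts, labels, optimizer):
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    logits = model(**batch).logits
    loss = loss_fn(logits, labels)            # weighted loss instead of undersampling
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```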

Tan Q, Miao J, Nitschke L, Nickel MD, Lerchbaumer MH, Penzkofer T, Hofbauer S, Peters R, Hamm B, Geisel D, Wagner M, Walter-Rittel TC

PubMed · Jun 1 2025
Deep learning (DL)-accelerated controlled aliasing in parallel imaging results in higher acceleration (CAIPIRINHA) volumetric interpolated breath-hold examination (VIBE) provides high-spatial-resolution T1-weighted imaging of the upper abdomen. We aimed to investigate whether DL-CAIPIRINHA-VIBE can improve image quality, vessel conspicuity, and lesion detectability compared to standard CAIPIRINHA-VIBE in renal imaging at 3 Tesla. In this prospective study, 50 patients with 23 solid and 45 cystic renal lesions underwent MRI with clinical MR sequences, including standard CAIPIRINHA-VIBE and DL-CAIPIRINHA-VIBE sequences in the nephrographic phase at 3 Tesla. Two experienced radiologists independently evaluated both sequences and multiplanar reconstructions (MPR) of the sagittal and coronal planes for image quality on a Likert scale ranging from 1 to 5 (5 = best). Quantitative measurements, including the size of the largest lesion and renal lesion contrast ratios, were evaluated. Compared to standard CAIPIRINHA-VIBE, DL-CAIPIRINHA-VIBE showed significantly improved overall image quality, higher scores for delineation of the renal borders, renal sinuses, vessels, and adrenal glands, and reduced motion artifacts and perceived noise in nephrographic-phase images (all p < 0.001). DL-CAIPIRINHA-VIBE with MPR showed superior lesion conspicuity and diagnostic confidence compared to standard CAIPIRINHA-VIBE. However, DL-CAIPIRINHA-VIBE presented a more synthetic appearance and more aliasing artifacts (p < 0.023). The mean size and signal intensity of renal lesions showed no significant differences between DL-CAIPIRINHA-VIBE and standard CAIPIRINHA-VIBE (p > 0.9). DL-CAIPIRINHA-VIBE is well suited for kidney imaging in the nephrographic phase, providing good image quality with improved delineation of anatomic structures and renal lesions.
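The abstract does not define its lesion contrast ratio, so the snippet below is only one plausible formulation, assuming mean signal intensities from lesion and adjacent renal-parenchyma ROIs on each reconstruction.

```python
# Hypothetical contrast-ratio helper; the study's exact definition is not given.
import numpy as np

def lesion_contrast_ratio(lesion_roi: np.ndarray, parenchyma_roi: np.ndarray) -> float:
    """(SI_lesion - SI_parenchyma) / SI_parenchyma for one nephrographic-phase image."""
    si_lesion = float(lesion_roi.mean())
    si_parenchyma = float(parenchyma_roi.mean())
    return (si_lesion - si_parenchyma) / si_parenchyma

# Usage: compute the ratio for the same lesion on standard CAIPIRINHA-VIBE and
# DL-CAIPIRINHA-VIBE reconstructions, then compare with a paired statistical test.
```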

Yuan Y, Ahn E, Feng D, Khadra M, Kim J

PubMed · Jun 1 2025
Bi-parametric magnetic resonance imaging (bpMRI) has become a pivotal modality in the detection and diagnosis of clinically significant prostate cancer (csPCa). Developing AI-based systems to identify csPCa using bpMRI can transform prostate cancer (PCa) management by improving efficiency and cost-effectiveness. However, current state-of-the-art methods using convolutional neural networks (CNNs) and Transformers are limited in learning in-plane and three-dimensional spatial information from anisotropic bpMRI. Their performance also depends on the availability of large, diverse, and well-annotated bpMRI datasets. To address these challenges, we propose the Zonal-aware Self-supervised Mesh Network (Z-SSMNet), which adaptively integrates multi-dimensional (2D/2.5D/3D) convolutions to learn dense intra-slice information and sparse inter-slice information of the anisotropic bpMRI in a balanced manner. We also propose a self-supervised learning (SSL) technique that effectively captures both intra-slice and inter-slice semantic information using large-scale unlabeled data. Furthermore, we constrain the network to focus on the zonal anatomical regions to improve the detection and diagnosis capability for csPCa. We conducted extensive experiments on the PI-CAI (Prostate Imaging - Cancer AI) dataset comprising more than 10,000 multi-center, multi-scanner cases. Our Z-SSMNet excelled in both lesion-level detection (AP score of 0.633) and patient-level diagnosis (AUROC score of 0.881), securing the top position in the Open Development Phase of the PI-CAI challenge, and maintained strong performance in the Closed Testing Phase, achieving an AP score of 0.690 and an AUROC score of 0.909 and securing second place. These findings underscore the potential of AI-driven systems for csPCa diagnosis and management.
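A rough sketch of the kind of anisotropic convolution mixing described above (dense in-plane, sparse through-plane) is shown below; it is not the published Z-SSMNet architecture, just one way such a block could look in PyTorch.

```python
# Sketch only, not the Z-SSMNet implementation: mixing in-plane (2D-like) and
# through-plane convolutions for anisotropic bpMRI volumes shaped
# (batch, channels, slices, height, width).
import torch
import torch.nn as nn

class AnisotropicBlock(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        # Dense intra-slice features: a 1x3x3 kernel acts like a 2D conv per slice.
        self.intra = nn.Conv3d(in_ch, out_ch, kernel_size=(1, 3, 3), padding=(0, 1, 1))
        # Sparse inter-slice features: a 3x1x1 kernel mixes neighbouring slices only.
        self.inter = nn.Conv3d(out_ch, out_ch, kernel_size=(3, 1, 1), padding=(1, 0, 0))
        self.norm = nn.InstanceNorm3d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.act(self.norm(self.intra(x)))
        return self.act(self.norm(self.inter(x)))

# Example: a 3-channel bpMRI stack (e.g. T2w/ADC/DWI) with 24 thick slices.
volume = torch.randn(1, 3, 24, 256, 256)
features = AnisotropicBlock(3, 16)(volume)
```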

Wang M, Xie X, Lin J, Shen Z, Zou E, Wang Y, Liang X, Chen G, Yu H

PubMed · Jun 1 2025
Intrahepatic cholangiocarcinoma (iCCA) is aggressive, with limited treatment options and poor prognosis. Preoperative nutritional status assessment is crucial for predicting outcomes in these patients. This study aimed to compare the predictive capabilities of preoperative blood-based nutritional indicators, such as the albumin-bilirubin (ALBI) score, controlling nutritional status (CONUT) score, and prognostic nutritional index (PNI), with CT-imaging nutritional indicators, such as skeletal muscle index (SMI), visceral adipose tissue (VAT), subcutaneous adipose tissue (SAT), and visceral-to-subcutaneous adipose tissue ratio (VSR), in iCCA patients undergoing curative hepatectomy. A total of 290 iCCA patients from two centers were studied. Preoperative blood and CT-imaging nutritional indicators were evaluated. Short-term outcomes, including complications, early recurrence (ER), and very early recurrence (VER), and overall survival (OS) as the long-term outcome were assessed. Six machine learning (ML) models, including Gradient Boosting (GB) survival analysis, were developed to predict OS. Preoperative blood nutritional indicators were significantly associated with postoperative complications, whereas CT-imaging nutritional indicators showed no significant associations with short-term outcomes. None of the preoperative nutritional indicators effectively predicted early tumor recurrence. For long-term outcomes, ALBI, CONUT, PNI, SMI, and VSR were significantly associated with OS. The six ML survival models demonstrated strong and stable performance, with the GB model showing the best predictive performance (C-index: 0.755 in training cohorts, 0.714 in validation cohorts). Time-dependent ROC, calibration, and decision curve analyses confirmed its clinical value. In summary, preoperative ALBI, CONUT, and PNI scores significantly correlated with complications but not ER, the four imaging nutritional indicators were ineffective for evaluating short-term outcomes, and six ML models were developed from nutritional and clinicopathological variables to predict iCCA prognosis.
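For readers unfamiliar with gradient-boosting survival analysis, the hedged sketch below shows how such a model could be fitted and scored with the concordance index using scikit-survival; the file and feature names are placeholders, not the study's variables.

```python
# Minimal sketch, not the authors' pipeline: a gradient-boosting survival model
# on nutritional + clinicopathological features, evaluated with the C-index.
import numpy as np
import pandas as pd
from sksurv.ensemble import GradientBoostingSurvivalAnalysis
from sksurv.metrics import concordance_index_censored
from sksurv.util import Surv

df = pd.read_csv("icca_cohort.csv")                              # hypothetical file
features = ["ALBI", "CONUT", "PNI", "SMI", "VSR", "tumor_size"]  # illustrative columns
X = df[features].to_numpy()
y = Surv.from_arrays(event=df["death"].astype(bool), time=df["os_months"])

model = GradientBoostingSurvivalAnalysis(n_estimators=200, learning_rate=0.05, random_state=0)
model.fit(X, y)

risk = model.predict(X)                                          # higher score = higher risk
cindex = concordance_index_censored(df["death"].astype(bool), df["os_months"], risk)[0]
print(f"C-index: {cindex:.3f}")   # the study reports ~0.755 (training) / 0.714 (validation)
```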

Meda A, Nelson L, Jagdish M

PubMed · Jun 1 2025
In the healthcare field, lung disease detection techniques based on deep learning (DL) are widely used. However, achieving high stability while maintaining privacy remains a challenge. To address this, this research employs Federated Learning (FL), enabling doctors to train models without sharing patient data with unauthorized parties and preserving privacy in the local models. The study introduces the Deep Kronecker Convolutional Neural Network (DKCN-Net) for lung disease detection. Input Computed Tomography (CT) images are sourced from the LIDC-IDRI database and denoised using an Adaptive Gaussian Filter (AGF). Lung lobe and nodule segmentation are then performed using Deep Fuzzy Clustering (DFC) and a 3-Dimensional Fully Convolutional Neural Network (3D-FCN). During feature extraction, statistical, Convolutional Neural Network (CNN), and Gray-Level Co-Occurrence Matrix (GLCM) features are obtained. Lung diseases are then detected using DKCN-Net, which combines the Deep Kronecker Neural Network (DKN) and Parallel Convolutional Neural Network (PCNN). The DKCN-Net achieves an accuracy of 92.18%, a loss of 7.82%, a Mean Squared Error (MSE) of 0.858, a True Positive Rate (TPR) of 92.99%, and a True Negative Rate (TNR) of 92.19%, with a processing time of 50 s per timestamp.
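The federated-learning element can be illustrated with a minimal FedAvg-style aggregation step, sketched below under the assumption of simple weight averaging across hospitals; it is not the DKCN-Net implementation.

```python
# Hedged sketch of federated averaging (FedAvg-style weight aggregation),
# not the DKCN-Net code itself.
import copy
import torch
import torch.nn as nn

def federated_average(client_models: list) -> dict:
    """Average the parameters of locally trained client models on a central server."""
    avg_state = copy.deepcopy(client_models[0].state_dict())
    for key in avg_state:
        stacked = torch.stack([m.state_dict()[key].float() for m in client_models])
        avg_state[key] = stacked.mean(dim=0)
    return avg_state

# Each hospital trains on its own CT data; only model weights travel to the server.
global_model = nn.Sequential(
    nn.Conv2d(1, 8, 3), nn.ReLU(), nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2)
)
clients = [copy.deepcopy(global_model) for _ in range(3)]
# ... local training on each site's private LIDC-IDRI-style data would happen here ...
global_model.load_state_dict(federated_average(clients))
```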

Lajoie I, Kalra S, Dadar M

PubMed · Jun 1 2025
Accurate personalized survival prediction in amyotrophic lateral sclerosis is essential for effective patient care planning. This study investigates whether grey and white matter changes measured by magnetic resonance imaging can improve individual survival predictions. We analyzed data from 178 patients with amyotrophic lateral sclerosis and 166 healthy controls in the Canadian Amyotrophic Lateral Sclerosis Neuroimaging Consortium study. A voxel-wise linear mixed-effects model assessed disease-related and survival-related atrophy detected through deformation-based morphometry, controlling for age, sex, and scanner variations. Additional linear mixed-effects models explored associations between regional imaging and clinical measurements, and their associations with time to the composite outcome of death, tracheostomy, or permanent assisted ventilation. We evaluated whether incorporating imaging features alongside clinical data could improve the performance of an individual survival distribution model. Deformation-based morphometry uncovered distinct voxel-wise atrophy patterns linked to disease progression and survival, with many of these regional atrophies significantly associated with clinical manifestations of the disease. By integrating regional imaging features with clinical data, we observed a substantial enhancement in the performance of survival models across key metrics. Our analysis identified specific brain regions, such as the corpus callosum, rostral middle frontal gyrus, and thalamus, where atrophy predicted an increased risk of mortality. This study suggests that brain atrophy patterns measured by deformation-based morphometry provide valuable insights beyond clinical assessments for prognosis. It offers a more comprehensive approach to prognosis and highlights brain regions involved in disease progression and survival, potentially leading to a better understanding of amyotrophic lateral sclerosis. ANN NEUROL 2025;97:1144-1157.
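A voxel-wise linear mixed-effects analysis of deformation-based morphometry (DBM) maps could, in outline, look like the sketch below; the column names, group coding, and random-effects structure are assumptions rather than the consortium's actual pipeline.

```python
# Illustrative sketch of a voxel-wise linear mixed-effects analysis of DBM maps;
# variable names and model structure are assumptions, not the study's pipeline.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

dbm = np.load("dbm_values.npy")            # hypothetical (n_scans, n_voxels) Jacobian values
covars = pd.read_csv("covariates.csv")     # subject_id, group, age, sex, scanner per scan

tvals = np.zeros(dbm.shape[1])
for v in range(dbm.shape[1]):
    df = covars.assign(jac=dbm[:, v])
    # Random intercept per subject handles repeated scans; age, sex, scanner as fixed effects.
    fit = smf.mixedlm("jac ~ group + age + sex + scanner", df, groups=df["subject_id"]).fit()
    tvals[v] = fit.tvalues.filter(like="group").iloc[0]   # group (patient vs control) effect

np.save("group_effect_tmap.npy", tvals)    # would then be corrected for multiple comparisons
```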

Matsubara N, Teramoto A, Takei M, Kitoh Y, Kawakami S

PubMed · Jun 1 2025
When chest X-rays are taken, the patient is asked to take a maximum inspiration and the radiological technologist exposes the image at the appropriate moment. If the image is not acquired at maximum inspiration, it must be retaken; however, judgments of whether a retake is necessary vary between operators. We therefore considered that this variation could be reduced by developing a retake assessment system that uses a convolutional neural network (CNN) to evaluate whether retaking is necessary. Training the CNN requires input chest X-ray images and corresponding labels indicating whether a retake is necessary. However, a single chest X-ray image alone cannot show whether inspiration was sufficient (no retake needed) or insufficient (retake required). Input images and labels were therefore generated from dynamic digital radiography (DDR) and used for training. Verification using 18 dynamic chest X-ray cases (5400 images) and 48 actual chest X-ray cases (96 images) showed that the VGG16-based architecture achieved an assessment accuracy of 82.3%, even on actual chest X-ray images. If the proposed method were used in hospitals, it could therefore reduce the variability in judgment between operators.
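A VGG16-based binary classifier for this retake / no-retake decision could be set up as in the hedged Keras sketch below; transfer learning from ImageNet weights and the classification head are assumptions, since the authors' exact architecture and training details are not given here.

```python
# Hedged sketch of a VGG16-based binary classifier for "retake / no retake",
# not the authors' exact architecture or training setup.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False                      # start with frozen convolutional features

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),  # predicted probability that a retake is required
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# train_ds / val_ds would hold DDR-derived frames labelled by inspiration level, e.g.:
# model.fit(train_ds, validation_data=val_ds, epochs=20)
```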

Abbasian Ardakani A, Mohammadi A, Yeong CH, Ng WL, Ng AH, Tangaraju KN, Behestani S, Mirza-Aghazadeh-Attari M, Suresh R, Acharya UR

PubMed · Jun 1 2025
To develop, test, and externally validate a hybrid artificial intelligence (AI) model based on hand-crafted and deep radiomics features extracted from B-mode ultrasound images for differentiating benign and malignant thyroid nodules, compared with senior and junior radiologists. A total of 1602 thyroid nodules from four centers across two countries (Iran and Malaysia) were included for the development and validation of the AI models. From each original and expanded contour, which included the peritumoral region, 2060 hand-crafted and 1024 deep radiomics features were extracted to assess the contribution of the peritumoral region to the AI diagnostic profile. The performance of four algorithms, namely support vector machines with linear (SVM_lin) and radial basis function (SVM_RBF) kernels, logistic regression, and K-nearest neighbors, was evaluated. The diagnostic performance of the proposed AI model was compared with that of two radiologists applying the American Thyroid Association (ATA) and Thyroid Imaging Reporting & Data System (TI-RADS™) guidelines to show the model's applicability to clinical routine. Thirty-five hand-crafted and 36 deep radiomics features were considered for model development. In the training step, SVM_RBF and SVM_lin showed the best results when rectangular contours 40% larger than the original contours were used for both hand-crafted and deep features. Ensemble learning with SVM_RBF and SVM_lin obtained AUCs of 0.954, 0.949, 0.932, and 0.921 in the internal and external validations of the Iran cohort and Malaysia cohorts 1 and 2, respectively, and outperformed both radiologists. The proposed AI model trained on the nodule plus the peritumoral region performed optimally in external validations and outperformed the radiologists using the ATA and TI-RADS guidelines.
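The ensemble of linear- and RBF-kernel SVMs can be illustrated with a soft-voting sketch in scikit-learn, shown below with placeholder feature files; it is not the study's code.

```python
# Sketch, with placeholder data, of an SVM_lin + SVM_RBF soft-voting ensemble
# over selected hand-crafted and deep radiomics features (not the study's code).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import cross_val_score

X = np.load("selected_radiomics_features.npy")   # hypothetical (n_nodules, 71) matrix
y = np.load("labels.npy")                        # 0 = benign, 1 = malignant

svm_lin = make_pipeline(StandardScaler(), SVC(kernel="linear", probability=True))
svm_rbf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
ensemble = VotingClassifier(
    estimators=[("svm_lin", svm_lin), ("svm_rbf", svm_rbf)],
    voting="soft",                               # average predicted probabilities
)
print(cross_val_score(ensemble, X, y, cv=5, scoring="roc_auc").mean())
```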

Yang J, Chen L, Yu J, Chen J, Shi J, Dong N, Yu F, Shi H

PubMed · Jun 1 2025
Pericoronary adipose tissue (PCAT) is a key cardiovascular risk biomarker, yet its temporal changes after heart transplantation (HT) and comparison with controls remain unclear. This study investigates the temporal changes of PCAT in stable HT recipients and compares it with controls. We analyzed 159 stable HT recipients alongside two control groups, both matched to a subgroup of HT recipients who did not have coronary artery stenosis. Group 1 consisted of 60 individuals matched for age, sex, and body mass index (BMI), with no history of hypertension, diabetes, hyperlipidemia, or smoking. Group 2 included 56 individuals additionally matched for hypertension, diabetes, hyperlipidemia, and smoking history. PCAT volume and fat attenuation index (FAI) were measured using AI-based software. Temporal changes in PCAT were assessed at multiple time points in HT recipients, and PCAT in the subgroup of HT recipients without coronary stenosis was compared with controls. Stable HT recipients exhibited a progressive decrease in FAI and an increase in PCAT volume over time, particularly in the first five years post-HT. Similar trends were observed in the subgroup of HT recipients without coronary stenosis. Compared with controls, PCAT FAI was significantly higher in the HT subgroup during the first five years post-HT (P < 0.001). After five years, differences persisted but diminished, with no statistically significant differences observed in the PCAT of the left anterior descending artery (LAD) (P > 0.05). A negative correlation was observed between FAI and PCAT volume post-HT (r = -0.75 to -0.53). PCAT volume and FAI undergo temporal changes in stable HT recipients, especially during the first five years post-HT. Even in HT recipients without coronary stenosis, PCAT FAI differs from controls, indicating distinct changes in this cohort.
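As a rough illustration of how PCAT volume, FAI, and their correlation might be computed from a segmented pericoronary fat mask, see the sketch below; the adipose attenuation window and file names are assumptions, not details from the study.

```python
# Hedged sketch: PCAT volume and fat attenuation index (FAI) from a CT volume
# plus a pericoronary-fat mask, then the FAI-volume correlation across patients.
# The adipose HU window (-190 to -30 HU) is a common convention, assumed here.
import numpy as np
from scipy.stats import pearsonr

def pcat_metrics(ct_hu: np.ndarray, pcat_mask: np.ndarray, voxel_ml: float):
    vox = ct_hu[pcat_mask]                       # pcat_mask is a boolean array
    fat = vox[(vox >= -190) & (vox <= -30)]      # keep adipose-range voxels only
    fai = float(fat.mean())                      # mean attenuation (HU)
    volume = fat.size * voxel_ml                 # PCAT volume in mL
    return fai, volume

# Across a cohort of HT recipients (hypothetical per-patient arrays):
fais = np.load("fai_per_patient.npy")
volumes = np.load("pcat_volume_per_patient.npy")
r, p = pearsonr(fais, volumes)                   # study reports r ~ -0.75 to -0.53
```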

Lin Y, Wang K, Zheng Z, Yu H, Chen S, Tang W, He Y, Gao H, Yang R, Xie Y, Yang J, Hou X, Wang S, Shi H

PubMed · Jun 1 2025
This study aims to develop and validate a deep learning framework designed to eliminate the second CT scan of dual-tracer total-body PET/CT imaging. We retrospectively included three cohorts of 247 patients who underwent dual-tracer total-body PET/CT imaging on two separate days (time interval: 1-11 days). Of these, 167 underwent [68Ga]Ga-DOTATATE/[18F]FDG, 50 underwent [68Ga]Ga-PSMA-11/[18F]FDG, and 30 underwent [68Ga]Ga-FAPI-04/[18F]FDG. A deep learning framework was developed that integrates a registration generative adversarial network (RegGAN) with non-rigid registration techniques. This approach allows the transformation of attenuation-correction CT (ACCT) images from the first scan into pseudo-ACCT images for the second scan, which are then used for attenuation and scatter correction (ASC) of the second tracer's PET images. Additionally, the derived registration transform facilitates dual-tracer image fusion and analysis. The deep learning-based ASC PET images were evaluated using quantitative metrics, including mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and structural similarity index measure (SSIM), across the whole body and specific regions. Furthermore, the quantitative accuracy of the PET images was assessed by calculating the standardized uptake value (SUV) bias in normal organs and lesions. The MAE for whole-body pseudo-ACCT images ranged from 97.64 to 112.59 HU across the four tracers. The deep learning-based ASC PET images demonstrated high similarity to the ground-truth PET images. The MAE of SUV for whole-body PET images was 0.06 for [68Ga]Ga-DOTATATE, 0.08 for [68Ga]Ga-PSMA-11, 0.06 for [68Ga]Ga-FAPI-04, and 0.05 for [18F]FDG. Additionally, the median absolute percent deviation of SUV was less than 2.6% for all normal organs, while the mean absolute percent deviation of SUV was less than 3.6% for lesions across the four tracers. The proposed deep learning framework, combining RegGAN and non-rigid registration, shows promise in reducing CT radiation dose for dual-tracer total-body PET/CT imaging, with successful validation across multiple tracers.
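The image-similarity metrics used to validate the pseudo-ACCT and deep-learning-based ASC images (MAE, PSNR, SSIM) can be computed as in the sketch below; it is illustrative only, with placeholder file names rather than the authors' pipeline.

```python
# Sketch of the MAE / PSNR / SSIM evaluation between generated and ground-truth
# volumes described above; not the authors' code, file names are placeholders.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

pred = np.load("pseudo_acct.npy").astype(np.float64)     # generated CT volume (HU)
truth = np.load("acquired_acct.npy").astype(np.float64)  # second-scan CT volume (HU)

mae = float(np.abs(pred - truth).mean())                 # study reports ~98-113 HU whole-body
data_range = truth.max() - truth.min()
psnr = peak_signal_noise_ratio(truth, pred, data_range=data_range)
ssim = structural_similarity(truth, pred, data_range=data_range)
print(f"MAE={mae:.1f} HU  PSNR={psnr:.1f} dB  SSIM={ssim:.3f}")
```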