Page 56 of 1381373 results

DKCN-Net: Deep kronecker convolutional neural network-based lung disease detection with federated learning.

Meda A, Nelson L, Jagdish M

PubMed · Jun 1, 2025
In the healthcare field, lung disease detection techniques based on deep learning (DL) are widely used. However, achieving high stability while maintaining privacy remains a challenge. To address this, this research employs Federated Learning (FL), enabling doctors to train models without sharing patient data with unauthorized parties, thereby preserving privacy in local models. The study introduces the Deep Kronecker Convolutional Neural Network (DKCN-Net) for lung disease detection. Input Computed Tomography (CT) images are sourced from the LIDC-IDRI database and denoised using an Adaptive Gaussian Filter (AGF). Lung lobe and nodule segmentation is then performed using Deep Fuzzy Clustering (DFC) and a 3-Dimensional Fully Convolutional Neural Network (3D-FCN). During feature extraction, various features, including statistical, Convolutional Neural Network (CNN), and Gray-Level Co-Occurrence Matrix (GLCM) features, are obtained. Lung diseases are then detected using DKCN-Net, which combines a Deep Kronecker Neural Network (DKN) and a Parallel Convolutional Neural Network (PCNN). DKCN-Net achieves an accuracy of 92.18%, a loss of 7.82%, a Mean Squared Error (MSE) of 0.858, a True Positive Rate (TPR) of 92.99%, and a True Negative Rate (TNR) of 92.19%, with a processing time of 50 s per timestamp.
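The federated setup described above keeps CT data on-site and shares only model parameters. A minimal sketch of one common aggregation rule, federated averaging (FedAvg), is shown below; the abstract does not specify the aggregation scheme, so this particular rule and the toy weight vectors are assumptions.

```python
# Hypothetical FedAvg sketch: each hospital trains locally, and only model
# weights -- never patient CT images -- are shared with the server, which
# averages them weighted by local dataset size.

def fed_avg(client_weights, client_sizes):
    """Average per-client weight vectors, weighted by local sample count."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    merged = [0.0] * n_params
    for weights, size in zip(client_weights, client_sizes):
        for i, w in enumerate(weights):
            merged[i] += w * (size / total)
    return merged

# Two hospitals with 100 and 300 local scans
global_w = fed_avg([[1.0, 2.0], [3.0, 6.0]], [100, 300])
```

The larger site contributes proportionally more to the global model, which is the usual FedAvg behavior.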

Regional Cerebral Atrophy Contributes to Personalized Survival Prediction in Amyotrophic Lateral Sclerosis: A Multicentre, Machine Learning, Deformation-Based Morphometry Study.

Lajoie I, Kalra S, Dadar M

PubMed · Jun 1, 2025
Accurate personalized survival prediction in amyotrophic lateral sclerosis is essential for effective patient care planning. This study investigates whether grey and white matter changes measured by magnetic resonance imaging can improve individual survival predictions. We analyzed data from 178 patients with amyotrophic lateral sclerosis and 166 healthy controls in the Canadian Amyotrophic Lateral Sclerosis Neuroimaging Consortium study. A voxel-wise linear mixed-effects model assessed disease-related and survival-related atrophy detected through deformation-based morphometry, controlling for age, sex, and scanner variations. Additional linear mixed-effects models explored associations between regional imaging and clinical measurements, and their associations with time to the composite outcome of death, tracheostomy, or permanent assisted ventilation. We evaluated whether incorporating imaging features alongside clinical data could improve the performance of an individual survival distribution model. Deformation-based morphometry uncovered distinct voxel-wise atrophy patterns linked to disease progression and survival, with many of these regional atrophies significantly associated with clinical manifestations of the disease. By integrating regional imaging features with clinical data, we observed a substantial enhancement in the performance of survival models across key metrics. Our analysis identified specific brain regions, such as the corpus callosum, rostral middle frontal gyrus, and thalamus, where atrophy predicted an increased risk of mortality. This study suggests that brain atrophy patterns measured by deformation-based morphometry provide valuable insights beyond clinical assessments for prognosis. It offers a more comprehensive approach to prognosis and highlights brain regions involved in disease progression and survival, potentially leading to a better understanding of amyotrophic lateral sclerosis. ANN NEUROL 2025;97:1144-1157.
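Survival models like those evaluated above are typically scored by how well predicted risk ranks actual outcomes. The abstract does not name its metrics, so as an illustrative assumption, here is a plain-Python sketch of the concordance index (C-index), a standard choice for this task.

```python
# Minimal C-index sketch: the fraction of comparable patient pairs in which
# the model's risk score correctly orders their times to the composite
# outcome (death, tracheostomy, or permanent assisted ventilation).

def c_index(times, events, risks):
    """Fraction of comparable pairs correctly ordered by the risk score."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # A pair is comparable if subject i had the event before time j
            if events[i] and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    concordant += 0.5  # ties get half credit
    return concordant / comparable

# Perfectly ordered toy data: risk falls as time-to-event grows
score = c_index([2, 4, 6], [1, 1, 1], [0.9, 0.5, 0.1])
```

A C-index of 0.5 corresponds to random ranking and 1.0 to perfect ranking, which is why adding informative imaging features can lift it.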

Retaking assessment system based on the inspiratory state of chest X-ray image.

Matsubara N, Teramoto A, Takei M, Kitoh Y, Kawakami S

PubMed · Jun 1, 2025
When taking chest X-rays, the patient is encouraged to take maximum inspiration, and the radiological technologist acquires the image at the appropriate moment. If the image is not taken at maximum inspiration, a retake is required. However, the judgment of whether a retake is necessary varies between operators. We therefore considered that this variation might be reduced by developing a retake assessment system that evaluates whether a retake is necessary using a convolutional neural network (CNN). Training the CNN requires input chest X-ray images paired with correct labels indicating whether a retake is necessary. However, a single chest X-ray image alone cannot indicate whether inspiration was sufficient (no retake needed) or insufficient (retake required). Therefore, we generated input images and labels from dynamic digital radiography (DDR) and conducted the training. Verification using 18 dynamic chest X-ray cases (5400 images) and 48 actual chest X-ray cases (96 images) showed that the VGG16-based architecture achieved an assessment accuracy of 82.3% even on actual chest X-ray images. If the proposed method is used in hospitals, it could therefore reduce the variability in judgment between operators.
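The key idea above is that a DDR sequence reveals where each frame sits in the breathing cycle, so labels can be generated automatically. The sketch below illustrates one plausible labeling rule; the lung-area proxy and the 95% threshold are assumptions for illustration, not the authors' exact procedure.

```python
# Hedged sketch of deriving training labels from a DDR sequence: frames
# whose (normalized) lung area is close to the sequence maximum are
# labeled "acceptable" (no retake); all others are labeled "retake".

def label_frames(lung_areas, threshold=0.95):
    """Return 1 (acceptable) / 0 (retake) per frame by relative lung area."""
    peak = max(lung_areas)
    return [1 if area >= threshold * peak else 0 for area in lung_areas]

# Toy sequence approaching and leaving peak inspiration
labels = label_frames([0.70, 0.85, 0.97, 1.00, 0.92])
```

Each labeled frame then becomes one CNN training example, which is how 18 DDR cases can yield 5400 images.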

Diagnosis of Thyroid Nodule Malignancy Using Peritumoral Region and Artificial Intelligence: Results of Hand-Crafted, Deep Radiomics Features and Radiologists' Assessment in Multicenter Cohorts.

Abbasian Ardakani A, Mohammadi A, Yeong CH, Ng WL, Ng AH, Tangaraju KN, Behestani S, Mirza-Aghazadeh-Attari M, Suresh R, Acharya UR

PubMed · Jun 1, 2025
To develop, test, and externally validate a hybrid artificial intelligence (AI) model based on hand-crafted and deep radiomics features extracted from B-mode ultrasound images for differentiating benign and malignant thyroid nodules, compared with senior and junior radiologists. A total of 1602 thyroid nodules from four centers across two countries (Iran and Malaysia) were included for the development and validation of AI models. From each original and expanded contour, which included the peritumoral region, 2060 hand-crafted and 1024 deep radiomics features were extracted to assess the contribution of the peritumoral region to the AI diagnosis profile. The performance of four algorithms, namely support vector machines with linear (SVM_lin) and radial basis function (SVM_RBF) kernels, logistic regression, and K-nearest neighbor, was evaluated. The diagnostic performance of the proposed AI model was compared with two radiologists based on the American Thyroid Association (ATA) and the Thyroid Imaging Reporting & Data System (TI-RADS™) guidelines to show the model's applicability in clinical routine. Thirty-five hand-crafted and 36 deep radiomics features were considered for model development. In the training step, SVM_RBF and SVM_lin showed the best results when rectangular contours 40% larger than the original contours were used for both hand-crafted and deep features. Ensemble learning with SVM_RBF and SVM_lin obtained AUCs of 0.954, 0.949, 0.932, and 0.921 in the internal and external validations of the Iran cohort and Malaysia cohorts 1 and 2, respectively, and outperformed both radiologists. The proposed AI model trained on the nodule plus the peritumoral region performed optimally in external validations and outperformed the radiologists using the ATA and TI-RADS guidelines.
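Two concrete steps from this pipeline can be sketched briefly: growing the rectangular contour by 40% to capture the peritumoral region, and soft-voting the two best classifiers. The box format `(x, y, w, h)`, the centre-anchored expansion, and the equal-weight average are assumptions; the abstract does not specify these details.

```python
# Illustrative sketch (not the authors' exact implementation): expand a
# nodule bounding box about its centre by a factor, then average the
# malignancy probabilities of two stand-in classifiers (SVM_RBF, SVM_lin).

def expand_box(x, y, w, h, factor=1.4):
    """Grow a bounding box (x, y, w, h) about its centre by `factor`."""
    cx, cy = x + w / 2, y + h / 2
    nw, nh = w * factor, h * factor
    return (cx - nw / 2, cy - nh / 2, nw, nh)

def ensemble_prob(p_rbf, p_lin):
    """Simple soft-vote of two malignancy probabilities."""
    return (p_rbf + p_lin) / 2

box = expand_box(10, 20, 50, 40)   # 40% larger box, same centre
risk = ensemble_prob(0.8, 0.6)     # combined malignancy probability
```

In practice the expanded box would be clipped to the image bounds before feature extraction.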

Changes of Pericoronary Adipose Tissue in Stable Heart Transplantation Recipients and Comparison with Controls.

Yang J, Chen L, Yu J, Chen J, Shi J, Dong N, Yu F, Shi H

PubMed · Jun 1, 2025
Pericoronary adipose tissue (PCAT) is a key cardiovascular risk biomarker, yet its temporal changes after heart transplantation (HT) and comparison with controls remain unclear. This study investigates the temporal changes of PCAT in stable HT recipients and compares it to controls. In this study, we analyzed 159 stable HT recipients alongside two control groups. Both control groups were matched to a subgroup of HT recipients who did not have coronary artery stenosis. Group 1 consisted of 60 individuals matched for age, sex, and body mass index (BMI), with no history of hypertension, diabetes, hyperlipidemia, or smoking. Group 2 included 56 individuals additionally matched for hypertension, diabetes, hyperlipidemia, and smoking history. PCAT volume and fat attenuation index (FAI) were measured using AI-based software. Temporal changes in PCAT were assessed at multiple time points in HT recipients, and PCAT in the subgroup of HT recipients without coronary stenosis was compared to controls. Stable HT recipients exhibited a progressive decrease in FAI and an increase in PCAT volume over time, particularly in the first five years post-HT. Similar trends were observed in the subgroup of HT recipients without coronary stenosis. Compared to controls, PCAT FAI was significantly higher in the HT subgroup during the first five years post-HT (P < 0.001). After five years, differences persisted but diminished, with no statistically significant differences observed in the PCAT of the left anterior descending artery (LAD) (P > 0.05). A negative correlation was observed between FAI and PCAT volume post-HT (r = -0.75 to -0.53). PCAT volume and FAI undergo temporal changes in stable HT recipients, especially during the first five years post-HT. Even in HT recipients without coronary stenosis, PCAT FAI differs from controls, indicating distinct changes in this cohort.
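The FAI reported above is conventionally defined as the mean attenuation of pericoronary voxels falling in an adipose Hounsfield-unit window (commonly −190 to −30 HU). The study used AI-based software for the actual measurement; the toy voxel list and window below are an illustrative sketch of that definition only.

```python
# Minimal FAI sketch: average the HU of pericoronary voxels inside the
# conventional adipose attenuation window of -190 to -30 HU. Voxels
# outside the window (e.g., vessel wall or air) are excluded.

def fat_attenuation_index(hu_values, lo=-190, hi=-30):
    """Mean HU over voxels inside the adipose attenuation window."""
    fat = [v for v in hu_values if lo <= v <= hi]
    return sum(fat) / len(fat)

# Toy voxel sample around a coronary segment
fai = fat_attenuation_index([-200, -120, -80, -40, 10])
```

A less negative FAI (closer to −30 HU) is generally read as higher perivascular inflammation, which frames the study's observed post-HT decrease.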

Eliminating the second CT scan of dual-tracer total-body PET/CT via deep learning-based image synthesis and registration.

Lin Y, Wang K, Zheng Z, Yu H, Chen S, Tang W, He Y, Gao H, Yang R, Xie Y, Yang J, Hou X, Wang S, Shi H

PubMed · Jun 1, 2025
This study aims to develop and validate a deep learning framework designed to eliminate the second CT scan of dual-tracer total-body PET/CT imaging. We retrospectively included three cohorts totaling 247 patients who underwent dual-tracer total-body PET/CT imaging on two separate days (time interval: 1-11 days). Of these, 167 underwent [<sup>68</sup>Ga]Ga-DOTATATE/[<sup>18</sup>F]FDG, 50 underwent [<sup>68</sup>Ga]Ga-PSMA-11/[<sup>18</sup>F]FDG, and 30 underwent [<sup>68</sup>Ga]Ga-FAPI-04/[<sup>18</sup>F]FDG. A deep learning framework was developed that integrates a registration generative adversarial network (RegGAN) with non-rigid registration techniques. This approach allows for the transformation of attenuation-correction CT (ACCT) images from the first scan into pseudo-ACCT images for the second scan, which are then used for attenuation and scatter correction (ASC) of the second tracer PET images. Additionally, the derived registration transform facilitates dual-tracer image fusion and analysis. The deep learning-based ASC PET images were evaluated using quantitative metrics, including mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and structural similarity index measure (SSIM), across the whole body and specific regions. Furthermore, the quantitative accuracy of PET images was assessed by calculating standardized uptake value (SUV) bias in normal organs and lesions. The MAE for whole-body pseudo-ACCT images ranged from 97.64 to 112.59 HU across the four tracers. The deep learning-based ASC PET images demonstrated high similarity to the ground-truth PET images. The MAE of SUV for whole-body PET images was 0.06 for [<sup>68</sup>Ga]Ga-DOTATATE, 0.08 for [<sup>68</sup>Ga]Ga-PSMA-11, 0.06 for [<sup>68</sup>Ga]Ga-FAPI-04, and 0.05 for [<sup>18</sup>F]FDG. Additionally, the median absolute percent deviation of SUV was less than 2.6% for all normal organs, while the mean absolute percent deviation of SUV was less than 3.6% for lesions across the four tracers. The proposed deep learning framework, combining RegGAN and non-rigid registration, shows promise in reducing CT radiation dose for dual-tracer total-body PET/CT imaging, with successful validation across multiple tracers.
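Two of the similarity metrics used to validate the pseudo-ACCT images, MAE and PSNR, are easy to state concretely. The sketch below computes both over flattened image arrays; the toy pixel values and the chosen dynamic range are illustrative.

```python
# Sketch of MAE and PSNR between a reference image and an estimate
# (e.g., ground-truth ACCT vs. pseudo-ACCT), over flattened pixel lists.

import math

def mae(a, b):
    """Mean absolute error between two equally sized images."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def psnr(a, b, data_range=1000.0):
    """Peak signal-to-noise ratio in dB for the given dynamic range."""
    mse = sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)
    return 10 * math.log10(data_range ** 2 / mse)

ref = [0.0, 100.0, 200.0]
est = [10.0, 90.0, 210.0]
err = mae(ref, est)
```

Lower MAE and higher PSNR both indicate that the synthesized CT is closer to the real second scan.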

Developing approaches to incorporate donor-lung computed tomography images into machine learning models to predict severe primary graft dysfunction after lung transplantation.

Ma W, Oh I, Luo Y, Kumar S, Gupta A, Lai AM, Puri V, Kreisel D, Gelman AE, Nava R, Witt CA, Byers DE, Halverson L, Vazquez-Guillamet R, Payne PRO, Sotiras A, Lu H, Niazi K, Gurcan MN, Hachem RR, Michelson AP

PubMed · Jun 1, 2025
Primary graft dysfunction (PGD) is a common complication after lung transplantation associated with poor outcomes. Although risk factors have been identified, the complex interactions between clinical variables affecting PGD risk are not well understood, which can complicate decisions about donor-lung acceptance. Previously, we developed a machine learning model to predict grade 3 PGD using donor and recipient electronic health record data, but it lacked granular information from donor-lung computed tomography (CT) scans, which are routinely assessed during offer review. In this study, we used a gated approach to determine optimal methods for analyzing donor-lung CT scans among patients receiving first-time, bilateral lung transplants at a single center over 10 years. We assessed 4 computer vision approaches and fused the best with electronic health record data at 3 points in the machine learning process. A total of 160 patients had donor-lung CT scans for analysis. The best imaging-only approach employed a 3D ResNet model, yielding median (interquartile range) areas under the receiver operating characteristic and precision-recall curves of 0.63 (0.49-0.72) and 0.48 (0.35-0.6), respectively. Combining imaging with clinical data using late fusion provided the highest performance, with median areas under the receiver operating characteristic and precision-recall curves of 0.74 (0.59-0.85) and 0.61 (0.47-0.72), respectively.
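The best-performing strategy above was late fusion: each modality's model predicts PGD risk independently and the predictions are combined afterwards. The equal-weight average below is an assumption for illustration; the study does not state its fusion weights.

```python
# Hedged late-fusion sketch: combine the imaging model's and the clinical
# (EHR) model's grade-3 PGD probabilities after each has made its own
# prediction, rather than merging features earlier in the pipeline.

def late_fusion(p_imaging, p_clinical, w_imaging=0.5):
    """Weighted average of imaging-model and clinical-model probabilities."""
    return w_imaging * p_imaging + (1 - w_imaging) * p_clinical

# CT-based model says 0.30, clinical model says 0.70
p = late_fusion(0.30, 0.70)
```

A practical advantage of late fusion is that either model can still run alone when one modality (here, the donor-lung CT) is missing.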

A Dual-Energy Computed Tomography Guided Intelligent Radiation Therapy Platform.

Wen N, Zhang Y, Zhang H, Zhang M, Zhou J, Liu Y, Liao C, Jia L, Zhang K, Chen J

PubMed · Jun 1, 2025
The integration of advanced imaging and artificial intelligence technologies in radiation therapy has revolutionized cancer treatment by enhancing precision and adaptability. This study introduces a novel dual-energy computed tomography (DECT) guided intelligent radiation therapy (DEIT) platform designed to streamline and optimize the radiation therapy process. The DEIT system combines DECT, a newly designed dual-layer multileaf collimator, deep learning algorithms for auto-segmentation, and automated planning and quality assurance capabilities. The DEIT system integrates an 80-slice computed tomography (CT) scanner with an 87 cm bore size, a linear accelerator delivering 4 photon and 5 electron energies, and a flat panel imager optimized for megavoltage (MV) cone beam CT acquisition. A comprehensive evaluation of the system's accuracy was conducted using end-to-end tests. Virtual monoenergetic CT images and electron density images of the DECT were generated and compared on both phantoms and patients. The system's auto-segmentation algorithms were tested on 5 cases for each of the 99 organs at risk, and the automated optimization and planning capabilities were evaluated on clinical cases. The DEIT system demonstrated systematic errors of less than 1 mm for target localization. DECT reconstruction showed electron density mapping deviations ranging from -0.052 to 0.001, with stable Hounsfield unit consistency across monoenergetic levels above 60 keV, except for high-Z materials at lower energies. Auto-segmentation achieved dice similarity coefficients above 0.9 for most organs with an inference time of less than 2 seconds. Dose-volume histogram comparisons showed improved dose conformity indices and reduced doses to critical structures in auto-plans compared to manual plans across various clinical cases. In addition, high gamma passing rates at 2%/2 mm in both 2-dimensional (above 97%) and 3-dimensional (above 99%) in vivo analyses further validate the accuracy and reliability of treatment plans. The DEIT platform represents a viable solution for radiation treatment. The DEIT system uses artificial intelligence-driven automation, real-time adjustments, and CT imaging to enhance the radiation therapy process, improving efficiency and flexibility.
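The 2%/2 mm gamma criterion quoted above combines a dose-difference tolerance with a distance-to-agreement tolerance. The following is a deliberately simplified 1-D sketch of that idea, using global normalization and toy dose profiles; clinical gamma analysis works on 2-D/3-D dose grids with interpolation, so this is illustrative only.

```python
# Simplified 1-D gamma analysis sketch: for each reference point, gamma is
# the minimum combined dose-difference / distance penalty over evaluated
# points; the passing rate is the fraction of points with gamma <= 1.

import math

def gamma_pass_rate(ref, ev, spacing=1.0, dd=0.02, dta=2.0):
    """Fraction of reference points passing a global 2%/2 mm-style test."""
    d_max = max(ref)  # global normalization dose
    passed = 0
    for i, dr in enumerate(ref):
        best = math.inf
        for j, de in enumerate(ev):
            dose_term = ((de - dr) / (dd * d_max)) ** 2
            dist_term = ((j - i) * spacing / dta) ** 2
            best = min(best, math.sqrt(dose_term + dist_term))
        if best <= 1.0:
            passed += 1
    return passed / len(ref)

# Identical planned and measured profiles pass everywhere
rate = gamma_pass_rate([1.0, 2.0, 2.0, 1.0], [1.0, 2.0, 2.0, 1.0])
```

Reported clinical rates like "above 97% at 2%/2 mm" mean that nearly every dose point satisfied this combined tolerance.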

Human-AI collaboration for ultrasound diagnosis of thyroid nodules: a clinical trial.

Edström AB, Makouei F, Wennervaldt K, Lomholt AF, Kaltoft M, Melchiors J, Hvilsom GB, Bech M, Tolsgaard M, Todsen T

PubMed · Jun 1, 2025
This clinical trial examined how the artificial intelligence (AI)-based diagnostic system S-Detect for Thyroid influences the ultrasound (US) diagnostic work-up of the thyroid performed by different US users in clinical practice, and how different US users influence the diagnostic accuracy of S-Detect. We conducted a clinical trial with 20 participants, including medical students, US novice physicians, and US experienced physicians. Five patients with thyroid nodules (one malignant and four benign) volunteered to undergo a thyroid US scan performed by all 20 participants using the same US systems with S-Detect installed. Participants performed a focused thyroid US on each patient case and classified the nodule according to the European Thyroid Imaging Reporting And Data System (EU-TIRADS). They then performed an S-Detect analysis of the same nodule and were asked to re-evaluate their EU-TIRADS reporting. From the participants' EU-TIRADS assessments, we derived a biopsy recommendation outcome: whether fine needle aspiration biopsy (FNAB) was recommended. The mean diagnostic accuracy of S-Detect was 71.3% (range 40-100%) among all participants, with no significant difference between the groups (p = 0.31). The accuracy of our biopsy recommendation outcome was 69.8% before and 69.2% after AI for all participants (p = 0.75). In this trial, we did not find that S-Detect improved the thyroid diagnostic work-up in clinical practice among novice and intermediate ultrasound operators. However, the operator had a substantial impact on the AI-generated ultrasound diagnosis, with diagnostic accuracy varying from 40 to 100% despite the same patients and ultrasound machines being used in the trial.
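The biopsy-recommendation outcome above maps each EU-TIRADS category plus nodule size to an FNAB yes/no and scores it against a reference. The size cut-offs below follow commonly cited EU-TIRADS guidance (roughly ≥20 mm for category 3, ≥15 mm for 4, ≥10 mm for 5), but both the cut-offs and the toy cases are assumptions for illustration, not the trial's exact rule.

```python
# Hedged sketch of a biopsy-recommendation outcome: derive FNAB yes/no from
# the assigned EU-TIRADS category and largest nodule diameter, then compute
# accuracy against a reference standard.

def fnab_recommended(eu_tirads, size_mm):
    """FNAB recommendation from EU-TIRADS category and diameter (mm)."""
    cutoffs = {3: 20, 4: 15, 5: 10}  # assumed size thresholds per category
    return eu_tirads in cutoffs and size_mm >= cutoffs[eu_tirads]

def accuracy(preds, truth):
    """Fraction of recommendations matching the reference standard."""
    return sum(p == t for p, t in zip(preds, truth)) / len(truth)

# Three toy (category, size) assessments vs. a reference standard
recs = [fnab_recommended(c, s) for c, s in [(5, 12), (3, 10), (4, 16)]]
acc = accuracy(recs, [True, False, True])
```

Comparing this accuracy before and after the S-Detect re-evaluation is what yields the trial's 69.8% vs. 69.2% figures.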

Advances and current research status of early diagnosis for gallbladder cancer.

He JJ, Xiong WL, Sun WQ, Pan QY, Xie LT, Jiang TA

PubMed · Jun 1, 2025
Gallbladder cancer (GBC) is the most common malignant tumor of the biliary system, characterized by high malignancy, aggressiveness, and poor prognosis. Early diagnosis is paramount for improving therapeutic outcomes. Presently, the clinical diagnosis of GBC relies primarily on a clinical-radiological-pathological approach. However, missed diagnoses and misdiagnoses remain possible in clinical practice. We first analyzed blood-based biomarkers, such as carcinoembryonic antigen and carbohydrate antigen 19-9. Subsequently, we evaluated the diagnostic performance of various imaging modalities, including ultrasound (US), endoscopic ultrasound (EUS), computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography/computed tomography (PET/CT), as well as pathological examination, emphasizing their strengths and limitations in detecting early-stage GBC. Furthermore, we explored the potential of emerging technologies, particularly artificial intelligence (AI) and liquid biopsy, to revolutionize GBC diagnosis. AI algorithms have demonstrated improved image analysis capabilities, while liquid biopsy offers the promise of non-invasive, real-time monitoring. However, translating these advancements into clinical practice requires further validation and standardization. This review highlights the advantages and limitations of current diagnostic approaches and underscores the need for innovative strategies to enhance the diagnostic accuracy of GBC. In addition, we emphasize the importance of multidisciplinary collaboration to improve the early diagnosis of GBC and, ultimately, patient outcomes. This review endeavors to impart fresh perspectives and insights into the early diagnosis of GBC.
