
Anatomy-Aware Low-Dose CT Denoising via Pretrained Vision Models and Semantic-Guided Contrastive Learning

Runze Wang, Zeli Chen, Zhiyun Song, Wei Fang, Jiajin Zhang, Danyang Tu, Yuxing Tang, Minfeng Xu, Xianghua Ye, Le Lu, Dakai Jin

arXiv preprint, Aug 11 2025
To reduce radiation exposure and improve the diagnostic efficacy of low-dose computed tomography (LDCT), numerous deep learning-based denoising methods have been developed to mitigate noise and artifacts. However, most of these approaches ignore the anatomical semantics of human tissues, which may result in suboptimal denoising outcomes. To address this problem, we propose ALDEN, an anatomy-aware LDCT denoising method that integrates semantic features of pretrained vision models (PVMs) with adversarial and contrastive learning. Specifically, we introduce an anatomy-aware discriminator that dynamically fuses hierarchical semantic features from reference normal-dose CT (NDCT) via cross-attention mechanisms, enabling tissue-specific realism evaluation in the discriminator. In addition, we propose a semantic-guided contrastive learning module that enforces anatomical consistency by contrasting PVM-derived features from LDCT, denoised CT, and NDCT, preserving tissue-specific patterns through positive pairs and suppressing artifacts via dual negative pairs. Extensive experiments conducted on two LDCT denoising datasets reveal that ALDEN achieves state-of-the-art performance, offering superior anatomy preservation and substantially reducing the over-smoothing issue of previous work. Further validation on a downstream multi-organ segmentation task (encompassing 117 anatomical structures) affirms the model's ability to maintain anatomical awareness.
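As a rough sketch of the kind of semantic-guided contrastive objective described above (an InfoNCE-style loss treating NDCT features as positives and LDCT features as negatives for the denoised-CT anchor; the function name, temperature `tau`, and feature shapes are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def contrastive_loss(denoised, ndct, ldct, tau=0.1):
    """InfoNCE-style loss: pull denoised-CT features toward NDCT features
    (positive pairs) and push them away from LDCT features (negative pairs).
    All inputs are (batch, dim) feature arrays; tau is a temperature."""
    def norm(x):
        return x / np.linalg.norm(x, axis=-1, keepdims=True)
    a, p, n = norm(denoised), norm(ndct), norm(ldct)
    pos = np.sum(a * p, axis=-1) / tau        # (B,) positive similarities
    neg = a @ n.T / tau                       # (B, B) negative similarities
    logits = np.concatenate([pos[:, None], neg], axis=1)
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits[:, 0] - np.log(np.exp(logits).sum(axis=1))
    return -log_prob.mean()
```

Pulling denoised features toward NDCT while pushing them away from LDCT is what lets such a loss preserve tissue-specific patterns while suppressing artifact patterns.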

Generative Artificial Intelligence to Automate Cerebral Perfusion Mapping in Acute Ischemic Stroke from Non-contrast Head Computed Tomography Images: Pilot Study.

Primiano NJ, Changa AR, Kohli S, Greenspan H, Cahan N, Kummer BR

PubMed paper, Aug 11 2025
Acute ischemic stroke (AIS) is a leading cause of death and long-term disability worldwide, where rapid reperfusion remains critical for salvaging brain tissue. Although CT perfusion (CTP) imaging provides essential hemodynamic information, its limitations (extended processing times, additional radiation exposure, and variable software outputs) can delay treatment. In contrast, non-contrast head CT (NCHCT) is ubiquitously available in acute stroke settings. This study explores a generative artificial intelligence approach to predict key perfusion parameters (relative cerebral blood flow [rCBF] and time-to-maximum [Tmax]) directly from NCHCT, potentially streamlining stroke imaging workflows and expanding access to critical perfusion data. We retrospectively identified patients evaluated for AIS who underwent NCHCT, CT angiography, and CTP. Ground truth perfusion maps (rCBF and Tmax) were extracted from VIZ.ai post-processed CTP studies. A modified pix2pix-turbo generative adversarial network (GAN) was developed to translate co-registered NCHCT images into corresponding perfusion maps. The network was trained using paired NCHCT-CTP data, with training, validation, and testing splits of 80%:10%:10%. Performance was assessed on the test set using quantitative metrics including the structural similarity index measure (SSIM), peak signal-to-noise ratio (PSNR), and Fréchet inception distance (FID). Of 120 patients, studies from the 99 patients meeting our inclusion and exclusion criteria were used as the primary cohort (mean age 73.3 ± 13.5 years; 46.5% female). Cerebral occlusions were predominantly in the middle cerebral artery. GAN-generated Tmax maps achieved an SSIM of 0.827, PSNR of 16.99, and FID of 62.21, while the rCBF maps demonstrated comparable performance (SSIM 0.79, PSNR 16.38, FID 59.58). These results indicate that the model approximates ground truth perfusion maps to a moderate degree and successfully captures key cerebral hemodynamic features.
Our findings demonstrate the feasibility of generating functional perfusion maps directly from widely available NCHCT images using a modified GAN. This cross-modality approach may serve as a valuable adjunct in AIS evaluation, particularly in resource-limited settings or when traditional CTP provides limited diagnostic information. Future studies with larger, multicenter datasets and further model refinements are warranted to enhance clinical accuracy and utility.
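For readers unfamiliar with the reported metrics, a minimal numpy sketch of PSNR and a simplified single-window SSIM (the standard SSIM uses an 11x11 sliding Gaussian window; this global variant is only illustrative):

```python
import numpy as np

def psnr(ref, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    mse = np.mean((ref - test) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(data_range ** 2 / mse)

def global_ssim(x, y, data_range=1.0):
    """Simplified single-window SSIM (no sliding window); stabilizing
    constants c1, c2 follow the usual 0.01/0.03 convention."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))
```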

Outcome Prediction in Pediatric Traumatic Brain Injury Utilizing Social Determinants of Health and Machine Learning Methods.

Kaliaev A, Vejdani-Jahromi M, Gunawan A, Qureshi M, Setty BN, Farris C, Takahashi C, AbdalKader M, Mian A

PubMed paper, Aug 11 2025
Considerable socioeconomic disparities exist among pediatric traumatic brain injury (TBI) patients. This study aims to analyze the effects of social determinants of health on head injury outcomes and to create a novel machine-learning algorithm (MLA) that incorporates socioeconomic factors to predict the likelihood of a positive or negative trauma-related finding on head computed tomography (CT). A cohort of blunt trauma patients under age 15 who presented to the largest safety net hospital in New England between January 2006 and December 2013 (n=211) was included in this study. Patient socioeconomic data such as race, language, household income, and insurance type were collected alongside other parameters like Injury Severity Score (ISS), age, sex, and mechanism of injury. Multivariable analysis was performed to identify significant factors in predicting a positive head CT outcome. The cohort was split into 80% training (168 samples) and 20% testing (43 samples) datasets using stratified sampling. Twenty-two multi-parametric MLAs were trained with 5-fold cross-validation and hyperparameter tuning via GridSearchCV, and top-performing models were evaluated on the test dataset. Significant factors associated with pediatric head CT outcome included ISS, age, and insurance type (p<0.05). The age of the subjects with a clinically relevant trauma-related head CT finding (median = 1.8 years) was significantly different from the age of patients without such findings (median = 9.1 years). These predictors were utilized to train the machine learning models. With ISS, the Fine Gaussian SVM achieved the highest test AUC (0.923), with accuracy=0.837, sensitivity=0.647, and specificity=0.962. The Coarse Tree yielded accuracy=0.837, AUC=0.837, sensitivity=0.824, and specificity=0.846. Without ISS, the Narrow Neural Network performed best with accuracy=0.837, AUC=0.857, sensitivity=0.765, and specificity=0.885.
Key predictors of clinically relevant head CT findings in pediatric TBI include ISS, age, and social determinants of health, with children under 5 at higher risk. A novel Fine Gaussian SVM model outperformed the other MLAs, offering high accuracy in predicting outcomes. This tool shows promise for improving clinical decisions while minimizing radiation exposure in children. TBI = Traumatic Brain Injury; ISS = Injury Severity Score; MLA = Machine Learning Algorithm; CT = Computed Tomography; AUC = Area Under the Curve.
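The 80%:20% stratified split used above can be sketched in pure Python as follows (a hypothetical illustration, not the study's code; shuffling within each class preserves the positive/negative ratio in both partitions):

```python
import random
from collections import defaultdict

def stratified_split(labels, test_frac=0.2, seed=42):
    """Split sample indices so each class keeps roughly the same
    proportion in the train and test partitions."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for i, y in enumerate(labels):
        by_class[y].append(i)
    train, test = [], []
    for idxs in by_class.values():
        rng.shuffle(idxs)                      # randomize within each class
        k = round(len(idxs) * test_frac)       # per-class test count
        test.extend(idxs[:k])
        train.extend(idxs[k:])
    return sorted(train), sorted(test)
```

GridSearchCV-style 5-fold tuning would then run only on the training indices, with the held-out test set untouched until final evaluation.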

Post-deployment Monitoring of AI Performance in Intracranial Hemorrhage Detection by ChatGPT.

Rohren E, Ahmadzade M, Colella S, Kottler N, Krishnan S, Poff J, Rastogi N, Wiggins W, Yee J, Zuluaga C, Ramis P, Ghasemi-Rad M

PubMed paper, Aug 11 2025
To evaluate the post-deployment performance of an artificial intelligence (AI) system (Aidoc) for intracranial hemorrhage (ICH) detection and assess the utility of ChatGPT-4 Turbo for automated AI monitoring. This retrospective study evaluated 332,809 head CT examinations from 37 radiology practices across the United States (December 2023-May 2024). Of these, 13,569 cases were flagged as positive for ICH by the Aidoc AI system. A HIPAA (Health Insurance Portability and Accountability Act)-compliant version of ChatGPT-4 Turbo was used to extract data from radiology reports. Ground truth was established through radiologists' review of 200 randomly selected cases. Performance metrics were calculated for ChatGPT, Aidoc, and radiologists. ChatGPT-4 Turbo demonstrated high diagnostic accuracy in identifying ICH from radiology reports, with a positive predictive value of 1 and a negative predictive value of 0.988 (AUC: 0.996). Aidoc's false positive classifications were influenced by scanner manufacturer, midline shift, mass effect, artifacts, and neurologic symptoms. Multivariate analysis identified Philips scanners (OR: 6.97, p=0.003) and artifacts (OR: 3.79, p=0.029) as significant contributors to false positives, while midline shift (OR: 0.08, p=0.021) and mass effect (OR: 0.18, p=0.021) were associated with a reduced false positive rate. Aidoc-assisted radiologists achieved a sensitivity of 0.936 and a specificity of 1. This study underscores the importance of continuous performance monitoring for AI systems in clinical practice. The integration of LLMs offers a scalable solution for evaluating AI performance, ensuring reliable deployment and enhancing diagnostic workflows.
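The reported PPV, NPV, sensitivity, and specificity follow from the standard confusion-matrix definitions; a minimal sketch (the counts below are illustrative, not the study's data):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard confusion-matrix metrics used in post-deployment
    monitoring; returns NaN where a denominator is zero."""
    return {
        "ppv":         tp / (tp + fp) if tp + fp else float("nan"),
        "npv":         tn / (tn + fn) if tn + fn else float("nan"),
        "sensitivity": tp / (tp + fn) if tp + fn else float("nan"),
        "specificity": tn / (tn + fp) if tn + fp else float("nan"),
    }
```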

Using Machine Learning to Improve the Contrast-Enhanced Ultrasound Liver Imaging Reporting and Data System Diagnosis of Hepatocellular Carcinoma in Indeterminate Liver Nodules.

Hoopes JR, Lyshchik A, Xiao TS, Berzigotti A, Fetzer DT, Forsberg F, Sidhu PS, Wessner CE, Wilson SR, Keith SW

PubMed paper, Aug 11 2025
Liver cancer ranks among the most lethal cancers. Hepatocellular carcinoma (HCC) is the most common type of primary liver cancer, and better diagnostic tools are needed to diagnose patients at risk. The aim is to develop a machine learning algorithm that enhances the sensitivity and specificity of the Contrast-Enhanced Ultrasound Liver Imaging Reporting and Data System (CEUS LI-RADS) in classifying indeterminate at-risk liver nodules (LR-M, LR-3, LR-4) as HCC or non-HCC. Our study includes patients at risk for HCC with untreated indeterminate focal liver observations detected on US or contrast-enhanced CT or MRI performed as part of their clinical standard of care from January 2018 to November 2022. Recursive partitioning was used to improve HCC diagnosis in indeterminate at-risk nodules. Demographics, blood biomarkers, and CEUS imaging features were evaluated as potential predictors for the algorithm to classify nodules as HCC or non-HCC. We evaluated 244 indeterminate liver nodules from 224 patients (mean age 62.9 y). Of the patients, 73.2% (164/224) were male. The algorithm was trained on a random 2/3 partition of 163 liver nodules and correctly reclassified more than half of the HCC liver nodules previously categorized as indeterminate in the independent 1/3 test partition of 81 liver nodules, achieving a sensitivity of 56.3% (95% CI: 42.0%, 70.2%) and specificity of 93.9% (95% CI: 84.4%, 100.0%). Machine learning was applied to this multicenter, multinational study of CEUS LI-RADS indeterminate at-risk liver nodules and correctly diagnosed HCC in more than half of the HCC nodules.
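Recursive partitioning, the technique used here, repeatedly splits the data on the predictor threshold that minimises impurity of the resulting groups; a minimal sketch of one such split using Gini impurity (illustrative only, not the study's implementation):

```python
def gini(labels):
    """Gini impurity of a list of 0/1 labels."""
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)
    return 1.0 - p * p - (1 - p) * (1 - p)

def best_split(values, labels):
    """One level of recursive partitioning: find the threshold on a single
    predictor that minimises the weighted Gini impurity of the two children."""
    n = len(labels)
    best = (None, gini(labels))  # no split beats the parent by default
    for t in sorted(set(values)):
        left = [y for v, y in zip(values, labels) if v <= t]
        right = [y for v, y in zip(values, labels) if v > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / n
        if score < best[1]:
            best = (t, score)
    return best
```

A full tree applies `best_split` recursively to each child until a stopping rule (depth, minimum node size) is met.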

Automated Prediction of Bone Volume Removed in Mastoidectomy.

Nagururu NV, Ishida H, Ding AS, Ishii M, Unberath M, Taylor RH, Munawar A, Sahu M, Creighton FX

PubMed paper, Aug 11 2025
The bone volume drilled by surgeons during mastoidectomy is determined by the need to localize the position, optimize the view, and reach the surgical endpoint while avoiding critical structures. Predicting the volume of bone removed before an operation can significantly enhance surgical training by providing precise, patient-specific guidance and can enable the development of more effective computer-assisted and robotic surgical interventions. Study design: single-institution, cross-sectional study. Setting: virtual reality (VR) simulation. We developed a deep learning pipeline to automate the prediction of bone volume removed during mastoidectomy using data from virtual reality mastoidectomy simulations. The data set included 15 deidentified temporal bone computed tomography scans. The network was evaluated using fivefold cross-validation, comparing predicted and actual bone removal with metrics such as the Dice score (DSC) and Hausdorff distance (HD). Our method achieved a median DSC of 0.775 (interquartile range [IQR]: 0.725-0.810) and a median HD of 0.492 mm (IQR: 0.298-0.757 mm). Predictions reached the mastoidectomy endpoint of visualizing the horizontal canal and incus in 80% (12/15) of temporal bones. Qualitative analysis indicated that predictions typically produced realistic mastoidectomy endpoints, though some cases showed excessive or insufficient bone removal, particularly at the temporal bone cortex and tegmen mastoideum. This study establishes a foundational step in using deep learning to predict bone volume removal during mastoidectomy. The results indicate that learning-based methods can reasonably approximate the surgical endpoint of mastoidectomy. Further refinement with larger, more diverse data sets and improved model architectures will be essential for enhancing prediction accuracy.
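The two evaluation metrics, Dice score and Hausdorff distance, can be sketched for binary masks as follows (a brute-force illustration; production code would typically use a distance-transform-based HD for large volumes):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2 * inter / (a.sum() + b.sum())

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two non-empty binary masks,
    in voxel units, computed brute-force from voxel coordinates."""
    pa, pb = np.argwhere(a), np.argwhere(b)
    d = np.sqrt(((pa[:, None, :] - pb[None, :, :]) ** 2).sum(-1))
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```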

Machine learning models for the prediction of preclinical coal workers' pneumoconiosis: integrating CT radiomics and occupational health surveillance records.

Ma Y, Cui F, Yao Y, Shen F, Qin H, Li B, Wang Y

PubMed paper, Aug 11 2025
This study aims to integrate CT imaging with occupational health surveillance data to construct a multimodal model for preclinical CWP identification and individualized risk evaluation. CT images and occupational health surveillance data were retrospectively collected from 874 coal workers, including 228 Stage I and 4 Stage II pneumoconiosis patients, along with 600 healthy and 42 subcategory 0/1 coal workers. First, YOLOX was employed for automated 3D lung extraction, from which radiomics features were extracted. Second, two feature selection algorithms were applied to select critical features from both the CT radiomics and occupational health data. Third, three distinct feature sets were constructed for model training: CT radiomics features, occupational health data, and their multimodal integration. Finally, five machine learning models were implemented to predict the preclinical stage of CWP. Model performance was evaluated using the receiver operating characteristic (ROC) curve, accuracy, sensitivity, and specificity. SHapley Additive exPlanations (SHAP) values were calculated to determine the contribution of each feature in the model with the highest predictive performance. The YOLOX-based lung extraction demonstrated robust performance, achieving an Average Precision (AP) of 0.98. Eight CT radiomics features and four occupational health surveillance variables were selected for the multimodal model. The optimal occupational health feature subset included length of service. Among the five machine learning algorithms evaluated, the Decision Tree-based multimodal model showed superior predictive capacity on the test set of 142 samples, with an AUC of 0.94 (95% CI 0.88-0.99), accuracy 0.95, specificity 1.00, and Youden's index 0.83.
Our study showed that the multimodal model, integrating CT radiomics features with occupational health data, has strong predictive capability for the preclinical stage of CWP.
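Two of the reported summary statistics, AUC and Youden's index, can be computed as follows (a minimal sketch; the Mann-Whitney formulation of AUC is equivalent to the area under the ROC curve):

```python
def auc(scores_pos, scores_neg):
    """Mann-Whitney AUC: the probability that a randomly chosen positive
    case scores higher than a randomly chosen negative case (ties count 0.5)."""
    wins = sum((p > n) + 0.5 * (p == n) for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

def youden(sensitivity, specificity):
    """Youden's index J = sensitivity + specificity - 1."""
    return sensitivity + specificity - 1
```

With the reported specificity of 1.00 and Youden's index of 0.83, the implied sensitivity is 0.83.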

Unconditional latent diffusion models memorize patient imaging data.

Dar SUH, Seyfarth M, Ayx I, Papavassiliu T, Schoenberg SO, Siepmann RM, Laqua FC, Kahmann J, Frey N, Baeßler B, Foersch S, Truhn D, Kather JN, Engelhardt S

PubMed paper, Aug 11 2025
Generative artificial intelligence models facilitate open-data sharing by proposing synthetic data as surrogates of real patient data. Despite the promise for healthcare, some of these models are susceptible to patient data memorization, where models generate patient data copies instead of novel synthetic samples, resulting in patient re-identification. Here we assess memorization in unconditional latent diffusion models by training them on a variety of datasets for synthetic data generation and detecting memorization with a self-supervised copy detection approach. We show a high degree of patient data memorization across all datasets, with approximately 37.2% of patient data detected as memorized and 68.7% of synthetic samples identified as patient data copies. Latent diffusion models are more susceptible to memorization than autoencoders and generative adversarial networks, and they outperform non-diffusion models in synthesis quality. Augmentation strategies during training, smaller architectures, and larger training datasets can reduce memorization, while overtraining the models can enhance it. These results emphasize the importance of carefully training generative models on private medical imaging datasets and of examining the synthetic data to ensure patient privacy.
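A self-supervised copy-detection step like the one described can be sketched as a nearest-neighbour search in an embedding space: a synthetic sample whose closest training embedding is suspiciously similar is flagged as a likely memorised copy. The embeddings, threshold, and function name below are illustrative assumptions, not the authors' method:

```python
import numpy as np

def flag_copies(synthetic, training, threshold=0.95):
    """Flag synthetic samples whose nearest training embedding exceeds a
    cosine-similarity threshold. Inputs are (n, dim) embedding arrays."""
    def norm(x):
        return x / np.linalg.norm(x, axis=1, keepdims=True)
    sims = norm(synthetic) @ norm(training).T   # (n_syn, n_train) cosine sims
    return sims.max(axis=1) >= threshold        # boolean flag per sample
```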

Ethical considerations and robustness of artificial neural networks in medical image analysis under data corruption.

Okunev M, Handelman D, Handelman A

PubMed paper, Aug 11 2025
Medicine is one of the most sensitive fields in which artificial intelligence (AI) is extensively used, spanning from medical image analysis to clinical support. Specifically, in medicine, where every decision may severely affect human lives, ensuring that AI systems operate ethically and produce results that align with ethical considerations is of great importance. In this work, we investigate the effect of several key parameters on the performance of artificial neural networks (ANNs) used for medical image analysis in the presence of data corruption or errors. For this purpose, we examined five different ANN architectures (AlexNet, LeNet-5, VGG16, ResNet-50, and Vision Transformers - ViT), and for each architecture, we checked its performance under varying combinations of training dataset sizes and percentages of images corrupted through mislabeling. The image mislabeling simulates deliberate or nondeliberate changes to the dataset, which may cause the AI system to produce unreliable results. We found that the five ANN architectures produce different results for the same task, both with and without dataset modification, which implies that the selection of an ANN architecture may itself have ethical aspects that need to be considered. We also found that label corruption produced mixed tendencies across performance metrics, indicating that it is difficult to conclude whether label corruption has occurred. Our findings demonstrate the relation between ethics in AI, the ANN architecture implemented, and the computational parameters used in training, and raise awareness of the need to find appropriate ways to determine whether label corruption has occurred.
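The label-corruption experiment can be sketched as follows (a hypothetical illustration of mislabeling a fixed fraction of a dataset, not the authors' code):

```python
import random

def corrupt_labels(labels, fraction, n_classes, seed=0):
    """Mislabel a given fraction of samples, reassigning each chosen
    sample a different class uniformly at random."""
    rng = random.Random(seed)
    labels = list(labels)
    k = round(len(labels) * fraction)
    for i in rng.sample(range(len(labels)), k):   # pick k distinct samples
        labels[i] = rng.choice([c for c in range(n_classes) if c != labels[i]])
    return labels
```

Sweeping `fraction` over, say, 0% to 40% and retraining each architecture at each level is one way to reproduce the kind of performance-versus-corruption curves the study describes.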

Multimodal radiomics in glioma: predicting recurrence in the peritumoural brain zone using integrated MRI.

Li Q, Xiang C, Zeng X, Liao A, Chen K, Yang J, Li Y, Jia M, Song L, Hu X

PubMed paper, Aug 11 2025
Gliomas exhibit a high recurrence rate, particularly in the peritumoural brain zone after surgery. This study aims to develop and validate a radiomics-based model using preoperative fluid-attenuated inversion recovery (FLAIR) and T1-weighted contrast-enhanced (T1-CE) magnetic resonance imaging (MRI) sequences to predict glioma recurrence within specific quadrants of the surgical margin. In this retrospective study, 149 patients with confirmed glioma recurrence were included. Data from 23 cases at Guizhou Medical University were used as the test set, and the remaining data were randomly split into a training set (70%) and a validation set (30%). Two radiologists from the research group established a Cartesian coordinate system centred on the tumour, based on FLAIR and T1-CE MRI sequences, dividing the tumour into four quadrants. Recurrence in each quadrant after surgery was assessed, categorising preoperative tumour quadrants as recurrent and non-recurrent. Following the division of tumours into quadrants and the removal of outliers, the quadrants were assigned to a training set (105 non-recurrence quadrants and 226 recurrence quadrants), a validation set (45 non-recurrence quadrants and 97 recurrence quadrants), and a test set (16 non-recurrence quadrants and 68 recurrence quadrants). Imaging features were extracted from preoperative sequences, and feature selection was performed using the least absolute shrinkage and selection operator (LASSO). Machine learning models included support vector machine, random forest, extra trees, XGBoost, and LightGBM. Clinical efficacy was evaluated through model calibration and decision curve analysis. The fusion model, which combines features from FLAIR and T1-CE sequences, exhibited higher predictive accuracy than single-modality models. Among the models, the LightGBM model demonstrated the highest predictive accuracy, with an area under the curve of 0.906 in the training set, 0.832 in the validation set, and 0.805 in the test set.
The study highlights the potential of a multimodal radiomics approach for predicting glioma recurrence, with the fusion model serving as a robust tool for clinical decision-making.
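LASSO feature selection, as used above, keeps only the features whose penalised regression coefficients remain nonzero. For an orthonormal design the LASSO solution reduces exactly to soft-thresholding the OLS coefficients, which makes for a compact sketch (illustrative only; real radiomics pipelines fit the general case iteratively, e.g. by coordinate descent):

```python
import numpy as np

def soft_threshold(b, lam):
    """Shrink coefficients toward zero by lam; zero out the small ones."""
    return np.sign(b) * np.maximum(np.abs(b) - lam, 0.0)

def lasso_select(X, y, lam):
    """LASSO feature-selection sketch. Assumes an orthonormal design
    (X.T @ X = I), where the exact solution is the soft-thresholded OLS
    fit; returns the coefficients and the indices of retained features."""
    beta_ols = X.T @ y                  # OLS solution under orthonormality
    beta = soft_threshold(beta_ols, lam)
    return beta, np.flatnonzero(beta)
```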