
Cardiac Phase Estimation Using Deep Learning Analysis of Pulsed-Mode Projections: Toward Autonomous Cardiac CT Imaging.

Wu P, Haneda E, Pack JD, Heukensfeldt Jansen I, Hsiao A, McVeigh E, De Man B

PubMed · Jun 1, 2025
Cardiac CT plays an important role in diagnosing heart diseases but is conventionally limited by its complex workflow that requires dedicated phase and bolus tracking devices [e.g., electrocardiogram (ECG) gating]. This work reports first progress toward robust and autonomous cardiac CT exams through joint deep learning (DL) and analytical analysis of pulsed-mode projections (PMPs). To this end, cardiac phase and its uncertainty were simultaneously estimated using a novel projection-domain cardiac phase estimation network (PhaseNet), which utilizes a sliding-window multi-channel feature extraction strategy and a long short-term memory (LSTM) block to extract temporal correlation between time-distributed PMPs. An uncertainty-driven Viterbi (UDV) regularizer was developed to refine the DL estimations at each time point through dynamic programming. Stronger regularization was performed at time points where the DL estimations had higher uncertainty. The performance of the proposed phase estimation pipeline was evaluated using accurate physics-based emulated data. PhaseNet achieved improved phase estimation accuracy compared to the competing methods in terms of RMSE (~50% improvement vs. a standard CNN-LSTM; ~24% improvement vs. a multi-channel residual network). The added UDV regularizer resulted in an additional ~14% improvement in RMSE, achieving accurate phase estimation with <6% RMSE in cardiac phase (phase range: 0-100%). To our knowledge, this is the first publication of prospective cardiac phase estimation in the projection domain. Combined with our previous work on PMP-based bolus curve estimation, the proposed method could potentially be used to achieve autonomous cardiac scanning without an ECG device or expert-in-the-loop bolus timing.
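To make the uncertainty-driven refinement concrete, here is a minimal sketch in Python of a Viterbi-style smoother over discretized cardiac phase states, assuming the network emits a phase estimate and an uncertainty (sigma) per PMP time point; the quadratic data term, circular transition penalty, and `smooth` weight are illustrative choices, not the authors' exact UDV formulation.

```python
# Minimal sketch of an uncertainty-weighted Viterbi smoother for a phase
# sequence. Cost terms and the transition penalty are illustrative.
import numpy as np

def uncertainty_viterbi(phase_est, sigma, n_states=100, smooth=5.0):
    """Refine per-frame phase estimates (0-100%) via dynamic programming.

    phase_est : (T,) raw network phase estimates in percent
    sigma     : (T,) per-frame uncertainty; larger sigma -> weaker data term
    """
    T = len(phase_est)
    states = np.linspace(0, 100, n_states)            # discretized phase grid
    # Data (emission) cost: squared error down-weighted by uncertainty.
    data_cost = ((states[None, :] - phase_est[:, None]) ** 2) / (sigma[:, None] ** 2)
    # Transition cost: penalize abrupt phase jumps between consecutive frames.
    diff = np.abs(states[:, None] - states[None, :])
    trans_cost = smooth * np.minimum(diff, 100 - diff)  # phase wraps at 100%
    cost = data_cost[0].copy()
    back = np.zeros((T, n_states), dtype=int)
    for t in range(1, T):
        total = cost[:, None] + trans_cost              # (prev_state, cur_state)
        back[t] = np.argmin(total, axis=0)
        cost = total[back[t], np.arange(n_states)] + data_cost[t]
    # Backtrack the minimum-cost path.
    path = np.zeros(T, dtype=int)
    path[-1] = int(np.argmin(cost))
    for t in range(T - 1, 0, -1):
        path[t - 1] = back[t, path[t]]
    return states[path]
```

Because the data term is divided by sigma squared, frames with high network uncertainty lean more heavily on the temporal smoothness prior, which is the behavior the abstract describes.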

Information Geometric Approaches for Patient-Specific Test-Time Adaptation of Deep Learning Models for Semantic Segmentation.

Ravishankar H, Paluru N, Sudhakar P, Yalavarthy PK

PubMed · Jun 1, 2025
The test-time adaptation (TTA) of deep-learning-based semantic segmentation models, specific to individual patient data, was addressed in this study. Existing TTA methods in medical imaging are often unconstrained or require anatomical prior information or additional neural networks built during the training phase, making them less practical and prone to performance deterioration. In this study, a novel framework based on information geometric principles was proposed to achieve generic, off-the-shelf, regularized patient-specific adaptation of models at test time. By considering the pre-trained model and the adapted models as part of statistical neuromanifolds, test-time adaptation was treated as constrained functional regularization using information geometric measures, leading to improved generalization and patient optimality. The efficacy of the proposed approach was shown on three challenging problems: 1) improving generalization of state-of-the-art models for segmenting COVID-19 anomalies in computed tomography (CT) images; 2) cross-institutional brain tumor segmentation from magnetic resonance (MR) images; and 3) segmentation of retinal layers in optical coherence tomography (OCT) images. Further, it was demonstrated that robust patient-specific adaptation can be achieved without adding significant computational burden, making it the first of its kind based on information geometric principles.
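As a rough illustration of divergence-regularized test-time adaptation, the sketch below adapts a copy of a segmentation network to a single patient's image while penalizing divergence from the frozen pre-trained predictions; the entropy data term, the KL divergence standing in for the paper's information geometric measures, and the hyper-parameters are assumptions.

```python
# Minimal sketch of divergence-regularized test-time adaptation for a
# segmentation network. Loss choices and hyper-parameters are illustrative.
import copy
import torch
import torch.nn.functional as F

def adapt_to_patient(model, test_image, steps=10, lr=1e-4, lam=1.0):
    source = copy.deepcopy(model).eval()          # frozen pre-trained reference
    for p in source.parameters():
        p.requires_grad_(False)
    adapted = copy.deepcopy(model).train()
    opt = torch.optim.Adam(adapted.parameters(), lr=lr)
    for _ in range(steps):
        logits = adapted(test_image)              # (B, C, H, W) class logits
        probs = logits.softmax(dim=1)
        # Unsupervised data term: prediction entropy on the patient's image.
        entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
        # Functional regularizer: stay close to the pre-trained predictions.
        with torch.no_grad():
            ref_log_probs = source(test_image).log_softmax(dim=1)
        kl = F.kl_div(ref_log_probs, probs, reduction="batchmean")
        loss = entropy + lam * kl
        opt.zero_grad()
        loss.backward()
        opt.step()
    return adapted
```

Raising `lam` keeps the adapted model closer to the pre-trained one, which is the constrained, regularized behavior the abstract emphasizes over unconstrained TTA.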

Significant reduction in manual annotation costs in ultrasound medical image database construction through step by step artificial intelligence pre-annotation.

Zheng F, XingMing L, JuYing X, MengYing T, BaoJian Y, Yan S, KeWei Y, ZhiKai L, Cheng H, KeLan Q, XiHao C, WenFei D, Ping H, RunYu W, Ying Y, XiaoHui B

PubMed · Jun 1, 2025
This study investigates the feasibility of reducing manual image annotation costs in medical image database construction through a step-by-step approach in which an artificial intelligence (AI) model trained on a previous batch of data automatically pre-annotates the next batch of image data, taking thyroid nodule annotation in ultrasound images as an example. The study used YOLOv8 as the AI model. During AI model training, in addition to conventional image augmentation techniques, augmentation methods specifically tailored to ultrasound images were employed to balance the quantity differences between thyroid nodule classes and enhance model training effectiveness. The study found that training the model with augmented data significantly outperformed training with raw image data. When the number of original images was only 1,360, with 7 thyroid nodule classifications, pre-annotation using the AI model trained on augmented data could save at least 30% of the manual annotation workload for junior physicians. When the number of original images reached 6,800, the classification accuracy of the AI model trained on augmented data was very close to that of junior physicians, eliminating the need for manual preliminary annotation.
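A minimal sketch of the step-by-step pre-annotation loop with the ultralytics YOLOv8 API is shown below; the dataset YAML, image folder, and confidence threshold are hypothetical placeholders, and the ultrasound-specific augmentation is assumed to have been applied when building the training batch.

```python
# Minimal sketch of the step-by-step pre-annotation loop using ultralytics
# YOLOv8. Paths and thresholds are hypothetical placeholders.
from ultralytics import YOLO

# 1) Train on the previous (already reviewed) batch, ideally with
#    ultrasound-specific augmentation applied offline to balance classes.
model = YOLO("yolov8n.pt")
model.train(data="thyroid_batch1.yaml", epochs=100, imgsz=640)

# 2) Pre-annotate the next batch; physicians then only correct these boxes
#    instead of annotating from scratch.
results = model.predict(source="batch2_images/", conf=0.25)
for r in results:
    for box, cls, conf in zip(r.boxes.xyxy, r.boxes.cls, r.boxes.conf):
        x1, y1, x2, y2 = box.tolist()
        print(r.path, int(cls), float(conf), x1, y1, x2, y2)  # export as draft labels
```

After review, the corrected batch-2 labels would be folded back into training before pre-annotating batch 3, which is the iterative scheme the study describes.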

Evolution of Cortical Lesions and Function-Specific Cognitive Decline in People With Multiple Sclerosis.

Krijnen EA, Jelgerhuis J, Van Dam M, Bouman PM, Barkhof F, Klawiter EC, Hulst HE, Strijbis EMM, Schoonheim MM

PubMed · Jun 1, 2025
Cortical lesions in multiple sclerosis (MS) severely affect cognition, but their longitudinal evolution and impact on specific cognitive functions remain understudied. This study investigates the evolution of function-specific cognitive functioning over 10 years in people with MS and assesses the influence of cortical lesion load and formation on these trajectories. In this prospectively collected study, people with MS underwent 3T MRI (T1 and fluid-attenuated inversion recovery) at 3 study visits between 2008 and 2022. Cognitive functioning was evaluated based on neuropsychological assessment reflecting 7 cognitive functions: attention; executive functioning (EF); information processing speed (IPS); verbal fluency; and verbal, visuospatial, and working memory. Cortical lesions were manually identified on artificial intelligence-generated double-inversion recovery images. Linear mixed models were constructed to assess the temporal relationship between cortical lesion load and function-specific cognitive decline. In addition, analyses were stratified by MS disease stage: early and late relapsing-remitting MS (cutoff disease duration at 15 years) and progressive MS. The study included 223 people with MS (mean age, 47.8 ± 11.1 years; 153 women) and 62 healthy controls. All completed 5-year follow-up, and 37 healthy controls and 94 people with MS completed 10-year follow-up. At baseline, people with MS exhibited worse functioning of IPS and working memory. Over 10 years, cognitive decline was most severe in attention, verbal memory, and EF. At baseline, people with MS had a median cortical lesion count of 7 (range 0-73), which was related to subsequent decline in attention (B[95% CI] = -0.22 [-0.40 to -0.03]) and verbal fluency (B[95% CI] = -0.23 [-0.37 to -0.09]). Over time, cortical lesions increased by a median count of 4 (range -2 to 71), particularly in late and progressive disease, and this increase was related to decline in verbal fluency (B[95% CI] = -0.33 [-0.51 to -0.15]). The associations between (change in) cortical lesion load and cognitive decline were not modified by MS disease stage. Cognition worsened over 10 years, particularly affecting attention, verbal memory, and EF, while preexisting impairments were worst in other functions such as IPS. Worse baseline cognitive functioning was related to baseline cortical lesions, whereas baseline cortical lesions and cortical lesion formation were related to cognitive decline in functions less affected at baseline. Accumulating cortical damage leads to spreading of cognitive impairments toward additional functions.
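For readers unfamiliar with the modeling setup, the sketch below fits a linear mixed model of the kind described, relating a cognitive domain score to time and baseline cortical lesion count with a random intercept per subject; the column names, input file, and random-effects structure are hypothetical and may differ from the study's exact specification.

```python
# Minimal sketch of a linear mixed model for lesion-related cognitive decline,
# using statsmodels. Column names and the CSV file are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("cognition_long.csv")   # one row per subject per visit (hypothetical file)

# Random intercept per subject; the time-by-lesion interaction tests whether
# baseline cortical lesion load modifies the slope of cognitive decline.
model = smf.mixedlm("attention_z ~ years * lesion_count", df, groups=df["subject_id"])
result = model.fit()
print(result.summary())
```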

Changes of Pericoronary Adipose Tissue in Stable Heart Transplantation Recipients and Comparison with Controls.

Yang J, Chen L, Yu J, Chen J, Shi J, Dong N, Yu F, Shi H

PubMed · Jun 1, 2025
Pericoronary adipose tissue (PCAT) is a key cardiovascular risk biomarker, yet its temporal changes after heart transplantation (HT) and its comparison with controls remain unclear. This study investigates the temporal changes of PCAT in stable HT recipients and compares it to controls. We analyzed 159 stable HT recipients alongside two control groups. Both control groups were matched to a subgroup of HT recipients who did not have coronary artery stenosis. Group 1 consisted of 60 individuals matched for age, sex, and body mass index (BMI), with no history of hypertension, diabetes, hyperlipidemia, or smoking. Group 2 included 56 individuals additionally matched for hypertension, diabetes, hyperlipidemia, and smoking history. PCAT volume and fat attenuation index (FAI) were measured using AI-based software. Temporal changes in PCAT were assessed at multiple time points in HT recipients, and PCAT in the subgroup of HT recipients without coronary stenosis was compared to controls. Stable HT recipients exhibited a progressive decrease in FAI and an increase in PCAT volume over time, particularly in the first five years post-HT. Similar trends were observed in the subgroup of HT recipients without coronary stenosis. Compared to controls, PCAT FAI was significantly higher in the HT subgroup during the first five years post-HT (P < 0.001). After five years, differences persisted but diminished, with no statistically significant differences observed in the PCAT of the left anterior descending artery (LAD) (P > 0.05). A negative correlation was observed between FAI and PCAT volume post-HT (r = -0.75 to -0.53). PCAT volume and FAI undergo temporal changes in stable HT recipients, especially during the first five years post-HT. Even in HT recipients without coronary stenosis, PCAT FAI differs from controls, indicating distinct changes in this cohort.
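The sketch below illustrates the style of analysis reported (the inverse FAI-volume correlation and the HT-versus-control comparison) on simulated numbers; the arrays, and the choice of Pearson correlation and Welch's t-test, are placeholders rather than the study's actual data or statistical plan.

```python
# Minimal sketch of the correlation and group-comparison analyses described.
# All values are simulated placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical per-vessel measurements (HU for FAI, mm^3 for PCAT volume).
fai_ht = rng.normal(-75, 6, 60)
vol_ht = rng.normal(1200, 250, 60)
fai_ctrl = rng.normal(-82, 6, 60)

# Inverse FAI-volume relationship within HT recipients.
r, p = stats.pearsonr(fai_ht, vol_ht)
print(f"FAI vs PCAT volume: r={r:.2f}, p={p:.3f}")

# HT subgroup vs matched controls (independent-samples comparison).
t, p_group = stats.ttest_ind(fai_ht, fai_ctrl, equal_var=False)
print(f"HT vs control FAI: t={t:.2f}, p={p_group:.3f}")
```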

Eliminating the second CT scan of dual-tracer total-body PET/CT via deep learning-based image synthesis and registration.

Lin Y, Wang K, Zheng Z, Yu H, Chen S, Tang W, He Y, Gao H, Yang R, Xie Y, Yang J, Hou X, Wang S, Shi H

PubMed · Jun 1, 2025
This study aims to develop and validate a deep learning framework designed to eliminate the second CT scan of dual-tracer total-body PET/CT imaging. We retrospectively included three cohorts totaling 247 patients who underwent dual-tracer total-body PET/CT imaging on two separate days (time interval: 1-11 days). Of these, 167 underwent [68Ga]Ga-DOTATATE/[18F]FDG, 50 underwent [68Ga]Ga-PSMA-11/[18F]FDG, and 30 underwent [68Ga]Ga-FAPI-04/[18F]FDG. A deep learning framework was developed that integrates a registration generative adversarial network (RegGAN) with non-rigid registration techniques. This approach allows for the transformation of attenuation-correction CT (ACCT) images from the first scan into pseudo-ACCT images for the second scan, which are then used for attenuation and scatter correction (ASC) of the second-tracer PET images. Additionally, the derived registration transform facilitates dual-tracer image fusion and analysis. The deep learning-based ASC PET images were evaluated using quantitative metrics, including mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and structural similarity index measure (SSIM), across the whole body and specific regions. Furthermore, the quantitative accuracy of PET images was assessed by calculating standardized uptake value (SUV) bias in normal organs and lesions. The MAE for whole-body pseudo-ACCT images ranged from 97.64 to 112.59 HU across the four tracers. The deep learning-based ASC PET images demonstrated high similarity to the ground-truth PET images. The MAE of SUV for whole-body PET images was 0.06 for [68Ga]Ga-DOTATATE, 0.08 for [68Ga]Ga-PSMA-11, 0.06 for [68Ga]Ga-FAPI-04, and 0.05 for [18F]FDG. Additionally, the median absolute percent deviation of SUV was less than 2.6% for all normal organs, while the mean absolute percent deviation of SUV was less than 3.6% for lesions across the four tracers. The proposed deep learning framework, combining RegGAN and non-rigid registration, shows promise in reducing CT radiation dose for dual-tracer total-body PET/CT imaging, with successful validation across multiple tracers.
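A minimal sketch of the similarity metrics used for evaluation (MAE, PSNR, SSIM) on co-registered 3D volumes is given below; the file names are hypothetical, and the metric implementations come from scikit-image and NumPy rather than the authors' code.

```python
# Minimal sketch of MAE/PSNR/SSIM evaluation for pseudo-ACCT (or ASC PET)
# volumes. Assumes 3D numpy arrays on a common grid; file names are hypothetical.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_volumes(pred, truth):
    data_range = float(truth.max() - truth.min())
    mae = float(np.mean(np.abs(pred - truth)))                      # e.g. HU for CT
    psnr = peak_signal_noise_ratio(truth, pred, data_range=data_range)
    ssim = structural_similarity(truth, pred, data_range=data_range)
    return mae, psnr, ssim

pseudo_ct = np.load("pseudo_acct.npy")   # RegGAN-derived volume (hypothetical path)
true_ct = np.load("acct_scan2.npy")      # ground-truth second-scan CT (hypothetical path)
print(evaluate_volumes(pseudo_ct, true_ct))
```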

Developing approaches to incorporate donor-lung computed tomography images into machine learning models to predict severe primary graft dysfunction after lung transplantation.

Ma W, Oh I, Luo Y, Kumar S, Gupta A, Lai AM, Puri V, Kreisel D, Gelman AE, Nava R, Witt CA, Byers DE, Halverson L, Vazquez-Guillamet R, Payne PRO, Sotiras A, Lu H, Niazi K, Gurcan MN, Hachem RR, Michelson AP

PubMed · Jun 1, 2025
Primary graft dysfunction (PGD) is a common complication after lung transplantation associated with poor outcomes. Although risk factors have been identified, the complex interactions between clinical variables affecting PGD risk are not well understood, which can complicate decisions about donor-lung acceptance. Previously, we developed a machine learning model to predict grade 3 PGD using donor and recipient electronic health record data, but it lacked granular information from donor-lung computed tomography (CT) scans, which are routinely assessed during offer review. In this study, we used a gated approach to determine optimal methods for analyzing donor-lung CT scans among patients receiving first-time, bilateral lung transplants at a single center over 10 years. We assessed 4 computer vision approaches and fused the best with electronic health record data at 3 points in the machine learning process. A total of 160 patients had donor-lung CT scans for analysis. The best imaging-only approach employed a 3D ResNet model, yielding median (interquartile range) areas under the receiver operating characteristic and precision-recall curves of 0.63 (0.49-0.72) and 0.48 (0.35-0.6), respectively. Combining imaging with clinical data using late fusion provided the highest performance, with median areas under the receiver operating characteristic and precision-recall curves of 0.74 (0.59-0.85) and 0.61 (0.47-0.72), respectively.
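To illustrate what late fusion with these metrics looks like in practice, the sketch below averages per-patient probabilities from an imaging model and an EHR model and scores the result; the fusion weight and the toy probability arrays are assumptions, not the study's implementation.

```python
# Minimal sketch of probability-level late fusion and the reported metrics
# (areas under the ROC and precision-recall curves). All values are toy data.
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

def late_fusion(p_imaging, p_ehr, w=0.5):
    """Combine per-patient grade-3 PGD probabilities from the two models."""
    return w * p_imaging + (1 - w) * p_ehr

y_true = np.array([0, 1, 0, 1, 1, 0])              # observed grade-3 PGD labels
p_img = np.array([0.3, 0.7, 0.4, 0.6, 0.8, 0.2])   # imaging-model probabilities
p_ehr = np.array([0.2, 0.6, 0.3, 0.7, 0.6, 0.4])   # EHR-model probabilities

p_fused = late_fusion(p_img, p_ehr)
print("AUROC:", roc_auc_score(y_true, p_fused))
print("AUPRC:", average_precision_score(y_true, p_fused))
```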

Robust whole-body PET image denoising using 3D diffusion models: evaluation across various scanners, tracers, and dose levels.

Yu B, Ozdemir S, Dong Y, Shao W, Pan T, Shi K, Gong K

PubMed · Jun 1, 2025
Whole-body PET imaging plays an essential role in cancer diagnosis and treatment but suffers from low image quality. Traditional deep learning-based denoising methods work well for a specific acquisition but are less effective in handling diverse PET protocols. In this study, we proposed and validated a 3D Denoising Diffusion Probabilistic Model (3D DDPM) as a robust and universal solution for whole-body PET image denoising. The proposed 3D DDPM gradually injected noise into the images during the forward diffusion phase, allowing the model to learn to reconstruct the clean data during the reverse diffusion process. A 3D convolutional network was trained using high-quality data from the Biograph Vision Quadra PET/CT scanner to generate the score function, enabling the model to capture accurate PET distribution information extracted from the total-body datasets. The trained 3D DDPM was evaluated on datasets from four scanners, four tracer types, and six dose levels, representing a broad spectrum of clinical scenarios. The proposed 3D DDPM consistently outperformed 2D DDPM, 3D UNet, and 3D GAN baselines, demonstrating superior denoising performance across all tested conditions. Additionally, the model's uncertainty maps exhibited lower variance, reflecting higher confidence in its outputs. The proposed 3D DDPM can effectively handle various clinical settings, including variations in dose levels, scanners, and tracers, establishing it as a promising foundational model for PET image denoising. The trained 3D DDPM model from this work can be used off the shelf by researchers as a whole-body PET image denoising solution. The code and model are available at https://github.com/Miche11eU/PET-Image-Denoising-Using-3D-Diffusion-Model.
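The forward-diffusion training objective the abstract describes can be sketched in a few lines of PyTorch: noise a clean PET patch at a random timestep and train the 3D network to predict the injected noise. The beta schedule, tensor shapes, and the model(x_t, t) call signature are assumptions, not the released training code.

```python
# Minimal sketch of the DDPM training objective for 3D PET patches.
# Schedule and shapes are illustrative; see the linked repository for the
# authors' actual implementation.
import torch

T = 1000
betas = torch.linspace(1e-4, 2e-2, T)
alphas_cum = torch.cumprod(1.0 - betas, dim=0)

def ddpm_loss(model, x0):
    """x0: clean PET patch, shape (B, 1, D, H, W)."""
    b = x0.shape[0]
    t = torch.randint(0, T, (b,), device=x0.device)
    noise = torch.randn_like(x0)
    a = alphas_cum.to(x0.device)[t].view(b, 1, 1, 1, 1)
    x_t = a.sqrt() * x0 + (1 - a).sqrt() * noise       # forward diffusion q(x_t | x_0)
    pred = model(x_t, t)                               # 3D network predicts the noise
    return torch.nn.functional.mse_loss(pred, noise)
```

At inference, the reverse process iterates from pure noise (or a noised low-dose input) back toward a clean volume using the same trained network.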

Influence of prior probability information on large language model performance in radiological diagnosis.

Fukushima T, Kurokawa R, Hagiwara A, Sonoda Y, Asari Y, Kurokawa M, Kanzawa J, Gonoi W, Abe O

PubMed · Jun 1, 2025
Large language models (LLMs) show promise in radiological diagnosis, but their performance may be affected by the context of the cases presented. Our purpose is to investigate how providing information about prior probabilities influences the diagnostic performance of an LLM in radiological quiz cases. We analyzed 322 consecutive cases from Radiology's "Diagnosis Please" quiz using Claude 3.5 Sonnet under three conditions: without context (Condition 1), informed as quiz cases (Condition 2), and presented as primary care cases (Condition 3). Diagnostic accuracy was compared using McNemar's test. The overall accuracy rate significantly improved in Condition 2 compared to Condition 1 (70.2% vs. 64.9%, p = 0.029). Conversely, the accuracy rate significantly decreased in Condition 3 compared to Condition 1 (59.9% vs. 64.9%, p = 0.027). Providing information that may influence prior probabilities significantly affects the diagnostic performance of the LLM in radiological cases. This suggests that LLMs may incorporate Bayesian-like principles and adjust the weighting of their diagnostic responses based on prior information, highlighting the potential for optimizing LLM's performance in clinical settings by providing relevant contextual information.
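The paired accuracy comparison can be reproduced in outline with McNemar's test from statsmodels, as sketched below; the per-case correctness arrays are placeholders rather than the study's data.

```python
# Minimal sketch of the paired accuracy comparison with McNemar's test.
# The correctness arrays are toy placeholders.
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

correct_c1 = np.array([True, False, True, True, False, True])   # without context
correct_c2 = np.array([True, True, True, False, False, True])   # told it is a quiz case

# 2x2 table of paired outcomes: rows = condition 1, columns = condition 2.
table = [
    [np.sum(correct_c1 & correct_c2),  np.sum(correct_c1 & ~correct_c2)],
    [np.sum(~correct_c1 & correct_c2), np.sum(~correct_c1 & ~correct_c2)],
]
print(mcnemar(table, exact=True))   # reports the test statistic and p-value
```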

Impact of contrast enhancement phase on CT-based radiomics analysis for predicting post-surgical recurrence in renal cell carcinoma.

Khene ZE, Bhanvadia R, Tachibana I, Sharma P, Trevino I, Graber W, Bertail T, Fleury R, Acosta O, De Crevoisier R, Bensalah K, Lotan Y, Margulis V

PubMed · Jun 1, 2025
To investigate the effect of CT enhancement phase on radiomics features for predicting post-surgical recurrence of clear cell renal cell carcinoma (ccRCC). This retrospective study included 144 patients who underwent radical or partial nephrectomy for ccRCC. Preoperative multiphase abdominal CT scans (non-contrast, corticomedullary, and nephrographic phases) were obtained for each patient. Automated segmentation of renal masses was performed using the nnU-Net framework. Radiomics signatures (RS) were developed for each phase using ensembles of machine learning-based models (Random Survival Forests [RSF], Survival Support Vector Machines [S-SVM], and Extreme Gradient Boosting [XGBoost]) with and without feature selection. Feature selection was performed using Affinity Propagation Clustering. The primary endpoint was disease-free survival, assessed by the concordance index (C-index). Radical and partial nephrectomies were performed in 81% and 19% of patients, respectively, with 81% of tumors classified as high grade. Disease recurrence occurred in 74 patients (51%). A total of 1,316 radiomics features were extracted per phase per patient. Without feature selection, C-index values for the RSF, S-SVM, XGBoost, and penalized Cox models ranged from 0.43 to 0.61 across phases. With Affinity Propagation feature selection, C-index values improved to 0.51-0.74, with the corticomedullary phase achieving the highest performance (C-index up to 0.74). These results indicate that radiomics analysis of corticomedullary-phase contrast-enhanced CT images may provide valuable predictive insight into recurrence risk for non-metastatic ccRCC following surgical resection. However, the lack of external validation is a limitation, and further studies are needed to confirm these findings in independent cohorts.
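As a rough sketch of the feature-selection and scoring pipeline, the code below clusters radiomics features with Affinity Propagation, keeps one exemplar per cluster, and evaluates a survival model by C-index; for brevity it swaps the RSF/S-SVM/XGBoost ensemble for a penalized Cox model, and the simulated feature matrix, follow-up times, and hyper-parameters are placeholders.

```python
# Minimal sketch: Affinity Propagation feature selection + C-index scoring.
# Data are simulated; a penalized Cox model stands in for the paper's ensemble.
import numpy as np
import pandas as pd
from sklearn.cluster import AffinityPropagation
from sklearn.preprocessing import StandardScaler
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

rng = np.random.default_rng(0)
X = rng.normal(size=(144, 1316))          # radiomics features for one CT phase
time = rng.uniform(1, 60, 144)            # months to recurrence or censoring
event = rng.integers(0, 2, 144)           # 1 = recurrence observed

# Cluster the *features* (transpose) and keep one exemplar per cluster.
Xz = StandardScaler().fit_transform(X)
ap = AffinityPropagation(random_state=0).fit(Xz.T)
exemplars = ap.cluster_centers_indices_
X_sel = Xz[:, exemplars]

# Toy survival fit on a handful of the selected exemplar features.
df = pd.DataFrame(X_sel[:, :10], columns=[f"f{i}" for i in range(10)])
df["time"], df["event"] = time, event
cph = CoxPHFitter(penalizer=0.1).fit(df, duration_col="time", event_col="event")
risk = cph.predict_partial_hazard(df)
# Negate the hazard so that higher scores correspond to longer recurrence-free survival.
print("C-index:", concordance_index(df["time"], -risk, df["event"]))
```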