
Summary Report of the SNMMI AI Task Force Radiomics Challenge 2024.

Boellaard R, Rahmim A, Eertink JJ, Duehrsen U, Kurch L, Lugtenburg PJ, Wiegers SE, Zwezerijnen GJC, Zijlstra JM, Heymans MW, Buvat I

PubMed · Jun 12 2025
In medical imaging, challenges are competitions that aim to provide a fair comparison of different methodologic solutions to a common problem. Challenges typically focus on addressing real-world problems, such as segmentation, detection, and prediction tasks, using various types of medical images and associated data. Here, we describe the organization and results of such a challenge to compare machine-learning models for predicting survival in patients with diffuse large B-cell lymphoma using a baseline ¹⁸F-FDG PET/CT radiomics dataset. Methods: This challenge aimed to predict progression-free survival (PFS) in patients with diffuse large B-cell lymphoma, either as a binary outcome (shorter than 2 y versus longer than 2 y) or as a continuous outcome (survival in months). All participants were provided with a radiomic training dataset, including the ground truth survival, for designing a predictive model, and a radiomic test dataset without ground truth. Figures of merit (FOMs) used to assess model performance were the root-mean-square error for continuous outcomes and the C-index for 1-, 2-, and 3-y PFS binary outcomes. The challenge was endorsed and initiated by the Society of Nuclear Medicine and Molecular Imaging AI Task Force. Results: Nineteen models for predicting PFS as a continuous outcome were received from 15 teams. Among those models, external validation identified 6 models showing performance similar to that of a simple general linear reference model using SUV and total metabolic tumor volume (TMTV) only. Twelve models for predicting binary outcomes were submitted by 9 teams. External validation showed that 1 model had higher, but nonsignificant, C-index values compared with those obtained by a simple logistic regression model using SUV and TMTV. Conclusion: Some of the radiomic-based machine-learning models developed by participants showed better FOMs than did simple linear or logistic regression models based on SUV and TMTV only, although the differences in observed FOMs were nonsignificant. This suggests that, for the challenge dataset, there was limited or no added value from sophisticated radiomic features and machine learning when developing models for outcome prediction.
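The two figures of merit are standard and easy to reproduce; below is a minimal numpy sketch of both, assuming arrays of predicted and observed survival (the function names and the plain-Python pair loop are illustrative, not the challenge's evaluation code):

```python
import numpy as np

def rmse(pred_months, true_months):
    """Root-mean-square error for the continuous PFS predictions."""
    pred, true = np.asarray(pred_months), np.asarray(true_months)
    return np.sqrt(np.mean((pred - true) ** 2))

def c_index(risk, time, event):
    """Harrell's concordance index: among usable pairs, the fraction where
    the patient with the shorter observed PFS also has the higher predicted risk."""
    risk, time, event = map(np.asarray, (risk, time, event))
    concordant, usable = 0.0, 0
    for i in range(len(time)):
        if not event[i]:
            continue  # a pair is usable only if the earlier time is an observed event
        for j in range(len(time)):
            if time[i] < time[j]:
                usable += 1
                if risk[i] > risk[j]:
                    concordant += 1.0
                elif risk[i] == risk[j]:
                    concordant += 0.5  # ties in predicted risk count as half
    return concordant / usable
```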

Application of Deep Learning Accelerated Image Reconstruction in T2-Weighted Turbo Spin-Echo Imaging of the Brain at 7T.

Liu Z, Zhou X, Tao S, Ma J, Nickel D, Liebig P, Mostapha M, Patel V, Westerhold EM, Mojahed H, Gupta V, Middlebrooks EH

PubMed · Jun 12 2025
Prolonged imaging times and motion sensitivity at 7T necessitate advancements in image acceleration techniques. This study evaluates a deep learning (DL)-based image reconstruction, using a deep neural network trained on 7T data, applied to T2-weighted turbo spin-echo imaging. Raw k-space data from 30 consecutive clinical 7T brain MRI patients were reconstructed using both DL and standard methods. Qualitative assessments included overall image quality, artifacts, sharpness, structural conspicuity, and noise level, while quantitative metrics evaluated contrast-to-noise ratio (CNR) and image noise. DL-based reconstruction consistently outperformed standard methods across all qualitative metrics (P < .001), with a mean CNR increase of 50.8% (95% CI: 43.0%-58.6%) and a mean noise reduction of 35.1% (95% CI: 32.7%-37.6%). These findings demonstrate that DL-based reconstruction at 7T significantly enhances image quality without introducing adverse effects, offering a promising tool for addressing the challenges of ultra-high-field MRI.
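CNR and noise comparisons of this kind typically reduce to region-of-interest statistics; a hedged sketch, assuming two ROI masks and using the background standard deviation as the noise estimate (the ROI choices and noise definition are assumptions, not the authors' measurement protocol):

```python
import numpy as np

def roi_stats(image, mask):
    """Mean and standard deviation of the voxels inside a boolean ROI mask."""
    vals = image[mask]
    return vals.mean(), vals.std()

def cnr(image, tissue_mask, background_mask):
    """Contrast-to-noise ratio between two ROIs, taking the background
    standard deviation as the noise estimate."""
    mu_t, _ = roi_stats(image, tissue_mask)
    mu_b, sigma_b = roi_stats(image, background_mask)
    return abs(mu_t - mu_b) / sigma_b
```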

Improving the Robustness of Deep Learning Models in Predicting Hematoma Expansion from Admission Head CT.

Tran AT, Abou Karam G, Zeevi D, Qureshi AI, Malhotra A, Majidi S, Murthy SB, Park S, Kontos D, Falcone GJ, Sheth KN, Payabvash S

PubMed · Jun 12 2025
Robustness against input data perturbations is essential for deploying deep learning models in clinical practice. Adversarial attacks involve subtle, voxel-level manipulations of scans that increase deep learning models' prediction errors. Testing model performance on adversarial examples provides a measure of robustness, and including adversarial images in the training set can improve it. In this study, we examined adversarial training and input modifications to improve the robustness of deep learning models in predicting hematoma expansion (HE) from admission head CTs of patients with acute intracerebral hemorrhage (ICH). We used a multicenter cohort of n = 890 patients for cross-validation/training and a cohort of n = 684 consecutive patients with ICH from 2 stroke centers for independent validation. Fast gradient sign method (FGSM) and projected gradient descent (PGD) adversarial attacks were applied for training and testing. We developed and tested 4 different models to predict ≥3 mL, ≥6 mL, ≥9 mL, and ≥12 mL HE in an independent validation cohort, using the area under the receiver operating characteristic curve (AUC). We examined varying mixtures of adversarial and nonperturbed (clean) scans for training, as well as adding the hyperparameter-free Otsu multithreshold segmentation as a model input. When deep learning models trained solely on clean scans were tested with PGD and FGSM adversarial images, the average HE prediction AUC decreased from 0.8 to 0.67 and 0.71, respectively. Overall, the best-performing strategy to improve model robustness was training with a 5:3 mix of clean and PGD adversarial scans and adding the Otsu multithreshold segmentation to the model input, increasing the average AUC to 0.77 against both PGD and FGSM adversarial attacks. Adversarial training with FGSM improved robustness against same-type attacks but offered limited cross-attack robustness against PGD-type images. Adversarial training and inclusion of threshold-based segmentation as an additional input can improve deep learning model robustness in predicting HE from admission head CTs in acute ICH.
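FGSM and PGD are standard gradient-based attacks; a minimal PyTorch sketch under an assumed model/loss interface (epsilon, step size, and iteration count are illustrative, not the paper's settings):

```python
import torch

def fgsm(model, x, y, loss_fn, eps):
    """Single-step attack: perturb each voxel by eps in the sign direction
    of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()

def pgd(model, x, y, loss_fn, eps, alpha=None, steps=10):
    """Iterated FGSM with projection back into the eps-ball around x."""
    alpha = alpha if alpha is not None else eps / 4
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        loss.backward()
        with torch.no_grad():
            x_adv = x_adv + alpha * x_adv.grad.sign()
            x_adv = x + torch.clamp(x_adv - x, -eps, eps)  # project onto the eps-ball
        x_adv = x_adv.detach()
    return x_adv
```

For adversarial training, scans attacked this way would simply be mixed with clean scans in the training batches, e.g., in the 5:3 clean-to-PGD ratio the study found best.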

CT derived fractional flow reserve: Part 2 - Critical appraisal of the literature.

Rodriguez-Lozano PF, Waheed A, Evangelou S, Kolossváry M, Shaikh K, Siddiqui S, Stipp L, Lakshmanan S, Wu EH, Nurmohamed NS, Orbach A, Baliyan V, de Matos JFRG, Trivedi SJ, Madan N, Villines TC, Ihdayhid AR

PubMed · Jun 12 2025
The integration of computed tomography-derived fractional flow reserve (CT-FFR), utilizing computational fluid dynamics and artificial intelligence (AI) in routine coronary computed tomographic angiography (CCTA), presents a promising approach to enhance evaluations of functional lesion severity. Extensive evidence underscores the diagnostic accuracy, prognostic significance, and clinical relevance of CT-FFR, prompting recent clinical guidelines to recommend its combined use with CCTA for selected individuals with intermediate stenosis on CCTA and stable or acute chest pain. This manuscript critically examines the existing clinical evidence, evaluates the diagnostic performance, and outlines future perspectives for integrating noninvasive assessments of coronary anatomy and physiology. Furthermore, it serves as a practical guide for medical imaging professionals by addressing common pitfalls and challenges associated with CT-FFR while proposing potential solutions to facilitate its successful implementation in clinical practice.

NeuroEmo: A neuroimaging-based fMRI dataset to extract temporal affective brain dynamics for Indian movie video clips stimuli using dynamic functional connectivity approach with graph convolution neural network (DFC-GCNN).

Abgeena A, Garg S, Goyal N, P C JR

PubMed · Jun 12 2025
Functional MRI (fMRI), a non-invasive neuroimaging technique, can detect emotional brain activation patterns. It allows researchers to observe functional changes in the brain, making it a valuable tool for emotion recognition. Improved emotion recognition systems require an understanding of the neural mechanisms behind emotional processing in the brain. Multiple studies worldwide have addressed this; however, research on fMRI-based emotion recognition within the Indian population remains scarce, limiting the generalizability of existing models. To address this gap, a culturally relevant neuroimaging dataset (https://openneuro.org/datasets/ds005700) has been created for identifying five emotional states (calm, afraid, delighted, depressed, and excited) in a diverse group of Indian participants. To ensure cultural relevance, emotional stimuli were derived from Bollywood movie clips. This study outlines the fMRI task design, experimental setup, data collection procedures, preprocessing steps, statistical analysis using the General Linear Model (GLM), and region-of-interest (ROI)-based dynamic functional connectivity (DFC) extraction using parcellation based on the Power et al. (2011) functional atlas. A supervised emotion classification model is proposed using a Graph Convolutional Neural Network (GCNN), where graph structures are constructed from DFC matrices at varying thresholds. The DFC-GCNN model achieved 95% classification accuracy across 5-fold cross-validation, highlighting emotion-specific connectivity dynamics in key affective regions, including the amygdala, prefrontal cortex, and anterior insula. These findings emphasize the significance of temporal variability in emotional state classification. By introducing a culturally specific neuroimaging dataset and a GCNN-based emotion recognition framework, this research enhances the applicability of graph-based models for identifying region-wise connectivity patterns in fMRI data. It also offers novel insights into cross-cultural differences in emotional processing at the neural level. Furthermore, the high spatial and temporal resolution of the fMRI dataset provides a valuable resource for future studies in emotional neuroscience and related disciplines.
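The graph-construction step, turning a dynamic functional connectivity matrix into GCNN input, can be sketched as follows; the sliding-window Pearson correlation and the threshold value are assumptions about the pipeline, not the paper's exact parameters:

```python
import numpy as np

def dfc_windows(ts, win, step):
    """Sliding-window correlation: ts is (timepoints, ROIs); returns one
    ROI-by-ROI Pearson correlation matrix per window."""
    return [np.corrcoef(ts[s:s + win].T)
            for s in range(0, len(ts) - win + 1, step)]

def to_adjacency(corr, thresh=0.3):
    """Binarize a connectivity matrix at a given threshold to obtain
    graph edges; self-loops are removed."""
    adj = (np.abs(corr) >= thresh).astype(float)
    np.fill_diagonal(adj, 0.0)
    return adj
```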

Accelerating Diffusion: Task-Optimized latent diffusion models for rapid CT denoising.

Jee J, Chang W, Kim E, Lee K

PubMed · Jun 12 2025
Computed tomography (CT) systems are indispensable for diagnostics but pose risks due to radiation exposure. Low-dose CT (LDCT) mitigates these risks but introduces noise and artifacts that compromise diagnostic accuracy. While deep learning methods, such as convolutional neural networks (CNNs) and generative adversarial networks (GANs), have been applied to LDCT denoising, challenges persist, including difficulties in preserving fine details and risks of model collapse. Recently, the Denoising Diffusion Probabilistic Model (DDPM) has addressed the limitations of traditional methods and demonstrated exceptional performance across various tasks. Despite these advancements, its high computational cost during training and extended sampling time significantly hinder practical clinical applications. Additionally, DDPM's reliance on random Gaussian noise can reduce optimization efficiency and performance in task-specific applications. To overcome these challenges, this study proposes a novel LDCT denoising framework that integrates the Latent Diffusion Model (LDM) with the Cold Diffusion Process. LDM reduces computational costs by conducting the diffusion process in a low-dimensional latent space while preserving critical image features. The Cold Diffusion Process replaces Gaussian noise with a CT denoising task-specific degradation approach, enabling efficient denoising with fewer time steps. Experimental results demonstrate that the proposed method outperforms DDPM in key metrics, including PSNR, SSIM, and RMSE, while achieving up to 2× faster training and 14× faster sampling. These advancements highlight the proposed framework's potential as an effective and practical solution for real-world clinical applications.
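The Cold Diffusion Process replaces stochastic Gaussian corruption with a deterministic, task-specific degradation. A hedged sketch of the generic cold-diffusion sampling loop (in the style of Bansal et al.; `restore` and `degrade` are placeholders for the paper's LDCT-specific restoration network and degradation operator):

```python
def cold_diffusion_sample(x_T, restore, degrade, T):
    """Generic cold-diffusion sampling: at each step, estimate the clean
    image, then swap degradation level t for t-1. `restore(x, t)` and
    `degrade(x0, t)` are task-specific placeholders, not this paper's code."""
    x = x_T
    for t in range(T, 0, -1):
        x0_hat = restore(x, t)                               # model's clean estimate
        x = x - degrade(x0_hat, t) + degrade(x0_hat, t - 1)  # re-degrade to level t-1
    return x
```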

PiPViT: Patch-based Visual Interpretable Prototypes for Retinal Image Analysis

Marzieh Oghbaie, Teresa Araújo, Hrvoje Bogunović

arXiv preprint · Jun 12 2025
Background and Objective: Prototype-based methods improve interpretability by learning fine-grained part-prototypes; however, their visualization in the input pixel space is not always consistent with human-understandable biomarkers. In addition, well-known prototype-based approaches typically learn extremely granular prototypes that are less interpretable in medical imaging, where both the presence and extent of biomarkers and lesions are critical. Methods: To address these challenges, we propose PiPViT (Patch-based Visual Interpretable Prototypes), an inherently interpretable prototypical model for image recognition. Leveraging a vision transformer (ViT), PiPViT captures long-range dependencies among patches to learn robust, human-interpretable prototypes that approximate lesion extent using only image-level labels. Additionally, PiPViT benefits from contrastive learning and multi-resolution input processing, which enables effective localization of biomarkers across scales. Results: We evaluated PiPViT on retinal OCT image classification across four datasets, where it achieved competitive quantitative performance compared to state-of-the-art methods while delivering more meaningful explanations. Moreover, quantitative evaluation on a hold-out test set confirms that the learned prototypes are semantically and clinically relevant. We believe PiPViT can transparently explain its decisions and assist clinicians in understanding diagnostic outcomes. GitHub page: https://github.com/marziehoghbaie/PiPViT

Score-based Generative Diffusion Models to Synthesize Full-dose FDG Brain PET from MRI in Epilepsy Patients

Jiaqi Wu, Jiahong Ouyang, Farshad Moradi, Mohammad Mehdi Khalighi, Greg Zaharchuk

arXiv preprint · Jun 12 2025
Fluorodeoxyglucose (FDG) PET to evaluate patients with epilepsy is one of the most common applications for simultaneous PET/MRI, given the need to image both brain structure and metabolism, but is suboptimal due to the radiation dose in this young population. Little work has been done synthesizing diagnostic quality PET images from MRI data or MRI data with ultralow-dose PET using advanced generative AI methods, such as diffusion models, with attention to clinical evaluations tailored for the epilepsy population. Here we compared the performance of diffusion- and non-diffusion-based deep learning models for the MRI-to-PET image translation task for epilepsy imaging using simultaneous PET/MRI in 52 subjects (40 train/2 validate/10 hold-out test). We tested three different models: two score-based generative diffusion models (SGM-Karras Diffusion [SGM-KD] and SGM-variance preserving [SGM-VP]) and a Transformer-Unet. We report results on standard image processing metrics as well as clinically relevant metrics, including congruency measures (Congruence Index and Congruency Mean Absolute Error) that assess hemispheric metabolic asymmetry, which is a key part of the clinical analysis of these images. The SGM-KD produced the best qualitative and quantitative results when synthesizing PET purely from T1w and T2 FLAIR images, with the least mean absolute error in whole-brain specific uptake value ratio (SUVR) and the highest intraclass correlation coefficient. When 1% low-dose PET images are included in the inputs, all models improve significantly and are interchangeable for quantitative performance and visual quality. In summary, SGMs hold great potential for pure MRI-to-PET translation, while all three model types can synthesize full-dose FDG-PET accurately using MRI and ultralow-dose PET.
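The congruency metrics target hemispheric metabolic asymmetry, which is commonly summarized with a left-right asymmetry index over homologous regions; a minimal sketch under that common convention (not necessarily the authors' exact definitions):

```python
import numpy as np

def asymmetry_index(left_suvr, right_suvr):
    """Percent left-right asymmetry per homologous region pair."""
    left, right = np.asarray(left_suvr), np.asarray(right_suvr)
    return 200.0 * (left - right) / (left + right)

def congruency_mae(ai_synth, ai_true):
    """Mean absolute error between asymmetry indices of synthetic and
    acquired PET - an assumption about how the congruency error is scored."""
    return np.mean(np.abs(np.asarray(ai_synth) - np.asarray(ai_true)))
```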

AI-based identification of patients who benefit from revascularization: a multicenter study

Zhang, W., Miller, R. J., Patel, K., Shanbhag, A., Liang, J., Lemley, M., Ramirez, G., Builoff, V., Yi, J., Zhou, J., Kavanagh, P., Acampa, W., Bateman, T. M., Di Carli, M. F., Dorbala, S., Einstein, A. J., Fish, M. B., Hauser, M. T., Ruddy, T., Kaufmann, P. A., Miller, E. J., Sharir, T., Martins, M., Halcox, J., Chareonthaitawee, P., Dey, D., Berman, D., Slomka, P.

medRxiv preprint · Jun 12 2025
Background and Aims: Revascularization in stable coronary artery disease often relies on ischemia severity, but we introduce an AI-driven approach that uses clinical and imaging data to estimate individualized treatment effects and guide personalized decisions. Methods: Using a large, international registry from 13 centers, we developed an AI model to estimate individual treatment effects by simulating outcomes under alternative therapeutic strategies. The model was trained on an internal cohort constructed using 1:1 propensity score matching to emulate randomized controlled trials (RCTs), creating balanced patient pairs in which only the treatment strategy differed: early revascularization (defined as any procedure within 90 days of MPI) versus medical therapy. This design allowed the model to estimate individualized treatment effects, forming the basis for counterfactual reasoning at the patient level. We then derived the AI-REVASC score, which quantifies, for each patient, the potential benefit of early revascularization. The score was validated in the held-out testing cohort using Cox regression. Results: Of 45,252 patients, 19,935 (44.1%) were female; median age was 65 (IQR: 57-73). During a median follow-up of 3.6 years (IQR: 2.7-4.9), 4,323 (9.6%) experienced MI or death. The AI model identified a group (n=1,335, 5.9%) that benefits from early revascularization, with a propensity-adjusted hazard ratio of 0.50 (95% CI: 0.25-1.00). Patients identified for early revascularization had a higher prevalence of hypertension, diabetes, and dyslipidemia, and lower LVEF. Conclusions: This study pioneers a scalable, data-driven approach that emulates randomized trials using retrospective data. The AI-REVASC score enables precision revascularization decisions where guidelines and RCTs fall short.
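The trial-emulation step rests on 1:1 propensity score matching; a hedged scikit-learn sketch of greedy nearest-neighbor matching without replacement (the covariates, matching rule, and absence of a caliper are assumptions about the implementation):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def propensity_match(X, treated):
    """Greedy 1:1 nearest-neighbor matching on the propensity score.
    X: (n, features) covariates; treated: boolean array. Returns (treated,
    control) index pairs matched without replacement."""
    ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
    t_idx = np.where(treated)[0]
    c_idx = list(np.where(~treated)[0])
    pairs = []
    for i in t_idx:
        j = min(c_idx, key=lambda k: abs(ps[k] - ps[i]))  # closest control by score
        pairs.append((i, j))
        c_idx.remove(j)  # each control is used at most once
    return pairs
```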

A machine learning approach for personalized breast radiation dosimetry in CT: Integrating radiomics and deep neural networks.

Tzanis E, Stratakis J, Damilakis J

PubMed · Jun 11 2025
To develop a machine learning-based workflow for patient-specific breast radiation dosimetry in CT. Two hundred eighty-six chest CT examinations, with corresponding right and left breast contours, were retrospectively collected from the radiotherapy department at our institution to develop and validate breast segmentation U-Nets. Additionally, Monte Carlo simulations were performed for each CT scan to determine radiation doses to the breasts. The derived breast doses, along with predictors such as X-ray tube current and radiomic features, were then used to train deep neural networks (DNNs) for breast dose prediction. The breast segmentation models achieved a mean dice similarity coefficient of 0.92, with precision and sensitivity scores above 0.90 for both breasts, indicating high segmentation accuracy. The DNNs demonstrated close alignment with ground truth values, with mean predicted doses of 5.05 ± 0.50 mGy for the right breast and 5.06 ± 0.55 mGy for the left breast, compared to ground truth values of 5.03 ± 0.57 mGy and 5.02 ± 0.61 mGy, respectively. The mean absolute percentage errors were 4.01 % (range: 3.90 %-4.12 %) for the right breast and 4.82 % (range: 4.56 %-5.11 %) for the left breast. The mean inference time was 30.2 ± 4.3 s. Statistical analysis showed no significant differences between predicted and actual doses (p ≥ 0.07). This study presents an automated, machine learning-based workflow for breast radiation dosimetry in CT, integrating segmentation and dose prediction models. The models and code are available at: https://github.com/eltzanis/ML-based-Breast-Radiation-Dosimetry-in-CT.
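The reported metrics are standard; a minimal numpy sketch of the Dice similarity coefficient and the mean absolute percentage error (array names and shapes are illustrative):

```python
import numpy as np

def dice(pred_mask, true_mask):
    """Dice similarity coefficient between two binary segmentation masks."""
    pred, true = pred_mask.astype(bool), true_mask.astype(bool)
    return 2.0 * np.logical_and(pred, true).sum() / (pred.sum() + true.sum())

def mape(pred_dose, true_dose):
    """Mean absolute percentage error of predicted breast doses (mGy)."""
    pred, true = np.asarray(pred_dose), np.asarray(true_dose)
    return 100.0 * np.mean(np.abs(pred - true) / true)
```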