
Application of Deep Learning Accelerated Image Reconstruction in T2-Weighted Turbo Spin-Echo Imaging of the Brain at 7T.

Liu Z, Zhou X, Tao S, Ma J, Nickel D, Liebig P, Mostapha M, Patel V, Westerhold EM, Mojahed H, Gupta V, Middlebrooks EH

PubMed · Jun 12, 2025
Prolonged imaging times and motion sensitivity at 7T necessitate advancements in image acceleration techniques. This study evaluates a 7T deep learning (DL)-based image reconstruction using a deep neural network trained on 7T data, applied to T2-weighted turbo spin-echo imaging. Raw k-space data from 30 consecutive clinical 7T brain MRI patients were reconstructed using both DL and standard methods. Qualitative assessments included overall image quality, artifacts, sharpness, structural conspicuity, and noise level, while quantitative metrics evaluated contrast-to-noise ratio (CNR) and image noise. DL-based reconstruction consistently outperformed standard methods across all qualitative metrics (P < .001), with a mean CNR increase of 50.8% (95% CI: 43.0%-58.6%) and a mean noise reduction of 35.1% (95% CI: 32.7%-37.6%). These findings demonstrate that DL-based reconstruction at 7T significantly enhances image quality without introducing adverse effects, offering a promising tool for addressing the challenges of ultra-high-field MRI.
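
A minimal sketch of the kind of two-ROI CNR measurement reported here; the ROI placement, the simulated noise levels, and the exact CNR formula are illustrative assumptions rather than the study's protocol.

```python
import numpy as np

def cnr(image: np.ndarray, signal_roi, background_roi) -> float:
    """Contrast-to-noise ratio between two rectangular ROIs, using the common
    definition |mean_signal - mean_background| / std_background. The study's
    exact ROI placement and CNR formula are not specified in the abstract."""
    signal = image[signal_roi]
    background = image[background_roi]
    return abs(signal.mean() - background.mean()) / background.std()

# Hypothetical standard vs. DL reconstructions of the same slice: identical
# anatomy (a bright square on a flat background), different noise levels.
rng = np.random.default_rng(0)
standard = rng.normal(100.0, 10.0, (256, 256))
dl_recon = rng.normal(100.0, 6.0, (256, 256))   # DL reconstruction: lower noise
for img in (standard, dl_recon):
    img[100:130, 100:130] += 50.0               # simulated high-signal structure

roi_signal = (slice(100, 130), slice(100, 130))
roi_background = (slice(10, 40), slice(10, 40))
print(f"standard CNR: {cnr(standard, roi_signal, roi_background):.2f}")
print(f"DL CNR:       {cnr(dl_recon, roi_signal, roi_background):.2f}")
```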

Improving the Robustness of Deep Learning Models in Predicting Hematoma Expansion from Admission Head CT.

Tran AT, Abou Karam G, Zeevi D, Qureshi AI, Malhotra A, Majidi S, Murthy SB, Park S, Kontos D, Falcone GJ, Sheth KN, Payabvash S

PubMed · Jun 12, 2025
Robustness against input data perturbations is essential for deploying deep learning models in clinical practice. Adversarial attacks involve subtle, voxel-level manipulations of scans to increase deep learning models' prediction errors. Testing deep learning model performance on adversarial images provides a measure of robustness, and including adversarial images in the training set can improve the model's robustness. In this study, we examined adversarial training and input modifications to improve the robustness of deep learning models in predicting hematoma expansion (HE) from admission head CTs of patients with acute intracerebral hemorrhage (ICH). We used a multicenter cohort of n = 890 patients for cross-validation/training, and a cohort of n = 684 consecutive patients with ICH from 2 stroke centers for independent validation. Fast gradient sign method (FGSM) and projected gradient descent (PGD) adversarial attacks were applied for training and testing. We developed and tested 4 different models to predict ≥3 mL, ≥6 mL, ≥9 mL, and ≥12 mL HE in an independent validation cohort, using the area under the receiver operating characteristic curve (AUC). We examined varying mixtures of adversarial and nonperturbed (clean) scans for training, as well as adding the hyperparameter-free Otsu multithreshold segmentation as model input. When deep learning models trained solely on clean scans were tested with PGD and FGSM adversarial images, the average HE prediction AUC decreased from 0.80 to 0.67 and 0.71, respectively. Overall, the best-performing strategy to improve model robustness was training with a 5:3 mix of clean and PGD adversarial scans plus the addition of Otsu multithreshold segmentation to the model input, which increased the average AUC to 0.77 against both PGD and FGSM adversarial attacks. Adversarial training with FGSM improved robustness against same-type attacks but offered limited cross-attack robustness against PGD-type images. Adversarial training and inclusion of threshold-based segmentation as an additional input can improve deep learning model robustness in predicting HE from admission head CTs in acute ICH.
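
FGSM, the simpler of the two attacks used, perturbs each voxel by a fixed step along the sign of the loss gradient. A minimal PyTorch sketch, in which the model, loss, and epsilon are placeholders rather than the study's configuration:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model: torch.nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 0.01) -> torch.Tensor:
    """Fast gradient sign method: x_adv = x + epsilon * sign(grad_x loss).
    Model, loss, and epsilon here are placeholders, not the study's setup."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.binary_cross_entropy_with_logits(model(x_adv), y)
    loss.backward()
    # One signed gradient step per voxel, clamped back to the valid range
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()
```

PGD iterates this step with projection back onto an epsilon-ball around the original scan; in the study's best configuration, training batches mixed clean and PGD-perturbed scans at a 5:3 ratio.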

CT derived fractional flow reserve: Part 2 - Critical appraisal of the literature.

Rodriguez-Lozano PF, Waheed A, Evangelou S, Kolossváry M, Shaikh K, Siddiqui S, Stipp L, Lakshmanan S, Wu EH, Nurmohamed NS, Orbach A, Baliyan V, de Matos JFRG, Trivedi SJ, Madan N, Villines TC, Ihdayhid AR

PubMed · Jun 12, 2025
The integration of computed tomography-derived fractional flow reserve (CT-FFR), utilizing computational fluid dynamics and artificial intelligence (AI) in routine coronary computed tomographic angiography (CCTA), presents a promising approach to enhance evaluations of functional lesion severity. Extensive evidence underscores the diagnostic accuracy, prognostic significance, and clinical relevance of CT-FFR, prompting recent clinical guidelines to recommend its combined use with CCTA for selected individuals with intermediate stenosis on CCTA and stable or acute chest pain. This manuscript critically examines the existing clinical evidence, evaluates the diagnostic performance, and outlines future perspectives for integrating noninvasive assessments of coronary anatomy and physiology. Furthermore, it serves as a practical guide for medical imaging professionals by addressing common pitfalls and challenges associated with CT-FFR while proposing potential solutions to facilitate its successful implementation in clinical practice.

Exploring the limit of image resolution for human expert classification of vascular ultrasound images in giant cell arteritis and healthy subjects: the GCA-US-AI project.

Bauer CJ, Chrysidis S, Dejaco C, Koster MJ, Kohler MJ, Monti S, Schmidt WA, Mukhtyar CB, Karakostas P, Milchert M, Ponte C, Duftner C, de Miguel E, Hocevar A, Iagnocco A, Terslev L, Døhn UM, Nielsen BD, Juche A, Seitz L, Keller KK, Karalilova R, Daikeler T, Mackie SL, Torralba K, van der Geest KSM, Boumans D, Bosch P, Tomelleri A, Aschwanden M, Kermani TA, Diamantopoulos A, Fredberg U, Inanc N, Petzinna SM, Albarqouni S, Behning C, Schäfer VS

PubMed · Jun 12, 2025
Prompt diagnosis of giant cell arteritis (GCA) with ultrasound is crucial for preventing severe ocular and other complications, yet expertise in ultrasound performance is scarce. The development of an artificial intelligence (AI)-based assistant that facilitates ultrasound image classification and helps to diagnose GCA early promises to close the existing gap. In anticipation of the planned AI, this study investigates the minimum image resolution required for human experts to reliably classify ultrasound images of arteries commonly affected by GCA for the presence or absence of GCA. Thirty-one international experts in GCA ultrasonography participated in a web-based exercise. They were asked to classify 10 ultrasound images for each of 5 vascular segments as GCA, normal, or not able to classify. The following segments were assessed: (1) superficial common temporal artery, (2) its frontal and (3) parietal branches (all in transverse view), (4) axillary artery in transverse view, and (5) axillary artery in longitudinal view. Identical images were shown at different resolutions, namely 32 × 32, 64 × 64, 128 × 128, 224 × 224, and 512 × 512 pixels, resulting in a total of 250 images to be classified by every study participant. Classification performance improved with increasing resolution up to a threshold, plateauing at 224 × 224 pixels. At 224 × 224 pixels, the overall classification sensitivity was 0.767 (95% CI, 0.737-0.796), and specificity was 0.862 (95% CI, 0.831-0.888). A resolution of 224 × 224 pixels ensures reliable human expert classification and aligns with the input requirements of many common AI-based architectures. Thus, the results of this study substantially guide the projected AI development.
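
The resolution ladder used in the exercise is easy to reproduce by downsampling; the resampling filter below is an assumption, since the abstract does not state how images were rescaled.

```python
from PIL import Image

RESOLUTIONS = [32, 64, 128, 224, 512]  # pixel sizes shown to the experts

def make_resolution_ladder(path: str) -> dict:
    """Return one square copy of an ultrasound frame per tested resolution.
    Bilinear resampling is an assumption; the study does not state the filter."""
    original = Image.open(path).convert("L")  # grayscale ultrasound frame
    return {r: original.resize((r, r), Image.BILINEAR) for r in RESOLUTIONS}
```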

Multimodal deep learning for enhanced breast cancer diagnosis on sonography.

Wei TR, Chang A, Kang Y, Patel M, Fang Y, Yan Y

PubMed · Jun 12, 2025
This study introduces a novel multimodal deep learning model tailored for differentiating benign and malignant breast masses using dual-view breast ultrasound images (radial and anti-radial views) in conjunction with the corresponding radiology reports. The proposed multimodal architecture includes specialized image and text encoders for independent feature extraction, along with a transformation layer that aligns the multimodal features for the subsequent classification task. The model achieved an area under the curve (AUC) of 85% and outperformed unimodal models by 6% and 8% in Youden index. Additionally, our multimodal model surpassed zero-shot predictions generated by prominent foundation models such as CLIP and MedCLIP. In direct comparison with classification results based on physician-assessed ratings, our model exhibited clear superiority, highlighting its practical significance in diagnostics. By integrating both image and text modalities, this study exemplifies the potential of multimodal deep learning to enhance diagnostic performance, laying the foundation for robust and transparent AI-assisted solutions.
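
A minimal PyTorch sketch of the described design (separate encoders plus a transformation layer before classification); the backbones and feature dimensions are placeholders, not the paper's implementation.

```python
import torch
import torch.nn as nn

class MultimodalClassifier(nn.Module):
    """Separate image and text encoders, a transformation layer aligning the
    two modalities, and a benign/malignant head. Dimensions are assumptions."""

    def __init__(self, image_encoder: nn.Module, text_encoder: nn.Module,
                 img_dim: int = 512, txt_dim: int = 768, shared_dim: int = 256):
        super().__init__()
        self.image_encoder = image_encoder  # e.g., a CNN over both US views
        self.text_encoder = text_encoder    # e.g., a transformer over the report
        # Transformation layer: project both modalities into a shared space
        self.img_proj = nn.Linear(img_dim, shared_dim)
        self.txt_proj = nn.Linear(txt_dim, shared_dim)
        self.classifier = nn.Linear(2 * shared_dim, 1)

    def forward(self, images: torch.Tensor, report_tokens: torch.Tensor) -> torch.Tensor:
        img_feat = self.img_proj(self.image_encoder(images))
        txt_feat = self.txt_proj(self.text_encoder(report_tokens))
        fused = torch.cat([img_feat, txt_feat], dim=-1)
        return self.classifier(fused)  # logit for malignancy
```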

NeuroEmo: A neuroimaging-based fMRI dataset to extract temporal affective brain dynamics for Indian movie video clips stimuli using dynamic functional connectivity approach with graph convolution neural network (DFC-GCNN).

Abgeena A, Garg S, Goyal N, P C JR

PubMed · Jun 12, 2025
Functional MRI (fMRI), a non-invasive neuroimaging technique, can detect emotional brain activation patterns. It allows researchers to observe functional changes in the brain, making it a valuable tool for emotion recognition. For improved emotion recognition systems, it is crucial to understand the neural mechanisms behind emotional processing in the brain. Although multiple such studies exist worldwide, research on fMRI-based emotion recognition within the Indian population remains scarce, limiting the generalizability of existing models. To address this gap, a culturally relevant neuroimaging dataset (https://openneuro.org/datasets/ds005700) has been created for identifying five emotional states (calm, afraid, delighted, depressed, and excited) in a diverse group of Indian participants. To ensure cultural relevance, emotional stimuli were derived from Bollywood movie clips. This study outlines the fMRI task design, experimental setup, data collection procedures, preprocessing steps, statistical analysis using the General Linear Model (GLM), and region-of-interest (ROI)-based dynamic functional connectivity (DFC) extraction using parcellation based on the Power et al. (2011) functional atlas. A supervised emotion classification model is proposed using a Graph Convolutional Neural Network (GCNN), where graph structures were constructed from DFC matrices at varying thresholds. The DFC-GCNN model achieved 95% classification accuracy across 5-fold cross-validation, highlighting emotion-specific connectivity dynamics in key affective regions, including the amygdala, prefrontal cortex, and anterior insula. These findings emphasize the significance of temporal variability in emotional state classification. By introducing a culturally specific neuroimaging dataset and a GCNN-based emotion recognition framework, this research enhances the applicability of graph-based models for identifying region-wise connectivity patterns in fMRI data. It also offers novel insights into cross-cultural differences in emotional processing at the neural level. Furthermore, the high spatial and temporal resolution of the fMRI dataset provides a valuable resource for future studies in emotional neuroscience and related disciplines.
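
The graph-construction step (thresholding DFC matrices into adjacency for the GCNN) can be sketched as below; the threshold value and the GCN-style normalization are illustrative assumptions, since the abstract only says thresholds were varied.

```python
import numpy as np

def dfc_to_graph(dfc: np.ndarray, threshold: float = 0.3) -> np.ndarray:
    """Turn one windowed dynamic functional connectivity matrix (ROIs x ROIs)
    into a normalized adjacency matrix for a GCNN, keeping edges whose
    absolute correlation exceeds the threshold. The 0.3 value is illustrative;
    the study sweeps several thresholds."""
    adjacency = (np.abs(dfc) > threshold).astype(np.float32)
    np.fill_diagonal(adjacency, 0.0)  # no self-loops from the raw correlations
    # Symmetric normalization D^-1/2 (A + I) D^-1/2, as in standard GCN layers
    a_hat = adjacency + np.eye(len(adjacency), dtype=np.float32)
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))
    return a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
```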

Accelerating Diffusion: Task-Optimized latent diffusion models for rapid CT denoising.

Jee J, Chang W, Kim E, Lee K

PubMed · Jun 12, 2025
Computed tomography (CT) systems are indispensable for diagnostics but pose risks due to radiation exposure. Low-dose CT (LDCT) mitigates these risks but introduces noise and artifacts that compromise diagnostic accuracy. While deep learning methods, such as convolutional neural networks (CNNs) and generative adversarial networks (GANs), have been applied to LDCT denoising, challenges persist, including difficulties in preserving fine details and risks of model collapse. Recently, the Denoising Diffusion Probabilistic Model (DDPM) has addressed the limitations of traditional methods and demonstrated exceptional performance across various tasks. Despite these advancements, its high computational cost during training and extended sampling time significantly hinder practical clinical applications. Additionally, DDPM's reliance on random Gaussian noise can reduce optimization efficiency and performance in task-specific applications. To overcome these challenges, this study proposes a novel LDCT denoising framework that integrates the Latent Diffusion Model (LDM) with the Cold Diffusion Process. LDM reduces computational costs by conducting the diffusion process in a low-dimensional latent space while preserving critical image features. The Cold Diffusion Process replaces Gaussian noise with a CT denoising task-specific degradation approach, enabling efficient denoising with fewer time steps. Experimental results demonstrate that the proposed method outperforms DDPM in key metrics, including PSNR, SSIM, and RMSE, while achieving up to 2× faster training and 14× faster sampling. These advancements highlight the proposed framework's potential as an effective and practical solution for real-world clinical applications.
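
A sketch of the cold-diffusion sampling loop in the spirit of Bansal et al., which this framework builds on; the restoration network and the CT-specific degradation operator are placeholders for the paper's components.

```python
import torch

def cold_diffusion_sample(x_T: torch.Tensor, restore, degrade,
                          num_steps: int) -> torch.Tensor:
    """Cold-diffusion sampling, sketched for the described LDCT setting.
    `degrade(x0, t)` applies t steps of the task-specific degradation (here,
    toward LDCT-like noise) and `restore(x, t)` is the learned network that
    predicts the clean latent. Both callables are placeholders."""
    x = x_T  # fully degraded latent (e.g., the encoded LDCT image)
    for t in range(num_steps, 0, -1):
        x0_hat = restore(x, t)
        # Improved cold-diffusion step: remove the current degradation,
        # then re-apply one step less of it
        x = x - degrade(x0_hat, t) + degrade(x0_hat, t - 1)
    return x  # approximately clean latent, ready for the LDM decoder
```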

PiPViT: Patch-based Visual Interpretable Prototypes for Retinal Image Analysis

Marzieh Oghbaie, Teresa Araújo, Hrvoje Bogunović

arXiv preprint · Jun 12, 2025
Background and Objective: Prototype-based methods improve interpretability by learning fine-grained part-prototypes; however, their visualization in the input pixel space is not always consistent with human-understandable biomarkers. In addition, well-known prototype-based approaches typically learn extremely granular prototypes that are less interpretable in medical imaging, where both the presence and extent of biomarkers and lesions are critical. Methods: To address these challenges, we propose PiPViT (Patch-based Visual Interpretable Prototypes), an inherently interpretable prototypical model for image recognition. Leveraging a vision transformer (ViT), PiPViT captures long-range dependencies among patches to learn robust, human-interpretable prototypes that approximate lesion extent using only image-level labels. Additionally, PiPViT benefits from contrastive learning and multi-resolution input processing, which enables effective localization of biomarkers across scales. Results: We evaluated PiPViT on retinal OCT image classification across four datasets, where it achieved competitive quantitative performance compared to state-of-the-art methods while delivering more meaningful explanations. Moreover, quantitative evaluation on a hold-out test set confirms that the learned prototypes are semantically and clinically relevant. We believe PiPViT can transparently explain its decisions and assist clinicians in understanding diagnostic outcomes. GitHub page: https://github.com/marziehoghbaie/PiPViT
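
A sketch of the generic patch-prototype matching idea underlying such models; PiPViT's actual scoring head may differ, so treat this only as orientation.

```python
import torch
import torch.nn.functional as F

def prototype_similarity_map(patch_tokens: torch.Tensor,
                             prototypes: torch.Tensor) -> torch.Tensor:
    """Cosine similarity between ViT patch embeddings (B, N, D) and learned
    prototypes (P, D), giving a per-prototype activation map over patches.
    This mirrors generic prototype matching; PiPViT's exact head may differ."""
    tokens = F.normalize(patch_tokens, dim=-1)
    protos = F.normalize(prototypes, dim=-1)
    return tokens @ protos.T  # (B, N, P): which patches activate which prototype
```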

Score-based Generative Diffusion Models to Synthesize Full-dose FDG Brain PET from MRI in Epilepsy Patients

Jiaqi Wu, Jiahong Ouyang, Farshad Moradi, Mohammad Mehdi Khalighi, Greg Zaharchuk

arXiv preprint · Jun 12, 2025
Fluorodeoxyglucose (FDG) PET to evaluate patients with epilepsy is one of the most common applications for simultaneous PET/MRI, given the need to image both brain structure and metabolism, but is suboptimal due to the radiation dose in this young population. Little work has been done synthesizing diagnostic-quality PET images from MRI data, or from MRI data with ultralow-dose PET, using advanced generative AI methods such as diffusion models, with attention to clinical evaluations tailored for the epilepsy population. Here we compared the performance of diffusion- and non-diffusion-based deep learning models for the MRI-to-PET image translation task for epilepsy imaging using simultaneous PET/MRI in 52 subjects (40 train/2 validate/10 hold-out test). We tested three different models: two score-based generative diffusion models (SGM-Karras Diffusion [SGM-KD] and SGM-variance preserving [SGM-VP]) and a Transformer-Unet. We report results on standard image processing metrics as well as clinically relevant metrics, including congruency measures (Congruence Index and Congruency Mean Absolute Error) that assess hemispheric metabolic asymmetry, which is a key part of the clinical analysis of these images. The SGM-KD produced the best qualitative and quantitative results when synthesizing PET purely from T1w and T2 FLAIR images, with the least mean absolute error in whole-brain specific uptake value ratio (SUVR) and the highest intraclass correlation coefficient. When 1% low-dose PET images are included in the inputs, all models improve significantly and are interchangeable for quantitative performance and visual quality. In summary, SGMs hold great potential for pure MRI-to-PET translation, while all 3 model types can synthesize full-dose FDG-PET accurately using MRI and ultralow-dose PET.
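
The congruency metrics target hemispheric metabolic asymmetry; a sketch of the standard asymmetry index that such measures build on follows. The paper's Congruence Index and Congruency MAE have their own definitions, so this shows only the underlying quantity.

```python
import numpy as np

def asymmetry_index(left_suvr: np.ndarray, right_suvr: np.ndarray) -> np.ndarray:
    """Per-region hemispheric asymmetry, 2*(L - R) / (L + R), a standard index
    in epilepsy FDG-PET reading. The paper compares such asymmetries between
    synthetic and true PET; its exact metric definitions are in the paper."""
    return 2.0 * (left_suvr - right_suvr) / (left_suvr + right_suvr)

def congruency_mae(ai_true: np.ndarray, ai_synth: np.ndarray) -> float:
    """Illustrative congruency-style check: mean absolute difference between
    asymmetry indices computed on true and synthesized PET."""
    return float(np.mean(np.abs(ai_true - ai_synth)))
```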

AI-based identification of patients who benefit from revascularization: a multicenter study

Zhang, W., Miller, R. J., Patel, K., Shanbhag, A., Liang, J., Lemley, M., Ramirez, G., Builoff, V., Yi, J., Zhou, J., Kavanagh, P., Acampa, W., Bateman, T. M., Di Carli, M. F., Dorbala, S., Einstein, A. J., Fish, M. B., Hauser, M. T., Ruddy, T., Kaufmann, P. A., Miller, E. J., Sharir, T., Martins, M., Halcox, J., Chareonthaitawee, P., Dey, D., Berman, D., Slomka, P.

medRxiv preprint · Jun 12, 2025
Background and Aims: Revascularization in stable coronary artery disease often relies on ischemia severity, but we introduce an AI-driven approach that uses clinical and imaging data to estimate individualized treatment effects and guide personalized decisions. Methods: Using a large, international registry from 13 centers, we developed an AI model to estimate individual treatment effects by simulating outcomes under alternative therapeutic strategies. The model was trained on an internal cohort constructed using 1:1 propensity score matching to emulate randomized controlled trials (RCTs), creating balanced patient pairs in which only the treatment strategy differed: early revascularization (defined as any procedure within 90 days of MPI) versus medical therapy. This design allowed the model to estimate individualized treatment effects, forming the basis for counterfactual reasoning at the patient level. We then derived the AI-REVASC score, which quantifies, for each patient, the potential benefit of early revascularization. The score was validated in the held-out testing cohort using Cox regression. Results: Of 45,252 patients, 19,935 (44.1%) were female; median age was 65 (IQR: 57-73). During a median follow-up of 3.6 years (IQR: 2.7-4.9), 4,323 (9.6%) experienced MI or death. The AI model identified a group (n=1,335, 5.9%) that benefits from early revascularization, with a propensity-adjusted hazard ratio of 0.50 (95% CI: 0.25-1.00). Patients identified for early revascularization had a higher prevalence of hypertension, diabetes, and dyslipidemia, and lower LVEF. Conclusions: This study pioneers a scalable, data-driven approach that emulates randomized trials using retrospective data. The AI-REVASC score enables precision revascularization decisions where guidelines and RCTs fall short.
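
The validation step (Cox regression of outcomes against treatment in the held-out cohort) can be sketched with lifelines; the data frame below is entirely hypothetical, and the propensity adjustment used in the study is omitted.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical held-out test frame: follow-up time, MI/death event flag,
# early-revascularization indicator, and the AI-REVASC benefit score.
df = pd.DataFrame({
    "time_years":      [3.2, 1.1, 4.8, 2.5, 0.9, 4.1, 2.2, 3.7],
    "mi_or_death":     [0,   1,   0,   1,   1,   0,   0,   1],
    "early_revasc":    [1,   0,   1,   0,   1,   1,   0,   0],
    "ai_revasc_score": [0.8, 0.2, 0.7, 0.1, 0.6, 0.9, 0.3, 0.4],
})

# Cox regression as in the validation step; covariate and propensity
# adjustment are left out of this sketch for brevity.
cph = CoxPHFitter()
cph.fit(df, duration_col="time_years", event_col="mi_or_death")
print(cph.summary[["coef", "exp(coef)"]])  # exp(coef) = hazard ratio
```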
