Page 69 of 3993982 results

Deep Diffusion MRI Template (DDTemplate): A Novel Deep Learning Groupwise Diffusion MRI Registration Method for Brain Template Creation.

Wang J, Zhu X, Zhang W, Du M, Wells WM, O'Donnell LJ, Zhang F

PubMed · Jul 26 2025
Diffusion MRI (dMRI) is an advanced imaging technique that enables in-vivo tracking of white matter fiber tracts and estimates the underlying cellular microstructure of brain tissues. Groupwise registration of dMRI data from multiple individuals is an important task for brain template creation and investigation of inter-subject brain variability. However, groupwise registration is a challenging task due to the uniqueness of dMRI data, which includes multi-dimensional, orientation-dependent signals that describe not only the strength but also the orientation of water diffusion in brain tissues. Deep learning approaches have shown successful performance in standard subject-to-subject dMRI registration; however, no deep learning methods have yet been proposed for groupwise dMRI registration. In this work, we propose Deep Diffusion MRI Template (DDTemplate), a novel deep-learning-based method building upon the popular VoxelMorph framework to take into account dMRI fiber tract information. DDTemplate enables joint usage of whole-brain tissue microstructure and tract-specific fiber orientation information to ensure alignment of white matter fiber tracts and whole-brain anatomical structures. We propose a novel deep learning framework that simultaneously trains a groupwise dMRI registration network and generates a population brain template. During inference, the trained model can be applied to register unseen subjects to the learned template. We compare DDTemplate with several state-of-the-art registration methods and demonstrate superior performance on dMRI data from multiple cohorts (adolescents, young adults, and elderly adults) acquired from different scanners. Furthermore, as a testbed task, we perform a between-population analysis to investigate sex differences in the brain, using the popular Tract-Based Spatial Statistics (TBSS) method that relies on groupwise dMRI registration.
We find that using DDTemplate can increase the sensitivity in population difference detection, showing the potential of our method's utility in real neuroscientific applications.
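The groupwise idea, stripped of the learned network, is that the subject-to-template deformations and the template itself are estimated jointly, with the template taken as the average of the warped subjects. A toy NumPy sketch of one such update, where the hypothetical "deformation" is just an integer shift rather than a learned dense field:

```python
import numpy as np

def warp_1d(image, shift):
    """Apply an integer translation (a stand-in for a learned deformation field)."""
    return np.roll(image, shift)

def groupwise_template_step(images, shifts):
    """One iteration of joint registration + template estimation:
    warp each subject toward common space, then average."""
    warped = [warp_1d(img, s) for img, s in zip(images, shifts)]
    return np.mean(warped, axis=0)

# Three "subjects": the same signal at different offsets.
base = np.zeros(32)
base[10:14] = 1.0
subjects = [np.roll(base, k) for k in (-2, 0, 3)]

# Shifts that undo each offset (in DDTemplate these come from the network).
template = groupwise_template_step(subjects, [2, 0, -3])
```

When the estimated deformations exactly invert each subject's offset, the averaged template recovers the shared anatomy; in practice the shifts and the template are refined alternately.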

Optimization of deep learning models for inference in low resource environments.

Thakur S, Pati S, Wu J, Panchumarthy R, Karkada D, Kozlov A, Shamporov V, Suslov A, Lyakhov D, Proshin M, Shah P, Makris D, Bakas S

PubMed · Jul 26 2025
Artificial Intelligence (AI), and particularly deep learning (DL), has shown great promise to revolutionize healthcare. However, clinical translation is often hindered by demanding hardware requirements. In this study, we assess the effectiveness of optimization techniques for DL models in healthcare applications, targeting varying AI workloads across the domains of radiology, histopathology, and medical RGB imaging, while evaluating across hardware configurations. The assessed AI workloads cover both segmentation and classification tasks: brain extraction in Magnetic Resonance Imaging (MRI), colorectal cancer delineation in Hematoxylin & Eosin (H&E) stained digitized tissue sections, and diabetic foot ulcer classification in RGB images. We quantitatively evaluate model performance in terms of model runtime during inference (including speedup, latency, and memory usage) and model utility on unseen data. Our results demonstrate that optimization techniques can substantially improve model runtime without compromising model utility. These findings suggest that optimization techniques can facilitate the clinical translation of AI models in low-resource environments, making them more practical for real-world healthcare applications even in underserved regions.
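The abstract does not enumerate the specific optimizations used, but a representative technique from this family is post-training weight quantization, which shrinks storage 4x and speeds up inference at a small accuracy cost. A minimal sketch of symmetric per-tensor int8 quantization (an assumed, generic scheme, not necessarily the one in the study):

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor int8 quantization: map floats into [-127, 127]
    using a single scale derived from the largest absolute weight."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights for computation."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal(1000).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# int8 storage is 4x smaller than float32; round-to-nearest keeps the
# reconstruction error within half a quantization step.
max_err = np.abs(w - w_hat).max()
```

Per-channel scales, calibration on representative data, and quantization-aware fine-tuning are the usual refinements when this simple scheme loses too much utility.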

CLT-MambaSeg: An integrated model of Convolution, Linear Transformer and Multiscale Mamba for medical image segmentation.

Uppal D, Prakash S

PubMed · Jul 26 2025
Recent advances in deep learning have significantly enhanced the performance of medical image segmentation. However, maintaining a balanced integration of feature localization, global context modeling, and computational efficiency remains a critical research challenge. Convolutional Neural Networks (CNNs) effectively capture fine-grained local features through hierarchical convolutions; however, they often struggle to model long-range dependencies due to their limited receptive field. Transformers address this limitation by leveraging self-attention mechanisms to capture global context, but they are computationally intensive and require large-scale data for effective training. The Mamba architecture has emerged as a promising alternative, effectively capturing long-range dependencies while maintaining low computational overhead and high segmentation accuracy. Building on this, we propose CLT-MambaSeg, a method that integrates Convolution, Linear Transformer, and Multiscale Mamba architectures to capture local features, model global context, and improve computational efficiency for medical image segmentation. It utilizes a convolution-based Spatial Representation Extraction (SREx) module to capture intricate spatial relationships and dependencies, and a Mamba Vision Linear Transformer (MVLTrans) module to capture multiscale context, spatial and sequential dependencies, and enhanced global context. In addition, to address the problem of limited data, we propose a novel Memory-Guided Augmentation Generative Adversarial Network (MeGA-GAN) that generates realistic synthetic images to further enhance segmentation performance. We conduct extensive experiments and ablation studies on five benchmark datasets: CVC-ClinicDB, Breast UltraSound Images (BUSI), PH2, and two datasets from the International Skin Imaging Collaboration (ISIC), ISIC-2016 and ISIC-2017.
Experimental results demonstrate the efficacy of the proposed CLT-MambaSeg compared to other state-of-the-art methods.
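The MVLTrans module itself is specific to the paper, but the linear-transformer ingredient it builds on is generic: replacing softmax attention with a positive kernel feature map makes the cost linear in sequence length, since keys and values are summarized once instead of forming an n-by-n attention matrix. A minimal sketch with an assumed elu+1-style feature map:

```python
import numpy as np

def linear_attention(q, k, v, eps=1e-6):
    """Kernelized linear attention: out = phi(q) @ (phi(k)^T v) / normalizer.
    Cost is O(n * d^2) in sequence length n, versus O(n^2 * d) for softmax."""
    phi = lambda x: np.maximum(x, 0) + 1.0   # positive feature map (elu+1 style)
    qp, kp = phi(q), phi(k)
    kv = kp.T @ v                  # (d, d_v): keys/values summarized once
    z = qp @ kp.sum(axis=0)        # per-query normalizer, shape (n,)
    return (qp @ kv) / (z[:, None] + eps)

rng = np.random.default_rng(0)
n, d = 16, 8
q, k, v = (rng.standard_normal((n, d)) for _ in range(3))
out = linear_attention(q, k, v)
```

Because `kv` and the normalizer are independent of the query index, long sequences (e.g. high-resolution feature maps flattened to tokens) stay affordable.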

KC-UNIT: Multi-kernel conversion using unpaired image-to-image translation with perceptual guidance in chest computed tomography imaging.

Choi C, Kim D, Park S, Lee H, Kim H, Lee SM, Kim N

PubMed · Jul 26 2025
Computed tomography (CT) images are reconstructed from raw datasets including sinogram using various convolution kernels through back projection. Kernels are typically chosen depending on the anatomical structure being imaged and the specific purpose of the scan, balancing the trade-off between image sharpness and pixel noise. Generally, a sinogram requires large storage capacity, and storage space is often limited in clinical settings. Thus, CT images are generally reconstructed with only one specific kernel in clinical settings, and the sinogram is typically discarded after a week. Therefore, many researchers have proposed deep learning-based image-to-image translation methods for CT kernel conversion. However, transferring the style of the target kernel while preserving anatomical structure remains challenging, particularly when translating CT images from a source domain to a target domain in an unpaired manner, which is often encountered in real-world settings. Thus, we propose a novel kernel conversion method using unpaired image-to-image translation (KC-UNIT). This approach utilizes discriminator regularization, using feature maps from the generator to improve semantic representation learning. To capture content and style features, cosine similarity content and contrastive style losses were defined between the feature map of generator and semantic label map of discriminator. This can be easily incorporated by modifying the discriminator's architecture without requiring any additional learnable or pre-trained networks. The KC-UNIT demonstrated the ability to preserve fine-grained anatomical structure from the source domain during transfer. Our method outperformed existing generative adversarial network-based methods across most kernel conversion methods in three kernel domains. The code is available at https://github.com/cychoi97/KC-UNIT.
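In KC-UNIT the content and style losses are computed between generator feature maps and the discriminator's semantic label maps; as a simplified stand-in for the content side, a cosine-similarity loss between two flattened feature tensors looks like this (a generic sketch, not the paper's exact formulation):

```python
import numpy as np

def cosine_content_loss(f_src, f_gen, eps=1e-8):
    """1 - cosine similarity between flattened feature maps: near 0 when the
    generated image preserves the source's semantic content, near 2 when
    the features point in opposite directions."""
    a, b = f_src.ravel(), f_gen.ravel()
    cos = (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps)
    return 1.0 - cos

# Toy feature maps: (channels, height, width)
rng = np.random.default_rng(0)
feat = rng.standard_normal((4, 8, 8))
loss_same = cosine_content_loss(feat, feat)        # content preserved
loss_flip = cosine_content_loss(feat, -feat)       # content destroyed
```

Minimizing such a term during kernel conversion pressures the generator to change only the noise/sharpness "style" while leaving anatomy intact.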

Artificial intelligence-assisted compressed sensing CINE enhances the workflow of cardiac magnetic resonance in challenging patients.

Wang H, Schmieder A, Watkins M, Wang P, Mitchell J, Qamer SZ, Lanza G

PubMed · Jul 26 2025
A key cardiac magnetic resonance (CMR) challenge is the required breath-holding duration, which is difficult for cardiac patients. We evaluated whether artificial intelligence-assisted compressed sensing CINE (AI-CS-CINE) reduces CMR image acquisition time compared to conventional CINE (C-CINE). Cardio-oncology patients (n = 60) and healthy volunteers (n = 29) underwent sequential C-CINE and AI-CS-CINE on a 1.5-T scanner. Acquisition time, visual image quality, and biventricular metrics (end-diastolic volume, end-systolic volume, stroke volume, ejection fraction, left ventricular mass, and wall thickness) were analyzed and compared between C-CINE and AI-CS-CINE with Bland-Altman analysis and intraclass correlation coefficients (ICC). In 89 participants (58.5 ± 16.8 years, 42 males, 47 females), total AI-CS-CINE acquisition and reconstruction time (37 seconds) was 84% faster than C-CINE (238 seconds). C-CINE required repeats in 23% (20/89) of cases (approximately 8 minutes lost), while AI-CS-CINE needed only one repeat (1%; 2 seconds lost). AI-CS-CINE had slightly lower contrast but preserved structural clarity. Bland-Altman plots and ICC (0.73 ≤ r ≤ 0.98) showed strong agreement for left ventricle (LV) and right ventricle (RV) metrics, including those in the cardiac amyloidosis subgroup (n = 31). AI-CS-CINE enabled faster, easier imaging in patients with claustrophobia, dyspnea, arrhythmias, or restlessness. Cases whose C-CINE images were motion-artifacted could still be reliably interpreted from AI-CS-CINE. AI-CS-CINE accelerated CMR image acquisition and reconstruction, preserved anatomical detail, and diminished the impact of patient-related motion. Quantitative AI-CS-CINE metrics agreed closely with C-CINE in cardio-oncology patients, including the cardiac amyloidosis cohort, as well as in healthy volunteers, regardless of left and right ventricular size and function. AI-CS-CINE significantly enhanced CMR workflow, particularly in challenging cases.
The strong analytical concordance underscores reliability and robustness of AI-CS-CINE as a valuable tool.
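The vendor's AI-assisted reconstruction is proprietary, but the compressed-sensing problem it accelerates is classically an l1-regularized inverse problem solved by iterative soft-thresholding. A toy sparse-recovery sketch (generic CS on a synthetic signal, not the AI-CS-CINE algorithm; the measurement matrix and regularization weight are assumptions):

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the l1 norm: shrink toward zero by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista_cs(y, A, lam=0.05, iters=500):
    """ISTA for min_x ||Ax - y||^2 + lam * ||x||_1: gradient step on the
    data term, then soft-thresholding to enforce sparsity."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft_threshold(x + step * A.T @ (y - A @ x), step * lam)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)   # 40 measurements, 100 unknowns
x_true = np.zeros(100)
x_true[[5, 30, 77]] = [1.0, -2.0, 1.5]             # sparse ground truth
x_hat = ista_cs(A @ x_true, A)
```

Undersampling (here 40 of 100 coefficients' worth of measurements) is what shortens the scan; the sparsity prior is what makes the reconstruction well-posed despite it.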

Synomaly noise and multi-stage diffusion: A novel approach for unsupervised anomaly detection in medical images.

Bi Y, Huang L, Clarenbach R, Ghotbi R, Karlas A, Navab N, Jiang Z

PubMed · Jul 26 2025
Anomaly detection in medical imaging plays a crucial role in identifying pathological regions across various imaging modalities, such as brain MRI, liver CT, and carotid ultrasound (US). However, training fully supervised segmentation models is often hindered by the scarcity of expert annotations and the complexity of diverse anatomical structures. To address these issues, we propose a novel unsupervised anomaly detection framework based on a diffusion model that incorporates a synthetic anomaly (Synomaly) noise function and a multi-stage diffusion process. Synomaly noise introduces synthetic anomalies into healthy images during training, allowing the model to effectively learn anomaly removal. The multi-stage diffusion process is introduced to progressively denoise images, preserving fine details while improving the quality of anomaly-free reconstructions. The generated high-fidelity counterfactual healthy images can further enhance the interpretability of the segmentation models, as well as provide a reliable baseline for evaluating the extent of anomalies and supporting clinical decision-making. Notably, the unsupervised anomaly detection model is trained purely on healthy images, eliminating the need for anomalous training samples and pixel-level annotations. We validate the proposed approach on brain MRI, liver CT, and carotid US datasets. The experimental results demonstrate that the proposed framework outperforms existing state-of-the-art unsupervised anomaly detection methods, achieving performance comparable to fully supervised segmentation models on the US dataset. Ablation studies further highlight the contributions of Synomaly noise and the multi-stage diffusion process in improving anomaly segmentation. These findings underscore the potential of our approach as a robust and annotation-efficient alternative for medical anomaly detection. Code: https://github.com/yuan-12138/Synomaly.
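The exact Synomaly noise function is defined in the paper; as an illustrative stand-in, corrupting a healthy training image with a synthetic blob-shaped anomaly looks like the sketch below (the blob geometry and noise level are assumptions, chosen only to show the training-pair construction):

```python
import numpy as np

def add_synthetic_anomaly(image, center, radius, intensity, rng):
    """Corrupt a healthy image with a synthetic circular anomaly, so a model
    trained only on healthy data can learn anomaly removal: the corrupted
    image is the input, the original healthy image is the target."""
    h, w = image.shape
    yy, xx = np.mgrid[:h, :w]
    mask = (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius ** 2
    corrupted = image.copy()
    corrupted[mask] += intensity + 0.1 * rng.standard_normal(mask.sum())
    return corrupted, mask

rng = np.random.default_rng(0)
healthy = np.zeros((64, 64))
corrupted, mask = add_synthetic_anomaly(healthy, (32, 32), 6, 1.0, rng)
```

Because the anomaly mask is known by construction, no pixel-level annotation of real pathology is ever needed; at test time, the difference between an input and its "healed" reconstruction localizes real anomalies.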

Accelerating cardiac radial-MRI: Fully polar based technique using compressed sensing and deep learning.

Ghodrati V, Duan J, Ali F, Bedayat A, Prosper A, Bydder M

PubMed · Jul 26 2025
Fast radial-MRI approaches based on compressed sensing (CS) and deep learning (DL) often use the non-uniform fast Fourier transform (NUFFT) as the forward imaging operator, which can introduce interpolation errors and reduce image quality. Using the polar Fourier transform (PFT), we developed fully polar CS and DL algorithms for fast 2D cardiac radial-MRI. Our methods directly reconstruct images in polar spatial space from polar k-space data, eliminating frequency interpolation and ensuring an easy-to-compute data consistency term for the DL framework via a variable splitting (VS) scheme. Furthermore, PFT reconstruction produces initial images with fewer artifacts in a reduced field of view, making it a better starting point for CS and DL algorithms, especially for dynamic imaging, where information from a small region of interest is critical, as opposed to NUFFT, which often results in global streaking artifacts. In the cardiac region, the PFT-based CS technique outperformed NUFFT-based CS at acceleration rates of 5x (mean SSIM: 0.8831 vs. 0.8526), 10x (0.8195 vs. 0.7981), and 15x (0.7720 vs. 0.7503). Our PFT(VS)-based DL technique outperformed the NUFFT(GD)-based DL method, which used unrolled gradient descent with the NUFFT as the forward imaging operator, with mean SSIM scores of 0.8914 versus 0.8617 at 10x and 0.8470 versus 0.8301 at 15x. Radiological assessments scored PFT(VS)-based DL at 2.9±0.30 and 2.73±0.45 at 5x and 10x, whereas NUFFT(GD)-based DL scored 2.7±0.47 and 2.40±0.50, respectively. Our methods offer a promising alternative to NUFFT-based fast radial-MRI for dynamic imaging, prioritizing reconstruction quality in a small region of interest over whole-image quality.
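A practical payoff of the VS scheme is that the data-consistency subproblem has a closed form whenever the forward operator is cheap to invert in its native coordinates. A sketch with a diagonal operator standing in for that case (the actual PFT operator in the paper is more involved; `a_diag` and `mu` here are assumptions for illustration):

```python
import numpy as np

def vs_data_consistency(z, y, a_diag, mu):
    """Closed-form data-consistency step of variable splitting:
        x = argmin_x ||A x - y||^2 + mu * ||x - z||^2
    with A diagonal, solved elementwise: x = (A* y + mu z) / (|A|^2 + mu).
    z is the network/regularizer estimate, y the measured k-space data."""
    return (np.conj(a_diag) * y + mu * z) / (np.abs(a_diag) ** 2 + mu)

rng = np.random.default_rng(0)
n = 32
a = rng.standard_normal(n) + 1j * rng.standard_normal(n)  # diagonal forward op
x_true = rng.standard_normal(n)
y = a * x_true                                            # noiseless measurements

# With mu -> 0 and exact data, the step returns the least-squares solution.
x = vs_data_consistency(np.zeros(n), y, a, mu=1e-8)
```

In an unrolled network, this cheap exact step alternates with a learned denoising step on `z`, which is why an interpolation-free forward operator simplifies the whole framework.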

Artificial intelligence-powered software outperforms interventional cardiologists in assessment of IVUS-based stent optimization.

Rubio PM, Garcia-Garcia HM, Galo J, Chaturvedi A, Case BC, Mintz GS, Ben-Dor I, Hashim H, Waksman R

PubMed · Jul 26 2025
Optimal stent deployment assessed by intravascular ultrasound (IVUS) is associated with improved outcomes after percutaneous coronary intervention (PCI). However, IVUS remains underutilized due to its time-consuming analysis and reliance on operator expertise. AVVIGO™+, an FDA-approved artificial intelligence (AI) software, offers automated lesion assessment, but its performance for stent evaluation has not been thoroughly investigated. We assessed whether AVVIGO™+ provides a superior evaluation of the IVUS-based stent expansion index (% stent expansion = minimum stent area (MSA) / distal reference lumen area) and of geographic miss (i.e., >50% plaque burden (PB) at stent edges) compared to the current gold standard: frame-by-frame visual assessment by interventional cardiologists (IC), who select the MSA and the reference frame with the largest lumen area within 5 mm of the stent edge, following expert consensus. This retrospective study included 60 patients (47,997 IVUS frames) who underwent IVUS-guided PCI, independently analyzed by IC and AVVIGO™+. Assessments included MSA, stent expansion index, and PB at proximal and distal reference segments. For expansion, a threshold of 80% was used to define suboptimal results. The time required for expansion analysis was recorded for both methods. Concordance and absolute and relative differences were evaluated. AVVIGO™+ consistently identified lower mean expansion (70.3%) vs. IC (91.2%) (p < 0.0001), primarily due to detecting frames with smaller MSA values (5.94 vs. 7.19 mm², p = 0.0053). This led to 25 discordant cases in which AVVIGO™+ reported suboptimal expansion while IC classified the result as adequate. The analysis time was significantly shorter with AVVIGO™+ (0.76 ± 0.39 min) vs. IC (1.89 ± 0.62 min) (p < 0.0001), a 59.7% reduction.
For geographic miss, AVVIGO™+ reported higher PB than IC at both distal (51.8% vs. 43.0%, p < 0.0001) and proximal (50.0% vs. 43.0%, p = 0.0083) segments. When applying the 50% PB threshold, AVVIGO™+ identified PB ≥50% not seen by IC in 12 cases (6 distal, 6 proximal). AVVIGO™+ demonstrated improved detection of suboptimal stent expansion and geographic miss compared to interventional cardiologists, while also significantly reducing analysis time. These findings suggest that AI-based platforms may offer a more reliable and efficient approach to IVUS-guided stent optimization, with potential to enhance consistency in clinical practice.
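The two quantitative definitions in the abstract are simple to operationalize; the sketch below encodes them directly (the 8.45 mm² distal reference lumen area is a hypothetical value chosen only so the example reproduces the reported 70.3% mean expansion):

```python
def stent_expansion_index(msa_mm2, distal_ref_lumen_mm2):
    """Expansion index as defined in the abstract:
    % stent expansion = MSA / distal reference lumen area."""
    return 100.0 * msa_mm2 / distal_ref_lumen_mm2

def is_suboptimal(expansion_pct, threshold=80.0):
    """Suboptimal expansion per the study's 80% threshold."""
    return expansion_pct < threshold

def geographic_miss(plaque_burden_pct, threshold=50.0):
    """Geographic miss flag: plaque burden >= 50% at a stent edge."""
    return plaque_burden_pct >= threshold

# Mean MSA reported for the AI arm (5.94 mm²); reference area is hypothetical.
expansion = stent_expansion_index(5.94, 8.45)
```

Framed this way, the AI/IC discordance in the study reduces to which frames each reader picks for the MSA and the reference area, not to the formula itself.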

Current evidence of low-dose CT screening benefit.

Yip R, Mulshine JL, Oudkerk M, Field J, Silva M, Yankelevitz DF, Henschke CI

PubMed · Jul 25 2025
Lung cancer is the leading cause of cancer-related mortality worldwide, largely due to late-stage diagnosis. Low-dose computed tomography (LDCT) screening has emerged as a powerful tool for early detection, enabling diagnosis at curable stages and reducing lung cancer mortality. Despite strong evidence, LDCT screening uptake remains suboptimal globally. This review synthesizes current evidence supporting LDCT screening, highlights ongoing global implementation efforts, and discusses key insights from the 1st AGILE conference. Lung cancer screening is gaining global momentum, with many countries advancing plans for national LDCT programs. Expanding eligibility through risk-based models and targeting high-risk never- and light-smokers are emerging strategies to improve efficiency and equity. Technological advancements, including AI-assisted interpretation and image-based biomarkers, are addressing concerns around false positives, overdiagnosis, and workforce burden. Integrating cardiac and smoking-related disease assessment within LDCT screening offers added preventive health benefits. To maximize global impact, screening strategies must be tailored to local health systems and populations. Efforts should focus on increasing awareness, standardizing protocols, optimizing screening intervals, and strengthening multidisciplinary care pathways. International collaboration and shared infrastructure can accelerate progress and ensure sustainability. LDCT screening represents a cost-effective opportunity to reduce lung cancer mortality and premature deaths.

Quantifying physiological variability and improving reproducibility in 4D-flow MRI cerebrovascular measurements with self-supervised deep learning.

Jolicoeur BW, Yardim ZS, Roberts GS, Rivera-Rivera LA, Eisenmenger LB, Johnson KM

PubMed · Jul 25 2025
To assess the efficacy of self-supervised deep learning (DL) denoising in reducing measurement variability in 4D-Flow MRI, and to clarify the contributions of physiological variation to cerebrovascular hemodynamics. A self-supervised DL denoising framework was trained on 3D radially sampled 4D-Flow MRI data. The model was evaluated in a prospective test-retest imaging study in which 10 participants underwent multiple 4D-Flow MRI scans. This included back-to-back scans and a single scan interleaved acquisition designed to isolate noise from physiological variations. The effectiveness of DL denoising was assessed by comparing pixelwise velocity and hemodynamic metrics before and after denoising. DL denoising significantly enhanced the reproducibility of 4D-Flow MRI measurements, reducing the 95% confidence interval of cardiac-resolved velocity from 215 to 142 mm/s in back-to-back scans and from 158 to 96 mm/s in interleaved scans, after adjusting for physiological variation. In derived parameters, DL denoising did not significantly improve integrated measures, such as flow rates, but did significantly improve noise sensitive measures, such as pulsatility index. Physiologic variation in back-to-back time-resolved scans contributed 26.37% ± 0.08% and 32.42% ± 0.05% of standard error before and after DL. Self-supervised DL denoising enhances the quantitative repeatability of 4D-Flow MRI by reducing technical noise; however, variations from physiology and post-processing are not removed. These findings underscore the importance of accounting for both technical and physiological variability in neurovascular flow imaging, particularly for studies aiming to establish biomarkers for neurodegenerative diseases with vascular contributions.
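The abstract's distinction between integrated and noise-sensitive measures is easy to see with the pulsatility index, which depends on waveform extremes rather than averages. A toy sketch (synthetic velocity waveform and noise level are assumptions, not study data):

```python
import numpy as np

def pulsatility_index(velocity_waveform):
    """Gosling pulsatility index: (Vmax - Vmin) / Vmean. Because it is built
    from waveform extremes, noise inflates it even when mean flow (an
    integrated measure) is barely affected."""
    v = np.asarray(velocity_waveform, dtype=float)
    return (v.max() - v.min()) / v.mean()

# Synthetic cardiac-cycle velocity waveform (mm/s) and a noisy copy.
t = np.linspace(0, 2 * np.pi, 100, endpoint=False)
clean = 400 + 150 * np.sin(t)
noisy = clean + np.random.default_rng(0).normal(0, 40, t.size)

pi_clean = pulsatility_index(clean)
pi_noisy = pulsatility_index(noisy)
```

This asymmetry is why denoising improves extremum-based metrics like pulsatility index while leaving integrated flow rates largely unchanged, as the study reports.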