
Adherence to the Checklist for Artificial Intelligence in Medical Imaging (CLAIM): an umbrella review with a comprehensive two-level analysis.

Koçak B, Köse F, Keleş A, Şendur A, Meşe İ, Karagülle M

pubmed logopapersSep 8 2025
To comprehensively assess Checklist for Artificial Intelligence in Medical Imaging (CLAIM) adherence in the medical imaging artificial intelligence (AI) literature by aggregating data from previous systematic and non-systematic reviews. A systematic search of PubMed, Scopus, and Google Scholar identified reviews that used the CLAIM to evaluate medical imaging AI studies. Reviews were analyzed at two levels: review level (33 reviews; 1,458 studies) and study level (421 unique studies from 15 reviews). CLAIM adherence metrics (scores and compliance rates), baseline characteristics, factors influencing adherence, and critiques of the CLAIM were analyzed. A review-level analysis of 26 reviews (874 studies) found a weighted mean CLAIM score of 25 [standard deviation (SD): 4] and a median of 26 [interquartile range (IQR): 8; 25th-75th percentiles: 20-28]. In a separate review-level analysis involving 18 reviews (993 studies), the weighted mean CLAIM compliance was 63% (SD: 11%), with a median of 66% (IQR: 4%; 25th-75th percentiles: 63%-67%). A study-level analysis of 421 unique studies published between 1997 and 2024 found a median CLAIM score of 26 (IQR: 6; 25th-75th percentiles: 23-29) and a median compliance of 68% (IQR: 16%; 25th-75th percentiles: 59%-75%). Adherence was independently associated with journal impact factor quartile, publication year, and specific radiology subfields. After guideline publication, CLAIM compliance improved (P = 0.004). Multiple readers provided an evaluation in 85% (28/33) of reviews, but only 11% (3/28) of those included a reliability analysis. An item-wise evaluation identified 11 underreported items (missing in ≥50% of studies). Among the 10 identified critiques, the most common were item inapplicability to diverse study types and subjective interpretation of fulfillment.
Our two-level analysis revealed considerable reporting gaps, underreported items, factors related to adherence, and common CLAIM critiques. By combining data from systematic and non-systematic reviews on CLAIM adherence, these comprehensive findings provide actionable targets to help researchers and journals improve transparency, reproducibility, and reporting quality in AI studies.
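The pooled statistics above weight each review's mean score by its study count. A minimal sketch of that pooling, using hypothetical per-review means and study counts rather than the actual review data:

```python
def weighted_mean(means, weights):
    """Pool per-review mean CLAIM scores, weighting each by its study count."""
    total = sum(weights)
    return sum(m * w for m, w in zip(means, weights)) / total

# Hypothetical per-review mean scores and study counts (illustrative only)
review_means = [24.0, 26.0, 25.0]
review_sizes = [50, 120, 30]

pooled = weighted_mean(review_means, review_sizes)  # 25.35
```

Reviews with more studies dominate the pooled estimate, which is why the weighted mean can differ noticeably from a simple average of review means.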

Veriserum: A dual-plane fluoroscopic dataset with knee implant phantoms for deep learning in medical imaging

Jinhao Wang, Florian Vogl, Pascal Schütz, Saša Ćuković, William R. Taylor

arxiv logopreprintSep 5 2025
Veriserum is an open-source dataset designed to support the training of deep learning registration methods for dual-plane fluoroscopic analysis. It comprises approximately 110,000 X-ray images of 10 knee implant pair combinations (2 femur and 5 tibia implants) captured during 1,600 trials, incorporating poses associated with daily activities such as level gait and ramp descent. Each image is annotated with an automatically registered ground-truth pose, while 200 images include manually registered poses for benchmarking. Key features of Veriserum include dual-plane images and calibration tools. The dataset supports the development of applications such as 2D/3D image registration, image segmentation, X-ray distortion correction, and 3D reconstruction. Freely accessible, Veriserum aims to advance computer vision and medical imaging research by providing a reproducible benchmark for algorithm development and evaluation. The Veriserum dataset used in this study is publicly available via https://movement.ethz.ch/data-repository/veriserum.html, with the data stored at the ETH Zürich Research Collections: https://doi.org/10.3929/ethz-b-000701146.

Geometric-Driven Cross-Modal Registration Framework for Optical Scanning and CBCT Models in AR-Based Maxillofacial Surgical Navigation.

Liu Y, Wang E, Gong M, Tao B, Wu Y, Qi X, Chen X

pubmed logopapersSep 4 2025
Accurate preoperative planning for dental implants, especially in edentulous or partially edentulous patients, relies on precise localization of radiographic templates that guide implant positioning. By wearing a patient-specific radiographic template, clinicians can better assess anatomical constraints and plan optimal implant paths. However, due to the low radiopacity of such templates, their spatial position is difficult to determine directly from cone-beam computed tomography (CBCT) scans. To overcome this limitation, high-resolution optical scans of the templates are acquired, providing detailed geometric information for accurate spatial registration. This paper proposes a geometric-driven cross-modal registration framework that aligns the optical scan model of the radiographic template with patient CBCT data, enhancing registration accuracy through geometric feature extraction such as curvature and occlusal contours. A hybrid deep learning workflow further improves robustness, achieving a root mean square error (RMSE) of 1.68 mm and a mean absolute error (MAE) of 1.25 mm. The system also incorporates augmented reality (AR) for real-time surgical navigation. Clinical and phantom experiments validate its effectiveness in supporting precise implant path planning and execution. Our proposed system enhances the efficiency and safety of dental implant surgery by integrating geometric feature extraction, deep learning-based registration, and AR-assisted navigation.
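The reported RMSE and MAE are standard summaries of registration error. A minimal sketch of how they are computed over per-landmark errors (the error values below are hypothetical, not from the paper):

```python
import math

def rmse(errors):
    """Root mean square error over per-landmark registration errors (mm)."""
    return math.sqrt(sum(e * e for e in errors) / len(errors))

def mae(errors):
    """Mean absolute error over the same per-landmark errors (mm)."""
    return sum(abs(e) for e in errors) / len(errors)

errs = [1.0, -2.0, 0.5, -1.5]  # hypothetical signed errors in mm
print(rmse(errs), mae(errs))
```

Because RMSE squares each error before averaging, it penalizes outliers more heavily than MAE, which is why the two are usually reported together.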

A Physics-ASIC Architecture-Driven Deep Learning Photon-Counting Detector Model Under Limited Data.

Yu X, Wu Q, Qin W, Zhong T, Su M, Ma J, Zhang Y, Ji X, Wang W, Quan G, Du Y, Chen Y, Lai X

pubmed logopapersSep 4 2025
Photon-counting computed tomography (PCCT) based on photon-counting detectors (PCDs) represents a cutting-edge CT technology, offering higher spatial resolution, reduced radiation dose, and advanced material decomposition capabilities. Accurately modeling complex and nonlinear PCDs under limited calibration data remains one of the challenges hindering the widespread accessibility of PCCT. This paper introduces a physics-ASIC architecture-driven deep learning detector model for PCDs. This model adeptly captures the comprehensive response of the PCD, encompassing both sensor and ASIC responses. We present experimental results demonstrating the model's exceptional accuracy and robustness with limited calibration data. Key advancements include reduced calibration errors, reasonable physics-ASIC parameter estimation, and high-quality, high-accuracy material decomposition images.

Advancing Positron Emission Tomography Image Quantification: Artificial Intelligence-Driven Methods, Clinical Challenges, and Emerging Opportunities in Long-Axial Field-of-View Positron Emission Tomography/Computed Tomography Imaging

Fereshteh Yousefirizi, Movindu Dassanayake, Alejandro Lopez, Andrew Reader, Gary J. R. Cook, Clemens Mingels, Arman Rahmim, Robert Seifert, Ian Alberts

arxiv logopreprintSep 3 2025
Metabolic tumor volume (MTV) is increasingly recognized as an accurate estimate of disease burden with prognostic value, but its implementation has been hindered by the time-consuming need for manual segmentation of images. AI-driven automated quantification significantly reduces labor-intensive manual segmentation, improving consistency, reproducibility, and feasibility for routine clinical practice. AI-enhanced radiomics provides comprehensive characterization of tumor biology, capturing intratumoral and intertumoral heterogeneity beyond what conventional volumetric metrics alone offer, supporting improved patient stratification and therapy planning. AI-driven segmentation of normal organs improves radioligand therapy planning by enabling accurate dose predictions and comprehensive organ-based radiomics analysis, further refining personalized patient management.

Navigator motion-resolved MR fingerprinting using implicit neural representation: Feasibility for free-breathing three-dimensional whole-liver multiparametric mapping.

Li C, Li J, Zhang J, Solomon E, Dimov AV, Spincemaille P, Nguyen TD, Prince MR, Wang Y

pubmed logopapersSep 2 2025
To develop multiparametric free-breathing three-dimensional whole-liver quantitative maps of water T1, water T2, fat fraction (FF), and R2*. A multi-echo 3D stack-of-spiral gradient-echo sequence with inversion recovery and T2-prep magnetization preparations was implemented for multiparametric MRI. Fingerprinting and a neural network based on implicit neural representation (FINR) were developed to simultaneously reconstruct the motion deformation fields and the static images, perform water-fat separation, and generate T1, T2, R2*, and FF maps. FINR performance was evaluated in 10 healthy subjects by comparison with quantitative maps generated using conventional breath-holding imaging. FINR consistently generated sharp images in all subjects, free of motion artifacts. FINR showed minimal bias and narrow 95% limits of agreement for T1, T2, R2*, and FF values in the liver compared with conventional imaging. FINR training took about 3 h per subject, and FINR inference took less than 1 min to produce static images and motion deformation fields. FINR is a promising approach for 3D whole-liver T1, T2, R2*, and FF mapping in a single free-breathing continuous scan.
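The bias and 95% limits-of-agreement comparison described above follows the usual Bland-Altman construction. A minimal sketch with hypothetical paired liver T1 values (the numbers are illustrative, not the study's data):

```python
import statistics

def bland_altman(a, b):
    """Return bias and 95% limits of agreement between paired measurements."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)  # sample standard deviation of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical liver T1 values (ms): free-breathing FINR vs. breath-hold reference
free_breathing = [800.0, 812.0, 820.0, 830.0]
breath_hold = [805.0, 810.0, 818.0, 835.0]

bias, (lo, hi) = bland_altman(free_breathing, breath_hold)
```

A bias near zero with narrow limits of agreement indicates the two acquisition strategies can be used interchangeably for these parameters.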

Overcoming Site Variability in Multisite fMRI Studies: an Autoencoder Framework for Enhanced Generalizability of Machine Learning Models.

Almuqhim F, Saeed F

pubmed logopapersSep 2 2025
Harmonizing multisite functional magnetic resonance imaging (fMRI) data is crucial for eliminating site-specific variability that hinders the generalizability of machine learning models. Traditional harmonization techniques, such as ComBat, depend on additive and multiplicative factors and may struggle to capture the non-linear interactions between scanner hardware, acquisition protocols, and signal variations between different imaging sites. In addition, these statistical techniques require data from all sites during model training, which risks data leakage; ML models trained on such harmonized data may show low reliability and reproducibility when tested on unseen datasets, limiting their applicability for general clinical use. In this study, we propose autoencoders (AEs) as an alternative for harmonizing multisite fMRI data. Our framework leverages the non-linear representation learning capabilities of AEs to reduce site-specific effects while preserving biologically meaningful features. Our evaluation using the Autism Brain Imaging Data Exchange I (ABIDE-I) dataset, containing 1,035 subjects collected from 17 centers, demonstrates statistically significant improvements in leave-one-site-out (LOSO) cross-validation evaluations. All AE variants (AE, SAE, TAE, and DAE) significantly outperformed the baseline model (p < 0.01), with mean accuracy improvements ranging from 3.41% to 5.04%. Our findings demonstrate the potential of AEs to harmonize multisite neuroimaging data effectively, enabling robust downstream analyses across various neuroscience applications while reducing data leakage and preserving neurobiological features. Our open-source code is made available at https://github.com/pcdslab/Autoencoder-fMRI-Harmonization .

Optimizing Paths for Adaptive Fly-Scan Microscopy: An Extended Version

Yu Lu, Thomas F. Lynn, Ming Du, Zichao Di, Sven Leyffer

arxiv logopreprintSep 2 2025
In x-ray microscopy, traditional raster-scanning techniques are used to acquire a microscopic image in a series of step-scans. Alternatively, scanning the x-ray probe along a continuous path, called a fly-scan, reduces scan time and increases scan efficiency. However, not all regions of an image are equally important. Currently used fly-scan methods do not adapt to the characteristics of the sample during the scan, often wasting time in uniform, uninteresting regions. One approach to avoid unnecessary scanning in uniform regions for raster step-scans is to use deep learning techniques to select a shorter optimal scan path instead of a traditional raster scan path, followed by reconstructing the entire image from the partially scanned data. However, this approach heavily depends on the quality of the initial sampling, requires a large dataset for training, and incurs high computational costs. We propose leveraging the fly-scan method along an optimal scanning path, focusing on regions of interest (ROIs) and using image completion techniques to reconstruct details in non-scanned areas. This approach further shortens the scanning process and potentially decreases x-ray exposure dose while maintaining high-quality and detailed information in critical regions. To achieve this, we introduce a multi-iteration fly-scan framework that adapts to the scanned image. Specifically, in each iteration, we define two key functions: (1) a score function to generate initial anchor points and identify potential ROIs, and (2) an objective function to optimize the anchor points for convergence to an optimal set. Using these anchor points, we compute the shortest scanning path between optimized anchor points, perform the fly-scan, and subsequently apply image completion based on the acquired information in preparation for the next scan iteration.
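The framework above computes a short scanning path through the optimized anchor points. As a hedged stand-in for that step (the paper's exact path solver is not specified here), a greedy nearest-neighbor ordering yields a short, though not guaranteed-optimal, tour:

```python
import math

def nearest_neighbor_path(points, start=0):
    """Greedy ordering of anchor points: always visit the closest unvisited
    point next. An approximation, not a guaranteed-shortest path."""
    unvisited = set(range(len(points)))
    path = [start]
    unvisited.remove(start)
    while unvisited:
        last = points[path[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, points[i]))
        path.append(nxt)
        unvisited.remove(nxt)
    return path

anchors = [(0.0, 0.0), (5.0, 5.0), (1.0, 0.0), (1.0, 1.0)]  # hypothetical ROI anchors
```

In practice the probe would then fly along this ordering continuously, with image completion filling in the unscanned regions between iterations.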

Diffusion-QSM: diffusion model with time-travel and resampling refinement for quantitative susceptibility mapping.

Zhang M, Liu C, Zhang Y, Wei H

pubmed logopapersSep 2 2025
Quantitative susceptibility mapping (QSM) is a useful magnetic resonance imaging technique. We aim to propose a deep learning (DL)-based method for QSM reconstruction that is robust to data perturbations. We developed Diffusion-QSM, a diffusion model-based method with a time-travel and resampling refinement module for high-quality QSM reconstruction. First, the diffusion prior is trained unconditionally on high-quality QSM images, without requiring explicit information about the measured tissue phase, thereby enhancing generalization performance. Subsequently, during inference, the physical constraints from the QSM forward model and measurement are integrated into the output of the diffusion model to guide the sampling process toward realistic image representations. In addition, a time-travel and resampling module is employed during the later sampling stage to refine image quality, resulting in improved reconstruction without significantly prolonging the sampling time. Experimental results show that Diffusion-QSM outperforms traditional and unsupervised DL methods for QSM reconstruction on simulated, in vivo, and ex vivo data, and shows better generalization capability than supervised DL methods when processing out-of-distribution data. Diffusion-QSM successfully unifies data-driven diffusion priors and subject-specific physics constraints, enabling generalizable, high-quality QSM reconstruction under diverse perturbations, including image contrast, resolution, and scan direction. This work advances QSM reconstruction by bridging the generalization gap in deep learning. The excellent quality and generalization capability underscore its potential for various realistic applications.
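To illustrate the time-travel-and-resampling idea, the sampler can revisit each block of late-stage timesteps several times, re-noising between passes and denoising again. This is a RePaint-style schedule sketch with made-up parameter names; the paper's exact scheduling is not specified here:

```python
def time_travel_schedule(num_steps, jump_len, repeats):
    """Reverse-diffusion timestep sequence in which each block of `jump_len`
    steps is traversed `repeats` times (re-noising between passes is implied).
    `repeats=1` recovers the plain reverse schedule."""
    ts = []
    t = num_steps
    while t > 0:
        block = list(range(t - 1, max(t - jump_len, 0) - 1, -1))
        ts.extend(block * repeats)
        t -= jump_len
    return ts

print(time_travel_schedule(4, 2, 2))  # [3, 2, 3, 2, 1, 0, 1, 0]
```

Revisiting the late, low-noise steps is where fine image detail is formed, which is why the refinement is applied there rather than across the whole trajectory.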

Acoustic Interference Suppression in Ultrasound Images for Real-Time HIFU Monitoring Using an Image-Based Latent Diffusion Model

Dejia Cai, Yao Ran, Kun Yang, Xinwang Shi, Yingying Zhou, Kexian Wu, Yang Xu, Yi Hu, Xiaowei Zhou

arxiv logopreprintSep 1 2025
High-Intensity Focused Ultrasound (HIFU) is a non-invasive therapeutic technique widely used for treating various diseases. However, the success and safety of HIFU treatments depend on real-time monitoring, which is often hindered by interference when using ultrasound to guide HIFU treatment. To address these challenges, we developed HIFU-ILDiff, a novel deep learning-based approach leveraging latent diffusion models to suppress HIFU-induced interference in ultrasound images. The HIFU-ILDiff model employs a Vector Quantized Variational Autoencoder (VQ-VAE) to encode noisy ultrasound images into a lower-dimensional latent space, followed by a latent diffusion model that iteratively removes interference. The denoised latent vectors are then decoded to reconstruct high-resolution, interference-free ultrasound images. We constructed a comprehensive dataset comprising 18,872 image pairs from in vitro phantoms, ex vivo tissues, and in vivo animal data across multiple imaging modalities and HIFU power levels to train and evaluate the model. Experimental results demonstrate that HIFU-ILDiff significantly outperforms the commonly used notch filter method, achieving a Structural Similarity Index (SSIM) of 0.796 and a Peak Signal-to-Noise Ratio (PSNR) of 23.780, compared with an SSIM of 0.443 and a PSNR of 14.420 for the notch filter under in vitro scenarios. Additionally, HIFU-ILDiff achieves real-time processing at 15 frames per second, markedly faster than the notch filter's 5 seconds per frame. These findings indicate that HIFU-ILDiff can suppress HIFU interference in ultrasound guidance images in real time during HIFU therapy, which may greatly improve treatment precision in clinical applications.
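The reported PSNR figures follow the standard definition in decibels. A minimal sketch, assuming unit-normalized pixel intensities (the helper names are illustrative):

```python
import math

def mse(a, b):
    """Mean squared error between two equal-length pixel sequences."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def psnr(mse_value, max_val=1.0):
    """Peak signal-to-noise ratio in dB from a mean squared error."""
    return 10.0 * math.log10((max_val ** 2) / mse_value)

# e.g. an MSE of 0.01 on unit-normalized images corresponds to a PSNR of 20 dB
```

PSNR is logarithmic, so the roughly 9 dB gap between HIFU-ILDiff (23.78 dB) and the notch filter (14.42 dB) corresponds to close to an order of magnitude lower mean squared error.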