Page 9 of 54537 results

3D MR Neurography of Craniocervical Nerves: Comparing Double-Echo Steady-State and Postcontrast STIR with Deep Learning-Based Reconstruction at 1.5T.

Ensle F, Zecca F, Kerber B, Lohezic M, Wen Y, Kroschke J, Pawlus K, Guggenberger R

pubmed · Sep 2 2025
3D MR neurography is a useful diagnostic tool in head and neck disorders, but neurographic imaging remains challenging in this region. Optimal sequences for nerve visualization have not yet been established and may also differ between nerves. While deep learning (DL) reconstruction can enhance nerve depiction, particularly at 1.5T, studies in the head and neck are lacking. The purpose of this study was to compare double echo steady-state (DESS) and postcontrast STIR sequences in DL-reconstructed 3D MR neurography of the extraforaminal cranial and spinal nerves at 1.5T. Eighteen consecutive examinations of 18 patients undergoing head-and-neck MRI at 1.5T were retrospectively included (mean age: 51 ± 14 years, 11 women). 3D DESS and postcontrast 3D STIR sequences were obtained as part of the standard protocol and reconstructed with a prototype DL algorithm. Two blinded readers qualitatively evaluated visualization of the inferior alveolar, lingual, facial, hypoglossal, greater occipital, lesser occipital, and greater auricular nerves, as well as overall image quality, vascular suppression, and artifacts. Additionally, apparent SNR and contrast-to-noise ratios of the inferior alveolar and greater occipital nerve were measured. Visual ratings and quantitative measurements were compared between sequences using the Wilcoxon signed-rank test. DESS demonstrated significantly improved visualization of the lesser occipital nerve, greater auricular nerve, and proximal greater occipital nerve (<i>P</i> < .015). Postcontrast STIR showed significantly enhanced visualization of the lingual nerve, hypoglossal nerve, and distal inferior alveolar nerve (<i>P</i> < .001). The facial nerve, proximal inferior alveolar nerve, and distal greater occipital nerve did not demonstrate significant differences in visualization between sequences (<i>P</i> > .08). There was also no significant difference for overall image quality and artifacts. 
Postcontrast STIR achieved superior vascular suppression, reaching statistical significance for 1 reader (<i>P</i> = .039). Quantitatively, there was no significant difference between sequences (<i>P</i> > .05). Our findings suggest that 3D DESS generally provides improved visualization of spinal nerves, while postcontrast 3D STIR facilitates enhanced delineation of extraforaminal cranial nerves.
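The paired reader ratings above are compared with the Wilcoxon signed-rank test. A minimal sketch of that comparison in Python — the Likert scores below are hypothetical stand-ins, not the study's data:

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical paired Likert ratings (1-4) for one nerve across 18 patients
# under the two sequences; the study's actual scores are not public.
dess = np.array([4, 3, 4, 4, 3, 4, 2, 3, 4, 4, 3, 3, 4, 2, 4, 3, 4, 3])
stir = np.array([3, 3, 3, 4, 2, 3, 2, 2, 4, 3, 3, 2, 4, 2, 3, 3, 3, 2])

# Paired, non-parametric comparison, as used for the visual ratings above.
stat, p = wilcoxon(dess, stir)
print(f"W = {stat:.1f}, p = {p:.4f}")
```

The test ranks the non-zero paired differences, so identical ratings for a patient drop out of the statistic.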

Optimizing Paths for Adaptive Fly-Scan Microscopy: An Extended Version

Yu Lu, Thomas F. Lynn, Ming Du, Zichao Di, Sven Leyffer

arxiv preprint · Sep 2 2025
In x-ray microscopy, traditional raster-scanning techniques are used to acquire a microscopic image in a series of step-scans. Alternatively, scanning the x-ray probe along a continuous path, called a fly-scan, reduces scan time and increases scan efficiency. However, not all regions of an image are equally important. Currently used fly-scan methods do not adapt to the characteristics of the sample during the scan, often wasting time in uniform, uninteresting regions. One approach to avoid unnecessary scanning in uniform regions for raster step-scans is to use deep learning techniques to select a shorter optimal scan path instead of a traditional raster scan path, followed by reconstructing the entire image from the partially scanned data. However, this approach heavily depends on the quality of the initial sampling, requires a large dataset for training, and incurs high computational costs. We propose leveraging the fly-scan method along an optimal scanning path, focusing on regions of interest (ROIs) and using image completion techniques to reconstruct details in non-scanned areas. This approach further shortens the scanning process and potentially decreases x-ray exposure dose while maintaining high-quality and detailed information in critical regions. To achieve this, we introduce a multi-iteration fly-scan framework that adapts to the scanned image. Specifically, in each iteration, we define two key functions: (1) a score function to generate initial anchor points and identify potential ROIs, and (2) an objective function to optimize the anchor points for convergence to an optimal set. Using these anchor points, we compute the shortest scanning path between optimized anchor points, perform the fly-scan, and subsequently apply image completion based on the acquired information in preparation for the next scan iteration.
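The iteration described above — score candidate anchor points, then traverse a short path between them — can be sketched as follows. The local-variance score, tile size, and greedy nearest-neighbour ordering are illustrative assumptions, not the paper's exact score and objective functions:

```python
import numpy as np

# Toy "sample" image: mostly uniform with a few feature blobs (assumed stand-in).
img = np.zeros((64, 64))
for cy, cx in [(16, 20), (40, 45), (50, 12)]:
    y, x = np.ogrid[:64, :64]
    img += np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / 20.0)

def score(img, k=8):
    """Score function: local variance per k x k tile, a simple ROI proxy."""
    h, w = img.shape
    tiles = img.reshape(h // k, k, w // k, k)
    return tiles.var(axis=(1, 3))

def anchors(img, n=6, k=8):
    """Pick the n highest-scoring tile centers as initial anchor points."""
    s = score(img, k)
    idx = np.argsort(s, axis=None)[-n:]
    ys, xs = np.unravel_index(idx, s.shape)
    return np.stack([ys * k + k // 2, xs * k + k // 2], axis=1)

def nn_path(pts):
    """Greedy nearest-neighbour ordering: a cheap shortest-path heuristic."""
    pts = pts.tolist()
    path = [pts.pop(0)]
    while pts:
        d = [np.hypot(p[0] - path[-1][0], p[1] - path[-1][1]) for p in pts]
        path.append(pts.pop(int(np.argmin(d))))
    return np.array(path)

path = nn_path(anchors(img))
print(path)
```

In the full framework the anchors would then be refined against an objective function and the scan repeated; here they are used as-is to order a single fly-scan pass.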

Overcoming Site Variability in Multisite fMRI Studies: an Autoencoder Framework for Enhanced Generalizability of Machine Learning Models.

Almuqhim F, Saeed F

pubmed · Sep 2 2025
Harmonizing multisite functional magnetic resonance imaging (fMRI) data is crucial for eliminating site-specific variability that hinders the generalizability of machine learning models. Traditional harmonization techniques, such as ComBat, depend on additive and multiplicative factors and may struggle to capture the non-linear interactions between scanner hardware, acquisition protocols, and signal variations between different imaging sites. In addition, these statistical techniques require data from all the sites during model training, which may have the unintended consequence of data leakage for ML models trained using this harmonized data. The ML models trained using this harmonized data may result in low reliability and reproducibility when tested on unseen data sets, limiting their applicability for general clinical usage. In this study, we propose autoencoders (AEs) as an alternative for harmonizing multisite fMRI data. Our framework leverages the non-linear representation learning capabilities of AEs to reduce site-specific effects while preserving biologically meaningful features. Our evaluation using the Autism Brain Imaging Data Exchange I (ABIDE-I) dataset, containing 1,035 subjects collected from 17 centers, demonstrates statistically significant improvements in leave-one-site-out (LOSO) cross-validation evaluations. All AE variants (AE, SAE, TAE, and DAE) significantly outperformed the baseline model (p < 0.01), with mean accuracy improvements ranging from 3.41% to 5.04%. Our findings demonstrate the potential of AEs to harmonize multisite neuroimaging data effectively, enabling robust downstream analyses across various neuroscience applications while reducing data leakage and preserving neurobiological features. Our open-source code is made available at https://github.com/pcdslab/Autoencoder-fMRI-Harmonization .
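A minimal sketch of the encode/decode machinery behind such harmonization, using a linear autoencoder (equivalent to truncated SVD) as a stand-in for the paper's non-linear AE variants; the toy features and the additive site offset are hypothetical, and the sketch shows only the reconstruction mechanics, not the full harmonization training objective:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy fMRI-derived features: 100 subjects x 50 features from two "sites",
# where site 1 carries an additive offset (hypothetical site effect).
signal = rng.normal(size=(100, 50))
site = np.repeat([0, 1], 50)
x = signal + site[:, None] * 2.0

def linear_autoencoder(x, latent_dim):
    """Linear AE via SVD: the encoder/decoder are the top singular vectors.
    A linear stand-in for the non-linear AEs used in the paper."""
    xc = x - x.mean(axis=0)
    _, _, vt = np.linalg.svd(xc, full_matrices=False)
    w = vt[:latent_dim]              # decoder rows; encoder is w.T
    z = xc @ w.T                     # latent codes
    recon = z @ w + x.mean(axis=0)   # decoded reconstruction
    return z, recon

z, recon = linear_autoencoder(x, latent_dim=8)
err = np.mean((x - recon) ** 2)
print(f"reconstruction MSE with 8 latent dims: {err:.3f}")
```

The paper's AEs replace the linear map with trained non-linear encoders and add loss terms that penalize site-predictive structure in the latent space.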

Comparing respiratory-triggered T2WI MRI with an artificial intelligence-assisted technique and motion-suppressed respiratory-triggered T2WI in abdominal imaging.

Wang N, Liu Y, Ran J, An Q, Chen L, Zhao Y, Yu D, Liu A, Zhuang L, Song Q

pubmed · Sep 1 2025
Magnetic resonance imaging (MRI) plays a crucial role in the diagnosis of abdominal conditions. A comprehensive assessment, especially of the liver, requires multi-planar T2-weighted sequences. To mitigate the effect of respiratory motion on image quality, the combination of acquisition and reconstruction with motion suppression (ARMS) and respiratory triggering (RT) is commonly employed. While this method maintains image quality, it does so at the expense of longer acquisition times. We evaluated the effectiveness of free-breathing, artificial intelligence-assisted compressed-sensing respiratory-triggered T2-weighted imaging (ACS-RT T2WI) compared to conventional acquisition and reconstruction with motion-suppression respiratory-triggered T2-weighted imaging (ARMS-RT T2WI) in abdominal MRI, assessing both qualitative and quantitative measures of image quality and lesion detection. In this retrospective study, 334 patients with upper abdominal discomfort were examined on a 3.0T MRI system. Each patient underwent both ARMS-RT T2WI and ACS-RT T2WI. Image quality was analyzed by two independent readers using a five-point Likert scale. The quantitative measurements included the signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), peak signal-to-noise ratio (PSNR), and sharpness. Lesion detection rates and contrast ratios (CRs) were also evaluated for liver, biliary system, and pancreatic lesions. The ARMS-RT T2WI protocol required a significantly longer median scanning time than the ACS-RT T2WI protocol (148.22±38.37 <i>vs.</i> 13.86±1.72 seconds). However, ARMS-RT T2WI had a higher PSNR than ACS-RT T2WI (39.87±2.72 <i>vs.</i> 38.69±3.00, P<0.05). Of the 201 liver lesions, ARMS-RT T2WI detected 193 (96.0%) and ACS-RT T2WI detected 192 (95.5%) (P=0.787). Of the 97 biliary system lesions, ARMS-RT T2WI detected 92 (94.8%) and ACS-RT T2WI detected 94 (96.9%) (P=0.721). 
Of the 110 pancreatic lesions, ARMS-RT T2WI detected 102 (92.7%) and ACS-RT T2WI detected 104 (94.5%) (P=0.784). The CR analysis showed the superior performance of ACS-RT T2WI in certain lesion types (hemangioma, 0.58±0.11 <i>vs.</i> 0.55±0.12; biliary tumor, 0.47±0.09 <i>vs.</i> 0.38±0.09; pancreatic cystic lesions, 0.59±0.12 <i>vs.</i> 0.48±0.14; pancreatic cancer, 0.48±0.18 <i>vs.</i> 0.43±0.17), but no significant difference was found in others, such as focal nodular hyperplasia (FNH), hepatic abscess, hepatocellular carcinoma (HCC), cholangiocarcinoma, metastatic tumors, and biliary calculus. ACS-RT T2WI ensures clinical reliability with a substantial scan time reduction (>80%). Despite minor losses in detail and SNR reduction, ACS-RT T2WI does not impair lesion detection, demonstrating its efficacy in abdominal imaging.
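The quantitative metrics in this abstract reduce to simple ROI statistics. A hedged sketch with hypothetical pixel samples — the definitions below are the common ones for abdominal MRI ROI analysis, not necessarily the study's exact formulas:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical ROI pixel samples from one T2WI slice: a lesion, adjacent
# liver parenchyma, and background air for the noise estimate.
lesion = rng.normal(380.0, 25.0, 200)
liver = rng.normal(210.0, 20.0, 200)
noise_sd = rng.normal(0.0, 12.0, 200).std(ddof=1)

snr = liver.mean() / noise_sd                                  # signal-to-noise
cnr = abs(lesion.mean() - liver.mean()) / noise_sd             # contrast-to-noise
cr = abs(lesion.mean() - liver.mean()) / (lesion.mean() + liver.mean())  # contrast ratio
print(f"SNR = {snr:.1f}, CNR = {cnr:.1f}, CR = {cr:.2f}")
```

The CR form normalizes the lesion-liver signal difference by their sum, which is why it stays between 0 and 1 in the values quoted above.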

Automatic design and optimization of MRI-based neurochemical sensors via reinforcement learning.

Ali Z, Asparin A, Zhang Y, Mettee H, Taha D, Ha Y, Bhanot D, Sarwar K, Kiran H, Wu S, Wei H

pubmed · Sep 1 2025
Magnetic resonance imaging (MRI) is a cornerstone of medical imaging, celebrated for its non-invasiveness, high spatial and temporal resolution, and exceptional soft tissue contrast, with over 100 million clinical procedures performed annually worldwide. In this field, MRI-based nanosensors have garnered significant interest in biomedical research due to their tunable sensing mechanisms, high permeability, rapid kinetics, and surface functionality. Extensive studies in the field have reported the use of superparamagnetic iron oxide nanoparticles (SPIONs) and proteins as a proof-of-concept for sensing critical neurochemicals via MRI. However, the signal change ratio and response rate of our SPION-protein-based in vitro dopamine and in vivo calcium sensors need to be further enhanced to detect the subtle and transient fluctuations in neurochemical levels associated with neural activities, starting from in vitro diagnostics. In this paper, we present an advanced reinforcement-learning-based computational model that treats sensor design as an optimal decision-making problem by choosing sensor performance as a weighted reward objective function. The adjustments of the SPION's and protein's three-dimensional configuration and magnetic moment establish a set of actions that can autonomously maximize the cumulative reward in the computational environment. Our new model first elucidates the sensor's conformation alteration behind the increment in T<sub>2</sub> contrast observed experimentally in MRI in the presence and absence of calcium and dopamine neurochemicals. Additionally, our enhanced machine-learning algorithm can autonomously learn the performance trends of SPION-protein-based sensors and identify their optimal structural parameters. Experimental in vitro validation with TEM and MR relaxometry confirmed the predicted optimal SPION diameters, demonstrating the highest sensing performance at 9 nm for calcium and 11 nm for dopamine detection. 
Beginning with in vitro diagnostics, these results demonstrate a versatile modeling platform for the development of MRI-based neurochemical sensors, providing insights into their behavior under operational conditions. This platform also enables the autonomous design of improved sensor sizes and geometries, providing a roadmap for the future optimization of MRI sensors.
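The reward-driven search over sensor geometry can be illustrated with a toy hill-climbing agent. The bell-shaped reward peaking at 9 nm is an assumed stand-in for the paper's learned, simulation-based weighted reward, and greedy hill-climbing stands in for the full reinforcement-learning policy:

```python
import numpy as np

def reward(diameter_nm):
    """Hypothetical weighted-reward objective peaking near 9 nm; the paper
    learns this from simulation rather than assuming a closed form."""
    return np.exp(-((diameter_nm - 9.0) ** 2) / 4.0)

# Toy greedy agent: the actions shrink or grow the SPION diameter by
# 0.2 nm, keeping whichever candidate maximizes the reward.
d = 5.0
for _ in range(100):
    d = max((d - 0.2, d, d + 0.2), key=reward)
print(f"converged diameter ~ {d:.1f} nm")
```

The agent climbs from 5 nm to the reward peak and then holds there, mirroring how cumulative-reward maximization settles on an optimal structural parameter.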

Improved image quality and diagnostic performance of coronary computed tomography angiography-derived fractional flow reserve with super-resolution deep learning reconstruction.

Zou LM, Xu C, Xu M, Xu KT, Wang M, Wang Y, Wang YN

pubmed · Sep 1 2025
The super-resolution deep learning reconstruction (SR-DLR) algorithm has emerged as a promising image reconstruction technique for improving the image quality of coronary computed tomography angiography (CCTA) and ensuring accurate CCTA-derived fractional flow reserve (CT-FFR) assessments even in problematic scenarios (e.g., the presence of heavily calcified plaque and stent implantation). Therefore, the purposes of this study were to evaluate the image quality of CCTA obtained with SR-DLR in comparison with conventional reconstruction methods and to investigate the diagnostic performance of different reconstruction approaches based on CT-FFR. Fifty patients who underwent CCTA and subsequent invasive coronary angiography (ICA) were retrospectively included. All images were reconstructed with hybrid iterative reconstruction (HIR), model-based iterative reconstruction (MBIR), conventional deep learning reconstruction (C-DLR), and SR-DLR algorithms. Objective parameters and subjective scores were compared. Among the patients, 22 (comprising 45 lesions) had invasive FFR results as a reference, and the diagnostic performance of the different reconstruction approaches based on CT-FFR was compared. SR-DLR achieved the lowest image noise, highest signal-to-noise ratio (SNR), and best edge sharpness (all P values <0.05), as well as the best subjective scores from both reviewers (all P values <0.001). With FFR serving as a reference, the specificity and positive predictive value (PPV) were improved as compared with HIR and C-DLR (72% <i>vs.</i> 36-44% and 73% <i>vs.</i> 53-58%, respectively); moreover, SR-DLR improved the sensitivity and negative predictive value (NPV) as compared to MBIR (95% <i>vs.</i> 70% and 95% <i>vs.</i> 68%, respectively; all P values <0.05). 
The overall diagnostic accuracy and area under the curve (AUC) for SR-DLR were significantly higher than those of the HIR, MBIR, and C-DLR algorithms (82% <i>vs.</i> 60-67% and 0.84 <i>vs.</i> 0.61-0.70, respectively; all P values <0.05). SR-DLR had the best image quality for both objective and subjective evaluation. The diagnostic performances of CT-FFR were improved by SR-DLR, enabling more accurate assessment of flow-limiting lesions.
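Sensitivity, specificity, PPV, NPV, and accuracy all follow from a 2x2 confusion table. The counts below are hypothetical, back-calculated only to mirror the SR-DLR percentages quoted above:

```python
def diagnostics(tp, fp, fn, tn):
    """Per-lesion diagnostic metrics from a 2 x 2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# 45 lesions with invasive FFR as reference; this split is a hypothetical
# illustration, not the study's actual confusion table.
m = diagnostics(tp=19, fp=7, fn=1, tn=18)
print({k: round(v, 2) for k, v in m.items()})
```

With this split the metrics come out near the reported 95% sensitivity, 72% specificity, 73% PPV, 95% NPV, and 82% accuracy, which shows how the five figures constrain one another.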

Impact of a deep learning image reconstruction algorithm on the robustness of abdominal computed tomography radiomics features using standard and low radiation doses.

Yang S, Bie Y, Zhao L, Luan K, Li X, Chi Y, Bian Z, Zhang D, Pang G, Zhong H

pubmed · Sep 1 2025
Deep learning image reconstruction (DLIR) can enhance image quality and lower image dose, yet its impact on radiomics features (RFs) remains unclear. This study aimed to compare the effects of DLIR and conventional adaptive statistical iterative reconstruction-Veo (ASIR-V) algorithms on the robustness of RFs using standard and low-dose abdominal clinical computed tomography (CT) scans. A total of 54 patients with hepatic masses who underwent abdominal contrast-enhanced CT scans were retrospectively analyzed. The raw data of standard dose in the venous phase and low dose in the delayed phase were reconstructed using five reconstruction settings, including ASIR-V at 30% (ASIR-V30%) and 70% (ASIR-V70%) levels, and DLIR at low (DLIR-L), medium (DLIR-M), and high (DLIR-H) levels. The PyRadiomics platform was used for the extraction of RFs in 18 regions of interest (ROIs) in different organs or tissues. The consistency of RFs among different algorithms and different strength levels was tested by the coefficient of variation (CV) and quartile coefficient of dispersion (QCD). The consistency of RFs among different strength levels of the same algorithm and clinically comparable levels across algorithms was evaluated by the intraclass correlation coefficient (ICC). Robust features were identified by Kruskal-Wallis and Mann-Whitney <i>U</i> tests. Among the five reconstruction methods, the mean CV and QCD in the standard-dose group were 0.364 and 0.213, respectively, and the corresponding values were 0.444 and 0.245 in the low-dose group. The mean ICC values between ASIR-V 30% and 70%, DLIR-L and M, DLIR-M and H, DLIR-L and H, ASIR-V30% and DLIR-M, and ASIR-V70% and DLIR-H were 0.672, 0.734, 0.756, 0.629, 0.724, and 0.651, respectively, in the standard-dose group, and the corresponding values were 0.500, 0.567, 0.700, 0.474, 0.499, and 0.650 in the low-dose group. 
The ICC values between DLIR-M and H under low-dose conditions were even higher than those between ASIR-V30% and -V70% under standard-dose conditions. Among the five reconstruction settings, averages of 14.0% (117/837) and 10.3% (86/837) of RFs across 18 ROIs exhibited robustness under standard-dose and low-dose conditions, respectively. Some 23.1% (193/837) of RFs demonstrated robustness between the low-dose DLIR-M and H groups, which was higher than the 21.0% (176/837) observed in the standard-dose ASIR-V30% and -V70% groups. Most of the RFs lacked reproducibility across algorithms and dose levels. However, DLIR at medium (M) and high (H) levels significantly improved RF consistency and robustness, even at reduced doses.
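CV and QCD, the two dispersion measures used above, are one-liners; the feature values below are hypothetical measurements of one radiomics feature under the five reconstruction settings:

```python
import numpy as np

def cv(x):
    """Coefficient of variation: sample SD over mean."""
    x = np.asarray(x, float)
    return x.std(ddof=1) / x.mean()

def qcd(x):
    """Quartile coefficient of dispersion: (Q3 - Q1) / (Q3 + Q1)."""
    q1, q3 = np.percentile(x, [25, 75])
    return (q3 - q1) / (q3 + q1)

# One radiomics feature under the five reconstruction settings
# (values hypothetical): low CV/QCD means the feature is robust
# to the choice of reconstruction algorithm.
feature = [112.0, 115.5, 110.8, 118.2, 113.9]
print(f"CV = {cv(feature):.3f}, QCD = {qcd(feature):.3f}")
```

Because QCD uses quartiles rather than the mean and SD, it is the less outlier-sensitive of the two, which is why studies often report both.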

Acoustic Interference Suppression in Ultrasound images for Real-Time HIFU Monitoring Using an Image-Based Latent Diffusion Model

Dejia Cai, Yao Ran, Kun Yang, Xinwang Shi, Yingying Zhou, Kexian Wu, Yang Xu, Yi Hu, Xiaowei Zhou

arxiv preprint · Sep 1 2025
High-Intensity Focused Ultrasound (HIFU) is a non-invasive therapeutic technique widely used for treating various diseases. However, the success and safety of HIFU treatments depend on real-time monitoring, which is often hindered by interference when using ultrasound to guide HIFU treatment. To address these challenges, we developed HIFU-ILDiff, a novel deep learning-based approach leveraging latent diffusion models to suppress HIFU-induced interference in ultrasound images. The HIFU-ILDiff model employs a Vector Quantized Variational Autoencoder (VQ-VAE) to encode noisy ultrasound images into a lower-dimensional latent space, followed by a latent diffusion model that iteratively removes interference. The denoised latent vectors are then decoded to reconstruct high-resolution, interference-free ultrasound images. We constructed a comprehensive dataset comprising 18,872 image pairs from in vitro phantoms, ex vivo tissues, and in vivo animal data across multiple imaging modalities and HIFU power levels to train and evaluate the model. Experimental results demonstrate that HIFU-ILDiff significantly outperforms the commonly used Notch Filter method, achieving a Structural Similarity Index (SSIM) of 0.796 and Peak Signal-to-Noise Ratio (PSNR) of 23.780 compared to SSIM of 0.443 and PSNR of 14.420 for the Notch Filter under in vitro scenarios. Additionally, HIFU-ILDiff achieves real-time processing at 15 frames per second, markedly faster than the Notch Filter's 5 seconds per frame. These findings indicate that HIFU-ILDiff is able to denoise HIFU interference in ultrasound guiding images for real-time monitoring during HIFU therapy, which will greatly improve the treatment precision in current clinical applications.
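PSNR, one of the two fidelity metrics reported for HIFU-ILDiff, can be computed from the mean squared error. A small sketch with simulated interference — the additive-Gaussian noise model is an assumption for illustration only, not the actual HIFU interference pattern:

```python
import numpy as np

def psnr(ref, img, data_range=1.0):
    """Peak signal-to-noise ratio in dB from the mean squared error."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(img, float)) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

rng = np.random.default_rng(4)
clean = rng.random((64, 64))                    # stand-in interference-free frame
noisy = np.clip(clean + rng.normal(0.0, 0.05, clean.shape), 0.0, 1.0)
val = psnr(clean, noisy)
print(f"PSNR = {val:.2f} dB")
```

A denoiser that raises PSNR from ~14 dB to ~24 dB, as reported above for HIFU-ILDiff versus the Notch Filter, is cutting the mean squared error by roughly an order of magnitude.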

Evaluating Undersampling Schemes and Deep Learning Reconstructions for High-Resolution 3D Double Echo Steady State Knee Imaging at 7 T: A Comparison Between GRAPPA, CAIPIRINHA, and Compressed Sensing.

Marth T, Marth AA, Kajdi GW, Nickel MD, Paul D, Sutter R, Nanz D, von Deuster C

pubmed · Sep 1 2025
The 3-dimensional (3D) double echo steady state (DESS) magnetic resonance imaging sequence can image knee cartilage with high, isotropic resolution, particularly at high and ultra-high field strengths. Advanced undersampling techniques with high acceleration factors can provide the short acquisition times required for clinical use. However, the optimal undersampling scheme and its limits are unknown. High-resolution isotropic (reconstructed voxel size: 0.3 × 0.3 × 0.3 mm<sup>3</sup>) 3D DESS images of 40 knees in 20 volunteers were acquired at 7 T with varying undersampling factors (R = 4-30) and schemes (regular: GRAPPA, CAIPIRINHA; incoherent: compressed sensing [CS]), whereas the remaining imaging parameters were kept constant. All imaging data were reconstructed with deep learning (DL) algorithms. Three readers rated image quality on a 4-point Likert scale. Four-fold accelerated GRAPPA was used as reference standard. Incidental cartilage lesions were graded on a modified Whole-Organ Magnetic Resonance Imaging Score (WORMS). Friedman's analysis of variance characterized rating differences. The interreader agreement was assessed using κ statistics. The quality of 16-fold accelerated CS images was not rated significantly different from that of 4-fold accelerated GRAPPA and 8-fold accelerated CAIPIRINHA images, whereas the corresponding data were acquired 4.5 and 2 times faster (01:12 min:s) than in 4-fold accelerated GRAPPA (5:22 min:s) and 8-fold accelerated CAIPIRINHA (2:22 min:s) acquisitions, respectively. Interreader agreement for incidental cartilage lesions was almost perfect for 4-fold accelerated GRAPPA (κ = 0.91), 8-fold accelerated CAIPIRINHA (κ = 0.86), and 8- to 16-fold accelerated CS (κ = 0.91). Our results suggest significant advantages of incoherent versus regular undersampling patterns for high-resolution 3D DESS cartilage imaging with high acceleration factors. 
The combination of CS undersampling with DL reconstruction enables fast, isotropic, high-resolution acquisitions without apparent impairment of image quality. Since DESS specific absorption rate values tend to be moderate, CS DESS with DL reconstruction promises potential for high-resolution assessment of cartilage morphology and other musculoskeletal anatomies at 7 T.
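Interreader agreement here is summarized with κ statistics. A small implementation of Cohen's kappa for two readers — the lesion grades below are hypothetical, not the study's WORMS data:

```python
import numpy as np

def cohens_kappa(r1, r2):
    """Cohen's kappa for two readers' categorical grades (WORMS-style)."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    cats = np.union1d(r1, r2)
    po = np.mean(r1 == r2)                                          # observed agreement
    pe = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in cats)     # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical cartilage-lesion grades from two readers on 20 knees.
reader1 = [0, 1, 0, 2, 0, 0, 1, 0, 3, 0, 0, 2, 1, 0, 0, 1, 0, 0, 2, 0]
reader2 = [0, 1, 0, 2, 0, 0, 1, 0, 3, 0, 0, 2, 0, 0, 0, 1, 0, 0, 2, 1]
k = cohens_kappa(reader1, reader2)
print(f"kappa = {k:.2f}")
```

Kappa discounts the agreement expected by chance from the grade frequencies, which is why values above roughly 0.8, as reported above, are read as "almost perfect" agreement.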

Deep learning-based super-resolution method for projection image compression in radiotherapy.

Chang Z, Shang J, Fan Y, Huang P, Hu Z, Zhang K, Dai J, Yan H

pubmed · Sep 1 2025
Cone-beam computed tomography (CBCT) is a three-dimensional (3D) imaging method designed for routine target verification of cancer patients during radiotherapy. The images are reconstructed from a sequence of projection images obtained by the on-board imager attached to a radiotherapy machine. CBCT images are usually stored in a health information system, but the projection images are mostly abandoned due to their massive volume. To store them economically, in this study, a deep learning (DL)-based super-resolution (SR) method for compressing the projection images was investigated. In image compression, low-resolution (LR) images were down-sampled by a factor from the high-resolution (HR) projection images and then encoded to the video file. In image restoration, LR images were decoded from the video file and then up-sampled to HR projection images via the DL network. Three SR DL networks, convolutional neural network (CNN), residual network (ResNet), and generative adversarial network (GAN), were tested along with three video coding-decoding (CODEC) algorithms: Advanced Video Coding (AVC), High Efficiency Video Coding (HEVC), and AOMedia Video 1 (AV1). Based on the two databases of the natural and projection images, the performance of the SR networks and video codecs was evaluated with the compression ratio (CR), peak signal-to-noise ratio (PSNR), video quality metric (VQM), and structural similarity index measure (SSIM). The codec AV1 achieved the highest CR among the three codecs. The CRs of AV1 were 13.91, 42.08, 144.32, and 289.80 for the down-sampling factors (DSFs) of 0 (non-SR), 2, 4, and 6, respectively. The SR network ResNet achieved the best restoration accuracy among the three SR networks. Its PSNRs were 69.08, 41.60, 37.08, and 32.44 dB for the four DSFs, respectively; its VQMs were 0.06%, 3.65%, 6.95%, and 13.03% for the four DSFs, respectively; and its SSIMs were 0.9984, 0.9878, 0.9798, and 0.9518 for the four DSFs, respectively. 
As the DSF increased, the CR increased proportionally with the modest degradation of the restored images. The application of the SR model can further improve the CR based on the current result achieved by the video encoders. This compression method is not only effective for the two-dimensional (2D) projection images, but also applicable to the 3D images used in radiotherapy.
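The compression ratio (CR) reported per codec is simply the original data size over the encoded size. A sketch with illustrative sizes — the frame count, resolution, and file size below are assumptions, not the paper's measurements:

```python
def compression_ratio(orig_bytes, encoded_bytes):
    """CR = original size over compressed size, as reported for each codec."""
    return orig_bytes / encoded_bytes

# One projection sequence: 600 frames of 1024 x 1024 16-bit pixels
# (illustrative numbers only).
orig = 600 * 1024 * 1024 * 2
encoded = 75_000_000  # hypothetical AV1 output size in bytes
cr = compression_ratio(orig, encoded)
print(f"CR = {cr:.2f}")
```

Down-sampling before encoding shrinks `orig`'s effective content per frame, which is why the reported CRs climb steeply with the DSF while the restored-image PSNR degrades only modestly.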