MUSiK: An Open Source Simulation Library for 3D Multi-view Ultrasound.

Chan TJ, Nair-Kanneganti A, Anthony B, Pouch A

PubMed · Sep 4, 2025
Diagnostic ultrasound has long filled a crucial niche in medical imaging thanks to its portability, affordability, and favorable safety profile. Now, multi-view hardware and deep-learning-based image reconstruction algorithms promise to extend this niche to increasingly sophisticated applications, such as volume rendering and long-term organ monitoring. However, progress on these fronts is impeded by the complexities of ultrasound electronics and by the scarcity of high-fidelity radiofrequency data. Evidently, there is a critical need for tools that enable rapid ultrasound prototyping and generation of synthetic data. We meet this need with MUSiK, the first open-source ultrasound simulation library expressly designed for multi-view acoustic simulations of realistic anatomy. This library covers the full gamut of image acquisition: building anatomical digital phantoms, defining and positioning diverse transducer types, running simulations, and reconstructing images. In this paper, we demonstrate several use cases for MUSiK. We simulate in vitro multi-view experiments and compare the resolution and contrast of the resulting images. We then perform multiple conventional and experimental in vivo imaging tasks, such as 2D scans of the kidney, 2D and 3D echocardiography, 2.5D tomography of large regions, and 3D tomography for lesion detection in soft tissue. Finally, we introduce MUSiK's Bayesian reconstruction framework for multi-view ultrasound and validate an original SNR-enhancing reconstruction algorithm. We anticipate that these unique features will seed new hypotheses and accelerate the overall pace of ultrasound technological development. The MUSiK library is publicly available at github.com/norway99/MUSiK.
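As a concrete illustration of the acquisition pipeline the abstract describes (phantom, transducers, simulation, reconstruction), the sketch below walks through a toy multi-view workflow. All class and function names are hypothetical placeholders, not MUSiK's actual API; see the repository for the real interface.

```python
# Hypothetical sketch of a multi-view ultrasound simulation workflow.
# Class and method names are illustrative placeholders, NOT the actual MUSiK API;
# consult github.com/norway99/MUSiK for the real interface.
import numpy as np

class Phantom:
    """Toy anatomical phantom: a 3D grid of acoustic properties."""
    def __init__(self, shape=(64, 64, 64)):
        self.sound_speed = np.full(shape, 1540.0)   # m/s, soft-tissue background
        self.density = np.full(shape, 1000.0)       # kg/m^3

    def add_sphere(self, center, radius, sound_speed):
        zz, yy, xx = np.indices(self.sound_speed.shape)
        mask = (zz - center[0])**2 + (yy - center[1])**2 + (xx - center[2])**2 <= radius**2
        self.sound_speed[mask] = sound_speed

def simulate_view(phantom, angle_deg):
    """Stand-in for a full-wave acoustic solver: returns a fake per-view image."""
    rotated = np.rot90(phantom.sound_speed.mean(axis=0), k=int(angle_deg // 90))
    noise = np.random.default_rng(0).normal(scale=5.0, size=rotated.shape)
    return rotated + noise

def fuse_views(views):
    """Naive multi-view compounding: average registered views to boost SNR."""
    return np.mean(np.stack(views), axis=0)

phantom = Phantom()
phantom.add_sphere(center=(32, 32, 32), radius=8, sound_speed=1580.0)  # a lesion
views = [simulate_view(phantom, angle) for angle in (0, 90, 180, 270)]
compound = fuse_views(views)
print(compound.shape)
```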

Learning functions through Diffusion Maps

Alvaro Almeida Gomez

arXiv preprint · Sep 3, 2025
We propose a data-driven method for approximating real-valued functions on smooth manifolds, building on the Diffusion Maps framework under the manifold hypothesis. Given pointwise evaluations of a function, the method constructs a smooth extension to the ambient space by exploiting diffusion geometry and its connection to the heat equation and the Laplace-Beltrami operator. To address the computational challenges of high-dimensional data, we introduce a dimensionality reduction strategy based on the low-rank structure of the distance matrix, revealed via singular value decomposition (SVD). In addition, we develop an online updating mechanism that enables efficient incorporation of new data, thereby improving scalability and reducing computational cost. Numerical experiments, including applications to sparse CT reconstruction, demonstrate that the proposed methodology outperforms classical feedforward neural networks and interpolation methods in terms of both accuracy and efficiency.
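A minimal sketch of the underlying idea, assuming only a Gaussian heat kernel and pointwise samples: extend a function to new points by averaging its known values with row-normalized kernel weights. This omits the paper's SVD-based low-rank compression and online update mechanism.

```python
# Minimal sketch of a kernel-based out-of-sample function extension in the spirit of
# Diffusion Maps. NOT the authors' exact algorithm: it omits the SVD-based low-rank
# compression and the online updating described in the abstract.
import numpy as np

def diffusion_extension(X, f, Y, eps=0.5):
    """Extend f (sampled at points X) to new points Y via a normalized heat kernel.

    X : (n, d) sample locations, f : (n,) function values, Y : (m, d) query points.
    """
    d2 = np.sum((Y[:, None, :] - X[None, :, :])**2, axis=-1)   # (m, n) squared distances
    K = np.exp(-d2 / eps)                                       # heat-kernel weights
    P = K / K.sum(axis=1, keepdims=True)                        # row-stochastic (Markov) kernel
    return P @ f                                                # weighted average = smooth extension

# Toy check: extend sin(3t) sampled on a 1D manifold (circle) embedded in 2D.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 2 * np.pi, 200))
X = np.stack([np.cos(t), np.sin(t)], axis=1)
f = np.sin(3 * t)
t_new = rng.uniform(0, 2 * np.pi, 20)
Y = np.stack([np.cos(t_new), np.sin(t_new)], axis=1)
print(np.max(np.abs(diffusion_extension(X, f, Y, eps=0.05) - np.sin(3 * t_new))))
```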

Analog optical computer for AI inference and combinatorial optimization.

Kalinin KP, Gladrow J, Chu J, Clegg JH, Cletheroe D, Kelly DJ, Rahmani B, Brennan G, Canakci B, Falck F, Hansen M, Kleewein J, Kremer H, O'Shea G, Pickup L, Rajmohan S, Rowstron A, Ruhle V, Braine L, Khedekar S, Berloff NG, Gkantsidis C, Parmigiani F, Ballani H

PubMed · Sep 3, 2025
Artificial intelligence (AI) and combinatorial optimization drive applications across science and industry, but their increasing energy demands challenge the sustainability of digital computing. Most unconventional computing systems (refs. 1-7) target either AI or optimization workloads and rely on frequent, energy-intensive digital conversions, limiting efficiency. These systems also face application-hardware mismatches, whether handling memory-bottlenecked neural models, mapping real-world optimization problems or contending with inherent analog noise. Here we introduce an analog optical computer (AOC) that combines analog electronics and three-dimensional optics to accelerate AI inference and combinatorial optimization in a single platform. This dual-domain capability is enabled by a rapid fixed-point search, which avoids digital conversions and enhances noise robustness. With this fixed-point abstraction, the AOC implements emerging compute-bound neural models with recursive reasoning potential and realizes an advanced gradient-descent approach for expressive optimization. We demonstrate the benefits of co-designing the hardware and abstraction, echoing the co-evolution of digital accelerators and deep learning models, through four case studies: image classification, nonlinear regression, medical image reconstruction and financial transaction settlement. Built with scalable, consumer-grade technologies, the AOC paves a promising path for faster, more sustainable computing. Its native support for iterative, compute-intensive models offers a scalable analog platform for fostering future innovation in AI and optimization.
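The fixed-point abstraction itself can be illustrated in a few lines of numerics: repeatedly apply a damped update until the state stops changing. This generic sketch is not the AOC's hardware update rule; it only shows the kind of iteration an equilibrium-style model relies on.

```python
# Generic software illustration of a fixed-point search: iterate a damped update until
# the state converges. NOT the AOC hardware's actual update rule.
import numpy as np

def fixed_point(W, b, x0, damping=0.5, tol=1e-8, max_iter=1000):
    """Solve x = tanh(W @ x + b) by damped fixed-point iteration."""
    x = x0
    for _ in range(max_iter):
        x_new = (1 - damping) * x + damping * np.tanh(W @ x + b)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

rng = np.random.default_rng(0)
n = 8
W = 0.4 * rng.standard_normal((n, n)) / np.sqrt(n)   # small spectral norm -> contraction
b = rng.standard_normal(n)
x_star = fixed_point(W, b, np.zeros(n))
print(np.linalg.norm(x_star - np.tanh(W @ x_star + b)))   # residual ~ 0
```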

Disentangled deep learning method for interior tomographic reconstruction of low-dose X-ray CT.

Chen C, Zhang L, Gao H, Wang Z, Xing Y, Chen Z

PubMed · Sep 3, 2025
Objective: Low-dose interior tomography integrates low-dose CT (LDCT) with region-of-interest (ROI) imaging and finds wide application in radiation dose reduction and high-resolution imaging. However, the combined effects of noise and data truncation pose great challenges for accurate tomographic reconstruction. This study aims to develop a novel reconstruction framework that achieves high-quality ROI reconstruction and efficiently extends the recoverable region, addressing these coupled ill-posed problems. Approach: We conducted a comprehensive analysis of projection data composition and angular sampling patterns in low-dose interior tomography. Based on this analysis, we proposed two deep learning-based reconstruction pipelines: (1) Deep Projection Extraction-based Reconstruction (DPER), which focuses on ROI reconstruction by disentangling and extracting noise and background projection contributions with a dual-domain deep neural network; and (2) DPER with Progressive extension (DPER-Pro), which enhances DPER with a progressive "coarse-to-fine" strategy for missing-data compensation, enabling simultaneous ROI reconstruction and extension of the recoverable region. The proposed methods were rigorously evaluated through extensive experiments on simulated torso datasets and real CT scans of a torso phantom. Main Results: DPER effectively handles the coupled ill-posed problem and achieves high-quality ROI reconstructions by accurately extracting noise and background projections. DPER-Pro extends the recoverable region while preserving ROI image quality by leveraging disentangled projection components and angular sampling patterns. Both methods outperform competing approaches in reconstructing reliable structures, enhancing generalization, and mitigating noise and truncation artifacts. Significance: This work presents a decoupled deep learning framework for low-dose interior tomography that provides a robust and effective solution to the challenges posed by noise and truncated projections. The proposed methods significantly improve ROI reconstruction quality while efficiently recovering structural information in exterior regions, offering a promising pathway for advancing low-dose ROI imaging across a wide range of applications.
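The projection-decomposition idea behind DPER can be sketched as follows, assuming a toy convolutional extractor in place of the authors' dual-domain network: the measured truncated sinogram is modeled as ROI signal plus background plus noise, and the network is asked to strip out the latter two before reconstruction.

```python
# Conceptual sketch of the projection decomposition (illustrative only, not the authors'
# network): measured sinogram = ROI projection + exterior "background" projection + noise,
# with a learned extractor recovering the ROI component before image reconstruction.
import torch
import torch.nn as nn

class ProjectionExtractor(nn.Module):
    """Toy projection-domain extractor standing in for the paper's dual-domain network."""
    def __init__(self, channels=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 2, 3, padding=1),   # predicts noise + background sinograms
        )

    def forward(self, measured_sinogram):
        noise_and_background = self.net(measured_sinogram)
        noise, background = noise_and_background[:, :1], noise_and_background[:, 1:]
        roi_sinogram = measured_sinogram - noise - background
        return roi_sinogram, noise, background

sino = torch.randn(1, 1, 180, 128)          # (batch, channel, angles, detector bins)
roi, noise, background = ProjectionExtractor()(sino)
print(roi.shape)
```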

Optimizing Paths for Adaptive Fly-Scan Microscopy: An Extended Version

Yu Lu, Thomas F. Lynn, Ming Du, Zichao Di, Sven Leyffer

arXiv preprint · Sep 2, 2025
In x-ray microscopy, traditional raster-scanning techniques are used to acquire a microscopic image in a series of step-scans. Alternatively, scanning the x-ray probe along a continuous path, called a fly-scan, reduces scan time and increases scan efficiency. However, not all regions of an image are equally important. Currently used fly-scan methods do not adapt to the characteristics of the sample during the scan, often wasting time in uniform, uninteresting regions. One approach to avoid unnecessary scanning in uniform regions for raster step-scans is to use deep learning techniques to select a shorter optimal scan path instead of a traditional raster scan path, followed by reconstructing the entire image from the partially scanned data. However, this approach heavily depends on the quality of the initial sampling, requires a large dataset for training, and incurs high computational costs. We propose leveraging the fly-scan method along an optimal scanning path, focusing on regions of interest (ROIs) and using image completion techniques to reconstruct details in non-scanned areas. This approach further shortens the scanning process and potentially decreases x-ray exposure dose while maintaining high-quality and detailed information in critical regions. To achieve this, we introduce a multi-iteration fly-scan framework that adapts to the scanned image. Specifically, in each iteration, we define two key functions: (1) a score function to generate initial anchor points and identify potential ROIs, and (2) an objective function to optimize the anchor points for convergence to an optimal set. Using these anchor points, we compute the shortest scanning path between optimized anchor points, perform the fly-scan, and subsequently apply image completion based on the acquired information in preparation for the next scan iteration.
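A hedged sketch of one iteration of this loop, with deliberately simple stand-ins (gradient-magnitude scoring and a greedy nearest-neighbour path) rather than the authors' specific score and objective functions:

```python
# Sketch of one iteration of an adaptive fly-scan loop. The score function and path
# heuristic are simple placeholders, not the paper's formulations.
import numpy as np

def score_map(image):
    """Placeholder score: local gradient magnitude highlights candidate ROIs."""
    gy, gx = np.gradient(image)
    return np.hypot(gx, gy)

def pick_anchors(score, k=20):
    """Take the k highest-scoring pixels as anchor points."""
    idx = np.argsort(score, axis=None)[-k:]
    return np.stack(np.unravel_index(idx, score.shape), axis=1).astype(float)

def greedy_path(points):
    """Approximate shortest path through the anchors (nearest-neighbour heuristic)."""
    remaining = list(range(1, len(points)))
    order = [0]
    while remaining:
        last = points[order[-1]]
        nxt = min(remaining, key=lambda i: np.linalg.norm(points[i] - last))
        order.append(nxt)
        remaining.remove(nxt)
    return points[order]

def fly_scan_iteration(current_estimate, k=20):
    anchors = pick_anchors(score_map(current_estimate), k)
    path = greedy_path(anchors)
    # In practice: drive the probe along `path`, acquire data, then run image
    # completion on the new samples to update `current_estimate` for the next iteration.
    return path

estimate = np.random.default_rng(0).random((128, 128))
print(fly_scan_iteration(estimate).shape)   # (20, 2) ordered anchor coordinates
```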

3D MR Neurography of Craniocervical Nerves: Comparing Double-Echo Steady-State and Postcontrast STIR with Deep Learning-Based Reconstruction at 1.5T.

Ensle F, Zecca F, Kerber B, Lohezic M, Wen Y, Kroschke J, Pawlus K, Guggenberger R

PubMed · Sep 2, 2025
3D MR neurography is a useful diagnostic tool in head and neck disorders, but neurographic imaging remains challenging in this region. Optimal sequences for nerve visualization have not yet been established and may also differ between nerves. While deep learning (DL) reconstruction can enhance nerve depiction, particularly at 1.5T, studies in the head and neck are lacking. The purpose of this study was to compare double-echo steady-state (DESS) and postcontrast STIR sequences in DL-reconstructed 3D MR neurography of the extraforaminal cranial and spinal nerves at 1.5T. Eighteen consecutive examinations of 18 patients undergoing head-and-neck MRI at 1.5T were retrospectively included (mean age: 51 ± 14 years, 11 women). 3D DESS and postcontrast 3D STIR sequences were obtained as part of the standard protocol and reconstructed with a prototype DL algorithm. Two blinded readers qualitatively evaluated visualization of the inferior alveolar, lingual, facial, hypoglossal, greater occipital, lesser occipital, and greater auricular nerves, as well as overall image quality, vascular suppression, and artifacts. Additionally, apparent SNR and contrast-to-noise ratios of the inferior alveolar and greater occipital nerve were measured. Visual ratings and quantitative measurements, respectively, were compared between sequences using the Wilcoxon signed-rank test. DESS demonstrated significantly improved visualization of the lesser occipital nerve, greater auricular nerve, and proximal greater occipital nerve (P < .015). Postcontrast STIR showed significantly enhanced visualization of the lingual nerve, hypoglossal nerve, and distal inferior alveolar nerve (P < .001). The facial nerve, proximal inferior alveolar nerve, and distal greater occipital nerve did not demonstrate significant differences in visualization between sequences (P > .08). There was also no significant difference in overall image quality or artifacts. Postcontrast STIR achieved superior vascular suppression, reaching statistical significance for one reader (P = .039). Quantitatively, there was no significant difference between sequences (P > .05). Our findings suggest that 3D DESS generally provides improved visualization of spinal nerves, while postcontrast 3D STIR facilitates enhanced delineation of extraforaminal cranial nerves.
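For readers unfamiliar with the statistical setup, the comparison amounts to paired, nonparametric tests on per-case scores from the two sequences; a minimal example with synthetic toy ratings (not study data) is shown below.

```python
# Minimal illustration of the paired, nonparametric comparison described above:
# a Wilcoxon signed-rank test on per-case ratings from the two sequences.
# The arrays below are synthetic toy values, NOT data from the study.
from scipy.stats import wilcoxon

dess_scores = [4, 3, 4, 5, 3, 4, 4, 3, 5, 4, 3, 4, 4, 5, 3, 4, 4, 3]   # toy reader ratings
stir_scores = [3, 3, 4, 4, 3, 3, 4, 3, 4, 4, 3, 3, 4, 4, 3, 4, 3, 3]

stat, p = wilcoxon(dess_scores, stir_scores)   # zero-difference pairs are dropped by default
print(f"Wilcoxon statistic = {stat}, p = {p:.3f}")
```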

Super-Resolution MR Spectroscopic Imaging via Diffusion Models for Tumor Metabolism Mapping.

Alsubaie M, Perera SM, Gu L, Subasi SB, Andronesi OC, Li X

PubMed · Sep 2, 2025
High-resolution magnetic resonance spectroscopic imaging (MRSI) plays a crucial role in characterizing tumor metabolism and guiding clinical decisions for glioma patients. However, due to inherently low metabolite concentrations and signal-to-noise ratio (SNR) limitations, MRSI data are often acquired at low spatial resolution, hindering accurate visualization of tumor heterogeneity and margins. In this study, we propose a novel deep learning framework based on conditional denoising diffusion probabilistic models for super-resolution reconstruction of MRSI, with a particular focus on mutant isocitrate dehydrogenase (IDH) gliomas. The model progressively transforms noise into high-fidelity metabolite maps through a learned reverse diffusion process, conditioned on low-resolution inputs. Leveraging a Self-Attention UNet backbone, the proposed approach integrates global contextual features and achieves superior detail preservation. On simulated patient data, the proposed method achieved Structural Similarity Index Measure (SSIM) values of 0.956, 0.939, and 0.893; Peak Signal-to-Noise Ratio (PSNR) values of 29.73, 27.84, and 26.39 dB; and Learned Perceptual Image Patch Similarity (LPIPS) values of 0.025, 0.036, and 0.045 for upsampling factors of 2, 4, and 8, respectively, with LPIPS improvements statistically significant compared to all baselines (p < 0.01). We validated the framework on in vivo MRSI from healthy volunteers and glioma patients, where it accurately reconstructed small lesions, preserved critical textural and structural information, and enhanced tumor boundary delineation in metabolic ratio maps, revealing heterogeneity not visible in other approaches. These results highlight the promise of diffusion-based deep learning models as clinically relevant tools for noninvasive, high-resolution metabolic imaging in glioma and potentially other neurological disorders.
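The conditional sampling loop at the core of such a model follows the standard DDPM recipe, with the low-resolution map concatenated to the input of the noise predictor at every step. The sketch below uses an untrained placeholder denoiser rather than the paper's Self-Attention UNet.

```python
# Schematic sketch of conditional diffusion sampling for super-resolution: the reverse
# process is conditioned on the (upsampled) low-resolution metabolite map. The denoiser
# is an untrained placeholder, not the paper's Self-Attention UNet.
import torch
import torch.nn as nn

T = 100
betas = torch.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

denoiser = nn.Sequential(           # placeholder epsilon-predictor: input = [x_t, condition]
    nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)

@torch.no_grad()
def sample(low_res_condition):
    """Run the reverse diffusion (here with an untrained net), conditioned on the LR map."""
    x = torch.randn_like(low_res_condition)
    for t in reversed(range(T)):
        eps = denoiser(torch.cat([x, low_res_condition], dim=1))
        mean = (x - betas[t] / torch.sqrt(1 - alpha_bars[t]) * eps) / torch.sqrt(alphas[t])
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise
    return x

lr_map = torch.rand(1, 1, 32, 32)            # upsampled low-resolution metabolite map
hr_estimate = sample(lr_map)
print(hr_estimate.shape)
```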

Navigator motion-resolved MR fingerprinting using implicit neural representation: Feasibility for free-breathing three-dimensional whole-liver multiparametric mapping.

Li C, Li J, Zhang J, Solomon E, Dimov AV, Spincemaille P, Nguyen TD, Prince MR, Wang Y

PubMed · Sep 2, 2025
To develop free-breathing, three-dimensional, whole-liver multiparametric mapping of water T1, water T2, fat fraction (FF), and R2*. A multi-echo 3D stack-of-spiral gradient-echo sequence with inversion recovery and T2-prep magnetization preparations was implemented for multiparametric MRI. Fingerprinting and a neural network based on implicit neural representation (FINR) were developed to simultaneously reconstruct the motion deformation fields and static images, perform water-fat separation, and generate T1, T2, R2*, and FF maps. FINR performance was evaluated in 10 healthy subjects by comparison with quantitative maps generated using conventional breath-holding imaging. FINR consistently generated sharp images in all subjects, free of motion artifacts. FINR showed minimal bias and narrow 95% limits of agreement for T1, T2, R2*, and FF values in the liver compared with conventional imaging. FINR training took about 3 h per subject, and FINR inference took less than 1 min to produce static images and motion deformation fields. FINR is a promising approach for 3D whole-liver T1, T2, R2*, and FF mapping in a single free-breathing continuous scan.
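The core of an implicit neural representation is a coordinate MLP fitted to sampled signal values; the toy example below shows only that core, not FINR's full pipeline (motion deformation fields, water-fat separation, and fingerprint matching).

```python
# Generic illustration of an implicit neural representation (INR): a small MLP maps
# spatial coordinates to a signal value and is fitted to sampled points.
# FINR's actual architecture is considerably more involved.
import torch
import torch.nn as nn

class CoordinateMLP(nn.Module):
    """Maps normalized (x, y, z) coordinates to a signal value."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, coords):
        return self.net(coords)

inr = CoordinateMLP()
coords = torch.rand(1024, 3) * 2 - 1                       # random points in [-1, 1]^3
target = torch.sin(3 * coords).sum(dim=1, keepdim=True)    # toy "volume" to memorize
optimizer = torch.optim.Adam(inr.parameters(), lr=1e-3)
for step in range(200):                                    # fit the INR to the samples
    loss = nn.functional.mse_loss(inr(coords), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
print(float(loss))
```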

Application and assessment of deep learning to routine 2D T2 FLEX spine imaging at 1.5T.

Shaikh IS, Milshteyn E, Chulsky S, Maclellan CJ, Soman S

PubMed · Sep 2, 2025
2D T2 FSE is an essential routine spine MRI sequence, allowing assessment of fractures, soft tissues, and pathology. Fat suppression using a DIXON-type approach (2D FLEX) improves water/fat separation. Recently, a deep learning (DL) reconstruction (AIR™ Recon DL, GE HealthCare) became available for 2D FLEX, offering increased signal-to-noise ratio (SNR), reduced artifacts, and sharper images. This study aimed to compare DL-reconstructed versus non-DL-reconstructed spine 2D T2 FLEX images for diagnostic image quality and quantitative metrics at 1.5T. Forty-one patients with clinically indicated cervical or lumbar spine MRI were scanned between May and August 2023 on a 1.5T Voyager (GE HealthCare). A 2D T2 FLEX sequence was acquired, and DL-based reconstruction (noise reduction strength: 75%) was applied. Raw data were also reconstructed without DL. Three readers (CAQ-neuroradiologist, PGY-6 neuroradiology fellow, PGY-2 radiology resident) rated diagnostic preference (0 = non-DL, 1 = DL, 2 = equivalent) for 39 cases. Quantitative measures (SNR, total variation [TV], number of edges, and fat fraction [FF]) were compared using paired t-tests with significance set at p < .05. Among evaluations, 79.5% preferred DL, 11% found images equivalent, and 9.4% favored non-DL, with strong inter-rater agreement (p < .001, Fleiss' Kappa = 0.99). DL images had higher SNR, lower TV, and fewer edges (p < .001), indicating effective noise reduction. FF remained statistically unchanged in subcutaneous fat (p = .25) but differed slightly in vertebral bodies (1.4% difference, p = .01). DL reconstruction notably improved image quality by enhancing SNR and reducing noise without clinically meaningful changes in fat quantification. These findings support the use of DL-enhanced 2D T2 FLEX in routine spine imaging at 1.5T. Incorporating DL-based reconstruction into standard spine MRI protocols can increase diagnostic confidence and workflow efficiency. Further studies with larger cohorts and diverse pathologies are warranted to refine this approach and explore potential benefits for clinical decision-making.
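The quantitative side of such a comparison reduces to computing per-image metrics (apparent SNR, total variation) for both reconstructions and testing the paired differences; the sketch below uses synthetic stand-in values, not the study's measurements.

```python
# Sketch of the kind of quantitative comparison reported above: per-image SNR and
# total variation for DL and non-DL reconstructions, compared with a paired t-test.
# All values here are synthetic stand-ins, NOT the study's measurements.
import numpy as np
from scipy.stats import ttest_rel

def snr(image, signal_mask, noise_mask):
    """Apparent SNR: mean signal over the standard deviation of a background region."""
    return image[signal_mask].mean() / image[noise_mask].std()

def total_variation(image):
    """Sum of absolute intensity differences between neighbouring pixels."""
    return np.abs(np.diff(image, axis=0)).sum() + np.abs(np.diff(image, axis=1)).sum()

rng = np.random.default_rng(0)
tv_dl = rng.normal(100, 5, size=41)              # toy per-patient TV values, DL recon
tv_non_dl = tv_dl + rng.normal(8, 3, size=41)    # non-DL recon: noisier, higher TV

stat, p = ttest_rel(tv_dl, tv_non_dl)
print(f"paired t-test: t = {stat:.2f}, p = {p:.4f}")
```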