
BodyGPS: Anatomical Positioning System

Halid Ziya Yerebakan, Kritika Iyer, Xueqi Guo, Yoshihisa Shinagawa, Gerardo Hermosillo Valadez

arXiv preprint · May 12, 2025
We introduce a new type of foundational model for parsing human anatomy in medical images that works for different modalities. It supports supervised or unsupervised training and can perform matching, registration, classification, or segmentation with or without user interaction. We achieve this by training a neural network estimator that maps query locations to atlas coordinates via regression. Efficiency is improved by sparsely sampling the input, enabling response times of less than 1 ms without additional accelerator hardware. We demonstrate the utility of the algorithm in both CT and MRI modalities.
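
As an illustration of the coordinate-regression idea described above, the sketch below shows a minimal network that maps sparsely sampled intensities around a query location to a 3-D atlas coordinate. It is a hypothetical toy, not the authors' implementation; the sampling pattern, network size, and atlas definition are all assumptions.

```python
# Minimal sketch of coordinate-regression anatomical positioning, assuming a
# query point is described by image intensities sampled at sparse offsets and
# the network regresses the corresponding atlas coordinate. Illustrative only.
import torch
import torch.nn as nn

class AtlasCoordRegressor(nn.Module):
    def __init__(self, n_samples: int = 64, hidden: int = 256):
        super().__init__()
        # input: n_samples intensities + the 3-D query location in scanner space
        self.net = nn.Sequential(
            nn.Linear(n_samples + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),          # predicted (x, y, z) in atlas space
        )

    def forward(self, sparse_intensities: torch.Tensor, query_xyz: torch.Tensor):
        return self.net(torch.cat([sparse_intensities, query_xyz], dim=-1))

# toy usage: a batch of 8 queries, each with 64 sparse intensity samples
model = AtlasCoordRegressor()
intensities = torch.randn(8, 64)
queries = torch.rand(8, 3)
atlas_xyz = model(intensities, queries)    # (8, 3) atlas coordinates
```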

JSover: Joint Spectrum Estimation and Multi-Material Decomposition from Single-Energy CT Projections

Qing Wu, Hongjiang Wei, Jingyi Yu, S. Kevin Zhou, Yuyao Zhang

arXiv preprint · May 12, 2025
Multi-material decomposition (MMD) enables quantitative reconstruction of tissue compositions in the human body, supporting a wide range of clinical applications. However, traditional MMD typically requires spectral CT scanners and pre-measured X-ray energy spectra, significantly limiting clinical applicability. To address this, various methods have been developed to perform MMD using conventional (i.e., single-energy, SE) CT systems, commonly referred to as SEMMD. Despite promising progress, most SEMMD methods follow a two-step image decomposition pipeline, which first reconstructs monochromatic CT images using algorithms such as FBP, and then performs decomposition on these images. The initial reconstruction step, however, neglects the energy-dependent attenuation of human tissues, introducing severe nonlinear beam hardening artifacts and noise into the subsequent decomposition. This paper proposes JSover, a fundamentally reformulated one-step SEMMD framework that jointly reconstructs multi-material compositions and estimates the energy spectrum directly from SECT projections. By explicitly incorporating physics-informed spectral priors into the SEMMD process, JSover accurately simulates a virtual spectral CT system from SE acquisitions, thereby improving the reliability and accuracy of decomposition. Furthermore, we introduce an implicit neural representation (INR) as an unsupervised deep learning solver for representing the underlying material maps. The inductive bias of INR toward continuous image patterns constrains the solution space and further enhances estimation quality. Extensive experiments on both simulated and real CT datasets show that JSover outperforms state-of-the-art SEMMD methods in accuracy and computational efficiency.
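
The sketch below illustrates what an INR parameterisation of material maps might look like: a coordinate MLP with Fourier positional encoding whose softmax output gives per-pixel material fractions. The encoding, network size, and the softmax constraint are assumptions for illustration; the joint spectrum estimation and projection-domain physics of JSover are omitted.

```python
# Minimal sketch of an implicit neural representation (INR) for material maps,
# assuming Fourier positional encoding and a per-pixel softmax over materials.
# Illustrative only; not the JSover implementation.
import torch
import torch.nn as nn

class MaterialINR(nn.Module):
    def __init__(self, n_materials: int = 3, n_freqs: int = 8, hidden: int = 128):
        super().__init__()
        self.register_buffer("freqs", 2.0 ** torch.arange(n_freqs) * torch.pi)
        in_dim = 2 * 2 * n_freqs            # sin/cos for each of (x, y)
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_materials),
        )

    def forward(self, xy: torch.Tensor) -> torch.Tensor:
        # xy: (N, 2) coordinates in [-1, 1]; returns (N, n_materials) fractions
        enc = xy[..., None] * self.freqs                  # (N, 2, n_freqs)
        enc = torch.cat([enc.sin(), enc.cos()], dim=-1).flatten(-2)
        return self.mlp(enc).softmax(dim=-1)

coords = torch.rand(1024, 2) * 2 - 1
fractions = MaterialINR()(coords)                         # continuous material maps
```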

From Genome to Phenome: Opportunities and Challenges of Molecular Imaging.

Tian M, Hood L, Chiti A, Schwaiger M, Minoshima S, Watanabe Y, Kang KW, Zhang H

PubMed paper · May 8, 2025
The study of the human phenome is essential for understanding the complexities of wellness and disease and their transitions, with molecular imaging being a vital tool in this exploration. Molecular imaging embodies the 4 principles of human phenomics: precise measurement, accurate calculation or analysis, well-controlled manipulation or intervention, and innovative invention or creation. Its application has significantly enhanced the precision, individualization, and effectiveness of medical interventions. This article provides an overview of molecular imaging's technologic advancements and presents the potential use of molecular imaging in human phenomics and precision medicine. The integration of molecular imaging with multiomics data and artificial intelligence has the potential to transform health care, promoting proactive and preventive strategies. This evolving approach promises to deepen our understanding of the human phenome, lead to preclinical diagnostics and treatments, and establish quantitative frameworks for precision health management.

Potential of artificial intelligence for radiation dose reduction in computed tomography -A scoping review.

Bani-Ahmad M, England A, McLaughlin L, Hadi YH, McEntee M

PubMed paper · May 7, 2025
Artificial intelligence (AI) is now transforming medical imaging, with extensive ramifications for nearly every aspect of diagnostic imaging, including computed tomography (CT). The current work aims to review, evaluate, and summarise the role of AI in radiation dose optimisation across three fundamental domains in CT: patient positioning, scan range determination, and image reconstruction. A comprehensive scoping review of the literature was performed. Electronic databases including Scopus, Ovid, EBSCOhost, and PubMed were searched between January 2018 and December 2024. Relevant articles were identified by title, had their abstracts evaluated, and, if deemed relevant, had their full text reviewed. Data extracted from the selected studies included the application of AI, radiation dose, anatomical region, and any relevant evaluation metrics based on the CT parameter to which AI was applied. Ninety articles met the selection criteria. Included studies evaluated the performance of AI for dose optimisation through patient positioning, scan range determination, and reconstruction across various CT examinations, including the abdomen, chest, head, neck, and pelvis, as well as CT angiography. A concise overview of the present state of AI in these three domains is provided, emphasising benefits, limitations, and impact on the transformation of dose reduction in CT scanning. AI methods can help minimise positioning offsets and over-scanning caused by manual errors and help to overcome the limitations associated with low-dose CT settings through deep-learning image reconstruction algorithms. Further clinical integration of AI will continue to allow improvements in optimising CT scan protocols and radiation dose. This review underscores the significance of AI in optimising radiation dose in CT imaging, focusing on three key areas: patient positioning, scan range determination, and image reconstruction.

Cross-organ all-in-one parallel compressed sensing magnetic resonance imaging

Baoshun Shi, Zheng Liu, Xin Meng, Yan Yang

arXiv preprint · May 7, 2025
Recent advances in deep learning-based parallel compressed sensing magnetic resonance imaging (p-CSMRI) have significantly improved reconstruction quality. However, current p-CSMRI methods often require training a separate deep neural network (DNN) for each organ due to anatomical variations, creating a barrier to developing generalized medical image reconstruction systems. To address this, we propose CAPNet (cross-organ all-in-one deep unfolding p-CSMRI network), a unified framework that implements a p-CSMRI iterative algorithm via three specialized modules: an auxiliary variable module, a prior module, and a data consistency module. Recognizing that p-CSMRI systems often employ varying sampling ratios for different organs, resulting in organ-specific artifact patterns, we introduce an artifact generation submodule, which extracts and integrates artifact features into the data consistency module to enhance the discriminative capability of the overall network. For the prior module, we design an organ structure-prompt generation submodule that leverages structural features extracted from the Segment Anything Model (SAM) to create cross-organ prompts. These prompts are strategically incorporated into the prior module through an organ structure-aware Mamba submodule. Comprehensive evaluations on a cross-organ dataset confirm that CAPNet achieves state-of-the-art reconstruction performance across multiple anatomical structures using a single unified model. Our code will be published at https://github.com/shibaoshun/CAPNet.
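
For readers unfamiliar with deep unfolding, the sketch below shows the kind of k-space data-consistency step such networks typically interleave with learned priors, here simplified to single-coil Cartesian sampling. It is a generic illustration, not CAPNet's module; the artifact-generation, SAM-prompt, and Mamba components are not shown.

```python
# Minimal sketch of a soft k-space data-consistency step for unrolled CS-MRI,
# assuming single-coil Cartesian sampling. Generic illustration only.
import torch

def data_consistency(x: torch.Tensor, y: torch.Tensor, mask: torch.Tensor,
                     lam: float = 1.0) -> torch.Tensor:
    """x: current image estimate (H, W), complex; y: measured k-space (H, W);
    mask: binary sampling mask (H, W); lam: confidence in the measurements."""
    k = torch.fft.fft2(x)
    # blend predicted and measured k-space only where samples were acquired
    k_dc = (1 - mask) * k + mask * (k + lam * y) / (1 + lam)
    return torch.fft.ifft2(k_dc)

# toy usage on random data
x = torch.randn(64, 64, dtype=torch.complex64)
y = torch.randn(64, 64, dtype=torch.complex64)
mask = (torch.rand(64, 64) < 0.3).float()
x_next = data_consistency(x, y, mask)
```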

Multistage Diffusion Model With Phase Error Correction for Fast PET Imaging.

Gao Y, Huang Z, Xie X, Zhao W, Yang Q, Yang X, Yang Y, Zheng H, Liang D, Liu J, Chen R, Hu Z

PubMed paper · May 7, 2025
Fast PET imaging is clinically important for reducing motion artifacts and improving patient comfort. While recent diffusion-based deep learning methods have shown promise, they often fail to capture the true PET degradation process, suffer from accumulated inference errors, introduce artifacts, and require extensive reconstruction iterations. To address these challenges, we propose a novel multistage diffusion framework tailored for fast PET imaging. At the coarse level, we design a multistage structure to approximate the temporal non-linear PET degradation process in a data-driven manner, using paired PET images collected under different acquisition durations. A Phase Error Correction Network (PECNet) ensures consistency across stages by correcting accumulated deviations. At the fine level, we introduce a deterministic cold diffusion mechanism, which simulates intra-stage degradation through interpolation between known acquisition durations, significantly reducing reconstruction iterations to as few as 10. Evaluations on [68Ga]FAPI and [18F]FDG PET datasets demonstrate the superiority of our approach, achieving peak PSNRs of 36.2 dB and 39.0 dB, respectively, with average SSIMs over 0.97. Our framework offers high-fidelity PET imaging with fewer iterations, making it practical for accelerated clinical imaging.
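
A deterministic cold-diffusion restoration loop of the kind referenced above can be sketched as follows, assuming degradation is modelled as linear interpolation between a full-duration PET image and its short-duration counterpart. This is a simplified illustration, not the paper's multistage pipeline; PECNet and the stage structure are omitted.

```python
# Minimal sketch of deterministic cold-diffusion restoration, assuming a
# linear-interpolation degradation between full- and short-duration images.
# Illustrative only; the denoiser here is a placeholder.
import torch

def degrade(x_full: torch.Tensor, x_short: torch.Tensor, t: float) -> torch.Tensor:
    """Move toward the short-acquisition image as t goes from 0 to 1."""
    return (1.0 - t) * x_full + t * x_short

def cold_diffusion_restore(x_short, denoiser, n_steps: int = 10):
    """Iteratively re-estimate the clean image with a few deterministic steps."""
    x = x_short
    for i in reversed(range(n_steps)):
        t = i / n_steps
        x0_hat = denoiser(x, t)           # network predicts the clean image
        x = degrade(x0_hat, x_short, t)   # re-degrade to the current level
    return x

# toy usage with an identity "denoiser"
x_full = torch.randn(1, 1, 128, 128)
x_short = x_full + 0.3 * torch.randn_like(x_full)
restored = cold_diffusion_restore(x_short, denoiser=lambda x, t: x)
```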