Shape Completion and Real-Time Visualization in Robotic Ultrasound Spine Acquisitions

Miruna-Alexandra Gafencu, Reem Shaban, Yordanka Velikova, Mohammad Farid Azampour, Nassir Navab

arXiv preprint · Aug 12 2025
Ultrasound (US) imaging is increasingly used in spinal procedures due to its real-time, radiation-free capabilities; however, its effectiveness is hindered by shadowing artifacts that obscure deeper tissue structures. Traditional approaches, such as CT-to-US registration, incorporate anatomical information from preoperative CT scans to guide interventions, but they are limited by complex registration requirements, differences in spine curvature, and the need for recent CT imaging. Recent shape completion methods offer an alternative by reconstructing spinal structures from US data after pretraining on large sets of publicly available CT scans; however, these approaches typically run offline and have limited reproducibility. In this work, we introduce a novel integrated system that combines robotic ultrasound with real-time shape completion to enhance spinal visualization. Our robotic platform autonomously acquires US sweeps of the lumbar spine, extracts vertebral surfaces from the ultrasound, and reconstructs the complete anatomy using a deep learning-based shape completion network. This framework provides interactive, real-time visualization, can autonomously repeat scans, and can enable navigation to target locations, contributing to better consistency, reproducibility, and understanding of the underlying anatomy. We validate our approach through quantitative experiments assessing shape completion accuracy and through evaluations of multiple spine acquisition protocols on a phantom setup. Additionally, we present qualitative visualization results on a volunteer scan.

Subject-specific acceleration of simultaneous quantification of blood flow and T<sub>1</sub> of the brain using a dual-flip-angle phase-contrast stack-of-stars sequence.

Wang Y, Wang M, Liu B, Ding Z, She H, Du YP

PubMed · Aug 8 2025
To develop a highly accelerated MRI technique for simultaneous quantification of blood flow and T<sub>1</sub> of brain tissue. A dual-flip-angle phase-contrast stack-of-stars (DFA PC-SOS) sequence was developed for simultaneous acquisition of highly undersampled data for quantification of arterial blood velocity and T<sub>1</sub> mapping of brain tissue. A deep learning-based algorithm, combining hybrid-feature hash encoding implicit neural representation with explicit sparse prior knowledge (INRESP), was used for image reconstruction. Magnitude and phase images were used for T<sub>1</sub> mapping and velocity measurements, respectively. The accuracy of the measurements was assessed in a quantitative phantom and six healthy volunteers. T<sub>1</sub> mapping obtained with DFA PC-SOS showed high correlation and consistency with reference measurements in phantom experiments (y = 0.916x + 4.71, R<sup>2</sup> = 0.9953, ICC = 0.9963). Blood flow measurements in healthy volunteers demonstrated strong correlation and consistency with reference values measured by single-flip-angle (SFA) PC-SOS (y = 1.04x - 0.187, R<sup>2</sup> = 0.9918, ICC = 0.9967). The proposed technique enabled a 16× acceleration with high correlation and consistency with fully sampled data in volunteers (T<sub>1</sub>: y = 1.06x + 1.44, R<sup>2</sup> = 0.9815, ICC = 0.9818; flow: y = 1.01x - 0.0525, R<sup>2</sup> = 0.9995, ICC = 0.9998). This study demonstrates the feasibility of 16-fold accelerated simultaneous acquisition for flow quantification and T<sub>1</sub> mapping in the brain. The proposed technique provides a rapid and comprehensive assessment of cerebrovascular diseases, capturing both vascular hemodynamics and the characteristics of the surrounding brain tissue, and has potential for routine clinical use.
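The agreement statistics quoted above (R<sup>2</sup>, ICC) are standard measures of consistency between two acquisition methods. As an illustration of how an intraclass correlation coefficient is obtained, here is a minimal pure-Python sketch of the two-way random-effects, absolute-agreement, single-measure form ICC(2,1), assuming paired measurements such as DFA PC-SOS versus reference values; note the abstract does not state which ICC form the authors used, so this is only one plausible choice.

```python
def icc_2_1(data):
    """ICC(2,1): two-way random effects, absolute agreement, single measure.

    data: one row per subject (or phantom compartment), one column per
    measurement method, e.g. [[reference, proposed], ...].
    """
    n = len(data)       # subjects
    k = len(data[0])    # methods / raters
    grand = sum(sum(row) for row in data) / (n * k)
    row_means = [sum(row) / k for row in data]
    col_means = [sum(data[i][j] for i in range(n)) / n for j in range(k)]
    ss_total = sum((x - grand) ** 2 for row in data for x in row)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)   # between subjects
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)   # between methods
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)
```

With perfectly agreeing measurements the value is 1.0; values near 1 (as reported above) indicate near-perfect absolute agreement.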

Towards a zero-shot low-latency navigation for open surgery augmented reality applications.

Schwimmbeck M, Khajarian S, Auer C, Wittenberg T, Remmele S

PubMed · Aug 5 2025
Augmented reality (AR) enhances surgical navigation by superimposing visible anatomical structures with three-dimensional virtual models using head-mounted displays (HMDs). In particular, interventions such as open liver surgery can benefit from AR navigation, as it aids in identifying and distinguishing tumors and risk structures. However, there is a lack of automatic and markerless methods that are robust against real-world challenges, such as partial occlusion and organ motion. We introduce a novel multi-device approach for automatic live navigation in open liver surgery that enhances the visualization and interaction capabilities of a HoloLens 2 HMD through precise and reliable registration using an Intel RealSense RGB-D camera. The intraoperative RGB-D segmentation and the preoperative CT data are utilized to register a virtual liver model to the target anatomy. An AR-prompted Segment Anything Model (SAM) enables robust segmentation of the liver in situ without the need for additional training data. To mitigate algorithmic latency, Double Exponential Smoothing (DES) is applied to forecast registration results. We conducted a phantom study for open liver surgery, investigating various scenarios of liver motion, viewpoints, and occlusion. The mean registration errors (8.31-18.78 mm TRE) are comparable to those reported in prior work, while our approach demonstrates high success rates even for high occlusion factors and strong motion. Using forecasting, we bypassed the algorithmic latency of 79.8 ms per frame, with median forecasting errors below 2 mm in translation and 1.5° between quaternions. To our knowledge, this is the first work to approach markerless in situ visualization by combining a multi-device method with forecasting and a foundation model for segmentation and tracking. This enables more reliable and precise AR registration of surgical targets with low latency. Our approach can be applied to other surgical applications and AR hardware with minimal effort.
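The Double Exponential Smoothing step used to mask algorithmic latency can be sketched in a few lines. This is a generic Holt-style DES forecaster applied to one scalar pose parameter (e.g. one translation component in mm); the smoothing constants and the per-parameter treatment are illustrative assumptions, not values taken from the paper.

```python
def des_forecast(series, alpha=0.5, beta=0.5, horizon=1):
    """Double exponential smoothing (Holt's method) on a stream of
    pose-parameter samples, extrapolated `horizon` steps ahead so the
    displayed registration can run ahead of the processing latency."""
    level = series[0]
    trend = series[1] - series[0]
    for x in series[1:]:
        prev_level = level
        level = alpha * x + (1 - alpha) * (level + trend)  # smoothed value
        trend = beta * (level - prev_level) + (1 - beta) * trend  # smoothed slope
    return level + horizon * trend  # linear extrapolation of the trend
```

On a steadily drifting parameter the forecast tracks the drift exactly; in practice one such filter per translation component (with a matching scheme for the rotation) would be updated every frame.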

High-Resolution Ultrasound Data for AI-Based Segmentation in Mouse Brain Tumor.

Dorosti S, Landry T, Brewer K, Forbes A, Davis C, Brown J

PubMed · Jul 30 2025
Glioblastoma multiforme (GBM) is the most aggressive type of brain cancer, making effective treatments essential to improve patient survival. To advance the understanding of GBM and develop more effective therapies, preclinical studies commonly use mouse models due to their genetic and physiological similarities to humans. In particular, the GL261 mouse glioma model is employed for its reproducible tumor growth and ability to mimic key aspects of human gliomas. Ultrasound imaging is a valuable modality in preclinical studies, offering real-time, non-invasive tumor monitoring and facilitating treatment response assessment. Furthermore, its potential therapeutic applications, such as tumor ablation, expand its utility in preclinical studies. However, real-time segmentation of GL261 tumors during surgery introduces significant challenges, such as delineating tumor boundaries precisely while maintaining processing efficiency. Automated segmentation offers a solution, but its success relies on high-quality datasets with precise labeling. Our study introduces the first publicly available ultrasound dataset specifically developed to improve tumor segmentation in GL261 glioblastomas, providing 1,856 annotated images to support AI model development in preclinical research. This dataset bridges preclinical insights and clinical practice, laying the foundation for developing more accurate and effective tumor resection techniques.

Trabecular bone analysis: ultra-high-resolution CT goes far beyond high-resolution CT and gets closer to micro-CT (a study using Canon Medical CT devices).

Gillet R, Puel U, Amer A, Doyen M, Boubaker F, Assabah B, Hossu G, Gillet P, Blum A, Teixeira PAG

PubMed · Jul 30 2025
High-resolution CT (HR-CT) cannot image trabecular bone due to insufficient spatial resolution; ultra-high-resolution CT may be a valuable alternative. We aimed to describe the accuracy of Canon Medical HR, super-high-resolution (SHR), and ultra-high-resolution (UHR) CT in measuring trabecular bone microarchitectural parameters, using micro-CT as the reference. Sixteen cadaveric distal tibial epiphyses were included in this pre-clinical study. Images were acquired with HR-CT (0.5 mm slice thickness/512<sup>2</sup> matrix) and SHR-CT (0.25 mm slice thickness/1024<sup>2</sup> matrix), each with and without deep learning reconstruction (DLR), and with UHR-CT (0.25 mm slice thickness/2048<sup>2</sup> matrix) without DLR. Trabecular bone parameters were compared. Trabecular thickness was closest with UHR-CT but remained 1.37 times that of micro-CT (P < 0.001); with SHR-CT without and with DLR it was 1.75 and 1.79 times that of micro-CT, respectively (P < 0.001), and with HR-CT without and with DLR it was 3.58 and 3.68 times that of micro-CT, respectively (P < 0.001). Trabecular separation was 0.7 times that of micro-CT with UHR-CT (P < 0.001), 0.93 and 0.94 times that of micro-CT with SHR-CT without and with DLR (P = 0.36 and 0.79, respectively), and 1.52 and 1.36 times that of micro-CT with HR-CT without and with DLR (P < 0.001). Bone volume/total volume was overestimated by all techniques (1.66 to 1.92 times that of micro-CT; P < 0.001); however, HR-CT values were higher than UHR-CT values (P = 0.03 and 0.01, without and with DLR, respectively). UHR- and SHR-CT were the closest techniques to micro-CT and surpassed HR-CT.

The safety and accuracy of radiation-free spinal navigation using a short, scoliosis-specific BoneMRI-protocol, compared to CT.

Lafranca PPG, Rommelspacher Y, Walter SG, Muijs SPJ, van der Velden TA, Shcherbakova YM, Castelein RM, Ito K, Seevinck PR, Schlösser TPC

PubMed · Jul 21 2025
Spinal navigation systems require pre- and/or intra-operative 3-D imaging, which exposes young patients to harmful radiation. We assessed a scoliosis-specific MRI protocol that provides T2-weighted MRI and AI-generated synthetic CT (sCT) scans through deep learning algorithms. This study aims to compare MRI-based synthetic-CT spinal navigation to CT for safety and accuracy of pedicle screw planning and placement at thoracic and lumbar levels. The spines of 5 cadavers were scanned with thin-slice CT and with the scoliosis-specific MRI protocol (to create sCT). Screw trajectories were planned preoperatively on both CT and sCT. Subsequently, four spine surgeons performed surface-matched, navigated placement of 2.5 mm k-wires in all pedicles from T3 to L5, with randomization for CT/sCT, surgeon, and side (1:1 ratio). On postoperative CT scans, virtual screws were simulated over the k-wires, and maximum angulation, distance between planned and postoperative screw positions, and medial breach rate (Gertzbein-Robbins classification) were assessed. 140 k-wires were inserted; 3 were excluded. There were no pedicle breaches > 2 mm. Of the sCT-guided screws, 59 were grade A and 10 grade B; of the CT-guided screws, 47 were grade A and 21 grade B (p = 0.022). The average distance (± SD) between intraoperative and postoperative screw positions was 2.3 ± 1.5 mm for sCT-guided screws and 2.4 ± 1.8 mm for CT (p = 0.78); the average maximum angulation (± SD) was 3.8 ± 2.5° for sCT and 3.9 ± 2.9° for CT (p = 0.75). MRI-based, AI-generated synthetic-CT spinal navigation allows for safe and accurate planning and placement of thoracic and lumbar pedicle screws in a cadaveric model, without significant differences in distance and angulation between planned and postoperative screw positions compared to CT.
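The "maximum angulation between planned and postoperative screw positions" reported above reduces to the angle between two 3-D direction vectors. A minimal sketch (representing each screw axis as a direction vector is an assumption about how such angulations are computed, not a detail given in the abstract):

```python
import math

def angulation_deg(planned, placed):
    """Angle in degrees between two 3-D screw-axis direction vectors,
    e.g. the planned trajectory vs. the axis fitted to the placed k-wire."""
    dot = sum(a * b for a, b in zip(planned, placed))
    norm_a = math.sqrt(sum(a * a for a in planned))
    norm_b = math.sqrt(sum(b * b for b in placed))
    # Clamp to [-1, 1] to guard against floating-point drift before acos.
    cos_t = max(-1.0, min(1.0, dot / (norm_a * norm_b)))
    return math.degrees(math.acos(cos_t))
```

For example, identical axes give 0° and perpendicular axes give 90°; the reported means of 3.8-3.9° correspond to nearly parallel planned and placed trajectories.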

AI-Assisted Semiquantitative Measurement of Murine Bleomycin-Induced Lung Fibrosis Using In Vivo Micro-CT: An End-to-End Approach.

Cheng H, Gao T, Sun Y, Huang F, Gu X, Shan C, Wang B, Luo S

PubMed · Jul 21 2025
Small animal models are crucial for investigating idiopathic pulmonary fibrosis (IPF) and developing preclinical therapeutic strategies. However, there are several limitations to the quantitative measurements used in the longitudinal assessment of experimental lung fibrosis: histological or biochemical analyses introduce inter-individual variability, while image-derived biomarkers have yet to directly and accurately quantify the severity of lung fibrosis. This study investigates artificial intelligence (AI)-assisted, end-to-end, semi-quantitative measurement of lung fibrosis using in vivo micro-CT. Based on the bleomycin (BLM)-induced lung fibrosis mouse model, the AI model predicts histopathological scores from in vivo micro-CT images, directly correlating these images with the severity of lung fibrosis in mice. Fibrosis severity was graded on the Ashcroft scale: none (0), mild (1-3), moderate (4-5), severe (≥6). The overall accuracy, precision, recall, and F1 scores of the severity-stratified 3-fold cross-validation on 225 micro-CT images for the proposed AI model were 92.9%, 90.9%, 91.6%, and 91.0%. The overall area under the receiver operating characteristic curve (AUROC) was 0.990 (95% CI: 0.977, 1.000), with AUROC values of 1.000 for none (100 images, 95% CI: 0.997, 1.000), 0.969 for mild (43 images, 95% CI: 0.918, 1.000), 0.992 for moderate (36 images, 95% CI: 0.962, 1.000), and 0.992 for severe (46 images, 95% CI: 0.967, 1.000). Preliminary results indicate that AI-assisted, in vivo micro-CT-based semi-quantitative measurement of murine lung fibrosis is feasible and likely accurate. This novel method holds promise as a tool to improve the reproducibility of experimental studies in animal models of IPF.
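The overall precision, recall, and F1 scores above are presumably averaged across the four severity grades. A minimal sketch of macro-averaged multi-class metrics over Ashcroft-derived classes (the macro averaging scheme itself is an assumption, since the abstract does not state how the per-class scores were combined):

```python
def macro_metrics(y_true, y_pred, classes):
    """Overall accuracy plus macro-averaged precision, recall, and F1
    over a fixed set of class labels (e.g. none/mild/moderate/severe)."""
    acc = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    precs, recs, f1s = [], [], []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        precs.append(prec)
        recs.append(rec)
        f1s.append(f1)
    n = len(classes)
    return acc, sum(precs) / n, sum(recs) / n, sum(f1s) / n
```

Macro averaging weights each severity grade equally, which matters here because the class sizes are imbalanced (100 "none" images vs. 36 "moderate").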

Quantitative multi-metabolite imaging of Parkinson's disease using AI boosted molecular MRI

Hagar Shmuely, Michal Rivlin, Or Perlman

arXiv preprint · Jul 15 2025
Traditional approaches for molecular imaging of Parkinson's disease (PD) in vivo require radioactive isotopes, lengthy scan times, or deliver only low spatial resolution. Recent advances in saturation transfer-based PD magnetic resonance imaging (MRI) have provided biochemical insights, although the image contrast is semi-quantitative and nonspecific. Here, we combined a rapid molecular MRI acquisition paradigm with deep learning-based reconstruction for multi-metabolite quantification of glutamate, mobile proteins, semisolid macromolecules, and mobile macromolecules in an acute MPTP (1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine) mouse model. The quantitative parameter maps are in general agreement with histology and MR spectroscopy, and demonstrate that semisolid magnetization transfer (MT), amide, and aliphatic relayed nuclear Overhauser effect (rNOE) proton volume fractions may serve as PD biomarkers.

Self-supervised Upsampling for Reconstructions with Generalized Enhancement in Photoacoustic Computed Tomography.

Deng K, Luo Y, Zuo H, Chen Y, Gu L, Liu MY, Lan H, Luo J, Ma C

PubMed · Jul 14 2025
Photoacoustic computed tomography (PACT) is an emerging hybrid imaging modality with potential applications in biomedicine. A major roadblock to the widespread adoption of PACT is the limited number of detectors, which gives rise to spatial aliasing and manifests as streak artifacts in the reconstructed image. A brute-force solution to the problem is to increase the number of detectors, which, however, is often undesirable due to escalated costs. In this study, we present a novel self-supervised learning approach, to overcome this long-standing challenge. We found that small blocks of PACT channel data show similarity at various downsampling rates. Based on this observation, a neural network trained on downsampled data can reliably perform accurate interpolation without requiring densely-sampled ground truth data, which is typically unavailable in real practice. Our method has undergone validation through numerical simulations, controlled phantom experiments, as well as ex vivo and in vivo animal tests, across multiple PACT systems. We have demonstrated that our technique provides an effective and cost-efficient solution to address the under-sampling issue in PACT, thereby enhancing the capabilities of this imaging technology.
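The core idea above (train on further-downsampled copies of the already-sparse channel data, so densely sampled ground truth is never required) can be illustrated without the network itself. In this sketch, simple linear interpolation between adjacent detector channels stands in for the learned upsampler; the pairing of a downsampled input with the measured data as the target is the self-supervision signal, and everything here is an illustrative simplification of the paper's method, not its implementation.

```python
def downsample(channels, factor=2):
    """Keep every `factor`-th detector channel, mimicking a sparser array."""
    return channels[::factor]

def upsample_linear(channels, factor=2):
    """Stand-in for the learned upsampler: linearly interpolate the
    missing channels between each pair of measured neighbors."""
    out = []
    for a, b in zip(channels, channels[1:]):
        out.append(a)
        for j in range(1, factor):
            out.append(a + (b - a) * j / factor)  # interpolated channel
    out.append(channels[-1])
    return out
```

Self-supervised training would pair `downsample(measured)` as input with `measured` as target; at inference, the trained upsampler is applied to the measured sparse data itself to suppress streak artifacts from spatial aliasing.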

Efficient needle guidance: multi-camera augmented reality navigation without patient-specific calibration.

Wei Y, Huang B, Zhao B, Lin Z, Zhou SZ

PubMed · Jul 12 2025
Augmented reality (AR) technology holds significant promise for enhancing surgical navigation in needle-based procedures such as biopsies and ablations. However, most existing AR systems rely on patient-specific markers, which disrupt clinical workflows and require time-consuming preoperative calibrations, thereby hindering operational efficiency and precision. We developed a novel multi-camera AR navigation system that eliminates the need for patient-specific markers by utilizing ceiling-mounted markers mapped to fixed medical imaging devices. A hierarchical optimization framework integrates both marker mapping and multi-camera calibration. Deep learning techniques are employed to enhance marker detection and registration accuracy. Additionally, a vision-based pose compensation method is implemented to mitigate errors caused by patient movement, improving overall positional accuracy. Validation through phantom experiments and simulated clinical scenarios demonstrated an average puncture accuracy of 3.72 ± 1.21 mm. The system reduced needle placement time by 20 s compared to traditional marker-based methods. It also effectively corrected errors induced by patient movement, with a mean positional error of 0.38 pixels and an angular deviation of 0.51°. These results highlight the system's precision, adaptability, and reliability in realistic surgical conditions. This marker-free AR guidance system significantly streamlines surgical workflows while enhancing needle navigation accuracy. Its simplicity, cost-effectiveness, and adaptability make it an ideal solution for both high- and low-resource clinical environments, offering the potential for improved precision, reduced procedural time, and better patient outcomes.