Page 12 of 19183 results

Generalizable, sequence-invariant deep learning image reconstruction for subspace-constrained quantitative MRI.

Hu Z, Chen Z, Cao T, Lee HL, Xie Y, Li D, Christodoulou AG

pubmed logopapersJul 1 2025
To develop a deep subspace learning network that can function across different pulse sequences. A contrast-invariant component-by-component (CBC) network structure was developed and compared against the previously reported spatiotemporal multicomponent (MC) structure for reconstructing MR Multitasking images. A total of 130, 167, and 16 subjects were imaged using T<sub>1</sub>, T<sub>1</sub>-T<sub>2</sub>, and T<sub>1</sub>-T<sub>2</sub>-T<sub>2</sub><sup>*</sup>-fat fraction (FF) mapping sequences, respectively. We compared CBC and MC networks in matched-sequence experiments (same sequence for training and testing), then examined their cross-sequence performance and generalizability in unmatched-sequence experiments (different sequences for training and testing). A "universal" CBC network was also evaluated using mixed-sequence training (combining data from all three sequences). Evaluation metrics included image normalized root mean squared error and Bland-Altman analyses of end-diastolic maps, both versus iteratively reconstructed references. The proposed CBC structure showed significantly better normalized root mean squared error than MC in both matched-sequence and unmatched-sequence experiments (p < 0.001), fewer structural details in quantitative error maps, and tighter limits of agreement. CBC was more generalizable than MC (smaller performance loss from matched-sequence to unmatched-sequence testing; p = 0.006 in T<sub>1</sub> and p < 0.001 in T<sub>1</sub>-T<sub>2</sub>) and additionally allowed training of a single universal network to reconstruct images from any of the three pulse sequences.
The mixed-sequence CBC network performed similarly to matched-sequence CBC in T<sub>1</sub> (p = 0.178) and T<sub>1</sub>-T<sub>2</sub> (p = 0.121), where training data were plentiful, and performed better in T<sub>1</sub>-T<sub>2</sub>-T<sub>2</sub><sup>*</sup>-FF (p < 0.001), where training data were scarce. Contrast-invariant learning of spatial features rather than spatiotemporal features improves performance and generalizability, addresses data scarcity, and offers a pathway to universal supervised deep subspace learning.
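The subspace constraint underlying this family of reconstructions can be illustrated without the network itself: a temporal basis is extracted from a dictionary of simulated signal evolutions, and only the spatial coefficients are estimated from the data. A minimal NumPy sketch with toy dimensions (this is the generic subspace model, not the authors' CBC architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

# Dictionary of candidate signal evolutions (toy inversion-recovery curves)
t = np.linspace(0.05, 3.0, 40)                       # 40 time points
T1s = np.linspace(0.3, 2.0, 100)                     # candidate T1 values (s)
dictionary = 1 - 2 * np.exp(-t[None, :] / T1s[:, None])

# Temporal subspace: leading right singular vectors of the dictionary
L = 4
_, _, Vt = np.linalg.svd(dictionary, full_matrices=False)
Phi = Vt[:L]                                         # (L, T) temporal basis

# Ground-truth image series that lies in the subspace
n_vox = 50
U_true = rng.standard_normal((n_vox, L))
X = U_true @ Phi                                     # (voxels, T)

# Subspace-constrained least squares from noisy data:
# minimize ||U Phi - Y||_F^2  =>  U = Y Phi^T (Phi Phi^T)^{-1}
Y = X + 0.01 * rng.standard_normal(X.shape)
U_hat = Y @ Phi.T @ np.linalg.inv(Phi @ Phi.T)
X_hat = U_hat @ Phi

nrmse = np.linalg.norm(X_hat - X) / np.linalg.norm(X)
```

Because `Phi` spans the dominant signal evolutions, fitting only the spatial coefficients `U` strongly suppresses noise outside the subspace, which is the structure a component-by-component network can exploit contrast-invariantly.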

The impact of updated imaging software on the performance of machine learning models for breast cancer diagnosis: a multi-center, retrospective study.

Cai L, Golatta M, Sidey-Gibbons C, Barr RG, Pfob A

pubmed logopapersJul 1 2025
Artificial intelligence (AI) models based on medical (imaging) data are increasingly being developed. However, the imaging software on which the original data are generated is frequently updated. The impact of updated imaging software on the performance of AI models is unclear. We aimed to develop machine learning models using shear wave elastography (SWE) data to identify malignant breast lesions and to test the models' generalizability by validating them on external data generated by both the original and the updated software versions. We developed and validated different machine learning models (GLM, MARS, XGBoost, SVM) on multicenter, international SWE data (NCT02638935) using tenfold cross-validation. Findings were compared to the histopathologic evaluation of the biopsy specimen or 2-year follow-up. The outcome measure was the area under the receiver operating characteristic curve (AUROC). We included 1288 cases in the development set using the original imaging software and 385 cases in the validation set using both the original and the updated software. In the external validation set, the GLM and XGBoost models showed better performance with the updated software data than with the original software data (AUROC 0.941 vs. 0.902, p < 0.001 and 0.934 vs. 0.872, p < 0.001). The MARS model showed worse performance with the updated software data (0.847 vs. 0.894, p = 0.045). SVM was not calibrated. In this multicenter study using SWE data, some machine learning models demonstrated great potential to bridge the gap between original and updated software, whereas others exhibited weak generalizability.
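The evaluation protocol described here (tenfold cross-validation scored by AUROC) is easy to sketch. The classifier and data below are synthetic stand-ins, not the study's SWE models; the AUROC is computed via the rank-sum (Mann-Whitney) identity:

```python
import numpy as np

def auroc(y_true, scores):
    """AUROC via the Mann-Whitney U statistic (no ties assumed)."""
    y_true = np.asarray(y_true)
    scores = np.asarray(scores, dtype=float)
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = int(y_true.sum())
    n_neg = len(y_true) - n_pos
    return (ranks[y_true == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

rng = np.random.default_rng(1)
n = 200
x = rng.standard_normal((n, 3))                      # toy features
y = (x[:, 0] + 0.5 * rng.standard_normal(n) > 0).astype(int)

# Tenfold cross-validation of a simple least-squares linear scorer
idx = rng.permutation(n)
folds = np.array_split(idx, 10)
aucs = []
for k in range(10):
    test = folds[k]
    train = np.concatenate([folds[j] for j in range(10) if j != k])
    w, *_ = np.linalg.lstsq(x[train], y[train], rcond=None)
    aucs.append(auroc(y[test], x[test] @ w))
cv_auc = float(np.mean(aucs))
```

Validating the frozen model again on data from a different software version, as in the study, then amounts to calling `auroc` on that external set without refitting `w`.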

Deep Guess acceleration for explainable image reconstruction in sparse-view CT.

Loli Piccolomini E, Evangelista D, Morotti E

pubmed logopapersJul 1 2025
Sparse-view computed tomography (CT) is an emerging protocol designed to reduce the X-ray radiation dose in medical imaging. Reconstructions based on the traditional filtered back projection algorithm suffer from severe artifacts due to sparse data. In contrast, model-based iterative reconstruction (MBIR) algorithms, though better at mitigating noise through regularization, are too computationally costly for clinical use. This paper introduces a novel technique, denoted the Deep Guess acceleration scheme, that uses a trained neural network both to speed up the regularized MBIR and to enhance reconstruction accuracy. We integrate state-of-the-art deep learning tools to initialize a clever starting guess for a proximal algorithm solving a non-convex model, thus computing a (mathematically) interpretable solution image in a few iterations. Experimental results on real and synthetic CT images demonstrate the effectiveness of Deep Guess in (very) sparse tomographic protocols, where it outperforms its purely variational counterpart and many state-of-the-art data-driven approaches. We also consider a ground-truth-free implementation and test the robustness of the proposed framework to noise.
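The acceleration mechanism, warm-starting a proximal algorithm with a learned initial guess, can be demonstrated on a toy sparse recovery problem. In this sketch the trained network is stood in for by a perturbed ground truth, and the proximal solver is plain ISTA on an L1-regularized least-squares model (a convex simplification of the paper's non-convex model):

```python
import numpy as np

def ista(A, b, lam, x0, n_iter):
    """Proximal gradient (ISTA) for min 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = x0.copy()
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - b) / L      # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0)  # soft threshold
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((60, 100))         # underdetermined "sparse-view" operator
x_true = np.zeros(100)
x_true[rng.choice(100, 8, replace=False)] = rng.standard_normal(8)
b = A @ x_true

lam = 0.05
# Hypothetical "Deep Guess": a network output close to the solution
guess = x_true + 0.05 * rng.standard_normal(100)

cold = ista(A, b, lam, np.zeros(100), 20)  # conventional zero initialization
warm = ista(A, b, lam, guess, 20)          # warm start from the learned guess
err_cold = np.linalg.norm(cold - x_true)
err_warm = np.linalg.norm(warm - x_true)
```

After the same small iteration budget, the warm-started run lands much closer to the solution, which is the sense in which a learned guess "quickens" the variational solver while the final image remains the output of an interpretable optimization.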

Virtual lung screening trial (VLST): An in silico study inspired by the national lung screening trial for lung cancer detection.

Tushar FI, Vancoillie L, McCabe C, Kavuri A, Dahal L, Harrawood B, Fryling M, Zarei M, Sotoudeh-Paima S, Ho FC, Ghosh D, Harowicz MR, Tailor TD, Luo S, Segars WP, Abadi E, Lafata KJ, Lo JY, Samei E

pubmed logopapersJul 1 2025
Clinical imaging trials play a crucial role in advancing medical innovation but are often costly, inefficient, and ethically constrained. Virtual imaging trials (VITs) present a solution by simulating clinical trial components in a controlled, risk-free environment. The Virtual Lung Screening Trial (VLST), an in silico study inspired by the National Lung Screening Trial (NLST), illustrates the potential of VITs to expedite clinical trials, minimize risks to participants, and promote optimal use of imaging technologies in healthcare. This study aimed to show that a virtual imaging trial platform could investigate key elements of a major clinical trial, specifically the NLST, which compared computed tomography (CT) and chest radiography (CXR) for lung cancer screening. A virtual patient cohort of 294 subjects with simulated cancerous lung nodules was created using XCAT human models. Each virtual patient underwent both CT and CXR imaging, with deep learning models, the AI CT-Reader and AI CXR-Reader, acting as virtual readers to recall patients with suspected lung cancer. The primary outcome was the difference in diagnostic performance between CT and CXR, measured by the area under the curve (AUC). The AI CT-Reader showed superior diagnostic accuracy, achieving an AUC of 0.92 (95% CI: 0.90-0.95) compared to the AI CXR-Reader's AUC of 0.72 (95% CI: 0.67-0.77). Furthermore, at the same 94% CT sensitivity reported by the NLST, the VLST specificity of 73% was similar to the NLST specificity of 73.4%. This CT performance highlights the potential of VITs to replicate certain aspects of clinical trials effectively, paving the way toward a safe and efficient method for advancing imaging-based diagnostics.
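Matching the NLST operating point, as reported above, means fixing the reader's threshold at a target sensitivity and reading off the resulting specificity. A small sketch on synthetic reader scores (the score distributions here are assumed, not the VLST's):

```python
import numpy as np

def specificity_at_sensitivity(y, scores, target_sens):
    """Highest threshold whose sensitivity reaches the target; returns specificity."""
    for thr in np.sort(np.unique(scores))[::-1]:
        pred = scores >= thr
        if pred[y == 1].mean() >= target_sens:
            return float((~pred[y == 0]).mean())
    return 0.0

rng = np.random.default_rng(3)
# Toy virtual-reader suspicion scores: cancers score higher on average
scores = np.concatenate([rng.normal(2.0, 1.0, 100),    # cancer cases
                         rng.normal(0.0, 1.0, 300)])   # negatives
y = np.concatenate([np.ones(100, int), np.zeros(300, int)])

spec = specificity_at_sensitivity(y, scores, 0.94)     # NLST-matched sensitivity
```

The same routine applied to the AI CT-Reader's scores would yield the 73% specificity quoted against the NLST's 73.4%.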

Physiological Confounds in BOLD-fMRI and Their Correction.

Addeh A, Williams RJ, Golestani A, Pike GB, MacDonald ME

pubmed logopapersJul 1 2025
Functional magnetic resonance imaging (fMRI) has opened new frontiers in neuroscience by instrumentally driving our understanding of brain function and development. Despite its substantial successes, fMRI studies persistently encounter obstacles stemming from inherent, unavoidable physiological confounds. The adverse effects of these confounds are especially noticeable with higher magnetic fields, which have been gaining momentum in fMRI experiments. This review focuses on the four major physiological confounds impacting fMRI studies: low-frequency fluctuations in both breathing depth and rate, low-frequency fluctuations in the heart rate, thoracic movements, and cardiac pulsatility. Over the past three decades, numerous correction techniques have emerged to address these challenges. Correction methods have effectively enhanced the detection of task-activated voxels and minimized the occurrence of false positives and false negatives in functional connectivity studies. While confound correction methods have merit, they also have certain limitations. For instance, model-based approaches require externally recorded physiological data that is often unavailable in fMRI studies. Methods reliant on independent component analysis, on the other hand, need prior knowledge about the number of components. Machine learning techniques, although showing potential, are still in the early stages of development and require additional validation. This article reviews the mechanics of physiological confound correction methods, scrutinizes their performance and limitations, and discusses their impact on fMRI studies.
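The model-based corrections reviewed here boil down to nuisance regression: externally recorded cardiac and respiratory time courses are projected out of each voxel's signal. A toy NumPy sketch (the sinusoidal regressors are stand-ins for real RETROICOR-style physiological recordings):

```python
import numpy as np

def regress_out(ts, confounds):
    """Remove confound time courses from a signal via least-squares projection."""
    X = np.column_stack([confounds, np.ones(len(ts))])  # include an intercept
    beta, *_ = np.linalg.lstsq(X, ts, rcond=None)
    return ts - X @ beta

rng = np.random.default_rng(4)
n_t = 300
t = np.arange(n_t) * 0.8                        # TR = 0.8 s
card = np.sin(2 * np.pi * 1.0 * t)              # toy cardiac regressor (~1 Hz)
resp = np.sin(2 * np.pi * 0.25 * t)             # toy respiratory regressor
neural = rng.standard_normal(n_t)               # signal of interest

voxel = neural + 2.0 * card + 1.5 * resp        # confounded voxel time series
cleaned = regress_out(voxel, np.column_stack([card, resp]))
residual_corr = float(np.corrcoef(cleaned, card)[0, 1])
```

After projection, the residual is orthogonal to the modeled confounds, which is exactly why these methods fail when the physiological recordings are missing: the regressors in `confounds` must come from somewhere.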

Agreement between Routine-Dose and Lower-Dose CT with and without Deep Learning-based Denoising for Active Surveillance of Solid Small Renal Masses: A Multiobserver Study.

Borgbjerg J, Breen BS, Kristiansen CH, Larsen NE, Medrud L, Mikalone R, Müller S, Naujokaite G, Negård A, Nielsen TK, Salte IM, Frøkjær JB

pubmed logopapersJul 1 2025
Purpose To assess the agreement between routine-dose (RD) and lower-dose (LD) contrast-enhanced CT scans, with and without Digital Imaging and Communications in Medicine-based deep learning-based denoising (DLD), in evaluating small renal masses (SRMs) during active surveillance. Materials and Methods In this retrospective study, CT scans from patients undergoing active surveillance for an SRM were included. Using a validated simulation technique, LD CT images were generated from the RD images to simulate 75% (LD75) and 90% (LD90) radiation dose reductions. Two additional LD image sets, in which the DLD was applied (LD75-DLD and LD90-DLD), were generated. Between January 2023 and June 2024, nine radiologists from three institutions independently evaluated 350 CT scans across five datasets for tumor size, tumor nearness to the collecting system (TN), and tumor shape irregularity (TSI), and interobserver reproducibility and agreement were assessed using the 95% limits of agreement with the mean (LOAM) and Gwet AC2 coefficient, respectively. Subjective and quantitative image quality assessments were also performed. Results The study sample included 70 patients (mean age, 73.2 years ± 9.2 [SD]; 48 male, 22 female). LD75 CT was found to be in agreement with RD scans for assessing SRM diameter, with a LOAM of ±2.4 mm (95% CI: 2.3, 2.6) for LD75 compared with ±2.2 mm (95% CI: 2.1, 2.4) for RD. However, a 90% dose reduction compromised reproducibility (LOAM ±3.0 mm; 95% CI: 2.8, 3.2). LD90-DLD preserved measurement reproducibility (LOAM ±2.4 mm; 95% CI: 2.3, 2.6). Observer agreement was comparable between TN and TSI assessments across all image sets, with no statistically significant differences identified (all comparisons <i>P</i> ≥ .35 for TN and <i>P</i> ≥ .02 for TSI; Holm-corrected significance threshold, <i>P</i> = .013). 
Subjective and quantitative image quality assessments confirmed that DLD effectively restored image quality at reduced dose levels: LD75-DLD had the highest overall image quality, significantly lower noise, and improved contrast-to-noise ratio compared with RD (<i>P</i> < .001). Conclusion A 75% reduction in radiation dose is feasible for SRM assessment in active surveillance using CT with a conventional iterative reconstruction technique, whereas applying DLD allows submillisievert dose reduction. <b>Keywords:</b> CT, Urinary, Kidney, Radiation Safety, Observer Performance, Technology Assessment <i>Supplemental material is available for this article.</i> © RSNA, 2025 See also commentary by Muglia in this issue.
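The agreement statistics used above can be illustrated in the classic two-measurement form. Note the study itself uses the multi-observer limits of agreement with the mean (LOAM); this sketch shows only the standard two-set Bland-Altman 95% limits, on simulated diameter reads:

```python
import numpy as np

def bland_altman_loa(a, b):
    """Bias and 95% Bland-Altman limits of agreement between paired measurements."""
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias = float(diff.mean())
    sd = float(diff.std(ddof=1))
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

rng = np.random.default_rng(5)
true_size = rng.uniform(10, 40, 70)                  # 70 tumors, diameters in mm
reader_rd = true_size + rng.normal(0, 1.0, 70)       # simulated routine-dose reads
reader_ld = true_size + rng.normal(0, 1.2, 70)       # simulated lower-dose reads

bias, (lo, hi) = bland_altman_loa(reader_rd, reader_ld)
```

Wider limits (a larger `hi - lo`) correspond to the degraded ±3.0 mm reproducibility the study observed at a 90% dose reduction, versus ±2.4 mm with denoising applied.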

Automated Finite Element Modeling of the Lumbar Spine: A Biomechanical and Clinical Approach to Spinal Load Distribution and Stress Analysis.

Ahmadi M, Zhang X, Lin M, Tang Y, Engeberg ED, Hashemi J, Vrionis FD

pubmed logopapersJun 30 2025
Biomechanical analysis of the lumbar spine is vital for understanding load distribution and stress patterns under physiological conditions. Traditional finite element analysis (FEA) relies on time-consuming manual segmentation and meshing, leading to long runtimes and inconsistent accuracy. Automating this process improves efficiency and reproducibility. This study introduces an automated FEA methodology for lumbar spine biomechanics, integrating deep learning-based segmentation with computational modeling to streamline workflows from imaging to simulation. Medical imaging data were segmented using deep learning frameworks for vertebrae and intervertebral discs. Segmented structures were transformed into optimized surface meshes via Laplacian smoothing and decimation. Using the Gibbon library and FEBio, FEA models incorporated cortical and cancellous bone, nucleus, annulus, cartilage, and ligaments. Ligament attachments used spherical coordinate-based segmentation; vertebral endplates were extracted via principal component analysis (PCA) for cartilage modeling. Simulations assessed stress, strain, and displacement under axial rotation, extension, flexion, and lateral bending. The automated pipeline cut model preparation time by 97.9%, from over 24 hours to 30 minutes and 49.48 seconds. Biomechanical responses aligned with experimental and traditional FEA data, showing high posterior element loads in extension and flexion, consistent ligament forces, and disc deformations. The approach enhanced reproducibility with minimal manual input. This automated methodology provides an efficient, accurate framework for lumbar spine biomechanics, eliminating manual segmentation challenges. It supports clinical diagnostics, implant design, and rehabilitation, advancing computational and patient-specific spinal studies. Rapid simulations enhance implant optimization and early detection of degenerative spinal issues, improving personalized treatment and research.
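The PCA-based endplate extraction step can be sketched on a synthetic point cloud: the plane normal of a roughly flat surface is the principal axis with the least variance. A minimal version (toy data, not the authors' pipeline):

```python
import numpy as np

def plane_normal_pca(points):
    """Fit a plane to a 3D point cloud; the normal is the least-variance PCA axis."""
    centered = points - points.mean(axis=0)
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    return Vt[-1]  # right singular vector with the smallest singular value

rng = np.random.default_rng(6)
# Synthetic "endplate": points scattered on the z = 0 plane with slight roughness
pts = np.column_stack([rng.uniform(-20, 20, 500),
                       rng.uniform(-15, 15, 500),
                       rng.normal(0, 0.2, 500)])

normal = plane_normal_pca(pts)
alignment = abs(normal[2])  # |cos| of angle between estimate and the true z-axis
```

In the automated pipeline, a plane fit like this localizes each vertebral endplate so a cartilage layer can be modeled between adjacent vertebrae without manual landmarking.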

Photon-counting micro-CT scanner for deep learning-enabled small animal perfusion imaging.

Allphin AJ, Nadkarni R, Clark DP, Badea CT

pubmed logopapersJun 27 2025
In this work, we introduce a benchtop, turn-table photon-counting (PC) micro-CT scanner and highlight its application for dynamic small animal perfusion imaging. Approach: Built on recently published hardware, the system now features a CdTe-based photon-counting detector (PCD). We validated its static spectral PC micro-CT imaging using conventional phantoms and assessed dynamic performance with a custom flow-configurable dual-compartment perfusion phantom. The phantom was scanned under varied flow conditions during injections of a low molecular weight iodinated contrast agent. In vivo mouse studies with identical injection settings demonstrated potential applications. A pretrained denoising CNN processed large multi-energy, temporal datasets (20 timepoints × 4 energies × 3 spatial dimensions), reconstructed via weighted filtered back projection. A separate CNN, trained on simulated data, performed gamma variate-based 2D perfusion mapping, evaluated qualitatively in phantom and in vivo tests. Main Results: Full five-dimensional reconstructions were denoised using a CNN in ~3% of the time of iterative reconstruction, reducing noise in water at the highest energy threshold from 1206 HU to 86 HU. Decomposed iodine maps, which improved the contrast-to-noise ratio from 16.4 (in the lowest-energy CT images) to 29.4 (in the iodine maps), were used for perfusion analysis. The perfusion CNN outperformed pixelwise gamma variate fitting by ~33%, with a test set error of 0.04 vs. 0.06 in blood flow index (BFI) maps, and quantified linear BFI changes in the phantom with a coefficient of determination of 0.98. Significance: This work underscores the PC micro-CT scanner's utility for high-throughput small animal perfusion imaging, leveraging spectral PC micro-CT and iodine decomposition. It provides a versatile platform for preclinical vascular research and advanced, time-resolved studies of disease models and therapeutic interventions.
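The gamma-variate model that the perfusion CNN replaces can be fitted directly; the baseline it is compared against is pixelwise curve fitting of the bolus curve. A sketch using a coarse grid search with the amplitude solved in closed form (toy data; the bolus arrival time t0 is assumed known here):

```python
import numpy as np

def gamma_variate(t, K, t0, alpha, beta):
    """Gamma-variate bolus model: K*(t-t0)^alpha * exp(-(t-t0)/beta) for t > t0."""
    dt = np.clip(t - t0, 0, None)
    return K * dt ** alpha * np.exp(-dt / beta)

rng = np.random.default_rng(7)
t = np.linspace(0, 20, 20)                     # 20 timepoints, as in the scans
true = gamma_variate(t, 5.0, 2.0, 2.0, 1.5)
meas = true + 0.1 * rng.standard_normal(len(t))

# Grid search over shape parameters; amplitude K has a closed-form solution
best_err, best_fit = np.inf, None
for alpha in np.linspace(1.0, 3.0, 21):
    for beta in np.linspace(0.5, 3.0, 26):
        basis = gamma_variate(t, 1.0, 2.0, alpha, beta)
        K = (basis @ meas) / (basis @ basis)
        err = np.sum((K * basis - meas) ** 2)
        if err < best_err:
            best_err, best_fit = err, (K, alpha, beta)
K_hat, a_hat, b_hat = best_fit
```

Doing this per pixel across a 5D dataset is what makes the fitting slow and noise-sensitive, which is the gap the trained perfusion CNN closes.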

AI Model Passport: Data and System Traceability Framework for Transparent AI in Health

Varvara Kalokyri, Nikolaos S. Tachos, Charalampos N. Kalantzopoulos, Stelios Sfakianakis, Haridimos Kondylakis, Dimitrios I. Zaridis, Sara Colantonio, Daniele Regge, Nikolaos Papanikolaou, The ProCAncer-I consortium, Konstantinos Marias, Dimitrios I. Fotiadis, Manolis Tsiknakis

arxiv logopreprintJun 27 2025
The increasing integration of Artificial Intelligence (AI) into health and biomedical systems necessitates robust frameworks for transparency, accountability, and ethical compliance. Existing frameworks often rely on human-readable, manual documentation, which limits scalability, comparability, and machine interpretability across projects and platforms. They also fail to provide a unique, verifiable identity for AI models to ensure their provenance and authenticity across systems and use cases, limiting reproducibility and stakeholder trust. This paper introduces the concept of the AI Model Passport, a structured and standardized documentation framework that acts as a digital identity and verification tool for AI models. It captures essential metadata to uniquely identify, verify, trace, and monitor AI models across their lifecycle, from data acquisition and preprocessing to model design, development, and deployment. In addition, an implementation of this framework is presented through AIPassport, an MLOps tool developed within the ProCAncer-I EU project for medical imaging applications. AIPassport automates metadata collection, ensures proper versioning, decouples results from source scripts, and integrates with various development environments. Its effectiveness is showcased through a lesion segmentation use case using data from the ProCAncer-I dataset, illustrating how the AI Model Passport enhances transparency, reproducibility, and regulatory readiness while reducing manual effort. This approach aims to set a new standard for fostering trust and accountability in AI-driven healthcare solutions, aspiring to serve as the basis for developing transparent and regulation-compliant AI systems across domains.
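A minimal stdlib sketch of the passport idea: structured lifecycle metadata plus a content-derived, verifiable identity. All field names and values below are hypothetical illustrations, not the AIPassport schema:

```python
import hashlib
import json
from dataclasses import dataclass, field, asdict

@dataclass
class ModelPassport:
    """Toy passport: lifecycle metadata plus a deterministic content-hash ID."""
    name: str
    version: str
    training_data: str
    preprocessing: list
    metrics: dict = field(default_factory=dict)

    def passport_id(self) -> str:
        # Deterministic identity: SHA-256 over canonical (sorted-key) JSON,
        # so any metadata change yields a different ID
        canonical = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(canonical).hexdigest()[:16]

p = ModelPassport(
    name="lesion-segmenter",                  # hypothetical model name
    version="1.2.0",
    training_data="ProCAncer-I (prostate MRI)",
    preprocessing=["N4 bias correction", "resample 1mm iso"],
    metrics={"dice": 0.87},                   # hypothetical evaluation result
)
pid = p.passport_id()
```

Because the ID is derived from the metadata itself, two parties holding the same passport can independently verify they are referring to the same model artifact, which is the provenance property the paper argues manual documentation lacks.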

Lightweight Physics-Informed Zero-Shot Ultrasound Plane Wave Denoising

Hojat Asgariandehkordi, Mostafa Sharifzadeh, Hassan Rivaz

arxiv logopreprintJun 26 2025
Ultrasound Coherent Plane Wave Compounding (CPWC) enhances image contrast by combining echoes from multiple steered transmissions. While increasing the number of angles generally improves image quality, it drastically reduces the frame rate and can introduce blurring artifacts in fast-moving targets. Moreover, compounded images remain susceptible to noise, particularly when acquired with a limited number of transmissions. We propose a zero-shot denoising framework tailored for low-angle CPWC acquisitions, which enhances contrast without relying on a separate training dataset. The method divides the available transmission angles into two disjoint subsets, each used to form compound images that include higher noise levels. The new compounded images are then used to train a deep model via a self-supervised residual learning scheme, enabling it to suppress incoherent noise while preserving anatomical structures. Because angle-dependent artifacts vary between the subsets while the underlying tissue response is similar, this physics-informed pairing allows the network to learn to disentangle the inconsistent artifacts from the consistent tissue signal. Unlike supervised methods, our model requires no domain-specific fine-tuning or paired data, making it adaptable across anatomical regions and acquisition setups. The entire pipeline supports efficient training with low computational cost due to the use of a lightweight architecture, which comprises only two convolutional layers. Evaluations on simulation, phantom, and in vivo data demonstrate superior contrast enhancement and structure preservation compared to both classical and deep learning-based denoising methods.
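The physics-informed pairing can be sketched in NumPy: disjoint angle subsets share the tissue signal but carry independent noise, so compounds formed from each subset make a valid self-supervised input/target pair (the network itself is omitted; the 1-D "image" below is a toy stand-in):

```python
import numpy as np

rng = np.random.default_rng(8)
n_angles, n_px = 8, 64
clean = np.sin(np.linspace(0, 3 * np.pi, n_px))        # shared tissue signal

# Each steered transmission: tissue signal plus angle-dependent noise
per_angle = clean[None, :] + 0.3 * rng.standard_normal((n_angles, n_px))

# Disjoint angle subsets -> two noisier compounds with independent artifacts
odd, even = per_angle[0::2], per_angle[1::2]
compound_a = odd.mean(axis=0)       # self-supervised input
compound_b = even.mean(axis=0)      # self-supervised target
full = per_angle.mean(axis=0)       # all-angle compound, for reference

noise_a = np.linalg.norm(compound_a - clean)
noise_full = np.linalg.norm(full - clean)
```

Training a denoiser to map `compound_a` to `compound_b` (and vice versa) cannot reproduce the target's independent noise, so, as in Noise2Noise-style schemes, the network converges toward the consistent tissue signal, with no clean references or external training set required.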
