Page 2 of 440 results

Effect of data-driven motion correction for respiratory movement on lesion detectability in PET-CT: a phantom study.

de Winter MA, Gevers R, Lavalaye J, Habraken JBA, Maspero M

pubmed logopapers · Jul 11 2025
While data-driven motion correction (DDMC) techniques have proven to enhance the visibility of lesions affected by motion, their impact on overall detectability remains unclear. This study investigates whether DDMC improves lesion detectability in PET-CT using 18F-FDG. A moving platform simulated respiratory motion in a NEMA IEC body phantom with varying amplitudes (0, 7, 10, 20, 30 mm) and target-to-background ratios (2, 5, 10.5). Scans were reconstructed with and without DDMC, and the maximal and mean recovery coefficient (RC) and contrast-to-noise ratio (CNR) of the spherical targets were measured. DDMC yields higher RC values in the target spheres. CNR increases for small targets strongly affected by motion but decreases for larger spheres at smaller amplitudes. A sub-analysis shows that DDMC increases the contrast of the sphere along with a 36% increase in background noise. While DDMC significantly enhances contrast (RC), its impact on detectability (CNR) is less pronounced due to the increased background noise. CNR improves for small targets with high motion amplitude, potentially enhancing the detectability of low-uptake lesions. Given that the increased background noise may reduce detectability for targets unaffected by motion, we suggest that DDMC reconstructions are best used in addition to non-DDMC reconstructions.
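The RC and CNR figures of merit in this abstract can be sketched with their standard NEMA-style definitions (RC as measured-to-true uptake, CNR as target contrast over background standard deviation); this is a minimal illustration with synthetic ROI values, not the study's actual analysis pipeline:

```python
import numpy as np

def recovery_coefficient(target_roi, true_activity):
    # RC_max uses the hottest voxel, RC_mean the ROI average,
    # both normalized by the known true activity concentration.
    roi = np.asarray(target_roi, dtype=float)
    return roi.max() / true_activity, roi.mean() / true_activity

def contrast_to_noise_ratio(target_roi, background_roi):
    # CNR = (mean target - mean background) / background standard deviation
    t = np.asarray(target_roi, dtype=float)
    b = np.asarray(background_roi, dtype=float)
    return (t.mean() - b.mean()) / b.std(ddof=1)

# Synthetic example: a sphere at the 10.5:1 target-to-background ratio
rng = np.random.default_rng(0)
background = rng.normal(1.0, 0.1, 1000)
target = rng.normal(10.5, 0.5, 50)
rc_max, rc_mean = recovery_coefficient(target, true_activity=10.5)
cnr = contrast_to_noise_ratio(target, background)
```

The paper's observation that DDMC raises both contrast and background noise maps directly onto this formula: a larger numerator can be partly offset by a larger denominator.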

Impact of heart rate on coronary artery stenosis grading accuracy using deep learning-based fast kV-switching CT: A phantom study.

Mikayama R, Kojima T, Shirasaka T, Yamane S, Funatsu R, Kato T, Yabuuchi H

pubmed logopapers · Jul 11 2025
Deep learning-based fast kV-switching CT (DL-FKSCT) generates complete sinograms for fast kV-switching dual-energy CT (DECT) scans by using a trained neural network to restore missing views. Such restoration significantly enhances the image quality of coronary CT angiography (CCTA), and the allowable heart rate (HR) may differ between DECT and single-energy CT (SECT). This study aimed to examine the effect of HR on CCTA using DL-FKSCT. We scanned stenotic coronary artery phantoms attached to a pulsating cardiac phantom in DECT and SECT modes on a DL-FKSCT scanner. The phantom unit was operated at simulated HRs ranging from 0 (static) to 50-70 beats per minute (bpm). The sharpness and stenosis ratio of the coronary model were quantitatively compared between DECT and SECT, stratified by simulated HR setting, using the paired t-test (significance was set at p < 0.01 with Bonferroni adjustment for multiple comparisons). Regarding image sharpness, DECT was significantly superior to SECT. In terms of the stenosis ratio relative to a static image reference, the 70 keV virtual monochromatic image in DECT exhibited errors exceeding 10% at HRs above 65 bpm (p < 0.01), whereas 120 kVp SECT showed errors below 10% across all HR settings, with no significant differences. In DL-FKSCT, DECT has a lower upper HR limit than SECT; therefore, HR control is important for DECT scans with DL-FKSCT.
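The stenosis-grading error criterion used above can be illustrated with the common percent-diameter-stenosis definition (the paper's exact measurement protocol is not given here; the diameters below are hypothetical):

```python
def stenosis_ratio(lumen_diameter_mm, reference_diameter_mm):
    # Percent diameter stenosis relative to the adjacent reference segment
    return (1.0 - lumen_diameter_mm / reference_diameter_mm) * 100.0

def grading_error(measured_ratio, static_ratio):
    # Absolute error of a moving-phantom measurement vs. the static reference
    return abs(measured_ratio - static_ratio)

# Hypothetical 50% stenosis model: 2 mm lumen in a 4 mm vessel
static = stenosis_ratio(2.0, 4.0)       # 50.0 %
blurred = stenosis_ratio(1.5, 4.0)      # motion blur narrows the apparent lumen
error = grading_error(blurred, static)  # 12.5 points, exceeding the 10% criterion
```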

EdgeSRIE: A hybrid deep learning framework for real-time speckle reduction and image enhancement on portable ultrasound systems

Hyunwoo Cho, Jongsoo Lee, Jinbum Kang, Yangmo Yoo

arxiv logopreprint · Jul 5 2025
Speckle patterns in ultrasound images often obscure anatomical details, leading to diagnostic uncertainty. Various deep learning (DL)-based techniques have recently been introduced to suppress speckle effectively; however, their high computational costs pose challenges for low-resource devices such as portable ultrasound systems. To address this issue, we introduce EdgeSRIE, a lightweight hybrid DL framework for real-time speckle reduction and image enhancement in portable ultrasound imaging. The proposed framework consists of two main branches: an unsupervised despeckling branch, trained by minimizing a loss function between speckled images, and a deblurring branch, which restores blurred images to sharp images. For hardware implementation, the trained network is quantized to 8-bit integer precision and deployed on a low-resource system-on-chip (SoC) with limited power consumption. In performance evaluations with phantom and in vivo analyses, EdgeSRIE achieved the highest contrast-to-noise ratio (CNR) and average gradient magnitude (AGM) among all baselines (two rule-based and four DL-based methods). Furthermore, EdgeSRIE enabled real-time inference at over 60 frames per second while satisfying computational requirements (< 20K parameters) on actual portable ultrasound hardware. These results demonstrate the feasibility of EdgeSRIE for real-time, high-quality ultrasound imaging in resource-limited environments.
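The 8-bit integer quantization mentioned for SoC deployment can be sketched with symmetric per-tensor quantization, one common scheme (the paper does not specify its exact quantization method, so treat this as an assumption):

```python
import numpy as np

def quantize_int8(weights):
    # Symmetric per-tensor quantization: map [-max|w|, max|w|] onto the
    # int8 range with a single scale factor.
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_int8(q, scale):
    # Recover approximate float weights for accuracy checks
    return q.astype(np.float32) * scale

weights = np.linspace(-1.0, 1.0, 11)
q, scale = quantize_int8(weights)
roundtrip_error = np.abs(dequantize_int8(q, scale) - weights).max()
```

The round-trip error is bounded by half the quantization step, which is why 8-bit inference can preserve image quality while shrinking the model's memory and compute footprint.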

Clinical validation of AI-assisted animal ultrasound models for the diagnosis of early liver trauma.

Song Q, He X, Wang Y, Gao H, Tan L, Ma J, Kang L, Han P, Luo Y, Wang K

pubmed logopapers · Jul 2 2025
The study aimed to develop an AI-assisted ultrasound model for early liver trauma identification, using data from Bama miniature pigs and patients in Beijing, China. A deep learning model was created and fine-tuned with animal and clinical data, achieving high accuracy metrics. In internal tests, the model outperformed both Junior and Senior sonographers. External tests showed the model's effectiveness, with a Dice Similarity Coefficient of 0.74, True Positive Rate of 0.80, Positive Predictive Value of 0.74, and 95% Hausdorff distance of 14.84. The model's performance was comparable to Junior sonographers and slightly lower than Senior sonographers. This AI model shows promise for liver injury detection, offering a valuable tool with diagnostic capabilities similar to those of less experienced human operators.
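The external-test metrics reported above (Dice, true positive rate, positive predictive value) have standard set-overlap definitions; a minimal sketch on toy voxel index sets (the numbers are illustrative, not the study's masks):

```python
def segmentation_metrics(pred_voxels, true_voxels):
    # Dice similarity coefficient, true positive rate (recall), and
    # positive predictive value (precision) from binary voxel sets.
    pred, truth = set(pred_voxels), set(true_voxels)
    tp = len(pred & truth)
    dice = 2 * tp / (len(pred) + len(truth))
    tpr = tp / len(truth)
    ppv = tp / len(pred)
    return dice, tpr, ppv

# Toy overlap: 8 predicted voxels, 10 true voxels, 7 in common
dice, tpr, ppv = segmentation_metrics(range(8), range(1, 11))
```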

A Multi-Centric Anthropomorphic 3D CT Phantom-Based Benchmark Dataset for Harmonization

Mohammadreza Amirian, Michael Bach, Oscar Jimenez-del-Toro, Christoph Aberle, Roger Schaer, Vincent Andrearczyk, Jean-Félix Maestrati, Maria Martin Asiain, Kyriakos Flouris, Markus Obmann, Clarisse Dromain, Benoît Dufour, Pierre-Alexandre Alois Poletti, Hendrik von Tengg-Kobligk, Rolf Hügli, Martin Kretzschmar, Hatem Alkadhi, Ender Konukoglu, Henning Müller, Bram Stieltjes, Adrien Depeursinge

arxiv logopreprint · Jul 2 2025
Artificial intelligence (AI) has introduced numerous opportunities for human assistance and task automation in medicine. However, it suffers from poor generalization in the presence of shifts in the data distribution. In AI-based computed tomography (CT) analysis, significant data distribution shifts can be caused by changes in scanner manufacturer, reconstruction technique, or dose. AI harmonization techniques can address this problem by reducing the distribution shifts caused by varying acquisition settings. This paper presents an open-source benchmark dataset of CT scans of an anthropomorphic phantom acquired with various scanners and settings, whose purpose is to foster the development of AI harmonization techniques. Using a phantom removes variability attributable to inter- and intra-patient variation. The dataset includes 1378 image series acquired with 13 scanners from 4 manufacturers across 8 institutions using a harmonized protocol as well as several acquisition doses. Additionally, we present a methodology, baseline results, and open-source code to assess image- and feature-level stability and liver tissue classification, promoting the development of AI harmonization strategies.
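One simple way to express the feature-level stability such a benchmark targets is the coefficient of variation of a feature measured on the same phantom across scanners — an illustrative choice, not necessarily the paper's metric, and the numbers below are hypothetical:

```python
import numpy as np

def feature_stability(feature_by_scanner):
    # Coefficient of variation (CV) of one feature across scanners:
    # lower CV after harmonization indicates better cross-scanner stability.
    x = np.asarray(feature_by_scanner, dtype=float)
    return x.std(ddof=1) / x.mean()

# Hypothetical mean liver attenuation (HU) of the same phantom on 4 scanners
raw = [52.0, 61.0, 48.0, 57.0]
harmonized = [54.0, 55.5, 53.0, 55.0]
cv_raw = feature_stability(raw)
cv_harmonized = feature_stability(harmonized)
```

Because the phantom is identical on every scanner, any residual spread is acquisition-induced, which is exactly what a harmonization method should shrink.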

Volumetric and Diffusion Tensor Imaging Abnormalities Are Associated With Behavioral Changes Post-Concussion in a Youth Pig Model of Mild Traumatic Brain Injury.

Sanjida I, Alesa N, Chenyang L, Jiangyang Z, Bianca DM, Ana V, Shaun S, Avner M, Kirk M, Aimee C, Jie H, Ricardo MA, Jane M, Galit P

pubmed logopapers · Jul 1 2025
Mild traumatic brain injury (mTBI) caused by sports-related incidents in children and youth often leads to prolonged cognitive impairments but remains difficult to diagnose. To identify clinically relevant imaging and behavioral biomarkers associated with concussion, a closed-head mTBI was induced in adolescent pigs. Twelve 16-week-old Yucatan pigs (n = 4 male, n = 8 female) were tested; n = 6 received mTBI and n = 6 received a sham procedure. T1-weighted imaging was used to assess volumetric alterations in different brain regions, and diffusion tensor imaging (DTI) was used to examine microstructural damage in white matter. The pigs were imaged at 1 and 3 months post-injury. Neuropsychological screening for executive function and anxiety was performed before and in the months after the injury. The volumetric analysis showed significant longitudinal changes in pigs with mTBI compared with sham, which may be attributed to swelling and neuroinflammation. Fractional anisotropy (FA) values derived from DTI demonstrated a 21% increase in the corpus callosum from 1 to 3 months in mTBI pigs, significantly higher than in sham pigs (4.8%). Additionally, comparisons of the left and right internal capsules revealed a decrease in FA in the right internal capsule of mTBI pigs, which may indicate demyelination. The neuroimaging results suggest that the injury disrupted the maturation of white and gray matter in the developing brain. Behavioral testing showed that, compared with sham pigs, mTBI pigs exhibited 23% increased activity in open-field tests and 35% more escape attempts, along with a 65% decrease in interaction with a novel object, suggesting possible memory impairments and cognitive deficits. The correlation analysis showed associations between volumetric features and behavioral metrics. Furthermore, a machine learning model that integrated FA, volumetric features, and behavioral test metrics achieved 67% accuracy, indicating its potential to differentiate the two groups. Thus, the imaging biomarkers were indicative of long-term behavioral impairments and could be crucial to the clinical management of concussion in youth.
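The FA values central to this study come from the standard eigenvalue formula for the diffusion tensor; a minimal sketch (eigenvalues below are illustrative, not from the study):

```python
import numpy as np

def fractional_anisotropy(l1, l2, l3):
    # Standard FA definition from the three diffusion tensor eigenvalues:
    # FA = sqrt(3/2) * ||lambda - MD|| / ||lambda||, where MD is the mean
    # diffusivity. FA is 0 for isotropic diffusion and approaches 1 for
    # highly directional (e.g., coherent white-matter) diffusion.
    lam = np.array([l1, l2, l3], dtype=float)
    md = lam.mean()
    num = np.sqrt(((lam - md) ** 2).sum())
    den = np.sqrt((lam ** 2).sum())
    return np.sqrt(1.5) * num / den

fa_isotropic = fractional_anisotropy(1.0, 1.0, 1.0)   # free diffusion -> 0
fa_directional = fractional_anisotropy(1.0, 0.0, 0.0) # fully anisotropic -> 1
```

A longitudinal change such as the reported 21% corpus-callosum increase is then just the relative change of this scalar between the 1- and 3-month scans.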

Comparison of Deep Learning Models for fast and accurate dose map prediction in Microbeam Radiation Therapy.

Arsini L, Humphreys J, White C, Mentzel F, Paino J, Bolst D, Caccia B, Cameron M, Ciardiello A, Corde S, Engels E, Giagu S, Rosenfeld A, Tehei M, Tsoi AC, Vogel S, Lerch M, Hagenbuchner M, Guatelli S, Terracciano CM

pubmed logopapers · Jul 1 2025
Microbeam Radiation Therapy (MRT) is an innovative radiotherapy modality which uses highly focused synchrotron-generated X-ray microbeams. Current pre-clinical research in MRT mostly relies on Monte Carlo (MC) simulations for dose estimation, which are highly accurate but computationally intensive. Recently, Deep Learning (DL) dose engines have proven effective at generating fast and reliable dose distributions in different RT modalities. However, relatively few studies compare different models on the same task. This work compares a Graph-Convolutional-Network-based DL model, developed in the context of Very High Energy Electron RT, with the convolutional 3D U-Net that we recently implemented for MRT dose prediction. The two DL solutions are trained on 3D dose maps, generated with the MC toolkit Geant4, in rats used in MRT pre-clinical research. The models are evaluated against Geant4 simulations, used as ground truth, in terms of Mean Absolute Error, Mean Relative Error, and a voxel-wise version of the γ-index. Also presented are specific comparisons of predictions in relevant tumor regions, tissue boundaries, and air pockets. The two models are finally compared in terms of execution time and size. The study finds that the two models achieve comparable overall performance. The main differences are found in their dosimetric accuracy within specific regions, such as air pockets, and in their inference times. Consequently, the choice between the models should be guided primarily by data structure and time constraints, favoring the graph-based method for its flexibility or the 3D U-Net for its faster execution.
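Two of the evaluation metrics named above can be sketched directly against a reference dose map (the low-dose threshold used to keep the relative error well-defined is an assumption, and the γ-index is omitted for brevity):

```python
import numpy as np

def dose_errors(pred, ref, threshold=0.1):
    # Mean absolute error over all voxels, and mean relative error only over
    # voxels above a fraction of the reference maximum, to avoid dividing by
    # near-zero doses outside the beam.
    pred, ref = np.asarray(pred, dtype=float), np.asarray(ref, dtype=float)
    mae = np.abs(pred - ref).mean()
    mask = ref > threshold * ref.max()
    mre = (np.abs(pred - ref)[mask] / ref[mask]).mean()
    return mae, mre

# Toy check: a prediction uniformly 2% hot relative to the MC reference
ref = np.array([0.0, 1.0, 2.0, 4.0])
mae, mre = dose_errors(ref * 1.02, ref)
```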

Quantitative ultrasound classification of healthy and chemically degraded ex-vivo cartilage.

Sorriento A, Guachi-Guachi L, Turini C, Lenzi E, Dolzani P, Lisignoli G, Kerdegari S, Valenza G, Canale C, Ricotti L, Cafarelli A

pubmed logopapers · Jul 1 2025
In this study, we explore the potential of ten quantitative (radiofrequency-based) ultrasound parameters to assess the progressive loss of collagen and proteoglycans, mimicking an osteoarthritis condition in ex-vivo bovine cartilage samples. Most analyzed metrics showed significant changes as the degradation progressed, especially with collagenase treatment. We propose for the first time a combination of these ultrasound parameters through machine learning models aimed at automatically identifying healthy and degraded cartilage samples. The random forest model distinguished healthy cartilage from trypsin-treated samples with an accuracy of 60%. The support vector machine demonstrated excellent accuracy (96%) in differentiating healthy cartilage from collagenase-degraded samples. Histological and mechanical analyses further confirmed these findings, with collagenase having a more pronounced impact than trypsin on both mechanical and histological properties. These metrics were obtained using an ultrasound probe with a transmission frequency of 15 MHz, typically used for the diagnosis of musculoskeletal diseases, enabling a fully non-invasive procedure without requiring arthroscopic probes. As a perspective, the proposed quantitative ultrasound assessment has the potential to become a new standard for monitoring cartilage health, enabling the early detection of cartilage pathologies and timely interventions.
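The study feeds its ultrasound parameters into random forest and SVM classifiers; as a self-contained stand-in, a nearest-centroid classifier on two synthetic features illustrates the same combine-and-classify idea (feature values are simulated, not from the study):

```python
import numpy as np

def nearest_centroid_fit(X, y):
    # One centroid per class in the quantitative-ultrasound feature space
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def nearest_centroid_predict(centroids, X):
    # Assign each sample to the class of its closest centroid
    labels = list(centroids)
    dists = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in labels])
    return np.array(labels)[dists.argmin(axis=0)]

# Simulated 2-feature clusters for "healthy" (0) vs "collagenase-degraded" (1)
rng = np.random.default_rng(0)
healthy = rng.normal([0.0, 0.0], 0.5, size=(20, 2))
degraded = rng.normal([3.0, 3.0], 0.5, size=(20, 2))
X = np.vstack([healthy, degraded])
y = np.array([0] * 20 + [1] * 20)
model = nearest_centroid_fit(X, y)
accuracy = (nearest_centroid_predict(model, X) == y).mean()
```

When the treatment shifts the feature distributions as strongly as collagenase does here, even this simple decision rule separates the classes well; the harder trypsin case is where richer models like random forests earn their keep.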

Phantom-based evaluation of image quality in Transformer-enhanced 2048-matrix CT imaging at low and ultralow doses.

Li Q, Liu L, Zhang Y, Zhang L, Wang L, Pan Z, Xu M, Zhang S, Xie X

pubmed logopapers · Jul 1 2025
To compare the quality of standard 512-matrix, standard 1024-matrix, and Swin2SR-based 2048-matrix phantom images under different scanning protocols. The Catphan 600 phantom was scanned with a multidetector CT scanner under two protocols: 120 kV/100 mA (CT dose index volume = 3.4 mGy) to simulate low-dose CT, and 70 kV/40 mA (0.27 mGy) to simulate ultralow-dose CT. Raw data were reconstructed into standard 512-matrix images using three methods: filtered back projection (FBP), adaptive statistical iterative reconstruction at 40% intensity (ASIR-V), and deep learning image reconstruction at high intensity (DLIR-H). The Swin2SR super-resolution model was used to generate 2048-matrix images (Swin2SR-2048), and the super-resolution convolutional neural network (SRCNN) model was used to generate comparison 2048-matrix images (SRCNN-2048). Image quality was evaluated with ImQuest software (v7.2.0.0, Duke University) based on line-pair clarity, task-based transfer function (TTF), image noise, and noise power spectrum (NPS). At equivalent radiation doses and with the same reconstruction method, Swin2SR-2048 images resolved more line pairs than both standard-512 and standard-1024 images. Except for the 0.27 mGy/DLIR-H/standard-kernel sequence, the TTF-50% of Teflon increased after super-resolution processing. Statistically significant differences in TTF-50% were observed between the standard-512, standard-1024, and Swin2SR-2048 images (all p < 0.05). Swin2SR-2048 images exhibited lower image noise and peak NPS than both standard 512- and 1024-matrix images, with significant differences observed for all three matrix types (all p < 0.05). Swin2SR-2048 images also demonstrated superior quality compared with SRCNN-2048, with significant differences in image noise (p < 0.001), peak NPS (p < 0.05), and TTF-50% for Teflon (p < 0.05).
Transformer-enhanced 2048-matrix CT images improve spatial resolution and reduce image noise compared with standard 512- and 1024-matrix images.
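The NPS used above is conventionally estimated from an ensemble of mean-subtracted noise-only ROIs; a minimal sketch (the ROI size, pixel spacing, and white-noise input are assumptions for illustration — as a sanity check, the NPS integrated over frequency should recover the noise variance):

```python
import numpy as np

def noise_power_spectrum(noise_rois, pixel_size_mm):
    # 2D NPS = (dx*dy / (Nx*Ny)) * ensemble mean of |DFT(roi - mean)|^2
    rois = np.asarray(noise_rois, dtype=float)
    n_rois, ny, nx = rois.shape
    rois = rois - rois.mean(axis=(1, 2), keepdims=True)
    power = np.abs(np.fft.fft2(rois)) ** 2
    return (pixel_size_mm ** 2 / (nx * ny)) * power.mean(axis=0)

# White noise with sigma = 10 HU, 50 ROIs of 16x16 pixels at 0.5 mm spacing
rng = np.random.default_rng(0)
rois = rng.normal(0.0, 10.0, size=(50, 16, 16))
nps = noise_power_spectrum(rois, pixel_size_mm=0.5)

# Parseval check: integral of NPS over frequency ~= pixel variance (~100 HU^2)
freq_bin_area = (1.0 / (16 * 0.5)) ** 2
variance_from_nps = nps.sum() * freq_bin_area
```

A lower NPS peak, as reported for the Swin2SR-2048 images, means less noise power concentrated at the dominant spatial frequency, not merely a lower pixel standard deviation.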

Photon-counting micro-CT scanner for deep learning-enabled small animal perfusion imaging.

Allphin AJ, Nadkarni R, Clark DP, Badea CT

pubmed logopapers · Jun 27 2025
In this work, we introduce a benchtop, turn-table photon-counting (PC) micro-CT scanner and highlight its application for dynamic small animal perfusion imaging. Approach: Built on recently published hardware, the system now features a CdTe-based photon-counting detector (PCD). We validated its static spectral PC micro-CT imaging using conventional phantoms and assessed dynamic performance with a custom flow-configurable dual-compartment perfusion phantom. The phantom was scanned under varied flow conditions during injections of a low-molecular-weight iodinated contrast agent. In vivo mouse studies with identical injection settings demonstrated potential applications. A pretrained denoising CNN processed large multi-energy, temporal datasets (20 timepoints × 4 energies × 3 spatial dimensions), reconstructed via weighted filtered back projection. A separate CNN, trained on simulated data, performed gamma variate-based 2D perfusion mapping, evaluated qualitatively in phantom and in vivo tests. Main Results: Full five-dimensional reconstructions were denoised using a CNN in ~3% of the time of iterative reconstruction, reducing noise in water at the highest energy threshold from 1206 HU to 86 HU. Decomposed iodine maps, which improved the contrast-to-noise ratio from 16.4 (in the lowest-energy CT images) to 29.4 (in the iodine maps), were used for perfusion analysis. The perfusion CNN outperformed pixelwise gamma variate fitting by ~33%, with a test set error of 0.04 vs. 0.06 in blood flow index (BFI) maps, and quantified linear BFI changes in the phantom with a coefficient of determination of 0.98. Significance: This work underscores the PC micro-CT scanner's utility for high-throughput small animal perfusion imaging, leveraging spectral PC micro-CT and iodine decomposition. It provides a versatile platform for preclinical vascular research and advanced, time-resolved studies of disease models and therapeutic interventions.
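The gamma variate model underlying the perfusion mapping above has a standard closed form; a sketch of the model curve (parameter values are illustrative, and the maximum-upslope BFI surrogate is an assumption — the paper's exact BFI definition may differ):

```python
import numpy as np

def gamma_variate(t, t0, amplitude, alpha, beta):
    # Gamma variate model of a contrast time-attenuation curve:
    # C(t) = A * (t - t0)^alpha * exp(-(t - t0) / beta) for t > t0, else 0.
    dt = np.clip(t - t0, 0.0, None)
    return amplitude * dt ** alpha * np.exp(-dt / beta)

# Illustrative bolus: arrival at t0 = 2 s, peak at t0 + alpha*beta = 5 s
t = np.linspace(0.0, 20.0, 201)
curve = gamma_variate(t, t0=2.0, amplitude=1.0, alpha=2.0, beta=1.5)

# One common blood-flow surrogate: the maximum upslope of the fitted curve
bfi = np.diff(curve).max() / (t[1] - t[0])
```

Pixelwise nonlinear fitting of this model across a 5D dataset is what makes the conventional approach slow, and why a CNN trained to emit the perfusion map directly can be both faster and more robust.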
