
SimAQ: Mitigating Experimental Artifacts in Soft X-Ray Tomography using Simulated Acquisitions

Jacob Egebjerg, Daniel Wüstner

arXiv preprint · Aug 14, 2025
Soft X-ray tomography (SXT) provides detailed structural insight into whole cells but is hindered by experimental artifacts such as the missing wedge and by the limited availability of annotated datasets. We present SimAQ, a simulation pipeline that generates realistic cellular phantoms and applies synthetic artifacts to produce paired noisy volumes, sinograms, and reconstructions. We validate our approach by training a neural network primarily on synthetic data and demonstrate effective few-shot and zero-shot transfer learning on real SXT tomograms. Our model delivers accurate segmentations, enabling quantitative analysis of noisy tomograms without relying on large labeled datasets or complex reconstruction methods.
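A minimal sketch of the missing-wedge degradation the abstract describes, built on scikit-image's Radon tools and a stock phantom; the ±60° tilt range and all names are illustrative assumptions, not the SimAQ implementation:

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

phantom = shepp_logan_phantom()  # stand-in for one slice of a cellular phantom

# Restricting the angular range emulates the missing wedge of an SXT tilt series
full_angles = np.arange(0.0, 180.0, 1.0)
wedge_angles = np.arange(30.0, 150.0, 1.0)  # assumed +/-60 degree tilt range

clean = iradon(radon(phantom, theta=full_angles), theta=full_angles)
sinogram = radon(phantom, theta=wedge_angles)   # artifact-bearing sinogram
recon = iradon(sinogram, theta=wedge_angles)    # reconstruction with wedge artifacts

# One paired training sample: (noisy reconstruction, sinogram, clean reference)
noisy = recon + np.random.default_rng(0).normal(scale=0.05, size=recon.shape)
```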

Beam Hardening Correction in Clinical X-ray Dark-Field Chest Radiography using Deep Learning-Based Bone Segmentation

Lennard Kaster, Maximilian E. Lochschmidt, Anne M. Bauer, Tina Dorosti, Sofia Demianova, Thomas Koehler, Daniela Pfeiffer, Franz Pfeiffer

arXiv preprint · Aug 14, 2025
Dark-field radiography is a novel X-ray imaging modality that provides complementary diagnostic information by visualizing the microstructural properties of lung tissue. Implemented via a Talbot-Lau interferometer integrated into a conventional X-ray system, it allows simultaneous acquisition of temporally and spatially co-registered conventional attenuation-based and dark-field radiographs. Recent clinical studies have demonstrated that dark-field radiography outperforms conventional radiography in diagnosing and staging pulmonary diseases. However, the polychromatic nature of medical X-ray sources leads to beam hardening, which introduces structured artifacts in the dark-field radiographs, particularly from osseous structures. This beam-hardening-induced dark-field signal is artificial and causes undesired cross-talk between the attenuation and dark-field channels. This work presents a segmentation-based beam-hardening correction method that uses deep learning to segment ribs and clavicles. Attenuation contribution masks derived from dual-layer detector computed tomography data, decomposed into aluminum and water, were used to refine the estimated material distribution. The method was evaluated both qualitatively and quantitatively on clinical data from healthy subjects and patients with chronic obstructive pulmonary disease and COVID-19. The proposed approach reduces bone-induced artifacts and improves the homogeneity of the lung dark-field signal, supporting more reliable visual and quantitative assessment in clinical dark-field chest radiography.
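A hedged sketch of how a segmentation-based correction of this kind can be wired together; the polynomial mapping from attenuation to spurious dark-field signal is a stand-in assumption, since the paper derives the bone contribution from dual-layer CT material decomposition, and the function name is hypothetical:

```python
import numpy as np

def correct_dark_field(dark_field, attenuation, bone_mask, degree=2):
    """Subtract an estimated bone-induced contribution from a dark-field image.

    dark_field, attenuation: co-registered 2D radiographs (arrays).
    bone_mask: boolean 2D array from a rib/clavicle segmentation network.
    """
    # Fit the spurious dark-field signal as a polynomial in attenuation,
    # using only bone pixels (stand-in for the paper's material decomposition)
    coeffs = np.polyfit(attenuation[bone_mask], dark_field[bone_mask], degree)
    artifact = np.polyval(coeffs, attenuation) * bone_mask  # correct bone only
    return dark_field - artifact
```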

GNN-based Unified Deep Learning

Furkan Pala, Islem Rekik

arXiv preprint · Aug 14, 2025
Deep learning models often struggle to maintain generalizability in medical imaging, particularly under domain-fracture scenarios where distribution shifts arise from varying imaging techniques, acquisition protocols, patient populations, demographics, and equipment. In practice, each hospital may need to train distinct models - differing in learning task, width, and depth - to match local data. For example, one hospital may use Euclidean architectures such as MLPs and CNNs for tabular or grid-like image data, while another may require non-Euclidean architectures such as graph neural networks (GNNs) for irregular data like brain connectomes. How to train such heterogeneous models coherently across datasets, while enhancing each model's generalizability, remains an open problem. We propose unified learning, a new paradigm that encodes each model into a graph representation, enabling unification in a shared graph learning space. A GNN then guides optimization of these unified models. By decoupling parameters of individual models and controlling them through a unified GNN (uGNN), our method supports parameter sharing and knowledge transfer across varying architectures (MLPs, CNNs, GNNs) and distributions, improving generalizability. Evaluations on MorphoMNIST and two MedMNIST benchmarks - PneumoniaMNIST and BreastMNIST - show that unified learning boosts performance when models are trained on unique distributions and tested on mixed ones, demonstrating strong robustness to unseen data with large distribution shifts. Code and benchmarks: https://github.com/basiralab/uGNN
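A much-simplified sketch of the "model as graph" idea behind uGNN, assuming one node per parameter tensor with summary-statistic features and chain edges following layer order; the paper's encoding and GNN are richer than this:

```python
import torch
import torch.nn as nn

def model_to_graph(model: nn.Module):
    """One node per parameter tensor; feature = (mean, std, log-size)."""
    feats = [
        torch.stack([
            p.detach().mean(),
            p.detach().std(unbiased=False),
            torch.log(torch.tensor(float(p.numel()))),
        ])
        for p in model.parameters()
    ]
    x = torch.stack(feats)                    # node features [N, 3]
    src = torch.arange(len(feats) - 1)
    edge_index = torch.stack([src, src + 1])  # chain edges in layer order
    return x, edge_index

class SimpleGNNLayer(nn.Module):
    """Sum-aggregation message passing over the layer-chain graph."""
    def __init__(self, dim: int = 3):
        super().__init__()
        self.lin = nn.Linear(dim, dim)

    def forward(self, x, edge_index):
        agg = torch.zeros_like(x)
        agg.index_add_(0, edge_index[1], x[edge_index[0]])  # sum over in-edges
        return torch.relu(self.lin(x + agg))

# x, ei = model_to_graph(nn.Sequential(nn.Linear(8, 4), nn.ReLU(), nn.Linear(4, 2)))
# out = SimpleGNNLayer()(x, ei)
```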

FIND-Net -- Fourier-Integrated Network with Dictionary Kernels for Metal Artifact Reduction

Farid Tasharofi, Fuxin Fan, Melika Qahqaie, Mareike Thies, Andreas Maier

arXiv preprint · Aug 14, 2025
Metal artifacts, caused by high-density metallic implants in computed tomography (CT) imaging, severely degrade image quality, complicating diagnosis and treatment planning. While existing deep learning algorithms have achieved notable success in Metal Artifact Reduction (MAR), they often struggle to suppress artifacts while preserving structural details. To address this challenge, we propose FIND-Net (Fourier-Integrated Network with Dictionary Kernels), a novel MAR framework that integrates frequency and spatial domain processing to achieve superior artifact suppression and structural preservation. FIND-Net incorporates Fast Fourier Convolution (FFC) layers and trainable Gaussian filtering, treating MAR as a hybrid task operating in both spatial and frequency domains. This approach enhances global contextual understanding and frequency selectivity, effectively reducing artifacts while maintaining anatomical structures. Experiments on synthetic datasets show that FIND-Net achieves statistically significant improvements over state-of-the-art MAR methods, with a 3.07% MAE reduction, 0.18% SSIM increase, and 0.90% PSNR improvement, confirming robustness across varying artifact complexities. Furthermore, evaluations on real-world clinical CT scans confirm FIND-Net's ability to minimize modifications to clean anatomical regions while effectively suppressing metal-induced distortions. These findings highlight FIND-Net's potential for advancing MAR performance, offering superior structural preservation and improved clinical applicability. Code is available at https://github.com/Farid-Tasharofi/FIND-Net
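A sketch of the spectral branch of a Fast Fourier Convolution layer, the frequency-domain ingredient FIND-Net builds on; this omits the paper's trainable Gaussian filtering and dictionary kernels:

```python
import torch
import torch.nn as nn

class SpectralConv(nn.Module):
    """1x1 convolution applied to the real/imaginary parts of the 2D spectrum."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv = nn.Conv2d(2 * channels, 2 * channels, kernel_size=1)

    def forward(self, x):                        # x: [B, C, H, W]
        spec = torch.fft.rfft2(x, norm="ortho")  # complex, [B, C, H, W//2+1]
        z = torch.cat([spec.real, spec.imag], dim=1)
        z = torch.relu(self.conv(z))             # pointwise mixing in frequency
        real, imag = z.chunk(2, dim=1)
        spec = torch.complex(real, imag)
        return torch.fft.irfft2(spec, s=x.shape[-2:], norm="ortho")

# y = SpectralConv(16)(torch.randn(2, 16, 64, 64))  # same spatial shape out
```

Because every spectral coefficient mixes information from the whole image, even a 1x1 spectral convolution has a global receptive field, which is what makes FFC layers attractive for streak-like metal artifacts.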

SingleStrip: learning skull-stripping from a single labeled example

Bella Specktor-Fadida, Malte Hoffmann

arXiv preprint · Aug 14, 2025
Deep learning segmentation relies heavily on labeled data, but manual labeling is laborious and time-consuming, especially for volumetric images such as brain magnetic resonance imaging (MRI). While recent domain-randomization techniques alleviate the dependency on labeled data by synthesizing diverse training images from label maps, they offer limited anatomical variability when very few label maps are available. Semi-supervised self-training addresses label scarcity by iteratively incorporating model predictions into the training set, enabling networks to learn from unlabeled data. In this work, we combine domain randomization with self-training to train three-dimensional skull-stripping networks using as little as a single labeled example. First, we automatically bin voxel intensities, yielding labels we use to synthesize images for training an initial skull-stripping model. Second, we train a convolutional autoencoder (AE) on the labeled example and use its reconstruction error to assess the quality of brain masks predicted for unlabeled data. Third, we select the top-ranking pseudo-labels to fine-tune the network, achieving skull-stripping performance on out-of-distribution data that approaches models trained with more labeled images. We compare AE-based ranking to consistency-based ranking under test-time augmentation, finding that the AE approach yields a stronger correlation with segmentation accuracy. Our results highlight the potential of combining domain randomization and AE-based quality control to enable effective semi-supervised segmentation from extremely limited labeled data. This strategy may ease the labeling burden that slows progress in studies involving new anatomical structures or emerging imaging techniques.
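A minimal sketch of the AE-based ranking step, assuming a callable autoencoder and arrays of candidate brain masks; names and the top-k selection are hypothetical simplifications of the paper's procedure:

```python
import numpy as np

def rank_pseudo_labels(autoencoder, images, masks, top_k=10):
    """Score (image, predicted mask) pairs by AE reconstruction error.

    autoencoder: callable mapping an array to its reconstruction,
    trained on the single labeled (skull-stripped) example.
    """
    errors = []
    for img, mask in zip(images, masks):
        masked = img * mask                 # candidate skull-stripped image
        recon = autoencoder(masked)
        errors.append(float(np.mean((recon - masked) ** 2)))
    order = np.argsort(errors)              # low error ~ in-distribution ~ good mask
    return order[:top_k]                    # indices of pseudo-labels to keep
```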

Cross-view Generalized Diffusion Model for Sparse-view CT Reconstruction

Jixiang Chen, Yiqun Lin, Yi Qin, Hualiang Wang, Xiaomeng Li

arXiv preprint · Aug 14, 2025
Sparse-view computed tomography (CT) reduces radiation exposure by subsampling projection views, but conventional reconstruction methods produce severe streak artifacts with undersampled data. While deep-learning-based methods enable single-step artifact suppression, they often produce over-smoothed results under significant sparsity. Though diffusion models improve reconstruction via iterative refinement and generative priors, they require hundreds of sampling steps and struggle with stability in highly sparse regimes. To tackle these concerns, we present the Cross-view Generalized Diffusion Model (CvG-Diff), which reformulates sparse-view CT reconstruction as a generalized diffusion process. Unlike existing diffusion approaches that rely on stochastic Gaussian degradation, CvG-Diff explicitly models image-domain artifacts caused by angular subsampling as a deterministic degradation operator, leveraging correlations across sparse-view CT at different sample rates. To address the inherent artifact propagation and the inefficiency of sequential sampling in the generalized diffusion model, we introduce two innovations: Error-Propagating Composite Training (EPCT), which facilitates identifying error-prone regions and suppresses propagated artifacts, and Semantic-Prioritized Dual-Phase Sampling (SPDPS), an adaptive strategy that prioritizes semantic correctness before detail refinement. Together, these innovations enable CvG-Diff to achieve high-quality reconstructions with minimal iterations, reaching 38.34 dB PSNR and 0.9518 SSIM for 18-view CT using only 10 sampling steps on the AAPM-LDCT dataset. Extensive experiments demonstrate the superiority of CvG-Diff over state-of-the-art sparse-view CT reconstruction methods. The code is available at https://github.com/xmed-lab/CvG-Diff.
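A sketch of the deterministic-degradation idea: filtered back-projection from progressively fewer views plays the role that Gaussian noising plays in a standard diffusion model. The view counts and the scikit-image pipeline are illustrative assumptions, not CvG-Diff's schedule:

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

def degrade(image, n_views):
    """Deterministic degradation: FBP from n_views equally spaced projections."""
    theta = np.linspace(0.0, 180.0, n_views, endpoint=False)
    return iradon(radon(image, theta=theta), theta=theta)

# A degradation trajectory across sample rates, dense -> 18 views;
# a network is then trained to walk this chain in reverse
img = shepp_logan_phantom()
states = [degrade(img, v) for v in (720, 144, 72, 36, 18)]
```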

DINOMotion: advanced robust tissue motion tracking with DINOv2 in 2D-Cine MRI-guided radiotherapy

Soorena Salari, Catherine Spino, Laurie-Anne Pharand, Fabienne Lathuiliere, Hassan Rivaz, Silvain Beriault, Yiming Xiao

arXiv preprint · Aug 14, 2025
Accurate tissue motion tracking is critical to ensure treatment outcome and safety in 2D-Cine MRI-guided radiotherapy. This is typically achieved by registration of sequential images, but existing methods often face challenges with large misalignments and lack of interpretability. In this paper, we introduce DINOMotion, a novel deep learning framework based on DINOv2 with Low-Rank Adaptation (LoRA) layers for robust, efficient, and interpretable motion tracking. DINOMotion automatically detects corresponding landmarks to derive optimal image registration, enhancing interpretability by providing explicit visual correspondences between sequential images. The integration of LoRA layers reduces trainable parameters, improving training efficiency, while DINOv2's powerful feature representations offer robustness against large misalignments. Unlike iterative optimization-based methods, DINOMotion directly computes image registration at test time. Our experiments on volunteer and patient datasets demonstrate its effectiveness in estimating both linear and nonlinear transformations, achieving Dice scores of 92.07% for the kidney, 90.90% for the liver, and 95.23% for the lung, with corresponding Hausdorff distances of 5.47 mm, 8.31 mm, and 6.72 mm, respectively. DINOMotion processes each scan in approximately 30 ms and consistently outperforms state-of-the-art methods, particularly in handling large misalignments. These results highlight its potential as a robust and interpretable solution for real-time motion tracking in 2D-Cine MRI-guided radiotherapy.
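For the linear case, deriving a registration from matched landmarks reduces to a least-squares fit; a minimal sketch under that assumption (the paper also estimates nonlinear transformations, which this omits):

```python
import numpy as np

def affine_from_landmarks(src, dst):
    """src, dst: [N, 2] matched landmark coordinates (N >= 3).

    Returns the 2x3 affine matrix M such that dst ~= src_h @ M.T,
    where src_h appends a column of ones to src.
    """
    src_h = np.hstack([src, np.ones((len(src), 1))])  # homogeneous coords [N, 3]
    A, *_ = np.linalg.lstsq(src_h, dst, rcond=None)   # least-squares fit [3, 2]
    return A.T                                        # [2, 3] affine

# src = np.array([[10., 10.], [50., 12.], [30., 40.], [20., 55.]])
# dst = src + np.array([2.0, -1.5])                   # a pure translation
# M = affine_from_landmarks(src, dst)                 # recovers the shift
```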

Ultrasound Phase Aberrated Point Spread Function Estimation with Convolutional Neural Network: Simulation Study.

Shen WH, Lin YA, Li ML

PubMed · Aug 13, 2025
Ultrasound imaging systems rely on accurate point spread function (PSF) estimation to support advanced image quality enhancement techniques such as deconvolution and speckle reduction. Phase aberration, caused by sound speed inhomogeneity within biological tissue, is inevitable in ultrasound imaging. It distorts the PSF by increasing sidelobe level and introducing asymmetric amplitude, making PSF estimation under phase aberration highly challenging. In this work, we propose a deep learning framework for estimating phase-aberrated PSFs using U-Net and complex U-Net architectures, operating on RF and complex k-space data, respectively, with the latter demonstrating superior performance. Synthetic phase aberration data, generated using the near-field phase screen model, is employed to train the networks. We evaluate various loss functions and find that log-compressed B-mode perceptual loss achieves the best performance, accurately predicting both the mainlobe and near sidelobe regions of the PSF. Simulation results validate the effectiveness of our approach in estimating PSFs under varying levels of phase aberration. Furthermore, we demonstrate that more accurate PSF estimation improves performance in a downstream phase aberration correction task, highlighting the broader utility of the proposed method.
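A sketch of what a log-compressed B-mode loss can look like, assuming envelope detection via the Hilbert transform along fast time and a plain L1 comparison; the paper's perceptual variant is richer than this:

```python
import numpy as np
from scipy.signal import hilbert

def bmode(rf, dynamic_range_db=60.0):
    """RF data [samples, lines] -> log-compressed envelope in dB."""
    env = np.abs(hilbert(rf, axis=0))      # analytic-signal envelope, fast time
    env = env / (env.max() + 1e-12)        # normalize to 0 dB peak
    db = 20.0 * np.log10(env + 1e-12)
    return np.clip(db, -dynamic_range_db, 0.0)

def bmode_l1_loss(rf_pred, rf_true):
    """Compare predicted and reference PSFs in the log-compressed domain."""
    return float(np.mean(np.abs(bmode(rf_pred) - bmode(rf_true))))
```

Comparing in the log-compressed domain weights the mainlobe and near sidelobes the way a displayed B-mode image does, which is consistent with the abstract's finding that this loss best predicts those regions.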

Exploring Radiologists' Use of AI Chatbots for Assistance in Image Interpretation: Patterns of Use and Trust Evaluation.

Alarifi M

PubMed · Aug 13, 2025
This study investigated radiologists' perceptions of AI-generated, patient-friendly radiology reports across three modalities: MRI, CT, and mammogram/ultrasound. The evaluation focused on report correctness, completeness, terminology complexity, and emotional impact. Seventy-nine radiologists from four major Saudi Arabian hospitals assessed AI-simplified versions of clinical radiology reports. Each participant reviewed one report from each modality and completed a structured questionnaire covering factual correctness, completeness, terminology complexity, and emotional impact. A structured and detailed prompt was used to guide ChatGPT-4 in generating the reports, which included clear findings, a lay summary, glossary, and clarification of ambiguous elements. Statistical analyses included descriptive summaries, Friedman tests, and Pearson correlations. Radiologists rated mammogram reports highest for correctness (M = 4.22), followed by CT (4.05) and MRI (3.95). Completeness scores followed a similar trend. Statistically significant differences were found in correctness (χ²(2) = 17.37, p < 0.001) and completeness (χ²(2) = 13.13, p = 0.001). Anxiety and complexity ratings were moderate, with MRI reports linked to slightly higher concern. A weak positive correlation emerged between radiologists' experience and mammogram correctness ratings (r = .235, p = .037). Radiologists expressed overall support for AI-generated simplified radiology reports when created using a structured prompt that includes summaries, glossaries, and clarification of ambiguous findings. While mammography and CT reports were rated favorably, MRI reports showed higher emotional impact, highlighting a need for clearer and more emotionally supportive language.
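The reported statistics map directly onto standard SciPy calls; a sketch with mock ratings (illustrative data only, not the study's dataset):

```python
import numpy as np
from scipy.stats import friedmanchisquare, pearsonr

rng = np.random.default_rng(0)
n = 79  # raters, one paired rating per modality

# Mock correctness ratings centered on the reported means, clipped to 1-5
mri, ct, mammo = (np.clip(rng.normal(m, 0.6, n), 1, 5)
                  for m in (3.95, 4.05, 4.22))

stat, p = friedmanchisquare(mri, ct, mammo)   # nonparametric paired comparison
years = rng.uniform(1, 25, n)                 # mock years of experience
r, p_r = pearsonr(years, mammo)               # experience vs. mammogram ratings
print(f"Friedman chi2(2) = {stat:.2f}, p = {p:.3g}; Pearson r = {r:.3f}")
```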

Economic Evaluations and Equity in the Use of Artificial Intelligence in Imaging Examinations for Medical Diagnosis in People With Dermatological, Neurological, and Pulmonary Diseases: Systematic Review.

Santana GO, Couto RM, Loureiro RM, Furriel BCRS, de Paula LGN, Rother ET, de Paiva JPQ, Correia LR

PubMed · Aug 13, 2025
Health care systems around the world face numerous challenges. Recent advances in artificial intelligence (AI) have offered promising solutions, particularly in diagnostic imaging. This systematic review focused on evaluating the economic feasibility of AI in real-world diagnostic imaging scenarios, specifically for dermatological, neurological, and pulmonary diseases. The central question was whether the use of AI in these diagnostic assessments improves economic outcomes and promotes equity in health care systems. This systematic review has two main components: economic evaluation and equity assessment. We used the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) tool to ensure adherence to best practices in systematic reviews. The protocol was registered with PROSPERO (International Prospective Register of Systematic Reviews), and we followed the PRISMA-E (Preferred Reporting Items for Systematic Reviews and Meta-Analyses - Equity Extension) guidelines for equity. Scientific articles reporting on economic evaluations or equity considerations related to the use of AI-based tools in diagnostic imaging in dermatology, neurology, or pulmonology were included in the study. The search was conducted in the PubMed, Embase, Scopus, and Web of Science databases. Methodological quality was assessed using the following checklists: CHEC (Consensus on Health Economic Criteria) for economic evaluations, EPHPP (Effective Public Health Practice Project) for equity evaluation studies, and Welte for transferability. The systematic review identified 9 publications within the scope of the research question, with sample sizes ranging from 122 to over 1.3 million participants. The majority of studies addressed economic evaluation (88.9%), with most studies addressing pulmonary diseases (n=6; 66.6%), followed by neurological diseases (n=2; 22.3%), and only 1 (11.1%) study addressing dermatological diseases. These studies had an average quality score of 87.5% on the CHEC checklist. Only 2 studies were found to be transferable to Brazil and other countries with a similar health context. The economic evaluation revealed that 87.5% of studies highlighted the benefits of using AI in dermatology, neurology, and pulmonology, with significant cost-effectiveness outcomes; the most advantageous was a negative cost-effectiveness ratio of -US $27,580 per QALY (quality-adjusted life year) for melanoma diagnosis, indicating substantial cost savings in this scenario. The only study assessing equity, based on 129,819 radiographic images, identified AI-assisted underdiagnosis, particularly in certain subgroups defined by gender, ethnicity, and socioeconomic status. This review underscores the importance of transparency in the description of AI tools and of the representativeness of population subgroups to mitigate health disparities. As AI is rapidly being integrated into health care, detailed assessments are essential to ensure that benefits reach all patients, regardless of sociodemographic factors.
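The headline figure is an incremental cost-effectiveness ratio (ICER); a worked sketch with illustrative numbers chosen only to reproduce the reported magnitude (a negative ICER here means the AI pathway both saves money and adds QALYs):

```python
def icer(cost_new, cost_old, qaly_new, qaly_old):
    """Incremental cost-effectiveness ratio in $ per QALY gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Example: AI-assisted melanoma diagnosis costs less and yields more QALYs
print(icer(cost_new=9_242.0, cost_old=12_000.0, qaly_new=1.1, qaly_old=1.0))
# -> -27580.0 $/QALY
```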