Page 5 of 3293286 results

Lung-DDPM: Semantic Layout-guided Diffusion Models for Thoracic CT Image Synthesis.

Jiang Y, Lemarechal Y, Bafaro J, Abi-Rjeile J, Joubert P, Despres P, Manem V

pubmed logopapersAug 14 2025
With the rapid development of artificial intelligence (AI), AI-assisted medical imaging analysis demonstrates remarkable performance in early lung cancer screening. However, the costly annotation process and privacy concerns limit the construction of large-scale medical datasets, hampering the further application of AI in healthcare. To address the data scarcity in lung cancer screening, we propose Lung-DDPM, a thoracic CT image synthesis approach that effectively generates high-fidelity 3D synthetic CT images, which prove helpful in downstream lung nodule segmentation tasks. Our method is based on semantic layout-guided denoising diffusion probabilistic models (DDPM), enabling anatomically reasonable, seamless, and consistent sample generation even from incomplete semantic layouts. Our results suggest that the proposed method outperforms other state-of-the-art (SOTA) generative models in image quality evaluation and downstream lung nodule segmentation tasks. Specifically, Lung-DDPM achieved superior performance on our large validation cohort, with a Fréchet inception distance (FID) of 0.0047, maximum mean discrepancy (MMD) of 0.0070, and mean squared error (MSE) of 0.0024. These results were 7.4×, 3.1×, and 29.5× better than the second-best competitors, respectively. Furthermore, the lung nodule segmentation model, trained on a dataset combining real and Lung-DDPM-generated synthetic samples, attained a Dice Coefficient (Dice) of 0.3914 and sensitivity of 0.4393. This represents 8.8% and 18.6% improvements in Dice and sensitivity compared to the model trained solely on real samples. The experimental results highlight Lung-DDPM's potential for a broader range of medical imaging applications, such as general tumor segmentation, cancer survival estimation, and risk prediction. The code and pretrained models are available at https://github.com/Manem-Lab/Lung-DDPM/.
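The Dice coefficient and sensitivity reported above are standard overlap metrics for segmentation. As a minimal illustration (toy masks, not the paper's pipeline), both can be computed directly from binary masks:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice = 2|A intersect B| / (|A| + |B|) for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0

def sensitivity(pred: np.ndarray, truth: np.ndarray) -> float:
    """Sensitivity (recall) = TP / (TP + FN)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    return tp / (tp + fn) if (tp + fn) else 1.0

# Toy 3D "nodule" masks standing in for prediction vs. ground truth.
truth = np.zeros((8, 8, 8), dtype=bool)
truth[2:6, 2:6, 2:6] = True      # 4x4x4 ground-truth nodule (64 voxels)
pred = np.zeros_like(truth)
pred[3:6, 2:6, 2:6] = True       # prediction misses one slab (48 voxels)

print(round(dice_coefficient(pred, truth), 4))  # → 0.8571, i.e. 2*48/(48+64)
print(round(sensitivity(pred, truth), 4))       # → 0.75, i.e. 48/64
```

Sensitivity only rewards recovered ground-truth voxels, which is why the paper reports it alongside Dice for nodule detection.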

DINOMotion: advanced robust tissue motion tracking with DINOv2 in 2D-Cine MRI-guided radiotherapy.

Salari S, Spino C, Pharand LA, Lathuiliere F, Rivaz H, Beriault S, Xiao Y

pubmed logopapersAug 14 2025
Accurate tissue motion tracking is critical to ensure treatment outcome and safety in 2D-Cine MRI-guided radiotherapy. This is typically achieved by registration of sequential images, but existing methods often face challenges with large misalignments and lack of interpretability. In this paper, we introduce DINOMotion, a novel deep learning framework based on DINOv2 with Low-Rank Adaptation (LoRA) layers for robust, efficient, and interpretable motion tracking. DINOMotion automatically detects corresponding landmarks to derive optimal image registration, enhancing interpretability by providing explicit visual correspondences between sequential images. The integration of LoRA layers reduces trainable parameters, improving training efficiency, while DINOv2's powerful feature representations offer robustness against large misalignments. Unlike iterative optimization-based methods, DINOMotion directly computes image registration at test time. Our experiments on volunteer and patient datasets demonstrate its effectiveness in estimating both linear and nonlinear transformations, achieving Dice scores of 92.07% for the kidney, 90.90% for the liver, and 95.23% for the lung, with corresponding Hausdorff distances of 5.47 mm, 8.31 mm, and 6.72 mm, respectively. DINOMotion processes each scan in approximately 30ms and consistently outperforms state-of-the-art methods, particularly in handling large misalignments. These results highlight its potential as a robust and interpretable solution for real-time motion tracking in 2D-Cine MRI-guided radiotherapy.
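The training-efficiency gain from LoRA comes from freezing the pretrained weight and learning only a low-rank update. A minimal numpy sketch of the idea (generic linear layer; the rank, scaling, and dimensions are illustrative assumptions, not DINOMotion's actual configuration):

```python
import numpy as np

rng = np.random.default_rng(0)

class LoRALinear:
    """Frozen dense weight W plus a trainable low-rank update (alpha/r) * B @ A.

    Sketch of the LoRA idea only; the actual layers adapted in DINOv2
    and their ranks are not specified here.
    """
    def __init__(self, d_in, d_out, r=4, alpha=8):
        self.W = rng.standard_normal((d_out, d_in))     # frozen pretrained weight
        self.A = rng.standard_normal((r, d_in)) * 0.01  # trainable down-projection
        self.B = np.zeros((d_out, r))                   # trainable, zero-init
        self.scale = alpha / r

    def __call__(self, x):
        # Zero-init B means the adapter starts as an exact no-op.
        return x @ self.W.T + self.scale * (x @ self.A.T) @ self.B.T

layer = LoRALinear(d_in=768, d_out=768, r=4)
x = rng.standard_normal((2, 768))

full = 768 * 768            # parameters in the frozen weight
lora = 4 * 768 + 768 * 4    # trainable parameters in A and B
print(round(lora / full, 4))  # trainable fraction, roughly 1%
```

Only `A` and `B` would receive gradients, which is the source of the reduced trainable-parameter count mentioned above.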

Deep learning-based non-invasive prediction of PD-L1 status and immunotherapy survival stratification in esophageal cancer using [<sup>18</sup>F]FDG PET/CT.

Xie F, Zhang M, Zheng C, Zhao Z, Wang J, Li Y, Wang K, Wang W, Lin J, Wu T, Wang Y, Chen X, Li Y, Zhu Z, Wu H, Li Y, Liu Q

pubmed logopapersAug 14 2025
This study aimed to develop and validate deep learning models using [<sup>18</sup>F]FDG PET/CT to predict PD-L1 status in esophageal cancer (EC) patients. Additionally, we assessed the potential of derived deep learning model scores (DLS) for survival stratification in immunotherapy. In this retrospective study, we included 331 EC patients from two centers, dividing them into training, internal validation, and external validation cohorts. Fifty patients who received immunotherapy were followed up. We developed four 3D ResNet10-based models, PET + CT + clinical factors (CPC), PET + CT (PC), PET (P), and CT (C), using pre-treatment [<sup>18</sup>F]FDG PET/CT scans. For comparison, we also constructed a logistic model incorporating clinical factors (clinical model). The DLS were evaluated as radiological markers for survival stratification, and nomograms for predicting survival were constructed. The models demonstrated accurate prediction of PD-L1 status. The areas under the curve (AUCs) for predicting PD-L1 status were as follows: CPC (0.927), PC (0.904), P (0.886), C (0.934), and the clinical model (0.603) in the training cohort; CPC (0.882), PC (0.848), P (0.770), C (0.745), and the clinical model (0.524) in the internal validation cohort; and CPC (0.843), PC (0.806), P (0.759), C (0.667), and the clinical model (0.671) in the external validation cohort. The CPC and PC models exhibited superior predictive performance. Survival analysis revealed that the DLS from most models effectively stratified overall survival and progression-free survival at appropriate cut-off points (P < 0.05), outperforming stratification based on PD-L1 status (combined positive score ≥ 10). Furthermore, incorporating model scores with clinical factors in nomograms enhanced the predictive probability of survival after immunotherapy. Deep learning models based on [<sup>18</sup>F]FDG PET/CT can accurately predict PD-L1 status in esophageal cancer patients.
The derived DLS can effectively stratify survival outcomes following immunotherapy, particularly when combined with clinical factors.

Severity Classification of Pediatric Spinal Cord Injuries Using Structural MRI Measures and Deep Learning: A Comprehensive Analysis across All Vertebral Levels.

Sadeghi-Adl Z, Naghizadehkashani S, Middleton D, Krisa L, Alizadeh M, Flanders AE, Faro SH, Wang Z, Mohamed FB

pubmed logopapersAug 14 2025
Spinal cord injury (SCI) in the pediatric population presents a unique challenge in diagnosis and prognosis due to the complexity of performing clinical assessments on children. Accurate evaluation of structural changes in the spinal cord is essential for effective treatment planning. This study aims to evaluate structural characteristics in pediatric patients with SCI by comparing cross-sectional area (CSA), anterior-posterior (AP) width, and right-left (RL) width across all vertebral levels of the spinal cord between typically developing (TD) participants and participants with SCI. We employed deep learning techniques to utilize these measures for detecting SCI cases and determining their injury severity. Sixty-one pediatric participants (ages 6-18), including 20 with chronic SCI and 41 TD, were enrolled and scanned using a 3T MRI scanner. All SCI participants underwent the International Standards for Neurological Classification of Spinal Cord Injury (ISNCSCI) test to assess their neurologic function and determine their American Spinal Injury Association (ASIA) Impairment Scale (AIS) category. T2-weighted MRI scans were utilized to measure CSA, AP width, and RL width along the entire cervical and thoracic cord. These measures were automatically extracted at every vertebral level of the spinal cord using the Spinal Cord Toolbox. Deep convolutional neural networks (CNNs) were utilized to classify participants into SCI or TD groups and determine their AIS classification based on structural parameters and demographic factors such as age and height. Significant differences (<i>P</i> < .05) were found in CSA, AP width, and RL width between SCI and TD participants, indicating notable structural alterations due to SCI. The CNN-based models demonstrated high performance, achieving 96.59% accuracy in distinguishing SCI from TD participants. Furthermore, the models determined AIS category classification with 94.92% accuracy.
The study demonstrates the effectiveness of integrating cross-sectional structural imaging measures with deep learning methods for classification and severity assessment of pediatric SCI. The deep learning approach outperforms traditional machine learning models in diagnostic accuracy, offering potential improvements in patient care in pediatric SCI management.

Quest for a clinically relevant medical image segmentation metric: the definition and implementation of Medical Similarity Index

Szuzina Fazekas, Bettina Katalin Budai, Viktor Bérczi, Pál Maurovich-Horvat, Zsolt Vizi

arxiv logopreprintAug 13 2025
Background: In the field of radiology and radiotherapy, accurate delineation of tissues and organs plays a crucial role in both diagnostics and therapeutics. While the gold standard remains expert-driven manual segmentation, many automatic segmentation methods are emerging. The evaluation of these methods primarily relies on traditional metrics that only incorporate geometrical properties and fail to adapt to various applications. Aims: This study aims to develop and implement a clinically relevant segmentation metric that can be adapted for use in various medical imaging applications. Methods: Bidirectional local distance was defined, and the points of the test contour were paired with points of the reference contour. After correcting for the distance between the test and reference centers of mass, the Euclidean distance was calculated between the paired points, and a score was given to each test point. The overall medical similarity index was calculated as the average score across all the test points. For demonstration, we used myoma and prostate datasets; nnUNet neural networks were trained for segmentation. Results: An easy-to-use, sustainable image processing pipeline was created using Python. The code is available in a public GitHub repository along with Google Colaboratory notebooks. The algorithm can handle multislice images with multiple masks per slice. A mask-splitting algorithm that can separate concave masks is also provided. We demonstrate the adaptability with prostate segmentation evaluation. Conclusions: A novel segmentation evaluation metric was implemented, and an open-access image processing pipeline was also provided, which can be easily used for automatic measurement of the clinical relevance of medical image segmentation.
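The recipe described in the Methods (pair contour points, correct for the center-of-mass offset, score each paired distance, average over the test contour) can be sketched in a few lines. The nearest-neighbour pairing and the exponential distance-to-score mapping below are illustrative assumptions; the paper's bidirectional local distance and its scoring function may differ:

```python
import numpy as np

def medical_similarity_index(test_pts, ref_pts, tol=2.0):
    """Toy sketch of the MSI recipe.

    Assumptions (not from the paper): nearest-neighbour pairing stands in
    for the bidirectional local distance, and each test point's score is
    an exponential decay exp(-d / tol) of its paired distance d.
    """
    test_pts = np.asarray(test_pts, float)
    ref_pts = np.asarray(ref_pts, float)
    # Correct for the offset between the two centres of mass.
    ref_aligned = ref_pts - ref_pts.mean(axis=0) + test_pts.mean(axis=0)
    # Pair every test point with its closest reference point.
    d = np.linalg.norm(test_pts[:, None, :] - ref_aligned[None, :, :], axis=-1)
    paired = d.min(axis=1)
    # Score each test point, then average across the test contour.
    return float(np.exp(-paired / tol).mean())

# Identical contours score 1; a rigidly shifted copy also scores 1,
# because the centre-of-mass correction removes the translation.
theta = np.linspace(0, 2 * np.pi, 50)
circle = np.c_[np.cos(theta), np.sin(theta)]
print(round(medical_similarity_index(circle, circle), 6))        # → 1.0
print(round(medical_similarity_index(circle + 5.0, circle), 6))  # → 1.0
```

The tolerance `tol` is the hook for clinical adaptability: a tight tolerance penalizes small deviations (e.g. organs at risk), a loose one forgives them.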

Multi-Contrast Fusion Module: An attention mechanism integrating multi-contrast features for fetal torso plane classification

Shengjun Zhu, Siyu Liu, Runqing Xiong, Liping Zheng, Duo Ma, Rongshang Chen, Jiaxin Cai

arxiv logopreprintAug 13 2025
Purpose: Prenatal ultrasound is a key tool in evaluating fetal structural development and detecting abnormalities, contributing to reduced perinatal complications and improved neonatal survival. Accurate identification of standard fetal torso planes is essential for reliable assessment and personalized prenatal care. However, limitations such as low contrast and unclear texture details in ultrasound imaging pose significant challenges for fine-grained anatomical recognition. Methods: We propose a novel Multi-Contrast Fusion Module (MCFM) to enhance the model's ability to extract detailed information from ultrasound images. MCFM operates exclusively on the lower layers of the neural network, directly processing raw ultrasound data. By assigning attention weights to image representations under different contrast conditions, the module enhances feature modeling while explicitly maintaining minimal parameter overhead. Results: The proposed MCFM was evaluated on a curated dataset of fetal torso plane ultrasound images. Experimental results demonstrate that MCFM substantially improves recognition performance, with a minimal increase in model complexity. The integration of multi-contrast attention enables the model to better capture subtle anatomical structures, contributing to higher classification accuracy and clinical reliability. Conclusions: Our method provides an effective solution for improving fetal torso plane recognition in ultrasound imaging. By enhancing feature representation through multi-contrast fusion, the proposed approach supports clinicians in achieving more accurate and consistent diagnoses, demonstrating strong potential for clinical adoption in prenatal screening. The codes are available at https://github.com/sysll/MCFM.
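The attention-over-contrasts idea can be sketched compactly. The gamma-corrected variants and the externally supplied attention scores below are illustrative assumptions; in the paper's module the weights are learned inside the lower network layers:

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def multi_contrast_fusion(img, gammas=(0.5, 1.0, 2.0), scores=None):
    """Toy sketch of attention-weighted fusion of contrast variants.

    Assumptions (not from the paper): gamma correction generates the
    contrast-adjusted representations, and per-variant attention scores
    are passed in rather than learned.
    """
    variants = np.stack([img ** g for g in gammas])  # (K, H, W) contrast stack
    if scores is None:
        scores = np.zeros(len(gammas))               # uniform attention
    w = softmax(np.asarray(scores, float))           # attention weights, sum to 1
    return np.tensordot(w, variants, axes=1)         # weighted fusion, (H, W)

rng = np.random.default_rng(1)
img = rng.random((4, 4))                             # toy ultrasound patch in [0, 1)
fused = multi_contrast_fusion(img, scores=[0.2, 1.0, -0.5])
print(fused.shape)  # → (4, 4)
```

With uniform scores the module reduces to a plain average of the variants; learned scores let it emphasize whichever contrast best exposes subtle anatomy, at the cost of only K extra weights per location or layer.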

AST-n: A Fast Sampling Approach for Low-Dose CT Reconstruction using Diffusion Models

Tomás de la Sotta, José M. Saavedra, Héctor Henríquez, Violeta Chang, Aline Xavier

arxiv logopreprintAug 13 2025
Low-dose CT (LDCT) protocols reduce radiation exposure but increase image noise, compromising diagnostic confidence. Diffusion-based generative models have shown promise for LDCT denoising by learning image priors and performing iterative refinement. In this work, we introduce AST-n, an accelerated inference framework that initiates reverse diffusion from intermediate noise levels, and integrate high-order ODE solvers within conditioned models to further reduce sampling steps. We evaluate two acceleration paradigms, AST-n sampling and standard scheduling with high-order solvers, on the Low Dose CT Grand Challenge dataset, covering head, abdominal, and chest scans at 10-25% of the standard dose. Conditioned models using only 25 steps (AST-25) achieve peak signal-to-noise ratio (PSNR) above 38 dB and structural similarity index (SSIM) above 0.95, closely matching standard baselines while cutting inference time from ~16 s to under 1 s per slice. Unconditional sampling suffers substantial quality loss, underscoring the necessity of conditioning. We also assess DDIM inversion, which yields marginal PSNR gains at the cost of doubling inference time, limiting its clinical practicality. Our results demonstrate that AST-n with high-order samplers enables rapid LDCT reconstruction without significant loss of image fidelity, advancing the feasibility of diffusion-based methods in clinical workflows.
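The core AST-n idea, starting reverse diffusion from an intermediate noise level rather than from pure noise at step T, can be sketched on a toy 1-D signal. The linear beta schedule, the step counts, and the oracle denoiser standing in for the trained network are all illustrative assumptions, not the paper's configuration:

```python
import numpy as np

# Forward-process schedule: ᾱ_t = prod(1 - β_i), as in standard DDPM.
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas_bar = np.cumprod(1.0 - betas)

rng = np.random.default_rng(0)
x0 = np.ones(16)                                    # stand-in for a clean slice
noisy_input = x0 + 0.1 * rng.standard_normal(16)    # stand-in for LDCT noise

def ddim_reverse(x, t_start, n_steps, predict_x0):
    """Deterministic DDIM-style steps from t_start down to 0."""
    ts = np.linspace(t_start, 0, n_steps + 1).astype(int)
    for t_cur, t_next in zip(ts[:-1], ts[1:]):
        ab_cur, ab_next = alphas_bar[t_cur], alphas_bar[t_next]
        x0_hat = predict_x0(x, t_cur)               # network's clean estimate
        eps = (x - np.sqrt(ab_cur) * x0_hat) / np.sqrt(1 - ab_cur)
        x = np.sqrt(ab_next) * x0_hat + np.sqrt(1 - ab_next) * eps
    return x

# AST-n: diffuse the degraded input only up to step n = 250, then reverse
# from there in 25 steps, instead of sampling from pure noise at T = 1000.
n = 250
xt = np.sqrt(alphas_bar[n]) * noisy_input + \
     np.sqrt(1 - alphas_bar[n]) * rng.standard_normal(16)

oracle = lambda x, t: x0            # placeholder for the trained denoiser
out = ddim_reverse(xt, t_start=n, n_steps=25, predict_x0=oracle)
print(float(np.abs(out - x0).max()))  # small residual from the truncated schedule
```

Because the degraded input already carries most of the image content, the trajectory needs only the last fraction of the schedule, which is where the reported ~16x speedup comes from.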

Automatic detection of arterial input function for brain DCE-MRI in multi-site cohorts.

Saca L, Gaggar R, Pappas I, Benzinger T, Reiman EM, Shiroishi MS, Joe EB, Ringman JM, Yassine HN, Schneider LS, Chui HC, Nation DA, Zlokovic BV, Toga AW, Chakhoyan A, Barnes S

pubmed logopapersAug 13 2025
Arterial input function (AIF) extraction is a crucial step in quantitative pharmacokinetic modeling of DCE-MRI. This work proposes a robust deep learning model that can precisely extract an AIF from DCE-MRI images. A diverse dataset of human brain DCE-MRI images from 289 participants, totaling 384 scans, from five different institutions with extracted gadolinium-based contrast agent curves from large penetrating arteries, and with most data collected for blood-brain barrier (BBB) permeability measurement, was retrospectively analyzed. A 3D UNet model was implemented and trained on manually drawn AIF regions. The testing cohort was compared using the proposed AIF quality metric, AIFitness, and K<sup>trans</sup> values from a standard DCE pipeline. This UNet was then applied to a separate dataset of 326 participants with a total of 421 DCE-MRI images, for which AIF quality and K<sup>trans</sup> values were analyzed. The resulting 3D UNet model achieved an average AIFitness score of 93.9 compared to 99.7 for manually selected AIFs, and white matter K<sup>trans</sup> values were 0.45/min × 10<sup>-3</sup> and 0.45/min × 10<sup>-3</sup>, respectively. The intraclass correlation between automated and manual K<sup>trans</sup> values was 0.89. The separate replication dataset yielded an AIFitness score of 97.0 and white matter K<sup>trans</sup> of 0.44/min × 10<sup>-3</sup>. Findings suggest a 3D UNet model with additional convolutional neural network kernels and a modified Huber loss function achieves superior performance for identifying AIF curves from DCE-MRI in a diverse multi-center cohort. AIFitness scores and DCE-MRI-derived metrics, such as K<sup>trans</sup> maps, showed no significant differences in gray and white matter between manually drawn and automated AIFs.

Development of a multimodal vision transformer model for predicting traumatic versus degenerative rotator cuff tears on magnetic resonance imaging: A single-centre retrospective study.

Oettl FC, Malayeri AB, Furrer PR, Wieser K, Fürnstahl P, Bouaicha S

pubmed logopapersAug 13 2025
The differentiation between traumatic and degenerative rotator cuff tears (RCTs) remains a diagnostic challenge with significant implications for treatment planning. While magnetic resonance imaging (MRI) is standard practice, traditional radiological interpretation has shown limited reliability in distinguishing these etiologies. This study evaluates the potential of artificial intelligence (AI) models, specifically a multimodal vision transformer (ViT), to differentiate between traumatic and degenerative RCT. In this retrospective, single-centre study, 99 shoulder MRIs were analysed from patients who underwent surgery at a specialised university shoulder unit between 2016 and 2019. The cohort was divided into training (n = 79) and validation (n = 20) sets. The traumatic group required a documented relevant trauma (excluding simple lifting injuries), a previously asymptomatic shoulder and MRI within 3 months posttrauma. The degenerative group was of similar age and injured tendon, with patients presenting with at least 1 year of constant shoulder pain prior to imaging and no trauma history. The ViT was subsequently combined with demographic data to form a multimodal ViT. Saliency maps were utilised as an explainability tool. The multimodal ViT model achieved an accuracy of 0.75 ± 0.08 with a recall of 0.8 ± 0.08, specificity of 0.71 ± 0.11 and an F1 score of 0.76 ± 0.1. The model maintained consistent performance across different patient subsets, demonstrating robust generalisation. The saliency maps did not show a consistent focus on the rotator cuff. AI shows potential in supporting the challenging differentiation between traumatic and degenerative RCT on MRI. The achieved accuracy of 75% is particularly significant given the similar groups, which presented a challenging diagnostic scenario. Saliency maps were utilised to ensure explainability; the lack of consistent focus on the rotator cuff tendons hints at underappreciated aspects in the differentiation.

Automated Segmentation of Coronal Brain Tissue Slabs for 3D Neuropathology

Jonathan Williams Ramirez, Dina Zemlyanker, Lucas Deden-Binder, Rogeny Herisse, Erendira Garcia Pallares, Karthik Gopinath, Harshvardhan Gazula, Christopher Mount, Liana N. Kozanno, Michael S. Marshall, Theresa R. Connors, Matthew P. Frosch, Mark Montine, Derek H. Oakley, Christine L. Mac Donald, C. Dirk Keene, Bradley T. Hyman, Juan Eugenio Iglesias

arxiv logopreprintAug 13 2025
Advances in image registration and machine learning have recently enabled volumetric analysis of postmortem brain tissue from conventional photographs of coronal slabs, which are routinely collected in brain banks and neuropathology laboratories worldwide. One caveat of this methodology is the requirement of segmentation of the tissue from photographs, which currently requires costly manual intervention. In this article, we present a deep learning model to automate this process. The automatic segmentation tool relies on a U-Net architecture that was trained with a combination of (i) 1,414 manually segmented images of both fixed and fresh tissue, from specimens with varying diagnoses, photographed at two different sites; and (ii) 2,000 synthetic images with randomized contrast and corresponding masks generated from MRI scans for improved generalizability to unseen photographic setups. Automated model predictions on a subset of photographs not seen in training were analyzed to estimate performance compared to manual labels, including both inter- and intra-rater variability. Our model achieved a median Dice score over 0.98, mean surface distance under 0.4 mm, and 95% Hausdorff distance under 1.60 mm, which approaches inter-/intra-rater levels. Our tool is publicly available at surfer.nmr.mgh.harvard.edu/fswiki/PhotoTools.