Page 96 of 99986 results

DFEN: Dual Feature Equalization Network for Medical Image Segmentation

Jianjian Yin, Yi Chen, Chengyu Li, Zhichao Zheng, Yanhui Gu, Junsheng Zhou

arXiv preprint · May 9, 2025
Current methods for medical image segmentation primarily focus on extracting contextual feature information from the perspective of the whole image. While these methods perform effectively, none of them accounts for the fact that pixels at class boundaries, and pixels in regions containing few pixels of their class, capture more contextual feature information from other classes; this unequal contextual information leads to pixel misclassification. In this paper, we propose a dual feature equalization network based on a hybrid Swin Transformer and Convolutional Neural Network architecture, aiming to augment pixel feature representations with image-level equalization feature information and class-level equalization feature information. Firstly, an image-level feature equalization module is designed to equalize the contextual information of pixels within the image. Secondly, we aggregate regions of the same class to equalize the pixel feature representations of the corresponding class through a class-level feature equalization module. Finally, the pixel feature representations are enhanced by learning weights for the image-level and class-level equalization feature information. In addition, the Swin Transformer is utilized as both the encoder and decoder, bolstering the model's ability to capture long-range dependencies and spatial correlations. We conducted extensive experiments on the Breast Ultrasound Images (BUSI), International Skin Imaging Collaboration (ISIC2017), Automated Cardiac Diagnosis Challenge (ACDC) and PH$^2$ datasets. The experimental results demonstrate that our method has achieved state-of-the-art performance. Our code is publicly available at https://github.com/JianJianYin/DFEN.
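The final enhancement step, learning weights to combine the original pixel features with the image-level and class-level equalized features, can be sketched as a softmax-weighted fusion. This is a minimal NumPy illustration, not the authors' implementation; the three-branch structure and the `w_logits` parameterization are assumptions:

```python
import numpy as np

def equalization_fusion(feat, img_eq, cls_eq, w_logits):
    """Fuse pixel features with image-level and class-level equalized
    features using softmax-normalized weights (hypothetical sketch)."""
    w = np.exp(w_logits) / np.exp(w_logits).sum()  # softmax over 3 branches
    return w[0] * feat + w[1] * img_eq + w[2] * cls_eq

# toy feature maps: (channels, H, W)
feat = np.ones((4, 8, 8))
img_eq = np.full((4, 8, 8), 2.0)
cls_eq = np.full((4, 8, 8), 3.0)
fused = equalization_fusion(feat, img_eq, cls_eq, np.zeros(3))
print(fused[0, 0, 0])  # equal weights -> (1 + 2 + 3) / 3 = 2.0
```

In the actual network the weights would be produced by learnable layers and optimized end to end rather than passed in as fixed logits.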

Noise-Consistent Siamese-Diffusion for Medical Image Synthesis and Segmentation

Kunpeng Qiu, Zhiqiang Gao, Zhiying Zhou, Mingjie Sun, Yongxin Guo

arXiv preprint · May 9, 2025
Deep learning has revolutionized medical image segmentation, yet its full potential remains constrained by the paucity of annotated datasets. While diffusion models have emerged as a promising approach for generating synthetic image-mask pairs to augment these datasets, they paradoxically suffer from the same data scarcity challenges they aim to mitigate. Traditional mask-only models frequently yield low-fidelity images due to their inability to adequately capture morphological intricacies, which can critically compromise the robustness and reliability of segmentation models. To alleviate this limitation, we introduce Siamese-Diffusion, a novel dual-component model comprising Mask-Diffusion and Image-Diffusion. During training, a Noise Consistency Loss is introduced between these components to enhance the morphological fidelity of Mask-Diffusion in the parameter space. During sampling, only Mask-Diffusion is used, ensuring diversity and scalability. Comprehensive experiments demonstrate the superiority of our method: Siamese-Diffusion boosts SANet's mDice and mIoU by 3.6% and 4.4% on the Polyps dataset, while UNet improves by 1.52% and 1.64% on ISIC2018. Code is available at GitHub.
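The Noise Consistency Loss is described only at a high level in the abstract; one plausible reading is a mean-squared penalty tying the noise predictions of the two diffusion branches together. A hedged NumPy sketch (the MSE form and the branch names are assumptions, not the paper's definition):

```python
import numpy as np

def noise_consistency_loss(eps_mask, eps_image):
    """Hypothetical Noise Consistency Loss: mean-squared distance between
    the noise predicted by the mask-conditioned branch and the
    image-conditioned branch at the same diffusion timestep."""
    return np.mean((eps_mask - eps_image) ** 2)

rng = np.random.default_rng(0)
eps_m = rng.normal(size=(2, 1, 16, 16))  # mask-branch noise prediction
eps_i = eps_m + 0.1                      # image branch, nearly in agreement
loss = noise_consistency_loss(eps_m, eps_i)
print(round(loss, 4))  # constant offset of 0.1 -> 0.01
```

In a real training loop this term would be added to the usual denoising objectives of both branches, weighted by a hyperparameter.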

The Application of Deep Learning for Lymph Node Segmentation: A Systematic Review

Jingguo Qu, Xinyang Han, Man-Lik Chui, Yao Pu, Simon Takadiyi Gunda, Ziman Chen, Jing Qin, Ann Dorothy King, Winnie Chiu-Wing Chu, Jing Cai, Michael Tin-Cheung Ying

arXiv preprint · May 9, 2025
Automatic lymph node segmentation is the cornerstone of advances in computer vision tasks for early detection and staging of cancer. Traditional segmentation methods are constrained by manual delineation and variability in operator proficiency, limiting their ability to achieve high accuracy. The introduction of deep learning technologies offers new possibilities for improving the accuracy of lymph node image analysis. This study evaluates the application of deep learning to lymph node segmentation and discusses the methodologies of various deep learning architectures, such as convolutional neural networks, encoder-decoder networks, and transformers, in analyzing medical imaging data across different modalities. Despite these advancements, the field still confronts challenges such as the shape diversity of lymph nodes, the scarcity of accurately labeled datasets, and the inadequate development of methods that are robust and generalizable across different imaging modalities. To the best of our knowledge, this is the first study to provide a comprehensive overview of the application of deep learning techniques to the lymph node segmentation task. Furthermore, this study also explores potential future research directions, including multimodal fusion techniques, transfer learning, and the use of large-scale pre-trained models, to overcome current limitations while enhancing cancer diagnosis and treatment planning strategies.

Computationally enabled polychromatic polarized imaging enables mapping of matrix architectures that promote pancreatic ductal adenocarcinoma dissemination.

Qian G, Zhang H, Liu Y, Shribak M, Eliceiri KW, Provenzano PP

PubMed · May 9, 2025
Pancreatic ductal adenocarcinoma (PDA) is an extremely metastatic and lethal disease. In PDA, extracellular matrix (ECM) architectures known as Tumor-Associated Collagen Signatures (TACS) regulate invasion and metastatic spread in both early dissemination and in late-stage disease. As such, TACS has been suggested as a biomarker to aid in pathologic assessment. However, despite its significance, approaches to quantitatively capture these ECM patterns currently require advanced optical systems with signal-processing analysis. Here we present an expansion of polychromatic polarized microscopy (PPM) with inherent angular information coupled to machine learning and computational pixel-wise analysis of TACS. Using this platform, we are able to accurately capture TACS architectures in H&E stained histology sections directly through PPM contrast. Moreover, PPM facilitated identification of transitions to dissemination architectures, i.e., transitions from sequestration through expansion to dissemination from both PanINs and throughout PDA. Lastly, PPM evaluation of architectures in liver metastases, the most common metastatic site for PDA, demonstrates TACS-mediated focal and local invasion as well as identification of unique patterns anchoring aligned fibers into normal-adjacent tumor, suggesting that these patterns may be precursors to metastasis expansion and local spread from micrometastatic lesions. Combined, these findings demonstrate that PPM coupled to computational platforms is a powerful tool for analyzing ECM architecture that can be employed to advance cancer microenvironment studies and provide clinically relevant diagnostic information.

Patient-specific uncertainty calibration of deep learning-based autosegmentation networks for adaptive MRI-guided lung radiotherapy.

Rabe M, Meliadò EF, Marschner S, Belka C, Corradini S, Van den Berg CAT, Landry G, Kurz C

PubMed · May 8, 2025
Uncertainty assessment of deep learning autosegmentation (DLAS) models can support contour corrections in adaptive radiotherapy (ART), e.g. by utilizing Monte Carlo Dropout (MCD) uncertainty maps. However, poorly calibrated uncertainties at the patient level often render these clinically nonviable. We evaluated population-based and patient-specific DLAS accuracy and uncertainty calibration and propose a patient-specific post-training uncertainty calibration method for DLAS in ART.

Approach. The study included 122 lung cancer patients treated with a low-field MR-linac (80/19/23 training/validation/test cases). Ten single-label 3D U-Net population-based baseline models (BM) were trained with dropout using planning MRIs (pMRIs) and contours for nine organs-at-risk (OARs) and gross tumor volumes (GTVs). Patient-specific models (PS) were created by fine-tuning BMs with each test patient's pMRI. Model uncertainty was assessed with MCD, averaged into probability maps. Uncertainty calibration was evaluated with reliability diagrams and expected calibration error (ECE). The proposed post-training calibration method rescaled MCD probabilities for fraction images in BM (calBM) and PS (calPS) after fitting reliability diagrams from pMRIs. All models were evaluated on fraction images using the Dice similarity coefficient (DSC), 95th percentile Hausdorff distance (HD95) and ECE. Metrics were compared among models for all OARs combined (n=163) and the GTV (n=23) using Friedman and post hoc Nemenyi tests (α=0.05).

Main results. For the OARs, patient-specific fine-tuning significantly (p<0.001) increased the median DSC from 0.78 (BM) to 0.86 (PS) and reduced the HD95 from 14 mm (BM) to 6.0 mm (PS). Uncertainty calibration achieved substantial reductions in ECE, from 0.25 (BM) to 0.091 (calBM) and from 0.22 (PS) to 0.11 (calPS) (p<0.001), without significantly affecting DSC or HD95 (p>0.05). For the GTV, BM performance was poor (DSC=0.05) but improved significantly (p<0.001) with PS training (DSC=0.75), while uncertainty calibration reduced the ECE from 0.22 (PS) to 0.15 (calPS) (p=0.45).

Significance. Post-training uncertainty calibration yields geometrically accurate DLAS models with well-calibrated uncertainty estimates, crucial for ART applications.
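The expected calibration error used throughout the study is a standard quantity: predictions are binned by confidence, and the occupancy-weighted gap between mean confidence and empirical accuracy is accumulated. A minimal sketch (the 10-bin equal-width layout is an assumption; the paper's binning may differ):

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    """ECE for binary foreground probabilities: bin predictions by
    confidence and sum the |accuracy - confidence| gap per bin,
    weighted by the fraction of voxels falling in that bin."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (probs > lo) & (probs <= hi)
        if mask.any():
            conf = probs[mask].mean()   # mean predicted probability
            acc = labels[mask].mean()   # empirical foreground frequency
            ece += mask.mean() * abs(acc - conf)
    return ece

# perfectly calibrated toy case: 0.8 confidence, 8/10 voxels foreground
probs = np.array([0.8] * 10)
labels = np.array([1] * 8 + [0] * 2)
print(round(expected_calibration_error(probs, labels), 6))  # 0.0
```

A post-training calibration of the kind the paper proposes would rescale `probs` (e.g. via a function fitted to the reliability diagram) so that this quantity shrinks.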

nnU-Net-based high-resolution CT features quantification for interstitial lung diseases.

Lin Q, Zhang Z, Xiong X, Chen X, Ma T, Chen Y, Li T, Long Z, Luo Q, Sun Y, Jiang L, He W, Deng Y

PubMed · May 8, 2025
To develop a new high-resolution CT (HRCT) abnormality quantification tool (CVILDES) for interstitial lung diseases (ILDs) based on the nnU-Net network structure, and to determine whether the quantitative parameters derived from this new software offer a reliable and precise assessment in a clinical setting in line with expert visual evaluation. HRCT scans from 83 cases of ILDs and 20 cases of other diffuse lung diseases were labeled section by section by multiple radiologists and used as training data for a supervised deep learning model based on nnU-Net. For clinical validation, a cohort of 51 cases of interstitial pneumonia with autoimmune features (IPAF) and 14 cases of idiopathic pulmonary fibrosis (IPF) had CT parenchymal patterns evaluated quantitatively with CVILDES and by visual evaluation. We then assessed the correlation of the two methodologies for ILD feature quantification, as well as the correlation between each method's quantitative results and pulmonary function parameters (DLCO%, FVC%, and FEV%). All CT data were successfully quantified using CVILDES. CVILDES-quantified results (total ILD extent, ground-glass opacity, consolidation, reticular pattern, and honeycombing) showed a strong correlation with visual evaluation and were numerically close to the visual evaluation results (r = 0.64-0.89, p < 0.0001), particularly for the extent of fibrosis (r = 0.82, p < 0.0001). Judged by correlation with pulmonary function parameters, CVILDES quantification was comparable or even superior to visual evaluation. nnU-Net-based CVILDES was comparable to visual evaluation for quantifying ILD abnormalities.
Question: Visual assessment of ILD on HRCT is time-consuming and exhibits poor inter-observer agreement, making it challenging to evaluate therapeutic efficacy accurately.
Findings: The nnU-Net-based computer-vision ILD evaluation system (CVILDES) accurately segmented and quantified the HRCT features of ILD, with results comparable to visual evaluation.
Clinical relevance: This study developed a new tool with the potential to be applied in the quantitative assessment of ILD.
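Extent quantification of the kind CVILDES reports can be reduced to counting labeled voxels inside the lung mask. The sketch below assumes a hypothetical label convention; the actual CVILDES label codes are not stated in the abstract:

```python
import numpy as np

# hypothetical label convention for illustration only
LABELS = {1: "ground-glass opacity", 2: "consolidation",
          3: "reticular pattern", 4: "honeycombing"}

def pattern_extent(seg, lung_mask):
    """Percent of lung volume occupied by each abnormality label."""
    lung_vox = lung_mask.sum()
    return {name: 100.0 * ((seg == lab) & lung_mask).sum() / lung_vox
            for lab, name in LABELS.items()}

# toy 2D slice standing in for a 3D volume
lung = np.ones((10, 10), dtype=bool)
seg = np.zeros((10, 10), dtype=int)
seg[:2] = 1        # 20 of 100 lung pixels labeled GGO
seg[2, :5] = 4     # 5 labeled honeycombing
ext = pattern_extent(seg, lung)
print(ext["ground-glass opacity"], ext["honeycombing"])  # 20.0 5.0
```

Total ILD extent would then be the sum of the abnormality percentages, and fibrosis extent the sum over the fibrotic labels only.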

Quantitative analysis and clinical determinants of orthodontically induced root resorption using automated tooth segmentation from CBCT imaging.

Lin J, Zheng Q, Wu Y, Zhou M, Chen J, Wang X, Kang T, Zhang W, Chen X

PubMed · May 8, 2025
Orthodontically induced root resorption (OIRR) is difficult to assess accurately using traditional 2D imaging due to distortion and low sensitivity. While CBCT offers more precise 3D evaluation, manual segmentation remains labor-intensive and prone to variability. Recent advances in deep learning enable automatic, accurate tooth segmentation from CBCT images. This study applies deep learning and CBCT technology to quantify OIRR and analyze its risk factors, aiming to improve assessment accuracy, efficiency, and clinical decision-making. We retrospectively analyzed CBCT scans of 108 orthodontic patients to assess OIRR using deep learning-based tooth segmentation and volumetric analysis. Statistical analysis was performed using linear regression to evaluate the influence of patient-related factors; p < 0.05 was considered statistically significant. Root volume significantly decreased after orthodontic treatment (p < 0.001). Age, gender, open (deep) bite, severe crowding, and other factors significantly influenced root resorption rates at different tooth positions. Multivariable regression analysis showed that these factors can predict root resorption, explaining 3% to 15.4% of the variance. In summary, this study applied a deep learning model to accurately assess root volume changes using CBCT, revealing significant root volume reduction after orthodontic treatment. It found that underage patients experienced less root resorption, while factors such as anterior open bite and deep overbite influenced resorption in specific teeth, though skeletal pattern, overjet, and underbite were not significant predictors.
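The "variance explained" figures correspond to the coefficient of determination (R^2) of a multivariable linear regression. A hedged NumPy sketch with synthetic data (the predictors and effect sizes below are illustrative, not the study's):

```python
import numpy as np

def ols_r2(X, y):
    """Ordinary least squares fit (with intercept) and the fraction of
    variance in the outcome it explains (R^2)."""
    A = np.column_stack([np.ones(len(X)), X])       # add intercept column
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    return 1.0 - resid.var() / y.var()

rng = np.random.default_rng(1)
age = rng.uniform(10, 40, 200)          # hypothetical patient ages
crowding = rng.integers(0, 2, 200)      # hypothetical severe-crowding flag
# toy resorption outcome: weak dependence on predictors plus noise,
# mimicking the low (3%-15.4%) variance-explained regime
y = 0.02 * age + 0.3 * crowding + rng.normal(0, 1.0, 200)
r2 = ols_r2(np.column_stack([age, crowding]), y)
print(0.0 <= r2 <= 1.0)  # True
```

With noisy clinical outcomes like these, a small but significant R^2 is the expected pattern, consistent with the 3% to 15.4% range reported.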

Automated Thoracolumbar Stump Rib Detection and Analysis in a Large CT Cohort

Hendrik Möller, Hanna Schön, Alina Dima, Benjamin Keinert-Weth, Robert Graf, Matan Atad, Johannes Paetzold, Friederike Jungmann, Rickmer Braren, Florian Kofler, Bjoern Menze, Daniel Rueckert, Jan S. Kirschke

arXiv preprint · May 8, 2025
Thoracolumbar stump ribs are one of the essential indicators of thoracolumbar transitional vertebrae or enumeration anomalies. While some studies manually assess these anomalies and describe the ribs qualitatively, this study aims to automate thoracolumbar stump rib detection and analyze their morphology quantitatively. To this end, we train a high-resolution deep-learning model for rib segmentation and show significant improvements compared to existing models (Dice score 0.997 vs. 0.779, p-value < 0.01). In addition, we use an iterative algorithm and piece-wise linear interpolation to assess the length of the ribs, showing a success rate of 98.2%. When analyzing morphological features, we show that stump ribs articulate more posteriorly at the vertebrae (−19.2 ± 3.8 vs. −13.8 ± 2.5, p-value < 0.01), are thinner (260.6 ± 103.4 vs. 563.6 ± 127.1, p-value < 0.01), and are oriented more downwards and sideways within the first centimeters, in contrast to full-length ribs. We show that with partially visible ribs, these features can achieve an F1-score of 0.84 in differentiating stump ribs from regular ones. We publish the model weights and masks for public use.
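Once a rib centerline is extracted, piece-wise linear interpolation reduces length measurement to summing segment lengths along the ordered centerline points. A minimal sketch of that final step (the centerline extraction itself, the paper's iterative algorithm, is not reproduced here):

```python
import numpy as np

def polyline_length(points):
    """Rib length as the summed Euclidean distance along an ordered
    sequence of 3D centerline points (piece-wise linear measurement)."""
    diffs = np.diff(points, axis=0)          # segment vectors
    return np.linalg.norm(diffs, axis=1).sum()

# toy centerline: two straight segments of length 3 and 4
pts = np.array([[0.0, 0.0, 0.0],
                [3.0, 0.0, 0.0],
                [3.0, 4.0, 0.0]])
print(polyline_length(pts))  # 7.0
```

Denser centerline sampling makes this piece-wise linear estimate converge to the true curved rib length.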

Comparative analysis of open-source against commercial AI-based segmentation models for online adaptive MR-guided radiotherapy.

Langner D, Nachbar M, Russo ML, Boeke S, Gani C, Niyazi M, Thorwarth D

PubMed · May 8, 2025
Online adaptive magnetic resonance-guided radiotherapy (MRgRT) has emerged as a state-of-the-art treatment option for multiple tumour entities, accounting for daily anatomical and tumour volume changes, thus allowing sparing of relevant organs at risk (OARs). However, the annotation of treatment-relevant anatomical structures in context of online plan adaptation remains challenging, often relying on commercial segmentation solutions due to limited availability of clinically validated alternatives. The aim of this study was to investigate whether an open-source artificial intelligence (AI) segmentation network can compete with the annotation accuracy of a commercial solution, both trained on the identical dataset, questioning the need for commercial models in clinical practice. For 47 pelvic patients, T2w MR imaging data acquired on a 1.5 T MR-Linac were manually contoured, identifying prostate, seminal vesicles, rectum, anal canal, bladder, penile bulb, and bony structures. These training data were used for the generation of an in-house AI segmentation model, a nnU-Net with residual encoder architecture featuring a streamlined single image inference pipeline, and re-training of a commercial solution. For quantitative evaluation, 20 MR images were contoured by a radiation oncologist, considered as ground truth contours (GTC) and compared with the in-house/commercial AI-based contours (iAIC/cAIC) using Dice Similarity Coefficient (DSC), 95% Hausdorff distances (HD95), and surface DSC (sDSC). For qualitative evaluation, four radiation oncologists assessed the usability of OAR/target iAIC within an online adaptive workflow using a four-point Likert scale: (1) acceptable without modification, (2) requiring minor adjustments, (3) requiring major adjustments, and (4) not usable. Patient-individual annotations were generated in a median [range] time of 23 [16-34] s for iAIC and 152 [121-198] s for cAIC, respectively. 
OARs showed a maximum median DSC of 0.97/0.97 (iAIC/cAIC) for the bladder and a minimum median DSC of 0.78/0.79 (iAIC/cAIC) for the anal canal/penile bulb. The maximal and minimal median HD95 were observed for the rectum (17.3/20.6 mm, iAIC/cAIC) and the bladder (5.6/6.0 mm, iAIC/cAIC), respectively. Overall, the average median DSC/HD95 values were 0.87/11.8 mm (iAIC) and 0.83/10.2 mm (cAIC) for OARs/targets, and 0.90/11.9 mm (iAIC) and 0.91/16.5 mm (cAIC) for bony structures. For a tolerance of 3 mm, the highest sDSC was determined for the bladder (iAIC: 1.00, cAIC: 0.99) and the lowest for the prostate in iAIC (0.89) and the anal canal in cAIC (0.80). Qualitatively, 84.8% of analysed contours were considered clinically acceptable for iAIC, while 12.9% required minor adjustments and 2.3% required major adjustments or were classed as unusable. Contour-specific analysis showed that iAIC achieved the highest mean score of 1.00 for the anal canal and the lowest of 1.61 for the prostate. This study demonstrates that an open-source segmentation framework can achieve annotation accuracy comparable to commercial solutions for pelvic anatomy in online adaptive MRgRT. The adapted framework not only maintained high segmentation performance, with 84.8% of contours accepted by physicians or requiring only minor corrections (12.9%), but also enhanced the clinical workflow efficiency of online adaptive MRgRT through reduced inference times. These findings establish open-source frameworks as viable alternatives to commercial systems in supervised clinical workflows.
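The Dice similarity coefficient reported throughout these comparisons is twice the overlap divided by the total foreground of the two masks. A minimal sketch on toy binary masks:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|)."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

# two 4x4 squares shifted by one row: 16 voxels each, 12 overlapping
a = np.zeros((8, 8), dtype=bool); a[2:6, 2:6] = True
b = np.zeros((8, 8), dtype=bool); b[3:7, 2:6] = True
print(dice(a, b))  # 2 * 12 / (16 + 16) = 0.75
```

HD95 and surface DSC additionally weigh boundary distances, which is why a structure can score well on DSC yet poorly on HD95 (as the rectum does here).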

Advancement of an automatic segmentation pipeline for metallic artifact removal in post-surgical ACL MRI.

Barnes DA, Murray CJ, Molino J, Beveridge JE, Kiapour AM, Murray MM, Fleming BC

PubMed · May 8, 2025
Magnetic resonance imaging (MRI) has the potential to identify post-operative risk factors for re-tearing an anterior cruciate ligament (ACL) using a combination of imaging signal intensity (SI) and cross-sectional area measurements of the healing ACL. During surgery, micro-debris can result from drilling the osseous tunnels for graft and/or suture insertion. This debris presents a limitation when using post-surgical MRI to assess reinjury risk, as it causes rapid magnetic field variations during acquisition, leading to signal loss within a voxel. The present study demonstrates how K-means clustering can refine an automatic segmentation algorithm to remove the lost signal intensity values induced by the artifacts in the image. MRI data were obtained from 82 patients enrolled in three prospective clinical trials of ACL surgery. Constructive Interference in Steady State MRIs were collected at 6 months post-operation. Manual segmentation of the ACL, with metallic artifacts removed, served as the gold standard. The accuracy of the automatic ACL segmentations was assessed using the Dice coefficient, sensitivity, and precision. The performance of the automatic segmentation was comparable to manual segmentation (Dice coefficient = .81, precision = .81, sensitivity = .82). The normalized average signal intensity was 1.06 (±0.25) for the automatic and 1.04 (±0.23) for the manual segmentation, a difference of 2%. These metrics emphasize the automatic segmentation model's ability to precisely capture ACL signal intensity while excluding artifact regions. The automatic artifact segmentation model described here could enhance the clinical utility of quantitative MRI (qMRI) by allowing more accurate and time-efficient segmentations of the ACL.
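The artifact-removal idea, clustering voxel intensities so that the signal-loss cluster can be discarded, can be sketched with a tiny 1-D two-cluster K-means. The two-cluster choice and the keep-the-brighter-cluster rule are assumptions about the paper's approach, not its published implementation:

```python
import numpy as np

def split_artifact_voxels(intensities, iters=20):
    """1-D two-cluster K-means on ACL voxel intensities: the low-intensity
    cluster is treated as metal-induced signal loss and excluded (sketch)."""
    c = np.array([intensities.min(), intensities.max()], dtype=float)
    for _ in range(iters):
        # assign each voxel to the nearest of the two centers
        assign = np.abs(intensities[:, None] - c[None, :]).argmin(axis=1)
        for k in range(2):
            if (assign == k).any():
                c[k] = intensities[assign == k].mean()
    keep = assign == c.argmax()  # keep the high-intensity cluster
    return intensities[keep]

# toy voxel set: 50 signal-loss voxels plus 200 healthy-ACL voxels
vox = np.concatenate([np.full(50, 0.05), np.full(200, 1.0)])
kept = split_artifact_voxels(vox)
print(len(kept), round(kept.mean(), 2))  # 200 1.0
```

After this filtering step, SI statistics like the normalized average intensity reported above would be computed only over the retained voxels.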