The Application of Deep Learning for Lymph Node Segmentation: A Systematic Review

Jingguo Qu, Xinyang Han, Man-Lik Chui, Yao Pu, Simon Takadiyi Gunda, Ziman Chen, Jing Qin, Ann Dorothy King, Winnie Chiu-Wing Chu, Jing Cai, Michael Tin-Cheung Ying

arXiv preprint · May 9, 2025
Automatic lymph node segmentation is a cornerstone of computer vision advances for early detection and staging of cancer. Traditional segmentation methods are constrained by manual delineation and variability in operator proficiency, limiting their ability to achieve high accuracy. The introduction of deep learning technologies offers new possibilities for improving the accuracy of lymph node image analysis. This study evaluates the application of deep learning in lymph node segmentation and discusses the methodologies of various deep learning architectures, such as convolutional neural networks, encoder-decoder networks, and transformers, in analyzing medical imaging data across different modalities. Despite these advances, the field still confronts challenges such as the shape diversity of lymph nodes, the scarcity of accurately labeled datasets, and the inadequate development of methods that are robust and generalizable across different imaging modalities. To the best of our knowledge, this is the first study to provide a comprehensive overview of the application of deep learning techniques to the lymph node segmentation task. Furthermore, this study also explores potential future research directions, including multimodal fusion techniques, transfer learning, and the use of large-scale pre-trained models, to overcome current limitations while enhancing cancer diagnosis and treatment planning strategies.

KEVS: enhancing segmentation of visceral adipose tissue in pre-cystectomy CT with Gaussian kernel density estimation.

Boucher T, Tetlow N, Fung A, Dewar A, Arina P, Kerneis S, Whittle J, Mazomenos EB

PubMed paper · May 9, 2025
The distribution of visceral adipose tissue (VAT) in cystectomy patients is indicative of the incidence of postoperative complications. Existing VAT segmentation methods for computed tomography (CT) employing intensity thresholding have limitations relating to inter-observer variability. Moreover, the difficulty in creating ground-truth masks limits the development of deep learning (DL) models for this task. This paper introduces a novel method for VAT prediction in pre-cystectomy CT which is fully automated and does not require ground-truth VAT masks for training, overcoming the aforementioned limitations. We introduce the kernel density-enhanced VAT segmentator (KEVS), combining a DL semantic segmentation model for multi-body feature prediction with Gaussian kernel density estimation analysis of predicted subcutaneous adipose tissue to achieve accurate scan-specific predictions of VAT in the abdominal cavity. Uniquely for a DL pipeline, KEVS does not require ground-truth VAT masks. We verify the ability of KEVS to accurately segment abdominal organs in unseen CT data and compare KEVS VAT segmentation predictions to existing state-of-the-art (SOTA) approaches in a dataset of 20 pre-cystectomy CT scans collected from University College London Hospital (UCLH-Cyst), with expert ground-truth annotations. KEVS presents a 4.80% and 6.02% improvement in Dice coefficient over the second-best DL and thresholding-based VAT segmentation techniques, respectively, when evaluated on UCLH-Cyst. This research introduces KEVS, an automated, SOTA method for the prediction of VAT in pre-cystectomy CT which eliminates inter-observer variability and is trained entirely on open-source CT datasets that do not contain ground-truth VAT masks.
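To make the scan-specific density step above concrete, here is a minimal sketch of how Gaussian kernel density estimation over DL-predicted subcutaneous fat intensities could be used to label visceral fat inside the abdominal cavity. It is not the authors' implementation; the arrays (`sat_hu`, `cavity_hu`) and the density cutoff are illustrative assumptions.

```python
import numpy as np
from scipy.stats import gaussian_kde

def kde_vat_mask(sat_hu, cavity_hu, density_cutoff=0.5):
    """Label abdominal-cavity voxels as VAT when their intensity is well supported
    by the patient-specific density of DL-predicted subcutaneous fat (SAT)."""
    kde = gaussian_kde(sat_hu)                  # density fitted on this patient's SAT intensities
    density = kde(cavity_hu)                    # density evaluated at each cavity voxel intensity
    threshold = density_cutoff * density.max()  # illustrative, scan-specific acceptance level
    return density >= threshold                 # boolean VAT mask over the cavity voxels

# Synthetic example in Hounsfield units:
rng = np.random.default_rng(0)
sat_hu = rng.normal(-100.0, 15.0, 5000)         # typical adipose intensities
cavity_hu = rng.uniform(-200.0, 100.0, 2000)    # candidate voxels inside the abdominal cavity
vat = kde_vat_mask(sat_hu, cavity_hu)
print(f"VAT voxels: {vat.sum()} of {vat.size}")
```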

Automated Thoracolumbar Stump Rib Detection and Analysis in a Large CT Cohort

Hendrik Möller, Hanna Schön, Alina Dima, Benjamin Keinert-Weth, Robert Graf, Matan Atad, Johannes Paetzold, Friederike Jungmann, Rickmer Braren, Florian Kofler, Bjoern Menze, Daniel Rueckert, Jan S. Kirschke

arXiv preprint · May 8, 2025
Thoracolumbar stump ribs are one of the essential indicators of thoracolumbar transitional vertebrae or enumeration anomalies. While some studies manually assess these anomalies and describe the ribs qualitatively, this study aims to automate thoracolumbar stump rib detection and analyze their morphology quantitatively. To this end, we train a high-resolution deep-learning model for rib segmentation and show significant improvements compared to existing models (Dice score 0.997 vs. 0.779, p-value < 0.01). In addition, we use an iterative algorithm and piece-wise linear interpolation to assess the length of the ribs, showing a success rate of 98.2%. When analyzing morphological features, we show that stump ribs articulate more posteriorly at the vertebrae (-19.2 ± 3.8 vs. -13.8 ± 2.5, p-value < 0.01), are thinner (260.6 ± 103.4 vs. 563.6 ± 127.1, p-value < 0.01), and are oriented more downwards and sideways within the first centimeters in contrast to full-length ribs. We show that with partially visible ribs, these features can achieve an F1-score of 0.84 in differentiating stump ribs from regular ones. We publish the model weights and masks for public use.
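As a rough illustration of the piece-wise linear length measurement mentioned above, the following sketch sums segment lengths along an ordered rib centerline. The centerline coordinates are hypothetical, and the authors' iterative extraction algorithm is not reproduced here.

```python
import numpy as np

def polyline_length_mm(points_mm):
    """Piece-wise linear rib length: sum of Euclidean distances between
    consecutive centerline points (coordinates in millimetres)."""
    pts = np.asarray(points_mm, dtype=float)
    segments = np.diff(pts, axis=0)                     # vectors between neighbouring points
    return float(np.linalg.norm(segments, axis=1).sum())

# Hypothetical centerline of a short (stump) rib, ordered from the vertebra outward:
centerline = [(0, 0, 0), (8, -2, 1), (15, -6, 2), (20, -11, 2)]
print(f"estimated rib length: {polyline_length_mm(centerline):.1f} mm")
```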

nnU-Net-based high-resolution CT features quantification for interstitial lung diseases.

Lin Q, Zhang Z, Xiong X, Chen X, Ma T, Chen Y, Li T, Long Z, Luo Q, Sun Y, Jiang L, He W, Deng Y

PubMed paper · May 8, 2025
To develop a new high-resolution CT (HRCT) abnormality quantification tool (CVILDES) for interstitial lung diseases (ILDs) based on the nnU-Net network structure, and to determine whether the quantitative parameters derived from this new software offer a reliable and precise assessment in a clinical setting that is in line with expert visual evaluation. HRCT scans from 83 cases of ILDs and 20 cases of other diffuse lung diseases were labeled section by section by multiple radiologists and used as training data for developing a deep learning model based on nnU-Net, employing a supervised learning approach. For clinical validation, a cohort including 51 cases of interstitial pneumonia with autoimmune features (IPAF) and 14 cases of idiopathic pulmonary fibrosis (IPF) had CT parenchymal patterns evaluated quantitatively with CVILDES and by visual evaluation. Subsequently, we assessed the correlation of the two methodologies for ILD feature quantification. Furthermore, the correlation between the quantitative results derived from the two methods and pulmonary function parameters (DLCO%, FVC%, and FEV%) was compared. All CT data were successfully quantified using CVILDES. CVILDES-quantified results (total ILD extent, ground-glass opacity, consolidation, reticular pattern, and honeycombing) showed a strong correlation with visual evaluation and were numerically close to the visual evaluation results (r = 0.64-0.89, p < 0.0001), particularly for the extent of fibrosis (r = 0.82, p < 0.0001). As judged by correlation with pulmonary function parameters, CVILDES quantification was comparable or even superior to visual evaluation. nnU-Net-based CVILDES was comparable to visual evaluation for ILD abnormality quantification. Question: Visual assessment of ILD on HRCT is time-consuming and exhibits poor inter-observer agreement, making it challenging to accurately evaluate therapeutic efficacy. Findings: The nnU-Net-based computer vision ILD evaluation system (CVILDES) accurately segmented and quantified the HRCT features of ILD, and results were comparable to visual evaluation. Clinical relevance: This study developed a new tool that has the potential to be applied in the quantitative assessment of ILD.
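The quantification and correlation analysis described above could look roughly like the sketch below: per-pattern extents as a percentage of lung volume, then a Pearson correlation against visual scores. Label values, array names, and the example numbers are assumptions, not the CVILDES implementation.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical label values for the HRCT patterns quantified by the tool:
PATTERNS = {1: "ground-glass", 2: "consolidation", 3: "reticular", 4: "honeycombing"}

def pattern_extents(pred_labels, lung_mask):
    """Percent of lung volume occupied by each labelled pattern in a segmentation."""
    lung_vox = lung_mask.sum()
    return {name: 100.0 * ((pred_labels == lab) & lung_mask).sum() / lung_vox
            for lab, name in PATTERNS.items()}

# Correlating automated extents with visual scores for a small hypothetical cohort:
auto_extent = np.array([12.0, 30.5, 8.2, 22.1, 40.0])
visual_score = np.array([10.0, 33.0, 9.0, 20.0, 42.0])
r, p = pearsonr(auto_extent, visual_score)
print(f"r = {r:.2f}, p = {p:.4f}")
```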

Quantitative analysis and clinical determinants of orthodontically induced root resorption using automated tooth segmentation from CBCT imaging.

Lin J, Zheng Q, Wu Y, Zhou M, Chen J, Wang X, Kang T, Zhang W, Chen X

PubMed paper · May 8, 2025
Orthodontically induced root resorption (OIRR) is difficult to assess accurately using traditional 2D imaging due to distortion and low sensitivity. While CBCT offers more precise 3D evaluation, manual segmentation remains labor-intensive and prone to variability. Recent advances in deep learning enable automatic, accurate tooth segmentation from CBCT images. This study applies deep learning and CBCT technology to quantify OIRR and analyze its risk factors, aiming to improve assessment accuracy, efficiency, and clinical decision-making. This study retrospectively analyzed CBCT scans of 108 orthodontic patients to assess OIRR using deep learning-based tooth segmentation and volumetric analysis. Statistical analysis was performed using linear regression to evaluate the influence of patient-related factors; values of p < 0.05 were considered statistically significant. Root volume significantly decreased after orthodontic treatment (p < 0.001). Age, gender, open (deep) bite, severe crowding, and other factors significantly influenced root resorption rates at different tooth positions. Multivariable regression analysis showed that these factors can predict root resorption, explaining 3% to 15.4% of the variance. This study applied a deep learning model to accurately assess root volume changes using CBCT, revealing significant root volume reduction after orthodontic treatment. It found that underage patients experienced less root resorption, while factors like anterior open bite and deep overbite influenced resorption in specific teeth, though skeletal pattern, overjet, and underbite were not significant predictors.
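A minimal sketch of the volumetric and regression analysis described above is shown below: root volume from a binary CBCT mask, and an ordinary least-squares fit of resorption rate on patient factors. The predictor columns and the example numbers are hypothetical.

```python
import numpy as np

def root_volume_mm3(root_mask, voxel_spacing_mm):
    """Root volume from a binary CBCT segmentation mask and its voxel spacing (mm)."""
    return root_mask.sum() * float(np.prod(voxel_spacing_mm))

# Multivariable linear regression of resorption rate on patient factors (hypothetical data):
resorption_pct = np.array([4.1, 7.8, 2.5, 9.3, 5.0])   # % root volume lost per patient
X = np.column_stack([
    np.ones(5),               # intercept
    [14, 28, 12, 35, 22],     # age in years
    [0, 1, 0, 1, 1],          # anterior open bite (0/1)
])
coef, *_ = np.linalg.lstsq(X, resorption_pct, rcond=None)
print("intercept and coefficients:", coef)
```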

Advancement of an automatic segmentation pipeline for metallic artifact removal in post-surgical ACL MRI.

Barnes DA, Murray CJ, Molino J, Beveridge JE, Kiapour AM, Murray MM, Fleming BC

PubMed paper · May 8, 2025
Magnetic resonance imaging (MRI) has the potential to identify post-operative risk factors for re-tearing an anterior cruciate ligament (ACL) using a combination of imaging signal intensity (SI) and cross-sectional area measurements of the healing ACL. During surgery, micro-debris can result from drilling the osseous tunnels for graft and/or suture insertion. The debris presents a limitation when using post-surgical MRI to assess reinjury risk, as it causes rapid magnetic field variations during acquisition, leading to signal loss within a voxel. The present study demonstrates how K-means clustering can refine an automatic segmentation algorithm to exclude the regions of signal loss induced by the artifacts in the image. MRI data were obtained from 82 patients enrolled in three prospective clinical trials of ACL surgery. Constructive Interference in Steady State MRIs were collected at 6 months post-operatively. Manual segmentation of the ACL with metallic artifacts removed served as the gold standard. The accuracy of the automatic ACL segmentations was compared using the Dice coefficient, sensitivity, and precision. The performance of the automatic segmentation was comparable to manual segmentation (Dice coefficient = 0.81, precision = 0.81, sensitivity = 0.82). The normalized average signal intensity was calculated as 1.06 (±0.25) for the automatic and 1.04 (±0.23) for the manual segmentation, yielding a difference of 2%. These metrics emphasize the automatic segmentation model's ability to precisely capture ACL signal intensity while excluding artifact regions. The automatic artifact segmentation model described here could enhance qMRI's clinical utility by allowing for more accurate and time-efficient segmentations of the ACL.
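The K-means refinement described above could be sketched as follows: cluster the signal intensities inside the automatic ACL mask and discard the low-intensity cluster attributed to artifact-induced signal loss. This is an assumption-laden sketch (two clusters, hypothetical `image`/`acl_mask` arrays), not the published pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans

def remove_artifact_voxels(image, acl_mask, n_clusters=2):
    """Cluster SI values inside the automatic ACL mask and keep only the
    higher-intensity cluster, dropping artifact-induced signal-loss voxels."""
    vox = image[acl_mask].reshape(-1, 1)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(vox)
    keep = np.argmax([vox[labels == k].mean() for k in range(n_clusters)])
    refined = acl_mask.copy()
    refined[acl_mask] = labels == keep
    return refined

# Toy 2D example: a bright ligament region with a dark artifact voxel inside the mask.
image = np.full((10, 10), 0.2)
mask = np.zeros((10, 10), dtype=bool)
mask[3:7, 3:7] = True
image[3:7, 3:7] = 1.0
image[5, 5] = 0.05                                  # artifact-induced signal loss
print(remove_artifact_voxels(image, mask)[5, 5])    # False: excluded from the refined mask
```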

Comparative analysis of open-source against commercial AI-based segmentation models for online adaptive MR-guided radiotherapy.

Langner D, Nachbar M, Russo ML, Boeke S, Gani C, Niyazi M, Thorwarth D

PubMed paper · May 8, 2025
Online adaptive magnetic resonance-guided radiotherapy (MRgRT) has emerged as a state-of-the-art treatment option for multiple tumour entities, accounting for daily anatomical and tumour volume changes and thus allowing sparing of relevant organs at risk (OARs). However, the annotation of treatment-relevant anatomical structures in the context of online plan adaptation remains challenging, often relying on commercial segmentation solutions due to the limited availability of clinically validated alternatives. The aim of this study was to investigate whether an open-source artificial intelligence (AI) segmentation network can compete with the annotation accuracy of a commercial solution, both trained on the identical dataset, questioning the need for commercial models in clinical practice. For 47 pelvic patients, T2w MR imaging data acquired on a 1.5 T MR-Linac were manually contoured, identifying prostate, seminal vesicles, rectum, anal canal, bladder, penile bulb, and bony structures. These training data were used for the generation of an in-house AI segmentation model, a nnU-Net with residual encoder architecture featuring a streamlined single-image inference pipeline, and for re-training of a commercial solution. For quantitative evaluation, 20 MR images were contoured by a radiation oncologist, considered as ground-truth contours (GTC), and compared with the in-house/commercial AI-based contours (iAIC/cAIC) using the Dice Similarity Coefficient (DSC), 95% Hausdorff distance (HD95), and surface DSC (sDSC). For qualitative evaluation, four radiation oncologists assessed the usability of OAR/target iAIC within an online adaptive workflow using a four-point Likert scale: (1) acceptable without modification, (2) requiring minor adjustments, (3) requiring major adjustments, and (4) not usable. Patient-individual annotations were generated in a median [range] time of 23 [16-34] s for iAIC and 152 [121-198] s for cAIC, respectively. OARs showed a maximum median DSC of 0.97/0.97 (iAIC/cAIC) for the bladder and a minimum median DSC of 0.78/0.79 (iAIC/cAIC) for the anal canal/penile bulb. The maximal and minimal median HD95 were found for the rectum with 17.3/20.6 mm (iAIC/cAIC) and for the bladder with 5.6/6.0 mm (iAIC/cAIC), respectively. Overall, the average median DSC/HD95 values were 0.87/11.8 mm (iAIC) and 0.83/10.2 mm (cAIC) for OARs/targets and 0.90/11.9 mm (iAIC) and 0.91/16.5 mm (cAIC) for bony structures. For a tolerance of 3 mm, the highest sDSC was determined for the bladder (iAIC: 1.00, cAIC: 0.99) and the lowest for the prostate in iAIC (0.89) and the anal canal in cAIC (0.80). Qualitatively, 84.8% of analysed contours were considered clinically acceptable for iAIC, while 12.9% required minor and 2.3% major adjustments or were classed as unusable. Contour-specific analysis showed that iAIC achieved the best mean score with 1.00 for the anal canal and the worst with 1.61 for the prostate. This study demonstrates that an open-source segmentation framework can achieve annotation accuracy comparable to commercial solutions for pelvic anatomy in online adaptive MRgRT. The adapted framework not only maintained high segmentation performance, with 84.8% of contours accepted by physicians or requiring only minor corrections (12.9%), but also enhanced the clinical workflow efficiency of online adaptive MRgRT through reduced inference times. These findings establish open-source frameworks as viable alternatives to commercial systems in supervised clinical workflows.
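For readers unfamiliar with the evaluation metrics used above, the sketch below shows a straightforward Dice coefficient and a simplified, voxel-based approximation of HD95 computed from distance transforms. Real surface-based HD95/sDSC implementations differ in detail; this is only illustrative.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hd95(a, b, spacing=(1.0, 1.0, 1.0)):
    """Simplified 95th-percentile symmetric Hausdorff distance, approximated on
    mask voxels via Euclidean distance transforms (mm if spacing is in mm)."""
    dist_to_b = distance_transform_edt(~b, sampling=spacing)
    dist_to_a = distance_transform_edt(~a, sampling=spacing)
    d = np.concatenate([dist_to_b[a], dist_to_a[b]])    # A-to-B and B-to-A distances
    return float(np.percentile(d, 95))

# Example on synthetic 3D masks:
gt = np.zeros((20, 20, 20), dtype=bool); gt[5:15, 5:15, 5:15] = True
pred = np.zeros_like(gt); pred[6:16, 5:15, 5:15] = True
print(f"DSC = {dice(gt, pred):.3f}, HD95 = {hd95(gt, pred):.1f} mm")
```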

Chest X-Ray Visual Saliency Modeling: Eye-Tracking Dataset and Saliency Prediction Model.

Lou J, Wang H, Wu X, Ng JCH, White R, Thakoor KA, Corcoran P, Chen Y, Liu H

PubMed paper · May 8, 2025
Radiologists' eye movements during medical image interpretation reflect the perceptual-cognitive processes behind their diagnostic decisions. The eye movement data can be modeled to represent clinically relevant regions in a medical image and potentially integrated into an artificial intelligence (AI) system for automatic diagnosis in medical imaging. In this article, we first conduct a large-scale eye-tracking study involving 13 radiologists interpreting 191 chest X-ray (CXR) images, establishing a best-of-its-kind CXR visual saliency benchmark. We then perform analysis to quantify the reliability and clinical relevance of saliency maps (SMs) generated for CXR images. We develop a CXR image saliency prediction method (CXRSalNet), a novel saliency prediction model that leverages radiologists' gaze information to optimize the use of unlabeled CXR images, enhancing training and mitigating data scarcity. We also demonstrate the application of our CXR saliency model in enhancing the performance of AI-powered diagnostic imaging systems.
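A common way to turn gaze recordings into the saliency maps discussed above is to accumulate fixation points on the image grid and blur them with a Gaussian, as in the sketch below. The array layout and smoothing width are assumptions rather than details of the published benchmark or of CXRSalNet.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fixation_saliency_map(fixations_xy, shape, sigma_px=35):
    """Accumulate gaze fixation points on the image grid and smooth them with a
    Gaussian to obtain a continuous, normalised saliency map."""
    sal = np.zeros(shape, dtype=float)
    for x, y in fixations_xy:
        sal[int(y), int(x)] += 1.0          # assumes fixations lie within the image bounds
    sal = gaussian_filter(sal, sigma=sigma_px)
    return sal / sal.max() if sal.max() > 0 else sal

# Hypothetical fixations on a 1024x1024 CXR:
fixations = [(512, 300), (520, 310), (200, 700)]
saliency = fixation_saliency_map(fixations, shape=(1024, 1024))
print(saliency.shape, saliency.max())
```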

Relevance of choroid plexus volumes in multiple sclerosis.

Krieger B, Bellenberg B, Roenneke AK, Schneider R, Ladopoulos T, Abbas Z, Rust R, Schmitz-Hübsch T, Chien C, Gold R, Paul F, Lukas C

PubMed paper · May 8, 2025
The choroid plexus (ChP) plays a pivotal role in inflammatory processes that occur in multiple sclerosis (MS). The enlargement of the ChP in relapsing-remitting multiple sclerosis (RRMS) is considered to be an indication of disease activity and has been associated with periventricular remyelination failure. This cross-sectional study aimed to identify the relationship between ChP and periventricular tissue damage which occurs in MS, and to elucidate the role of neuroinflammation in primary progressive multiple sclerosis (PPMS). ChP volume was assessed by a novel deep learning segmentation method based on structural MRI data acquired from two centers. In total, 141 RRMS and 64 PPMS patients were included, along with 75 healthy control subjects. In addition, T1w/FLAIR ratios were calculated within periventricular bands to quantify microstructural tissue damage and to assess its relationship to ChP volume. When compared to healthy controls, ChP volumes were significantly increased in RRMS, but not in patients with PPMS. T1w/FLAIR ratios in the normal appearing white matter (NAWM) showing periventricular gradients were decreased in patients with multiple sclerosis when compared to healthy control subjects and lower T1w/FLAIR ratios radiating out from the lateral ventricles were found in patients with PPMS. A relationship between ChP volume and T1w/FLAIR ratio in NAWM was found within the inner periventricular bands in RRMS patients. A longer duration of disease was associated with larger ChP volumes only in RRMS patients. Enlarged ChP volumes were also significantly associated with reduced cortex volumes and increased lesion volumes in RRMS. Our analysis confirmed that the ChP was significantly enlarged in patients with RRMS, which was related to brain lesion volumes and which suggested a dynamic development as it was associated with disease duration. Plexus enlargement was further associated with periventricular demyelination or tissue damage assessed by T1w/FLAIR ratios in RRMS. Furthermore, we did not find an enlargement of the ChP in patients with PPMS, possibly indicating the reduced involvement of inflammatory processes in the progressive phase of MS. The association between enlarged ChP volumes and cortical atrophy in RRMS highlighted the vulnerability of structures close to the CSF.
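The periventricular band analysis described above can be pictured with the following sketch, which computes mean T1w/FLAIR ratios in successive distance bands radiating out from the lateral ventricles. Mask names, band widths, and voxel spacing are illustrative assumptions, not the study's exact pipeline.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def periventricular_band_ratios(t1w, flair, nawm_mask, ventricle_mask,
                                band_mm=3.0, n_bands=5, spacing=(1.0, 1.0, 1.0)):
    """Mean T1w/FLAIR ratio within successive distance bands radiating out from
    the lateral ventricles, restricted to normal-appearing white matter (NAWM)."""
    dist = distance_transform_edt(~ventricle_mask, sampling=spacing)   # mm to nearest ventricle voxel
    ratio = np.divide(t1w, flair, out=np.zeros_like(t1w, dtype=float), where=flair > 0)
    means = []
    for i in range(n_bands):
        band = nawm_mask & (dist >= i * band_mm) & (dist < (i + 1) * band_mm)
        means.append(float(ratio[band].mean()) if band.any() else float("nan"))
    return means
```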

Patient-specific uncertainty calibration of deep learning-based autosegmentation networks for adaptive MRI-guided lung radiotherapy.

Rabe M, Meliadò EF, Marschner S, Belka C, Corradini S, Van den Berg CAT, Landry G, Kurz C

PubMed paper · May 8, 2025
Uncertainty assessment of deep learning autosegmentation (DLAS) models can support contour corrections in adaptive radiotherapy (ART), e.g. by utilizing Monte Carlo Dropout (MCD) uncertainty maps. However, poorly calibrated uncertainties at the patient level often render these clinically nonviable. We evaluated population-based and patient-specific DLAS accuracy and uncertainty calibration and propose a patient-specific post-training uncertainty calibration method for DLAS in ART. Approach: The study included 122 lung cancer patients treated with a low-field MR-linac (80/19/23 training/validation/test cases). Ten single-label 3D U-Net population-based baseline models (BM) were trained with dropout using planning MRIs (pMRIs) and contours for nine organs at risk (OARs) and gross tumor volumes (GTVs). Patient-specific models (PS) were created by fine-tuning BMs with each test patient's pMRI. Model uncertainty was assessed with MCD, averaged into probability maps. Uncertainty calibration was evaluated with reliability diagrams and the expected calibration error (ECE). A proposed post-training calibration method rescaled MCD probabilities for fraction images in BM (calBM) and PS (calPS) after fitting reliability diagrams from pMRIs. All models were evaluated on fraction images using the Dice similarity coefficient (DSC), 95th percentile Hausdorff distance (HD95), and ECE. Metrics were compared among models for all OARs combined (n=163) and the GTV (n=23) using Friedman and post-hoc Nemenyi tests (α=0.05). Main results: For the OARs, patient-specific fine-tuning significantly (p<0.001) increased median DSC from 0.78 (BM) to 0.86 (PS) and reduced HD95 from 14 mm (BM) to 6.0 mm (PS). Uncertainty calibration achieved substantial reductions in ECE, from 0.25 (BM) to 0.091 (calBM) and from 0.22 (PS) to 0.11 (calPS) (p<0.001), without significantly affecting DSC or HD95 (p>0.05). For the GTV, BM performance was poor (DSC=0.05) but significantly (p<0.001) improved with PS training (DSC=0.75), while uncertainty calibration reduced ECE from 0.22 (PS) to 0.15 (calPS) (p=0.45). Significance: Post-training uncertainty calibration yields geometrically accurate DLAS models with well-calibrated uncertainty estimates, crucial for ART applications.
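A minimal sketch of the calibration metrics referenced above is given below: a binned expected calibration error for voxel-wise foreground probabilities, and a simple rescaling that maps predicted probabilities onto the observed frequencies of a fitted reliability diagram. This illustrates the general idea, not the authors' patient-specific method.

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    """Binned ECE: occupancy-weighted gap between mean predicted probability and
    observed foreground frequency in each confidence bin."""
    probs, labels = probs.ravel(), labels.ravel().astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (probs > lo) & (probs <= hi)
        if in_bin.any():
            ece += in_bin.mean() * abs(labels[in_bin].mean() - probs[in_bin].mean())
    return ece

def recalibrate(probs, bin_confidence, bin_accuracy):
    """Map predicted probabilities onto the observed frequencies of a reliability
    diagram fitted beforehand (bin_confidence must be sorted ascending)."""
    return np.interp(probs, bin_confidence, bin_accuracy)

# Toy example: over-confident probabilities pulled toward observed frequencies.
probs = np.array([0.9, 0.8, 0.95, 0.2, 0.1])
labels = np.array([1, 0, 1, 0, 0])
print(f"ECE before: {expected_calibration_error(probs, labels):.3f}")
print(recalibrate(probs, bin_confidence=[0.1, 0.5, 0.9], bin_accuracy=[0.05, 0.4, 0.7]))
```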