Page 95 of 1411410 results

Deep learning NTCP model for late dysphagia after radiotherapy for head and neck cancer patients based on 3D dose, CT and segmentations

de Vette, S. P., Neh, H., van der Hoek, L., MacRae, D. C., Chu, H., Gawryszuk, A., Steenbakkers, R. J., van Ooijen, P. M., Fuller, C. D., Hutcheson, K. A., Langendijk, J. A., Sijtsema, N. M., van Dijk, L. V.

medRxiv preprint · Jun 20 2025
Background & purpose: Late radiation-associated dysphagia after head and neck cancer (HNC) treatment significantly impacts patients' health and quality of life. Conventional normal tissue complication probability (NTCP) models use discrete dose parameters to predict toxicity risk but fail to fully capture the complexity of this side effect. Deep learning (DL) offers potential improvements by incorporating 3D dose data for all anatomical structures involved in swallowing. This study aims to enhance dysphagia prediction with 3D DL NTCP models compared to conventional NTCP models. Materials & methods: A multi-institutional cohort of 1484 HNC patients was used to train and validate a 3D DL model (Residual Network) incorporating 3D dose distributions, organ-at-risk segmentations, and CT scans, with or without patient- or treatment-related data. Predictions of grade ≥2 dysphagia (CTCAEv4) at six months post-treatment were evaluated using the area under the curve (AUC) and calibration curves. Results were compared to a conventional NTCP model based on pre-treatment dysphagia, tumour location, and mean dose to swallowing organs. Attention maps highlighting regions of interest for individual patients were assessed. Results: The DL models outperformed the conventional NTCP model in both the independent test set (AUC = 0.80-0.84 versus 0.76) and the external test set (AUC = 0.73-0.74 versus 0.63) in AUC and calibration. Attention maps showed a focus on the oral cavity and superior pharyngeal constrictor muscle. Conclusion: The DL NTCP models performed better than the conventional NTCP model, suggesting the benefit of 3D input over conventional discrete dose parameters. Attention maps highlighted relevant regions linked to dysphagia, supporting the utility of DL for improved predictions.
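As a rough illustration of what a conventional NTCP model of this kind computes, the sketch below applies a logistic function to discrete predictors (pre-treatment dysphagia, tumour site, mean dose to the swallowing organs). The coefficient values are invented for illustration; they are not the published model.

```python
import math

def ntcp_logistic(baseline_dysphagia, tumour_site_pharynx, mean_dose_gy, coefs=None):
    """Illustrative conventional NTCP model: logistic regression on discrete
    predictors. The coefficients below are placeholders, not fitted values."""
    if coefs is None:
        coefs = {"intercept": -4.0, "baseline": 1.0, "site": 0.5, "dose": 0.06}
    s = (coefs["intercept"]
         + coefs["baseline"] * baseline_dysphagia   # pre-treatment dysphagia (0/1)
         + coefs["site"] * tumour_site_pharynx      # pharyngeal tumour site (0/1)
         + coefs["dose"] * mean_dose_gy)            # mean organ dose in Gy
    return 1.0 / (1.0 + math.exp(-s))

# Higher baseline risk and higher mean dose raise the predicted probability.
low = ntcp_logistic(0, 0, 30.0)
high = ntcp_logistic(1, 1, 60.0)
```

The contrast with the paper's DL approach is that the 3D model consumes the full dose distribution, segmentations, and CT volume rather than these few summary covariates.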

Concordance between single-slice abdominal computed tomography-based and bioelectrical impedance-based analysis of body composition in a prospective study.

Fehrenbach U, Hosse C, Wienbrandt W, Walter-Rittel T, Kolck J, Auer TA, Blüthner E, Tacke F, Beetz NL, Geisel D

PubMed paper · Jun 19 2025
Body composition analysis (BCA) is a recognized indicator of patient frailty. Apart from the established bioelectrical impedance analysis (BIA), computed tomography (CT)-derived BCA is being increasingly explored. The aim of this prospective study was to directly compare BCA obtained from BIA and CT. A total of 210 consecutive patients scheduled for CT, including a high proportion of cancer patients, were prospectively enrolled. Immediately prior to the CT scan, all patients underwent BIA. CT-based BCA was performed using a single-slice AI tool for automated detection and segmentation at the level of the third lumbar vertebra (L3). The BIA-based parameters body fat mass (BFM<sub>BIA</sub>) and skeletal muscle mass (SMM<sub>BIA</sub>) and the CT-based parameters subcutaneous and visceral adipose tissue area (SATA<sub>CT</sub> and VATA<sub>CT</sub>) and total abdominal muscle area (TAMA<sub>CT</sub>) were determined. Indices were calculated by normalizing the BIA and CT parameters to the patient's weight (body fat percentage (BFP<sub>BIA</sub>) and body fat index (BFI<sub>CT</sub>)) or height (skeletal muscle index (SMI<sub>BIA</sub>) and lumbar skeletal muscle index (LSMI<sub>CT</sub>)). The parameters representing fat, BFM<sub>BIA</sub> and SATA<sub>CT</sub> + VATA<sub>CT</sub>, and those representing muscle tissue, SMM<sub>BIA</sub> and TAMA<sub>CT</sub>, showed strong correlations in female (fat: r = 0.95; muscle: r = 0.72; p < 0.001) and male (fat: r = 0.91; muscle: r = 0.71; p < 0.001) patients. Linear regression analysis was statistically significant (fat: R<sup>2</sup> = 0.73 (female) and 0.74 (male); muscle: R<sup>2</sup> = 0.56 (female) and 0.56 (male); p < 0.001), showing that BFI<sub>CT</sub> and LSMI<sub>CT</sub> allowed prediction of BFP<sub>BIA</sub> and SMI<sub>BIA</sub> for both sexes. CT-based BCA strongly correlates with BIA results and yields quantitative results for BFP and SMI comparable to the existing gold standard.
Question: CT-based body composition analysis (BCA) is moving increasingly into clinical focus, but validation against established methods is lacking. Findings: Fully automated CT-based BCA correlates very strongly with guideline-accepted bioelectrical impedance analysis (BIA). Clinical relevance: BCA is currently moving further into clinical focus to improve assessment of patient frailty and individualize therapies accordingly. Comparability with established BIA strengthens the value of CT-based BCA and supports its translation into clinical routine.
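The reported r values are Pearson correlations between paired BIA- and CT-derived measurements. A minimal sketch of that computation, on synthetic numbers (not the study's data):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two paired measurement series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Synthetic example: BIA fat mass (kg) vs. CT fat area (cm^2) tracking closely.
bfm_bia = [12.0, 18.5, 25.1, 31.0, 40.2]
fat_ct = [130.0, 180.0, 260.0, 310.0, 415.0]
r = pearson_r(bfm_bia, fat_ct)
```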

VesselSDF: Distance Field Priors for Vascular Network Reconstruction

Salvatore Esposito, Daniel Rebain, Arno Onken, Changjian Li, Oisin Mac Aodha

arXiv preprint · Jun 19 2025
Accurate segmentation of vascular networks from sparse CT scan slices remains a significant challenge in medical imaging, particularly due to the thin, branching nature of vessels and the inherent sparsity between imaging planes. Existing deep learning approaches, based on binary voxel classification, often struggle with structural continuity and geometric fidelity. To address this challenge, we present VesselSDF, a novel framework that leverages signed distance fields (SDFs) for robust vessel reconstruction. Our method reformulates vessel segmentation as a continuous SDF regression problem, where each point in the volume is represented by its signed distance to the nearest vessel surface. This continuous representation inherently captures the smooth, tubular geometry of blood vessels and their branching patterns. We obtain accurate vessel reconstructions while eliminating common SDF artifacts such as floating segments, thanks to our adaptive Gaussian regularizer which ensures smoothness in regions far from vessel surfaces while producing precise geometry near the surface boundaries. Our experimental results demonstrate that VesselSDF significantly outperforms existing methods and preserves vessel geometry and connectivity, enabling more reliable vascular analysis in clinical settings.
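One way to read the adaptive Gaussian regularizer is as a distance-dependent weight: regression errors near the vessel surface (small |SDF|) dominate the objective, while far-field points are down-weighted, discouraging floating segments. The sketch below is a schematic of that idea with an assumed weighting form, not the paper's exact objective.

```python
import math

def sdf_loss(pred, target, sigma=2.0):
    """Schematic SDF regression loss with a Gaussian distance weight:
    points near the surface (|target| small) get weight ~1, far-field
    points are exponentially down-weighted. Illustrative only."""
    total = 0.0
    for p, t in zip(pred, target):
        w = math.exp(-(t * t) / (2.0 * sigma * sigma))  # Gaussian in distance
        total += w * (p - t) ** 2
    return total / len(pred)

near_err = sdf_loss([0.5], [0.0])    # 0.5 error right at the surface
far_err = sdf_loss([10.5], [10.0])   # same 0.5 error, far from the surface
```

The same absolute regression error costs far less away from the surface, which is the intended asymmetry.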

PMFF-Net: A deep learning-based image classification model for UIP, NSIP, and OP.

Xu MW, Zhang ZH, Wang X, Li CT, Yang HY, Liao ZH, Zhang JQ

PubMed paper · Jun 19 2025
High-resolution computed tomography (HRCT) is helpful for diagnosing interstitial lung diseases (ILD), but interpretation depends largely on the experience of physicians. Our study aims to develop a deep-learning-based classification model to differentiate the three common types of ILD, providing a reference that helps physicians make the diagnosis and improves the accuracy of ILD diagnosis. Patients were selected from four tertiary Grade A hospitals in Kunming based on inclusion and exclusion criteria. HRCT scans of 130 patients were included. The imaging manifestations were usual interstitial pneumonia (UIP), non-specific interstitial pneumonia (NSIP), and organizing pneumonia (OP). Additionally, 50 chest HRCT cases without imaging abnormalities during the same period were selected. A dataset was constructed, and the Parallel Multi-scale Feature Fusion Network (PMFF-Net) deep learning model was trained, validated, and tested. Python software was used to generate data and charts on model performance. The model's accuracy, precision, recall, and F1-score were assessed, and its diagnostic efficacy was compared with that of physicians across hospital levels, seniority levels, and departments. The PMFF-Net deep learning model is capable of classifying the imaging types UIP, NSIP, and OP, as well as normal imaging. It diagnosed 18 HRCT images in only 105 s, with a diagnostic accuracy of 92.84%, precision of 91.88%, recall of 91.95%, and an F1-score of 0.9171. The diagnostic accuracy of senior radiologists (83.33%) and pulmonologists (77.77%) from tertiary hospitals is higher than that of internists from secondary hospitals (33.33%). Meanwhile, the diagnostic accuracy of middle-aged radiologists (61.11%) and pulmonologists (66.66%) in tertiary hospitals is higher than that of junior radiologists (38.88%) and pulmonologists (44.44%), whereas junior and middle-aged internists at secondary hospitals were unable to complete the tests. This study found that the PMFF-Net model can effectively classify the UIP, NSIP, and OP imaging types, as well as normal imaging, which can help doctors of different hospital levels and departments make clinical decisions quickly and effectively.
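Accuracy, precision, recall, and F1 for a multi-class task like this are typically computed per class and then averaged. A minimal sketch using macro averaging over the four classes, with toy labels rather than the study's data:

```python
def macro_metrics(y_true, y_pred, labels):
    """Accuracy plus macro-averaged precision/recall, with F1 derived from
    the averaged precision and recall (one common convention)."""
    precs, recs = [], []
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precs.append(tp / (tp + fp) if tp + fp else 0.0)
        recs.append(tp / (tp + fn) if tp + fn else 0.0)
    precision = sum(precs) / len(labels)
    recall = sum(recs) / len(labels)
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    return accuracy, precision, recall, f1

# Toy four-class example (UIP / NSIP / OP / normal), not the study's results.
y_true = ["UIP", "UIP", "NSIP", "OP", "normal", "NSIP"]
y_pred = ["UIP", "NSIP", "NSIP", "OP", "normal", "NSIP"]
acc, prec, rec, f1 = macro_metrics(y_true, y_pred, ["UIP", "NSIP", "OP", "normal"])
```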

Optimization of Photon-Counting CT Myelography for the Detection of CSF-Venous Fistulas Using Convolutional Neural Network Denoising: A Comparative Analysis of Reconstruction Techniques.

Madhavan AA, Zhou Z, Farnsworth PJ, Thorne J, Amrhein TJ, Kranz PG, Brinjikji W, Cutsforth-Gregory JK, Kodet ML, Weber NM, Thompson G, Diehn FE, Yu L

PubMed paper · Jun 19 2025
Photon-counting detector CT myelography (PCD-CTM) is a recently described technique used for detecting spinal CSF leaks, including CSF-venous fistulas. Various image reconstruction techniques, including smoother-versus-sharper kernels and virtual monoenergetic images, are available with photon-counting CT. Moreover, denoising algorithms have shown promise in improving sharp kernel images. No prior studies have compared the image quality of these different reconstructions on photon-counting CT myelography. Here, we sought to compare several image reconstructions using various parameters important for the detection of CSF-venous fistulas. We performed a retrospective review of all consecutive decubitus PCD-CTM between February 1, 2022, and August 1, 2024, at 1 institution. We included patients whose studies had the following reconstructions: Br48-40 keV virtual monoenergetic reconstruction, Br56 low-energy threshold (T3D), Qr89-T3D denoised with quantum iterative reconstruction, and Qr89-T3D denoised with a convolutional neural network algorithm. We excluded patients who had extradural CSF on preprocedural imaging or a technically unsatisfactory myelogram. All 4 reconstructions were independently reviewed by 2 neuroradiologists. Each reviewer rated spatial resolution, noise, the presence of artifacts, image quality, and diagnostic confidence (whether positive or negative) on a 1-5 scale. These metrics were compared using the Friedman test. Additionally, noise and contrast were quantitatively assessed by a third reviewer and compared. The Qr89 reconstructions demonstrated higher spatial resolution than their Br56 or Br48-40 keV counterparts. Qr89 with convolutional neural network denoising had less noise, better image quality, and improved diagnostic confidence compared with Qr89 with quantum iterative reconstruction denoising. The Br48-40 keV reconstruction had the highest contrast-to-noise ratio quantitatively.
In our study, the sharpest quantitative kernel (Qr89-T3D) with convolutional neural network denoising demonstrated the best performance regarding spatial resolution, noise level, image quality, and diagnostic confidence for detecting or excluding the presence of a CSF-venous fistula.
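The Friedman test used here compares the four reconstructions by ranking the reader scores within each case. A minimal sketch of the test statistic (average ranks for ties, tie-correction factor omitted for brevity, toy 1-5 scores):

```python
def friedman_statistic(ratings):
    """Friedman chi-square for k related conditions rated on the same cases
    (rows = cases, columns = conditions). Ties get average ranks; the
    tie-correction factor is omitted for brevity."""
    n, k = len(ratings), len(ratings[0])
    rank_sums = [0.0] * k
    for row in ratings:
        order = sorted(range(k), key=lambda j: row[j])
        ranks = [0.0] * k
        i = 0
        while i < k:
            j = i
            while j + 1 < k and row[order[j + 1]] == row[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1  # average 1-based rank for the tied block
            for m in range(i, j + 1):
                ranks[order[m]] = avg
            i = j + 1
        for col in range(k):
            rank_sums[col] += ranks[col]
    return 12.0 / (n * k * (k + 1)) * sum(r * r for r in rank_sums) - 3.0 * n * (k + 1)

# Toy image-quality scores for 4 reconstructions over 3 cases (not study data).
scores = [[2, 3, 4, 5], [1, 3, 4, 5], [2, 2, 4, 5]]
chi2 = friedman_statistic(scores)
```

The statistic is compared against a chi-square distribution with k-1 degrees of freedom; in practice one would use a library routine such as `scipy.stats.friedmanchisquare`.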

Multi-domain information fusion diffusion model (MDIF-DM) for limited-angle computed tomography.

Ma G, Xia D, Zhao S

PubMed paper · Jun 19 2025
Background: Limited-angle computed tomography imaging suffers from severe artifacts in the reconstructed image due to incomplete projection data. Deep learning methods have recently been developed to address the robustness and low-contrast challenges of limited-angle CT reconstruction in a relatively effective way. Objective: To improve the contrast of current limited-angle CT reconstruction images and enhance the robustness of the reconstruction method. Method: In this paper, we propose a limited-angle CT reconstruction method that combines Fourier-domain reweighting and wavelet-domain enhancement, fusing information from different domains to obtain high-resolution reconstructed images. Results: We verified the feasibility and effectiveness of the proposed method through experiments; the reconstruction results are improved compared with state-of-the-art methods. Conclusions: The proposed method enhances features of the original image-domain data using information from different domains, which benefits the reasonable diffusion and restoration of fine detail and texture features.
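Wavelet-domain enhancement of the kind described rests on separating a signal into coarse approximation and detail coefficients, amplifying or denoising the details, then inverting. A one-level 1-D Haar transform illustrates the decomposition; the paper works on 2-D CT data and this is not its specific transform.

```python
import math

def haar_1d(signal):
    """One level of a 1-D Haar wavelet transform: low-pass averages carry
    coarse content, high-pass differences carry edges/texture."""
    s = 1.0 / math.sqrt(2.0)
    approx = [(a + b) * s for a, b in zip(signal[0::2], signal[1::2])]
    detail = [(a - b) * s for a, b in zip(signal[0::2], signal[1::2])]
    return approx, detail

def haar_1d_inverse(approx, detail):
    """Invert one Haar level; enhancement would scale `detail` first."""
    s = 1.0 / math.sqrt(2.0)
    out = []
    for a, d in zip(approx, detail):
        out.extend([(a + d) * s, (a - d) * s])
    return out

x = [4.0, 2.0, 5.0, 5.0, 1.0, 3.0, 0.0, 2.0]
lo, hi = haar_1d(x)
rec = haar_1d_inverse(lo, hi)  # perfect reconstruction up to float error
```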

Data extraction from free-text stroke CT reports using GPT-4o and Llama-3.3-70B: the impact of annotation guidelines.

Wihl J, Rosenkranz E, Schramm S, Berberich C, Griessmair M, Woźnicki P, Pinto F, Ziegelmayer S, Adams LC, Bressem KK, Kirschke JS, Zimmer C, Wiestler B, Hedderich D, Kim SH

PubMed paper · Jun 19 2025
To evaluate the impact of an annotation guideline on the performance of large language models (LLMs) in extracting data from stroke computed tomography (CT) reports. The performance of GPT-4o and Llama-3.3-70B in extracting ten imaging findings from stroke CT reports was assessed in two datasets from a single academic stroke center. Dataset A (n = 200) was a stratified cohort including various pathological findings, whereas dataset B (n = 100) was a consecutive cohort. Initially, an annotation guideline providing clear data extraction instructions was designed based on a review of cases with inter-annotator disagreements in dataset A. For each LLM, data extraction was performed under two conditions: with the annotation guideline included in the prompt and without it. GPT-4o consistently demonstrated superior performance over Llama-3.3-70B under identical conditions, with micro-averaged precision ranging from 0.83 to 0.95 for GPT-4o and from 0.65 to 0.86 for Llama-3.3-70B. Across both models and both datasets, incorporating the annotation guideline into the LLM input resulted in higher precision rates, while recall rates largely remained stable. In dataset B, the precision of GPT-4o and Llama-3.3-70B improved from 0.83 to 0.95 and from 0.87 to 0.94, respectively. Overall classification performance with and without the annotation guideline was significantly different in five out of six conditions. GPT-4o and Llama-3.3-70B show promising performance in extracting imaging findings from stroke CT reports, although GPT-4o consistently outperformed Llama-3.3-70B. We also provide evidence that well-defined annotation guidelines can enhance LLM data extraction accuracy. Annotation guidelines can improve the accuracy of LLMs in extracting findings from radiological reports, potentially optimizing data extraction for specific downstream applications. LLMs have utility in data extraction from radiology reports, but the role of annotation guidelines remains underexplored.
Data extraction accuracy from stroke CT reports by GPT-4o and Llama-3.3-70B improved when well-defined annotation guidelines were incorporated into the model prompt. Well-defined annotation guidelines can improve the accuracy of LLMs in extracting imaging findings from radiological reports.
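Micro-averaged precision and recall, as reported here, pool true/false positives across all findings and all reports before computing the ratios. A minimal sketch with hypothetical finding names and toy labels (not the study's data):

```python
def micro_precision_recall(annotations, predictions):
    """Micro-averaged precision/recall over multiple binary imaging findings:
    TP/FP/FN are pooled across every (report, finding) pair."""
    tp = fp = fn = 0
    for gold, pred in zip(annotations, predictions):
        for finding, label in gold.items():
            p = pred.get(finding, 0)
            if p and label:
                tp += 1
            elif p and not label:
                fp += 1
            elif label and not p:
                fn += 1
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Hypothetical finding names; two reports, one false-positive hemorrhage call.
gold = [{"ischemia": 1, "hemorrhage": 0}, {"ischemia": 0, "hemorrhage": 1}]
pred = [{"ischemia": 1, "hemorrhage": 1}, {"ischemia": 0, "hemorrhage": 1}]
prec, rec = micro_precision_recall(gold, pred)
```

Pooling this way weights frequent findings more heavily than macro averaging would, which is why guideline-driven reductions in false positives show up directly as precision gains.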

Development and validation of an AI-driven radiomics model using non-enhanced CT for automated severity grading in chronic pancreatitis.

Chen C, Zhou J, Mo S, Li J, Fang X, Liu F, Wang T, Wang L, Lu J, Shao C, Bian Y

PubMed paper · Jun 19 2025
To develop and validate the chronic pancreatitis CT severity model (CATS), an artificial intelligence (AI)-based tool leveraging automated 3D segmentation and radiomics analysis of non-enhanced CT scans for objective severity stratification in chronic pancreatitis (CP). This retrospective study encompassed patients with recurrent acute pancreatitis (RAP) and CP from June 2016 to May 2020. A 3D convolutional neural network segmented non-enhanced CT scans, extracting 1843 radiomic features to calculate the radiomics score (Rad-score). The CATS was formulated using multivariable logistic regression and validated in a subsequent cohort from June 2020 to April 2023. Overall, 2054 patients with RAP and CP were included in the training (n = 927), validation (n = 616), and external test (n = 511) sets. CP grade I and II patients accounted for 300 (14.61%) and 1754 (85.39%), respectively. The Rad-score significantly correlated with the acinus-to-stroma ratio (p = 0.023; OR, -2.44). The CATS model demonstrated high discriminatory performance in differentiating CP severity grades, achieving an area under the curve (AUC) of 0.96 (95% CI: 0.94-0.98) and 0.88 (95% CI: 0.81-0.90) in the validation and test cohorts, respectively. CATS-predicted grades correlated with exocrine insufficiency (all p < 0.05) and showed significant prognostic differences (all p < 0.05). CATS outperformed radiologists in detecting calcifications, identifying all minute calcifications missed by radiologists. The CATS, developed using non-enhanced CT and AI, accurately predicts CP severity, reflects disease morphology, and forecasts short- to medium-term prognosis, offering a significant advancement in CP management. Question: Existing CP severity assessments rely on semi-quantitative CT evaluations and multi-modality imaging, leading to inconsistency and inaccuracy in early diagnosis and prognosis prediction.
Findings: The AI-driven CATS model, using non-enhanced CT, achieved high accuracy in grading CP severity and correlated with histopathological fibrosis markers. Clinical relevance: CATS provides a cost-effective, widely accessible tool for precise CP severity stratification, enabling early intervention, personalized management, and improved outcomes without contrast agents or invasive biopsies.
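At inference time, a radiomics score like the Rad-score is essentially a weighted combination of extracted features that is then dichotomised into severity grades. The sketch below uses invented feature names, weights, and cutoff purely for illustration; the real model's coefficients come from multivariable logistic regression on 1843 features.

```python
def rad_score(features, weights, intercept=0.0):
    """Radiomics score as a weighted sum of extracted feature values.
    Feature names and weights here are hypothetical."""
    return intercept + sum(weights.get(k, 0.0) * v for k, v in features.items())

def cats_grade(score, cutoff=0.0):
    """Dichotomise the score into CP grade I vs. grade II (hypothetical cutoff)."""
    return "II" if score > cutoff else "I"

# Hypothetical standardized feature values and weights for one patient.
feats = {"glcm_entropy": 1.8, "firstorder_mean": -0.4, "shape_volume": 0.9}
w = {"glcm_entropy": 0.7, "firstorder_mean": -0.3, "shape_volume": 0.2}
s = rad_score(feats, w, intercept=-1.0)
grade = cats_grade(s)
```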

A fusion-based deep-learning algorithm predicts PDAC metastasis based on primary tumour CT images: a multinational study.

Xue N, Sabroso-Lasa S, Merino X, Munzo-Beltran M, Schuurmans M, Olano M, Estudillo L, Ledesma-Carbayo MJ, Liu J, Fan R, Hermans JJ, van Eijck C, Malats N

PubMed paper · Jun 19 2025
Diagnosing the presence of pancreatic cancer metastases is pivotal for patient management and treatment, with contrast-enhanced CT scans (CECT) as the cornerstone of diagnostic evaluation. However, this diagnostic modality requires a multifaceted approach. We aimed to develop a convolutional neural network (CNN)-based model (PMPD, Pancreatic cancer Metastasis Prediction Deep-learning algorithm) to predict the presence of metastases based on CECT images of the primary tumour. CECT images in the portal venous phase of 335 patients with pancreatic ductal adenocarcinoma (PDAC) from the PanGenEU study and The First Affiliated Hospital of Zhengzhou University (ZZU) were randomly divided into training and internal validation sets by applying fivefold cross-validation. Two independent external validation datasets, of 143 patients from the Radboud University Medical Center (RUMC), included in the PANCAIM study (RUMC-PANCAIM), and 183 patients from the PREOPANC trial of the Dutch Pancreatic Cancer Group (PREOPANC-DPCG), were used to evaluate the results. The area under the receiver operating characteristic curve (AUROC) for the internally tested model was 0.895 (0.853-0.937) and 0.779 (0.741-0.817) in the PanGenEU and ZZU sets, respectively. In the external validation sets, the mean AUROC was 0.806 (0.787-0.826) for RUMC-PANCAIM and 0.761 (0.717-0.804) for PREOPANC-DPCG. When stratified by metastasis site, the PMPD model achieved average AUROCs of 0.901-0.927 in the PanGenEU, 0.782-0.807 in the ZZU, and 0.761-0.820 in the PREOPANC-DPCG sets. A PMPD-derived Metastasis Risk Score (MRS) (HR: 2.77, 95% CI 1.99 to 3.86, p=1.59e-09) outperformed the Resectability status from the National Comprehensive Cancer Network guideline and the CA19-9 biomarker in predicting overall survival. Meanwhile, the MRS could potentially predict developed metastasis (AUROC: 0.716 for within 3 months, 0.645 for within 6 months).
This study represents a pioneering utilisation of a high-performance deep-learning model to predict extrapancreatic organ metastasis in patients with PDAC.
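AUROC, the headline metric throughout this abstract, equals the probability that a randomly chosen positive case receives a higher model score than a randomly chosen negative one (ties counting half). A minimal pairwise sketch with synthetic scores, not the PMPD model's outputs:

```python
def auroc(scores, labels):
    """AUROC via the pairwise (Mann-Whitney) definition: fraction of
    positive/negative pairs where the positive outscores the negative."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5  # ties count half
    return wins / (len(pos) * len(neg))

# Synthetic scores: one positive case (0.3) is ranked below a negative (0.35, 0.4).
scores = [0.9, 0.3, 0.35, 0.6, 0.2, 0.4]
labels = [1, 1, 0, 1, 0, 0]
auc = auroc(scores, labels)
```

The O(P x N) double loop is fine for a sketch; rank-based formulas give the same value in O(n log n) for large cohorts.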

A Deep Learning Lung Cancer Segmentation Pipeline to Facilitate CT-based Radiomics

So, A. C. P., Cheng, D., Aslani, S., Azimbagirad, M., Yamada, D., Dunn, R., Josephides, E., McDowall, E., Henry, A.-R., Bille, A., Sivarasan, N., Karapanagiotou, E., Jacob, J., Pennycuick, A.

medRxiv preprint · Jun 18 2025
Background: CT-based radio-biomarkers could provide non-invasive insights into tumour biology to risk-stratify patients. One limitation is the laborious manual segmentation of regions of interest (ROIs). We present a deep learning auto-segmentation pipeline for radiomic analysis. Patients and Methods: 153 patients with resected stage 2A-3B non-small cell lung cancers (NSCLCs) had tumours segmented using nnU-Net with review by two clinicians. The nnU-Net was pretrained with anatomical priors in non-cancerous lungs and finetuned on NSCLCs. Three ROIs were segmented: intra-tumoural, peri-tumoural, and whole lung. 1967 features were extracted using PyRadiomics. Feature reproducibility was tested using segmentation perturbations. Features were selected using minimum-redundancy-maximum-relevance with Random Forest-recursive feature elimination nested in 500 bootstraps. Results: Auto-segmentation time was ~36 seconds/series. Mean volumetric and surface Dice-Sorensen coefficient (DSC) scores were 0.84 (±0.28) and 0.79 (±0.34), respectively. DSCs correlated significantly with tumour shape (sphericity, diameter) and location (worse with chest wall adherence), but not with batch effects (e.g. contrast, reconstruction kernel). 6.5% of cases had missed segmentations; 6.5% required major changes. Pre-training on anatomical priors resulted in better segmentations than training on tumour labels alone (p<0.001) or tumour with anatomical labels (p<0.001). Most radiomic features were not reproducible following perturbations and resampling. Adding radiomic features, however, did not significantly improve the clinical model in predicting 2-year disease-free survival: AUCs 0.67 (95% CI 0.59-0.75) vs 0.63 (95% CI 0.54-0.71), respectively (p=0.28). Conclusion: Our study demonstrates that integrating auto-segmentation into radio-biomarker discovery is feasible with high efficiency and accuracy. While radiomic analysis showed limited reproducibility, our auto-segmentation may allow more robust radio-biomarker analysis using deep learning features.
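The volumetric Dice-Sorensen coefficient reported above measures overlap between the automatic and manual masks. A minimal sketch on flattened toy masks (real use would operate on 3-D arrays):

```python
def dice(mask_a, mask_b):
    """Dice-Sorensen coefficient between two binary masks:
    2 * |A intersect B| / (|A| + |B|)."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    size = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / size if size else 1.0  # two empty masks: define as 1

# Toy flattened masks: auto-segmentation vs. clinician-reviewed ground truth.
auto = [1, 1, 1, 0, 0, 1, 0, 0]
manual = [1, 1, 0, 0, 0, 1, 1, 0]
score = dice(auto, manual)
```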