Page 75 of 142 (1416 results)

An AI-based tool for prosthetic crown segmentation serving automated intraoral scan-to-CBCT registration in challenging high artifact scenarios.

Elgarba BM, Ali S, Fontenele RC, Meeus J, Jacobs R

pubmed logopapers · Jul 1 2025
Accurately registering intraoral and cone beam computed tomography (CBCT) scans in patients with metal artifacts poses a significant challenge. Whether a cloud-based platform trained for artificial intelligence (AI)-driven segmentation can improve registration is unclear. The purpose of this clinical study was to validate a cloud-based platform trained for the AI-driven segmentation of prosthetic crowns on CBCT scans and subsequent multimodal intraoral scan-to-CBCT registration in the presence of high metal artifact expression. A dataset consisting of 30 time-matched maxillary and mandibular CBCT and intraoral scans, each containing at least 4 prosthetic crowns, was collected. CBCT acquisition involved placing cotton rolls between the cheeks and teeth to facilitate soft tissue delineation. Segmentation and registration were compared using either a semi-automated (SA) method or an AI-automated (AA) method. SA served as the clinical reference, in which prosthetic crowns and their radicular parts (natural roots or implants) were segmented by thresholding, with point surface-based registration. The AA method included fully automated segmentation and registration based on AI algorithms. Quantitative assessment compared AA's median surface deviation (MSD) and root mean square (RMS) in crown segmentation and subsequent intraoral scan-to-CBCT registration with those of SA. Additionally, segmented crown STL files were voxel-wise analyzed for comparison between AA and SA. A qualitative assessment of AA-based crown segmentation evaluated the need for refinement, while the AA-based registration assessment scrutinized the alignment of the registered intraoral scan with the CBCT teeth and soft tissue contours. Ultimately, the study compared the time efficiency and consistency of both methods. Quantitative outcomes were analyzed with the Kruskal-Wallis, Mann-Whitney, and Student t tests, and qualitative outcomes with the Wilcoxon test (all α=.05).
Consistency was evaluated using the intraclass correlation coefficient (ICC). Quantitatively, the AA method excelled, with a Dice Similarity Coefficient of 0.91 for crown segmentation and an MSD of 0.03 ± 0.05 mm for intraoral scan-to-CBCT registration. Additionally, AA achieved 91% clinically acceptable matches of teeth and gingiva on CBCT scans, surpassing the SA method's 80%. Furthermore, AA was significantly faster than SA (P<.05), being 200 times faster in segmentation and 4.5 times faster in registration. Both AA and SA exhibited excellent consistency in segmentation and registration, with ICC values of 0.99 and 1 for AA and 0.99 and 0.96 for SA, respectively. The novel cloud-based platform demonstrated accurate, consistent, and time-efficient prosthetic crown segmentation, as well as intraoral scan-to-CBCT registration, in scenarios with high artifact expression.
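The 0.91 Dice Similarity Coefficient reported above is the standard overlap score between an automated segmentation and its reference. As a minimal, illustrative sketch (toy flattened voxel masks, not the study's pipeline), Dice can be computed as:

```python
def dice_coefficient(mask_a, mask_b):
    """Dice Similarity Coefficient between two binary masks (flat 0/1 lists)."""
    intersection = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / total

# Toy example: AI-automated mask vs. semi-automated reference
auto = [1, 1, 1, 0, 0, 1]
ref = [1, 1, 0, 0, 1, 1]
print(dice_coefficient(auto, ref))  # 0.75
```

A Dice of 1.0 means perfect overlap; values around 0.9 and above are commonly read as clinically usable segmentations.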

Deep learning radiomics and mediastinal adipose tissue-based nomogram for preoperative prediction of postoperative brain metastasis risk in non-small cell lung cancer.

Niu Y, Jia HB, Li XM, Huang WJ, Liu PP, Liu L, Liu ZY, Wang QJ, Li YZ, Miao SD, Wang RT, Duan ZX

pubmed logopapers · Jul 1 2025
Brain metastasis (BM) significantly affects the prognosis of non-small cell lung cancer (NSCLC) patients. Increasing evidence suggests that adipose tissue influences cancer progression and metastasis. This study aimed to develop a predictive nomogram integrating mediastinal fat area (MFA) and deep learning (DL)-derived tumor characteristics to stratify postoperative BM risk in NSCLC patients. A retrospective cohort of 585 surgically resected NSCLC patients was analyzed. Preoperative computed tomography (CT) scans were utilized to quantify MFA using ImageJ software (radiologist-validated measurements). Concurrently, a DL algorithm extracted tumor radiomic features, generating a deep learning brain metastasis score (DLBMS). Multivariate logistic regression identified independent BM predictors, which were incorporated into a nomogram. Model performance was assessed via area under the receiver operating characteristic curve (AUC), calibration plots, integrated discrimination improvement (IDI), net reclassification improvement (NRI), and decision curve analysis (DCA). Multivariate analysis identified N stage, EGFR mutation status, MFA, and DLBMS as independent predictors of BM. The nomogram achieved superior discriminative capacity (AUC: 0.947 in the test set), significantly outperforming conventional models. MFA contributed substantially to predictive accuracy, with IDI and NRI values confirming its incremental utility (IDI: 0.123, P < 0.001; NRI: 0.386, P = 0.023). Calibration analysis demonstrated strong concordance between predicted and observed BM probabilities, while DCA confirmed clinical net benefit across risk thresholds. This DL-enhanced nomogram, incorporating MFA and tumor radiomics, represents a robust and clinically useful tool for preoperative prediction of postoperative BM risk in NSCLC. The integration of adipose tissue metrics with advanced imaging analytics advances personalized prognostic assessment in NSCLC patients.
The online version contains supplementary material available at 10.1186/s12885-025-14466-5.

Comparison of Deep Learning Models for fast and accurate dose map prediction in Microbeam Radiation Therapy.

Arsini L, Humphreys J, White C, Mentzel F, Paino J, Bolst D, Caccia B, Cameron M, Ciardiello A, Corde S, Engels E, Giagu S, Rosenfeld A, Tehei M, Tsoi AC, Vogel S, Lerch M, Hagenbuchner M, Guatelli S, Terracciano CM

pubmed logopapers · Jul 1 2025
Microbeam Radiation Therapy (MRT) is an innovative radiotherapy modality which uses highly focused synchrotron-generated X-ray microbeams. Current pre-clinical research in MRT mostly relies on Monte Carlo (MC) simulations for dose estimation, which are highly accurate but computationally intensive. Recently, Deep Learning (DL) dose engines have proven effective at generating fast and reliable dose distributions in different RT modalities. However, relatively few studies compare different models on the same task. This work aims to compare a Graph-Convolutional-Network-based DL model, developed in the context of Very High Energy Electron RT, to the Convolutional 3D U-Net that we recently implemented for MRT dose predictions. The two DL solutions are trained with 3D dose maps, generated with the MC toolkit Geant4, in rats used in MRT pre-clinical research. The models are evaluated against Geant4 simulations, used as ground truth, and are assessed in terms of Mean Absolute Error, Mean Relative Error, and a voxel-wise version of the γ-index. Also presented are specific comparisons of predictions in relevant tumor regions, tissue boundaries, and air pockets. The two models are finally compared from the perspective of execution time and model size. This study finds that the two models achieve comparable overall performance. The main differences are found in their dosimetric accuracy within specific regions, such as air pockets, and in their respective inference times. Consequently, the choice between models should be guided primarily by data structure and time constraints, favoring the graph-based method for its flexibility or the 3D U-Net for its faster execution.
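Of the metrics above, Mean Absolute Error and Mean Relative Error reduce to simple voxel-wise averages against the Monte Carlo ground truth (the γ-index is more involved). A hedged sketch with toy dose values (arbitrary units; not the authors' evaluation code):

```python
def mae(pred, truth):
    """Mean Absolute Error over all voxels."""
    return sum(abs(p - t) for p, t in zip(pred, truth)) / len(truth)

def mre(pred, truth, eps=1e-12):
    """Mean Relative Error, skipping near-zero reference doses."""
    terms = [abs(p - t) / t for p, t in zip(pred, truth) if t > eps]
    return sum(terms) / len(terms)

mc_dose = [2.0, 4.0, 8.0, 0.0]  # Monte Carlo reference
dl_dose = [2.2, 3.8, 8.4, 0.0]  # deep-learning prediction
print(round(mae(dl_dose, mc_dose), 3))  # 0.2
```

Skipping near-zero reference voxels in the relative error mirrors common dosimetry practice, since dividing by a vanishing dose makes the metric blow up in regions that are clinically irrelevant.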

Application and optimization of the U-Net++ model for cerebral artery segmentation based on computed tomographic angiography images.

Kim H, Seo KH, Kim K, Shim J, Lee Y

pubmed logopapers · Jul 1 2025
Accurate segmentation of cerebral arteries on computed tomography angiography (CTA) images is essential for the diagnosis and management of cerebrovascular diseases, including ischemic stroke. This study implemented a deep learning-based U-Net++ model for cerebral artery segmentation in CTA images, focusing on optimizing pruning levels by analyzing the trade-off between segmentation performance and computational cost. Dual-energy CTA and direct subtraction CTA datasets were utilized to segment the internal carotid and vertebral arteries in close proximity to the bone. We implemented four pruning levels (L1-L4) in the U-Net++ model and evaluated the segmentation performance using accuracy, intersection over union, F1-score, boundary F1-score, and Hausdorff distance. Statistical analyses were conducted to assess the significance of segmentation performance differences across pruning levels. In addition, we measured training and inference times to evaluate the trade-off between segmentation performance and computational efficiency. Applying deep supervision improved segmentation performance across all factors. While the L4 pruning level achieved the highest segmentation performance, L3 significantly reduced training and inference times (by an average of 51.56 % and 22.62 %, respectively), while incurring only a small decrease in segmentation performance (7.08 %) compared to L4. These results suggest that L3 achieves an optimal balance between performance and computational cost. This study demonstrates that pruning levels in U-Net++ models can be optimized to reduce computational cost while maintaining effective segmentation performance. By simplifying deep learning models, this approach can improve the efficiency of cerebrovascular segmentation, contributing to faster and more accurate diagnoses in clinical settings.
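Of the evaluation metrics listed above, the Hausdorff distance is the least self-explanatory: it is the worst-case nearest-neighbour distance between two segmentation boundaries. A minimal sketch on toy 2D boundary points (illustrative only; real use would operate on extracted vessel contours):

```python
import math

def hausdorff(points_a, points_b):
    """Symmetric Hausdorff distance between two point sets."""
    def directed(src, dst):
        # For each source point, find its nearest neighbour in dst,
        # then take the worst (largest) of those nearest distances.
        return max(min(math.dist(p, q) for q in dst) for p in src)
    return max(directed(points_a, points_b), directed(points_b, points_a))

a = [(0, 0), (0, 1), (1, 0)]  # e.g. predicted vessel boundary
b = [(0, 0), (0, 1), (3, 0)]  # e.g. reference vessel boundary
print(hausdorff(a, b))  # 2.0
```

Unlike overlap scores such as IoU or F1, a single stray boundary voxel can dominate this metric, which is why it is reported alongside them.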

CALIMAR-GAN: An unpaired mask-guided attention network for metal artifact reduction in CT scans.

Scardigno RM, Brunetti A, Marvulli PM, Carli R, Dotoli M, Bevilacqua V, Buongiorno D

pubmed logopapers · Jul 1 2025
High-quality computed tomography (CT) scans are essential for accurate diagnostic and therapeutic decisions, but the presence of metal objects within the body can produce distortions that lower image quality. Deep learning (DL) approaches using image-to-image translation for metal artifact reduction (MAR) show promise over traditional methods but often introduce secondary artifacts. Additionally, most rely on paired simulated data due to the limited availability of real paired clinical data, restricting evaluation on clinical scans to qualitative analysis. This work presents CALIMAR-GAN, a generative adversarial network (GAN) model that employs a guided attention mechanism and a linear interpolation algorithm for targeted artifact reduction, trained on unpaired simulated and clinical data. Quantitative evaluations on simulated images demonstrated superior performance, achieving a PSNR of 31.7, an SSIM of 0.877, and a Fréchet inception distance (FID) of 22.1, outperforming state-of-the-art methods. On real clinical images, CALIMAR-GAN achieved the lowest FID (32.7), validated as a valuable complement to qualitative assessments through correlation with pixel-based metrics (r=-0.797 with PSNR, p<0.01; r=-0.767 with MS-SSIM, p<0.01). This work advances DL-based artifact reduction toward clinical practice with high-fidelity reconstructions that enhance diagnostic accuracy and therapeutic outcomes. Code is available at https://github.com/roberto722/calimar-gan.
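The PSNR of 31.7 reported above follows directly from the mean squared error between a reference and a reconstructed image. A toy sketch (flat 8-bit pixel lists assumed; not the CALIMAR-GAN evaluation code):

```python
import math

def psnr(reference, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two images (flat pixel lists)."""
    mse = sum((r - t) ** 2 for r, t in zip(reference, test)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

ref = [100.0, 120.0, 130.0, 140.0]  # reference pixels
out = [102.0, 118.0, 131.0, 139.0]  # reconstructed pixels
print(round(psnr(ref, out), 1))  # 44.2
```

Higher is better; MAR studies typically pair PSNR with structural metrics such as SSIM because PSNR alone is insensitive to where in the image the error sits.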

Automatic adult age estimation using bone mineral density of proximal femur via deep learning.

Cao Y, Ma Y, Zhang S, Li C, Chen F, Zhang J, Huang P

pubmed logopapers · Jul 1 2025
Accurate adult age estimation (AAE) is critical for forensic and anthropological applications, yet traditional methods relying on bone mineral density (BMD) face significant challenges due to biological variability and methodological limitations. This study aims to develop an end-to-end Deep Learning (DL) based pipeline for automated AAE using BMD from proximal femoral CT scans. The main objectives are to construct a large-scale dataset of 5151 CT scans from real-world clinical and cadaver cohorts, fine-tune the Segment Anything Model (SAM) for accurate femoral bone segmentation, and evaluate multiple convolutional neural networks (CNNs) for precise age estimation based on segmented BMD data. Model performance was assessed through cross-validation, internal clinical testing, and external post-mortem validation. SAM achieved excellent segmentation performance with a Dice coefficient of 0.928 and a mean intersection over union (mIoU) of 0.869. The CNN models achieved an average mean absolute error (MAE) of 5.20 years in cross-validation (male: 5.72; female: 4.51), which improved to 4.98 years in the independent clinical test set (male: 5.32; female: 4.56). External validation on the post-mortem dataset revealed an MAE of 6.91 years, with 6.97 for males and 6.69 for females. Ensemble learning further improved accuracy, reducing MAE to 4.78 years (male: 5.12; female: 4.35) in the internal test set, and 6.58 years (male: 6.64; female: 6.37) in the external validation set. These findings highlight the feasibility of DL-driven AAE and its potential for forensic applications, offering a fully automated framework for robust age estimation.
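The ensemble gain reported above (internal MAE dropping from 4.98 to 4.78 years) comes from averaging each subject's predicted age across models before scoring. A toy sketch with fabricated ages (not the study's CNN outputs):

```python
def mean_abs_error(preds, truths):
    """Mean absolute error in years between predicted and chronological ages."""
    return sum(abs(p - t) for p, t in zip(preds, truths)) / len(truths)

def ensemble(predictions_per_model):
    """Average each subject's prediction across all models."""
    return [sum(col) / len(col) for col in zip(*predictions_per_model)]

true_ages = [30.0, 45.0, 60.0]
model_a = [35.0, 41.0, 66.0]  # MAE 5.0
model_b = [28.0, 47.0, 58.0]  # MAE 2.0
print(mean_abs_error(ensemble([model_a, model_b]), true_ages))  # 1.5
```

When individual models err in opposite directions, averaging partially cancels their errors, so an ensemble can outperform every one of its members.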

Development and validation of a fusion model based on multi-phase contrast CT radiomics combined with clinical features for predicting Ki-67 expression in gastric cancer.

Song T, Xue B, Liu M, Chen L, Cao A, Du P

pubmed logopapers · Jul 1 2025
The present study aimed to develop and validate a fusion model based on multi-phase contrast-enhanced computed tomography (CECT) radiomics features combined with clinical features to preoperatively predict the expression levels of Ki-67 in patients with gastric cancer (GC). A total of 164 patients with GC who underwent surgical treatment at our hospital between September 2015 and September 2023 were retrospectively included and were randomly divided into a training set (n=114) and a testing set (n=50). Using Pyradiomics, radiomics features were extracted from multi-phase CECT images and were combined with significant clinical features through various machine learning algorithms [support vector machine (SVM), random forest (RandomForest), K-nearest neighbors (KNN), LightGBM and XGBoost] to build a fusion model. Receiver operating characteristic (ROC) curves, area under the curve (AUC), calibration curves and decision curve analysis (DCA) were used to evaluate, validate and compare the predictive performance and clinical utility of the model. Among the three single-phase models, for the arterial phase model, the SVM radiomics model had the highest AUC value in the training set, which was 0.697; and the RandomForest radiomics model had the highest AUC value in the testing set, which was 0.658. For the venous phase model, the SVM radiomics model had the highest AUC value in the training set, which was 0.783; and the LightGBM radiomics model had the highest AUC value in the testing set, which was 0.747. For the delayed phase model, the KNN radiomics model had the highest AUC value in the training set, which was 0.772; and the SVM radiomics model had the highest AUC in the testing set, which was 0.719. The clinical feature model had the lowest AUC values in both the training set and the testing set, which were 0.614 and 0.520, respectively.
Notably, the multi-phase model and the fusion model, which were constructed by combining the clinical features and the multi-phase features, demonstrated excellent discriminative performance, with the fusion model achieving AUC values of 0.933 and 0.817 in the training and testing sets, thus outperforming the other models (DeLong test, both P<0.05). The calibration curve showed that the fusion model had good fit (Hosmer-Lemeshow test, P>0.5 in the training and validation sets). The DCA showed that the net benefit of the fusion model in identifying high expression of Ki-67 was improved compared with that of the other models. Furthermore, the fusion model achieved an AUC value of 0.805 in the external validation data from The Cancer Imaging Archive. In conclusion, the fusion model established in the present study was revealed to have excellent performance and is expected to serve as a non-invasive tool for predicting Ki-67 status and guiding clinical treatment.
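The AUC values above have a useful reading as a rank statistic: the probability that the model scores a randomly chosen Ki-67-high case above a randomly chosen Ki-67-low case (the Mann-Whitney U formulation). A toy sketch with fabricated scores:

```python
def auc_from_scores(scores, labels):
    """AUC as P(random positive outscores random negative); ties count 0.5."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical fusion-model scores: label 1 = Ki-67 high, 0 = Ki-67 low
scores = [0.9, 0.8, 0.3, 0.7, 0.2, 0.75]
labels = [1, 1, 0, 1, 0, 0]
print(round(auc_from_scores(scores, labels), 3))  # 0.889
```

An AUC of 0.5 is chance-level ranking; on this reading, the fusion model's 0.817 in the testing set means it ranks a high-expression case above a low-expression one about 82% of the time.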

Acquisition and Reconstruction Techniques for Coronary CT Angiography: Current Status and Trends over the Past Decade.

Fukui R, Harashima S, Samejima W, Shimizu Y, Washizuka F, Kariyasu T, Nishikawa M, Yamaguchi H, Takeuchi H, Machida H

pubmed logopapers · Jul 1 2025
Coronary CT angiography (CCTA) has been widely used as a noninvasive modality for accurate assessment of coronary artery disease (CAD) in clinical settings. However, the following limitations of CCTA remain issues of interest: motion, stair-step, and blooming artifacts; suboptimal image noise; ionizing radiation exposure; administration of contrast medium; and complex imaging workflow. Various acquisition and reconstruction techniques have been introduced over the past decade to overcome these limitations. Low-tube-voltage acquisition using a high-output x-ray tube can reasonably reduce the contrast medium and radiation dose. Fast x-ray tube and gantry rotation, dual-source CT, and a motion-correction algorithm (MCA) can improve temporal resolution and reduce coronary motion artifacts. High-definition CT (HDCT), ultrahigh-resolution CT (UHRCT), and superresolution deep learning reconstruction (DLR) algorithms can improve the spatial resolution and delineation of the vessel lumen with coronary calcifications or stents by reducing blooming artifacts. Whole-heart coverage using area-detector CT can eliminate stair-step artifacts. The DLR algorithm can effectively reduce image noise and radiation dose while maintaining image quality, particularly during high-resolution acquisition using HDCT or UHRCT, during low-tube-voltage acquisition, or when imaging patients with a large body habitus. Automatic cardiac protocol selection, automatic optimal cardiac phase selection, and MCA can improve the imaging workflow for each CCTA examination. A sufficient understanding of current and novel acquisition and reconstruction techniques is important to enhance the clinical value of CCTA for noninvasive assessment of CAD. ©RSNA, 2025. Supplemental material is available for this article.

Agreement between Routine-Dose and Lower-Dose CT with and without Deep Learning-based Denoising for Active Surveillance of Solid Small Renal Masses: A Multiobserver Study.

Borgbjerg J, Breen BS, Kristiansen CH, Larsen NE, Medrud L, Mikalone R, Müller S, Naujokaite G, Negård A, Nielsen TK, Salte IM, Frøkjær JB

pubmed logopapers · Jul 1 2025
Purpose To assess the agreement between routine-dose (RD) and lower-dose (LD) contrast-enhanced CT scans, with and without Digital Imaging and Communications in Medicine-based deep learning-based denoising (DLD), in evaluating small renal masses (SRMs) during active surveillance. Materials and Methods In this retrospective study, CT scans from patients undergoing active surveillance for an SRM were included. Using a validated simulation technique, LD CT images were generated from the RD images to simulate 75% (LD75) and 90% (LD90) radiation dose reductions. Two additional LD image sets, in which the DLD was applied (LD75-DLD and LD90-DLD), were generated. Between January 2023 and June 2024, nine radiologists from three institutions independently evaluated 350 CT scans across five datasets for tumor size, tumor nearness to the collecting system (TN), and tumor shape irregularity (TSI), and interobserver reproducibility and agreement were assessed using the 95% limits of agreement with the mean (LOAM) and Gwet AC2 coefficient, respectively. Subjective and quantitative image quality assessments were also performed. Results The study sample included 70 patients (mean age, 73.2 years ± 9.2 [SD]; 48 male, 22 female). LD75 CT was found to be in agreement with RD scans for assessing SRM diameter, with a LOAM of ±2.4 mm (95% CI: 2.3, 2.6) for LD75 compared with ±2.2 mm (95% CI: 2.1, 2.4) for RD. However, a 90% dose reduction compromised reproducibility (LOAM ±3.0 mm; 95% CI: 2.8, 3.2). LD90-DLD preserved measurement reproducibility (LOAM ±2.4 mm; 95% CI: 2.3, 2.6). Observer agreement was comparable between TN and TSI assessments across all image sets, with no statistically significant differences identified (all comparisons P ≥ .35 for TN and P ≥ .02 for TSI; Holm-corrected significance threshold, P = .013).
Subjective and quantitative image quality assessments confirmed that DLD effectively restored image quality at reduced dose levels: LD75-DLD had the highest overall image quality, significantly lower noise, and improved contrast-to-noise ratio compared with RD (P < .001). Conclusion A 75% reduction in radiation dose is feasible for SRM assessment in active surveillance using CT with a conventional iterative reconstruction technique, whereas applying DLD allows submillisievert dose reduction. Keywords: CT, Urinary, Kidney, Radiation Safety, Observer Performance, Technology Assessment. Supplemental material is available for this article. © RSNA, 2025 See also commentary by Muglia in this issue.
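The LOAM statistic used above generalizes the familiar Bland-Altman 95% limits of agreement from two observers to many. The classic two-observer version below is a simplified sketch of the underlying idea (toy diameters in mm; this is not the multi-observer LOAM estimator the study used):

```python
import statistics

def limits_of_agreement(reader1, reader2):
    """Bland-Altman 95% limits of agreement for paired measurements."""
    diffs = [a - b for a, b in zip(reader1, reader2)]
    bias = statistics.mean(diffs)  # systematic offset between readers
    sd = statistics.stdev(diffs)   # spread of the disagreement
    return bias - 1.96 * sd, bias + 1.96 * sd

# Toy tumor-diameter measurements (mm) from two readers
r1 = [21.0, 18.5, 30.2, 25.0, 15.8]
r2 = [20.4, 19.1, 29.5, 25.6, 15.2]
low, high = limits_of_agreement(r1, r2)
print(round(low, 2), round(high, 2))
```

Narrower limits mean one reading protocol can stand in for another without changing clinical calls, which is how the ±2.4 mm LOAM at a 75% dose reduction supports its use in active surveillance.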

Mechanically assisted non-invasive ventilation for liver SABR: Improve CBCT, treat more accurately.

Pierrard J, Audag N, Massih CA, Garcia MA, Moreno EA, Colot A, Jardinet S, Mony R, Nevez Marques AF, Servaes L, Tison T, den Bossche VV, Etume AW, Zouheir L, Ooteghem GV

pubmed logopapers · Jul 1 2025
Cone-beam computed tomography (CBCT) for image-guided radiotherapy (IGRT) during liver stereotactic ablative radiotherapy (SABR) is degraded by respiratory motion artefacts, potentially jeopardising treatment accuracy. Mechanically assisted non-invasive ventilation-induced breath-hold (MANIV-BH) can reduce these artefacts. This study compares MANIV-BH and free-breathing (FB) CBCTs regarding image quality, IGRT variability, automatic registration accuracy, and deep-learning auto-segmentation performance. Liver SABR CBCTs were presented blindly to 14 operators: 25 patients with FB and 25 with MANIV-BH. They rated CBCT quality and IGRT ease (rigid registration with planning CT). Interoperator IGRT variability was compared between FB and MANIV-BH. Automatic gross tumour volume (GTV) mapping accuracy was compared using automatic rigid registration and image-guided deformable registration. Deep-learning organ-at-risk (OAR) auto-segmentation was rated by an operator, who recorded the time dedicated to manual correction of these volumes. MANIV-BH significantly improved CBCT image quality ("Excellent"/"Good": 83.4 % versus 25.4 % with FB, p < 0.001), facilitated IGRT ("Very easy"/"Easy": 68.0 % versus 38.9 % with FB, p < 0.001), and reduced IGRT variability, particularly for trained operators (overall variability of 3.2 mm versus 4.6 mm with FB, p = 0.010). MANIV-BH improved deep-learning auto-segmentation performance (80.0 % rated "Excellent"/"Good" versus 4.0 % with FB, p < 0.001), and reduced median manual correction time by 54.2 % compared with FB (p < 0.001). However, automatic GTV mapping accuracy was not significantly different between MANIV-BH and FB. In liver SABR, MANIV-BH significantly improves CBCT quality, reduces interoperator IGRT variability, and enhances OAR auto-segmentation. Beyond being safe and effective for respiratory motion mitigation, MANIV increases accuracy during treatment delivery, although its implementation requires resources.
