
Diagnostic performance of lumbar spine CT using deep learning denoising to evaluate disc herniation and spinal stenosis.

Park S, Kang JH, Moon SG

PubMed | Jun 7, 2025
To evaluate the diagnostic performance of lumbar spine CT using deep learning denoising (DLD CT) for detecting disc herniation and spinal stenosis. This retrospective study included 47 patients (229 intervertebral discs from L1/2 to L5/S1; 18 men and 29 women; mean age, 69.1 ± 10.9 years) who underwent lumbar spine CT and MRI within 1 month. CT images were reconstructed using filtered back projection (FBP) and denoised using a deep learning algorithm (ClariCT.AI). Three radiologists independently evaluated standard CT and DLD CT at an 8-week interval for the presence of disc herniation, central canal stenosis, and neural foraminal stenosis. Subjective image quality and diagnostic confidence were also assessed using five-point Likert scales. Standard CT and DLD CT were compared using MRI as the reference standard. DLD CT showed higher sensitivity (60% (70/117) vs. 44% (51/117); p < 0.001) and similar specificity (94% (534/570) vs. 94% (538/570); p = 0.465) for detecting disc herniation. Specificity for detecting central canal stenosis and neural foraminal stenosis was higher with DLD CT (90% (487/540) vs. 86% (466/540), p = 0.003; 94% (1202/1272) vs. 92% (1171/1272), p < 0.001), while sensitivity was comparable (81% (119/147) vs. 77% (113/147), p = 0.233; 83% (85/102) vs. 81% (83/102), p = 0.636). Image quality and diagnostic confidence were superior for DLD CT (all comparisons, p < 0.05). Compared to standard CT, DLD CT can improve diagnostic performance in detecting disc herniation and spinal stenosis, with superior image quality and diagnostic confidence.

Question: Accurate diagnosis of disc herniation and spinal stenosis is limited on lumbar spine CT because of its low soft-tissue contrast.
Findings: Lumbar spine CT using deep learning denoising (DLD CT) demonstrated superior diagnostic performance in detecting disc herniation and spinal stenosis compared to standard CT.
Clinical relevance: DLD CT can be used as a simple and cost-effective screening test.
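
For reference, the reported detection rates follow directly from the pooled counts above (MRI as the reference standard). A minimal sketch, assuming the counts are pooled over the three readers and all disc levels exactly as given in the abstract:

```python
def sensitivity(tp: int, fn: int) -> float:
    """True-positive rate: detected findings / findings present on MRI."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """True-negative rate: correctly cleared discs / discs without the finding on MRI."""
    return tn / (tn + fp)

# Disc herniation, counts taken from the abstract (positives: 117, negatives: 570)
print(f"DLD CT:      sensitivity {sensitivity(70, 117 - 70):.0%}, specificity {specificity(534, 570 - 534):.0%}")
print(f"Standard CT: sensitivity {sensitivity(51, 117 - 51):.0%}, specificity {specificity(538, 570 - 538):.0%}")
```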

Automatic MRI segmentation of masticatory muscles using deep learning enables large-scale muscle parameter analysis.

Ten Brink RSA, Merema BJ, den Otter ME, Jensma ML, Witjes MJH, Kraeima J

PubMed | Jun 7, 2025
Mandibular reconstruction to restore mandibular continuity often relies on patient-specific implants and virtual surgical planning, but current implant designs rarely consider individual biomechanical demands, which are critical for preventing complications such as stress shielding, screw loosening, and implant failure. The inclusion of patient-specific masticatory muscle parameters such as cross-sectional area, vectors, and volume could improve implant success, but manual segmentation of these parameters is time-consuming, limiting large-scale analyses. In this study, a deep learning model was trained for automatic segmentation of eight masticatory muscles on MRI. Forty T1-weighted MRI scans were segmented manually or via pseudo-labelling for training. Training employed 5-fold cross-validation over 1000 epochs per fold, and testing was done on 10 manually segmented scans. The model achieved a mean Dice similarity coefficient (DSC) of 0.88, intersection over union (IoU) of 0.79, precision of 0.87, and recall of 0.89, demonstrating high segmentation accuracy. These results indicate the feasibility of large-scale, reproducible analyses of muscle volumes, directions, and estimated forces. By integrating these parameters into implant design and surgical planning, this method offers a step forward in developing personalized surgical strategies that could improve postoperative outcomes in mandibular reconstruction. This brings the field closer to truly individualized patient care.
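
The reported overlap metrics can all be derived from voxel-wise true/false positives and negatives between a predicted mask and the manual reference. A minimal NumPy sketch, illustrative only and not the authors' evaluation code:

```python
import numpy as np

def segmentation_metrics(pred: np.ndarray, gt: np.ndarray) -> dict:
    """Overlap metrics between a predicted and a manual binary muscle mask."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    return {
        "DSC": 2 * tp / (2 * tp + fp + fn),
        "IoU": tp / (tp + fp + fn),
        "precision": tp / (tp + fp),
        "recall": tp / (tp + fn),
    }

# Toy 3D masks standing in for one muscle label
pred = np.zeros((4, 4, 4), dtype=bool); pred[1:3, 1:3, 1:3] = True
gt = np.zeros((4, 4, 4), dtype=bool);   gt[1:4, 1:3, 1:3] = True
print(segmentation_metrics(pred, gt))   # DSC 0.80, IoU 0.67, precision 1.00, recall 0.67
```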

Simulating workload reduction with an AI-based prostate cancer detection pathway using a prediction uncertainty metric.

Fransen SJ, Bosma JS, van Lohuizen Q, Roest C, Simonis FFJ, Kwee TC, Yakar D, Huisman H

PubMed | Jun 7, 2025
This study compared two uncertainty quantification (UQ) metrics to rule out prostate MRI scans with a high-confidence artificial intelligence (AI) prediction and investigated the resulting potential reduction in radiologists' workload in a clinically significant prostate cancer (csPCa) detection pathway. This retrospective study utilized 1612 MRI scans from three institutes for csPCa (Gleason Grade Group ≥ 2) assessment. We compared the standard diagnostic pathway (radiologist reading) to an AI-based rule-out pathway in terms of efficacy and accuracy in diagnosing csPCa. In the rule-out pathway, 15 AI submodels (trained on 7756 cases) diagnosed each MRI scan, and any prediction deemed uncertain was referred to a radiologist for reading. We compared the mean (meanUQ) and variability (varUQ) of the submodel predictions using the DeLong test on the area under the receiver operating characteristic curve (AUROC). The level of workload reduction of the best UQ method was determined based on maintained sensitivity at non-inferior specificity, using non-inferiority margins of 0.05 and 0.10. The workload reduction of the proposed pathway was institute-specific: up to 20% at a 0.10 non-inferiority margin (p < 0.05), with no significant workload reduction at a 0.05 margin. VarUQ-based rule-out gave higher but non-significant AUROC scores than meanUQ in certain selected cases (+0.05 AUROC, p > 0.05). MeanUQ and varUQ showed promise in AI-based rule-out for csPCa detection. Using varUQ in an AI-based csPCa detection pathway could reduce the number of scans radiologists need to read. The varying performance of the UQ rule-out indicates the need for institute-specific UQ thresholds.

Question: Can AI autonomously assess prostate MRI scans with high certainty at non-inferior performance compared to radiologists, potentially reducing radiologists' workload?
Findings: The optimal ratio of AI-model and radiologist readings is institute-dependent and requires calibration.
Clinical relevance: Semi-autonomous AI-based prostate cancer detection with variational UQ scores shows promise in reducing the number of scans radiologists need to read.
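
As a conceptual sketch of a variance-based rule-out over the 15 submodel predictions (the thresholds, function, and variable names here are hypothetical, not the study's implementation, and would need institute-specific calibration):

```python
import numpy as np

def triage(case_preds: np.ndarray, var_threshold: float = 0.02,
           mean_low: float = 0.2, mean_high: float = 0.8) -> str:
    """Route one MRI scan based on the ensemble of submodel csPCa likelihoods.

    varUQ: disagreement (variance) across submodels; meanUQ: distance of the
    mean prediction from the decision boundary. Thresholds are illustrative only.
    """
    mean_p, var_p = case_preds.mean(), case_preds.var()
    if var_p < var_threshold and (mean_p < mean_low or mean_p > mean_high):
        return "AI handles scan (high-confidence prediction)"
    return "refer to radiologist"

agree_low = np.full(15, 0.05)                  # all submodels: very low csPCa likelihood
disagree  = np.array([0.1, 0.9] * 7 + [0.5])   # submodels disagree strongly
print(triage(agree_low))   # -> AI handles scan (high-confidence prediction)
print(triage(disagree))    # -> refer to radiologist
```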

Comparison of AI-Powered Tools for CBCT-Based Mandibular Incisive Canal Segmentation: A Validation Study.

da Andrade-Bortoletto MFS, Jindanil T, Fontenele RC, Jacobs R, Freitas DQ

PubMed | Jun 7, 2025
Identification of the mandibular incisive canal (MIC) prior to anterior implant placement is often challenging. The present study aimed to validate an enhanced artificial intelligence (AI)-driven model dedicated to automated segmentation of the MIC on cone beam computed tomography (CBCT) scans and to compare its accuracy and time efficiency with simultaneous segmentation of both the mandibular canal (MC) and MIC by either human experts or a previously trained AI model. An enhanced AI model was developed based on 100 CBCT scans using expert-optimized MIC segmentation within the Virtual Patient Creator platform. The performance of the enhanced AI model was tested against human experts and a previously trained AI model using another 40 CBCT scans. Performance metrics included intersection over union (IoU), Dice similarity coefficient (DSC), recall, precision, accuracy, and root mean square error (RMSE). Time efficiency was also evaluated. The enhanced AI model had an IoU of 93%, DSC of 93%, recall of 94%, precision of 93%, accuracy of 99%, and RMSE of 0.23 mm. These values were significantly better than those of the previously trained AI model for all metrics, and better than manual segmentation for IoU, DSC, recall, and accuracy (p < 0.0001). The enhanced AI model also demonstrated significant time efficiency, completing segmentation in 17.6 s (125 times faster than manual segmentation; p < 0.0001). The enhanced AI model enabled accurate, automated MIC segmentation with high time efficiency, and its performance was superior to both human expert segmentation and the previously trained AI model.
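
Most of the reported metrics are overlap measures; the RMSE of 0.23 mm, by contrast, is a distance error. The abstract does not spell out how it is computed, so the one-directional surface-distance variant below is an assumption, shown only to make the unit concrete:

```python
import numpy as np
from scipy.spatial import cKDTree

def surface_rmse(pred_pts: np.ndarray, ref_pts: np.ndarray) -> float:
    """RMSE (mm) over nearest-neighbour distances from predicted to reference
    canal surface points. This one-directional definition is an assumption,
    not necessarily the study's formula."""
    d, _ = cKDTree(ref_pts).query(pred_pts)
    return float(np.sqrt(np.mean(d ** 2)))

# Toy point clouds (N x 3, in mm) standing in for the segmented canal surfaces
ref = np.array([[0.0, 0.0, z] for z in np.linspace(0, 10, 50)])
pred = ref + np.array([0.2, 0.1, 0.0])   # constant in-plane offset of ~0.22 mm
print(f"surface RMSE: {surface_rmse(pred, ref):.2f} mm")
```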

Estimation of tumor coverage after RF ablation of hepatocellular carcinoma using single 2D image slices.

Varble N, Li M, Saccenti L, Borde T, Arrichiello A, Christou A, Lee K, Hazen L, Xu S, Lencioni R, Wood BJ

PubMed | Jun 7, 2025
To assess the technical success of radiofrequency ablation (RFA) in patients with hepatocellular carcinoma (HCC), an artificial intelligence (AI) model was developed to estimate tumor coverage without the need for segmentation or registration tools. A secondary retrospective analysis of 550 patients in the multicenter and multinational OPTIMA trial (3-7 cm solitary HCC lesions, randomized to RFA or RFA + LTLD) identified 182 patients with well-defined pre-RFA tumors and 1-month post-RFA devascularized ablation zones on enhanced CT. The ground truth, percent tumor coverage, was determined from semi-automatic 3D tumor and ablation zone segmentation and elastic registration. The isocenter of the tumor and ablation zone was isolated on 2D axial CT images. Feature extraction was performed, and classification and linear regression models were built. Images were augmented, and 728 image pairs were used for training and testing. The estimated percent tumor coverage from the models was compared to the ground truth. Validation was performed on eight patient cases from a separate institution, where RFA was performed and pre- and post-ablation images were collected. In the testing cohorts, the best accuracy was achieved with classification and moderate data augmentation (AUC = 0.86, TPR = 0.59, TNR = 0.89, accuracy = 69%) and with random forest regression (RMSE = 12.6%, MAE = 9.8%). Validation in a separate institution did not achieve accuracy greater than random estimation. Visual review of training cases suggests that poor tumor coverage may be a result of atypical ablation zone shrinkage 1 month post-RFA, which may not be reflected in clinical utilization. An AI model that uses 2D images at the tumor center, acquired before ablation and 1 month post-ablation, can accurately estimate tumor coverage by the ablation zone; translation to separate validation cohorts, however, could be challenging.
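
The ground-truth quantity has a simple voxel-wise definition once the pre- and post-ablation masks are brought into the same space. A minimal sketch (registration is not shown, and the array names and toy volumes are hypothetical):

```python
import numpy as np

def percent_tumor_coverage(tumor_mask: np.ndarray, ablation_mask: np.ndarray) -> float:
    """Fraction of the pre-RFA tumor volume that lies inside the 1-month
    post-RFA devascularized ablation zone, assuming both masks are already
    registered to the same space."""
    tumor = tumor_mask.astype(bool)
    covered = np.logical_and(tumor, ablation_mask.astype(bool))
    return 100.0 * covered.sum() / tumor.sum()

# Toy example: a 10-voxel tumor of which 8 voxels fall inside the ablation zone
tumor = np.zeros((5, 5, 5), dtype=bool);    tumor[2, 1:3, 0:5] = True
ablation = np.zeros((5, 5, 5), dtype=bool); ablation[2, 1:3, 0:4] = True
print(f"tumor coverage: {percent_tumor_coverage(tumor, ablation):.0f}%")   # 80%
```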

De-identification of medical imaging data: a comprehensive tool for ensuring patient privacy.

Rempe M, Heine L, Seibold C, Hörst F, Kleesiek J

PubMed | Jun 7, 2025
Medical imaging data employed in research frequently comprise sensitive Protected Health Information (PHI) and Personally Identifiable Information (PII), which are subject to rigorous legal frameworks such as the General Data Protection Regulation (GDPR) or the Health Insurance Portability and Accountability Act (HIPAA). Consequently, these types of data must be de-identified prior to utilization, which presents a significant challenge for many researchers. Given the vast array of medical imaging data, it is necessary to employ a variety of de-identification techniques. To facilitate the de-identification process for medical imaging data, we have developed an open-source tool that can be used to de-identify Digital Imaging and Communications in Medicine (DICOM) magnetic resonance images, computed tomography images, whole slide images, and magnetic resonance twix raw data. Furthermore, the implementation of a neural network enables the removal of text within the images. The proposed tool reaches results comparable to current state-of-the-art algorithms at reduced computational time (up to 265× faster). The tool also manages to fully de-identify image data of various types, such as Neuroimaging Informatics Technology Initiative (NIfTI) files or Whole Slide Image (WSI) DICOMs. The proposed tool automates an elaborate de-identification pipeline for multiple types of inputs, reducing the need for additional tools for de-identification of imaging data.

Question: How can researchers effectively de-identify sensitive medical imaging data while complying with legal frameworks to protect patient health information?
Findings: We developed an open-source tool that automates the de-identification of various medical imaging formats, enhancing the efficiency of de-identification processes.
Clinical relevance: This tool addresses the critical need for robust and user-friendly de-identification solutions in medical imaging, facilitating data exchange in research while safeguarding patient privacy.
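
For a sense of what tag-level DICOM scrubbing involves, here is a minimal pydicom sketch; it is not the authors' tool, covers only a handful of attributes, and does not handle burned-in text, NIfTI, WSI, or twix raw data:

```python
import pydicom

# Tags commonly treated as PHI/PII; a real de-identification profile
# (e.g. DICOM PS3.15 Annex E) is far longer -- this list is illustrative only.
PHI_TAGS = ["PatientName", "PatientID", "PatientBirthDate", "PatientAddress",
            "ReferringPhysicianName", "InstitutionName", "AccessionNumber"]

def deidentify(src_path: str, dst_path: str) -> None:
    """Blank out a small set of identifying attributes and drop private tags."""
    ds = pydicom.dcmread(src_path)
    for tag in PHI_TAGS:
        if tag in ds:
            ds.data_element(tag).value = ""   # blank out identifying attributes
    ds.remove_private_tags()                  # drop vendor-private elements
    ds.save_as(dst_path)

# deidentify("scan_0001.dcm", "scan_0001_deid.dcm")   # hypothetical file names
```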

Contribution of Labrum and Cartilage to Joint Surface in Different Hip Deformities: An Automatic Deep Learning-Based 3-Dimensional Magnetic Resonance Imaging Analysis.

Meier MK, Roshardt JA, Ruckli AC, Gerber N, Lerch TD, Jung B, Tannast M, Schmaranzer F, Steppacher SD

PubMed | Jun 7, 2025
Multiple 2-dimensional magnetic resonance imaging (MRI) studies have indicated that the size of the labrum adjusts in response to altered joint loading. In patients with hip dysplasia, it tends to increase as a compensatory mechanism for inadequate acetabular coverage. To determine the differences in labral contribution to the joint surface among different hip deformities, as well as which radiographic parameters influence labral contribution to the joint surface, using a deep learning-based approach for automatic 3-dimensional (3D) segmentation of MRI. Cross-sectional study; Level of evidence, 4. This retrospective study was approved by the local ethics committee with a waiver for informed consent. A total of 98 patients (100 hips) with symptomatic hip deformities undergoing direct hip magnetic resonance arthrography (3 T) between January 2020 and October 2021 were consecutively selected (mean age, 30 ± 9 years; 64% female). The standard imaging protocol included proton density-weighted turbo spin echo images and an axial-oblique 3D T1-weighted MP2RAGE sequence. According to acetabular morphology, hips were divided into subgroups: dysplasia (lateral center-edge [LCE] angle <23°), normal coverage (LCE, 23°-33°), overcoverage (LCE, 33°-39°), severe overcoverage (LCE >39°), and retroversion (retroversion index >10% and all 3 retroversion signs positive). A previously validated deep learning approach for automatic segmentation and software for calculation of the joint surface were used. The labral contribution to the joint surface was defined as follows: labrum surface area/(labrum surface area + cartilage surface area). One-way analysis of variance with Tukey correction for multiple comparisons and linear regression analysis were performed. The mean labral contribution to the joint surface of dysplastic hips was 26% ± 5% (95% CI, 24%-28%), higher than in all other hip deformities (P value range, .001-.036). Linear regression analysis identified LCE angle (β = -.002; P < .001) and femoral torsion (β = .001; P = .008) as independent predictors of labral contribution to the joint surface, with a goodness-of-fit R² of 0.35. The labral contribution to the joint surface differs among hip deformities and is influenced by lateral acetabular coverage and femoral torsion. This study paves the way for a more in-depth understanding of the underlying pathomechanism and a reliable 3D analysis of the hip joint that can be indicative for surgical decision-making in patients with hip deformities.
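
The primary outcome has a simple closed form. A minimal sketch with hypothetical surface areas, chosen only so the example lands on the 26% reported for dysplastic hips:

```python
def labral_contribution(labrum_area_mm2: float, cartilage_area_mm2: float) -> float:
    """Labral contribution to the joint surface, as defined in the study:
    labrum surface area / (labrum surface area + cartilage surface area)."""
    return labrum_area_mm2 / (labrum_area_mm2 + cartilage_area_mm2)

# Hypothetical surface areas from the automatic 3D segmentation (values made up)
print(f"{labral_contribution(650.0, 1850.0):.0%}")   # -> 26%
```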

Diffusion-based image translation model from low-dose chest CT to calcium scoring CT with random point sampling.

Jung JH, Lee JE, Lee HS, Yang DH, Lee JG

PubMed | Jun 7, 2025
Coronary artery calcium (CAC) scoring is an important method for cardiovascular risk assessment. While artificial intelligence (AI) has been applied to automate CAC scoring in calcium scoring computed tomography (CSCT), its application to low-dose computed tomography (LDCT) scans, typically used for lung cancer screening, remains challenging due to the lower image quality and higher noise levels of LDCT. This study presents a diffusion model-based method for converting LDCT to CSCT images with the aim of improving CAC scoring accuracy from LDCT scans. A conditional diffusion model was developed to generate CSCT images from LDCT by modifying the denoising diffusion implicit model (DDIM) sampling process. Two main modifications were introduced: (1) random pointing, a novel sampling technique that enhances the trajectory guidance methodology of DDIM using stochastic Gaussian noise to optimize domain adaptation, and (2) intermediate sampling, an advanced methodology that strategically injects minimal noise into LDCT images prior to sampling to maximize structural preservation. The model was trained on LDCT and CSCT images obtained from the same patients but acquired separately at different time points and patient positions, and validated on 37 test cases. The proposed method showed superior performance compared to widely used image-to-image models (CycleGAN, CUT, DCLGAN, NEGCUT) across several evaluation metrics, including peak signal-to-noise ratio (39.93 ± 0.44 dB), local normalized cross-correlation (0.97 ± 0.01), structural similarity index (0.97 ± 0.01), and Dice similarity coefficient (0.73 ± 0.10). The modifications to the sampling process reduced the number of sampling iterations from 1000 to 10 while maintaining image quality. Volumetric analysis indicated a stronger correlation between the calcium volumes in the generated CSCT images and expert-verified annotations, as compared to the original LDCT images. The proposed method effectively transforms LDCT images to CSCT images while preserving anatomical structures and calcium deposits. The reduction in sampling time and the improved preservation of calcium structures suggest that the method could be applicable for clinical use in cardiovascular risk assessment.
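
As a rough conceptual sketch of the intermediate-sampling idea only (not the authors' code: the noise predictor is a stand-in, the noise schedule is arbitrary, and the random-pointing modification is omitted), a deterministic DDIM loop started from a lightly noised LDCT slice might look like:

```python
import numpy as np

T = 1000
betas = np.linspace(1e-4, 2e-2, T)
ALPHA_BAR = np.cumprod(1.0 - betas)          # cumulative product of alphas

def predict_noise(x_t, t, cond_ldct):
    """Stand-in for a trained conditional noise predictor eps_theta(x_t, t | LDCT).
    This dummy version pretends the clean target equals the conditioning slice."""
    a_bar = ALPHA_BAR[t]
    return (x_t - np.sqrt(a_bar) * cond_ldct) / np.sqrt(1.0 - a_bar)

def intermediate_ddim_sampling(ldct, t_start=100, n_steps=10):
    """Inject a small amount of noise into the LDCT slice at timestep t_start,
    then run only n_steps deterministic DDIM updates (eta = 0) down to t = 0."""
    rng = np.random.default_rng(0)
    a_bar = ALPHA_BAR[t_start]
    x = np.sqrt(a_bar) * ldct + np.sqrt(1.0 - a_bar) * rng.standard_normal(ldct.shape)
    timesteps = np.linspace(t_start, 0, n_steps + 1).astype(int)
    for t, t_prev in zip(timesteps[:-1], timesteps[1:]):
        eps = predict_noise(x, t, ldct)
        x0_pred = (x - np.sqrt(1.0 - ALPHA_BAR[t]) * eps) / np.sqrt(ALPHA_BAR[t])
        x = np.sqrt(ALPHA_BAR[t_prev]) * x0_pred + np.sqrt(1.0 - ALPHA_BAR[t_prev]) * eps
    return x

fake_ldct_slice = np.zeros((64, 64))
print(intermediate_ddim_sampling(fake_ldct_slice).shape)   # (64, 64)
```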

SCAI-Net: An AI-driven framework for optimized, fast, and resource-efficient skull implant generation for cranioplasty using CT images.

Juneja M, Poddar A, Kharbanda M, Sudhir A, Gupta S, Joshi P, Goel A, Fatma N, Gupta M, Tarkas S, Gupta V, Jindal P

PubMed | Jun 7, 2025
Skull damage caused by craniectomy or trauma necessitates accurate and precise Patient-Specific Implant (PSI) design to restore the cranial cavity. Conventional Computer-Aided Design (CAD)-based methods for PSI design are highly infrastructure-intensive, require specialised skills, and are time-consuming, resulting in prolonged patient wait times. Recent advancements in Artificial Intelligence (AI) provide automated, faster, and scalable alternatives. This study introduces the Skull Completion using AI Network (SCAI-Net) framework, a deep-learning-based approach for automated cranial defect reconstruction using Computed Tomography (CT) images. The framework proposes two defect reconstruction variants: SCAI-Net-SDR (Subtraction-based Defect Reconstruction), which first reconstructs the full skull and then performs binary subtraction to obtain the reconstructed defect, and SCAI-Net-DDR (Direct Defect Reconstruction), which generates the reconstructed defect directly without requiring full-skull reconstruction. To enhance model robustness, SCAI-Net was trained on an augmented dataset of 2760 images, created by combining the MUG500+ and SkullFix datasets, featuring artificial defects across multiple cranial regions. Unlike subtraction-based SCAI-Net-SDR, which requires full-skull reconstruction before binary subtraction, and conventional CAD-based methods, which rely on interpolation or mirroring, SCAI-Net-DDR significantly reduces computational overhead. By eliminating the full-skull reconstruction step, DDR reduces training time by 66% (85 min vs. 250 min for SDR) and reduces defect reconstruction time by 99.996% compared to CAD (0.1 s vs. 2400 s). Based on the quantitative evaluation conducted on the SkullFix test cases, SCAI-Net-DDR emerged as the leading model among all evaluated approaches, achieving the highest Dice Similarity Coefficient (DSC: 0.889), a low Hausdorff Distance (HD: 1.856 mm), and a superior Structural Similarity Index (SSIM: 0.897). Similarly, within the subset of subtraction-based reconstruction approaches evaluated, SCAI-Net-SDR demonstrated competitive performance, achieving the best HD (1.855 mm) and the highest SSIM (0.889), confirming its strong standing among methods using the subtraction paradigm. SCAI-Net generates reconstructed defects, which undergo post-processing to ensure manufacturing readiness; steps include surface smoothing, thickness validation, and edge preparation for secure fixation and seamless digital manufacturing compatibility. End-to-end implant generation time demonstrated a 96.68% reduction for DDR (93.5 s) and a 96.64% reduction for SDR (94.6 s), significantly outperforming CAD-based methods (2820 s). Finite Element Analysis (FEA) confirmed the SCAI-Net-generated implants' robust load-bearing capacity under extreme loading conditions (1780 N), while edge gap analysis validated precise anatomical fit. Clinical validation further confirmed boundary accuracy, curvature alignment, and secure fit within the cranial cavity. These results position SCAI-Net as a transformative, time-efficient, and resource-optimized solution for AI-driven cranial defect reconstruction and implant generation.
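
The SDR variant's defect extraction reduces to a voxel-wise Boolean operation between the completed skull and the patient's defective skull. A minimal sketch (array names and toy volumes are hypothetical):

```python
import numpy as np

def defect_by_subtraction(reconstructed_full_skull: np.ndarray,
                          defective_skull: np.ndarray) -> np.ndarray:
    """Binary subtraction as used by the SDR variant: voxels present in the
    network's completed skull but absent from the patient's defective skull
    form the reconstructed defect (the implant region before post-processing)."""
    full = reconstructed_full_skull.astype(bool)
    defective = defective_skull.astype(bool)
    return np.logical_and(full, np.logical_not(defective))

# Toy volumes: a "full skull" slab with a hole cut out of the defective version
full_skull = np.zeros((8, 8, 8), dtype=bool); full_skull[3:5, :, :] = True
defective = full_skull.copy(); defective[3:5, 2:6, 2:6] = False   # the cranial defect
implant_region = defect_by_subtraction(full_skull, defective)
print(implant_region.sum(), "defect voxels")   # 2 * 4 * 4 = 32
```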