
Comparison of AI-Powered Tools for CBCT-Based Mandibular Incisive Canal Segmentation: A Validation Study.

da Andrade-Bortoletto MFS, Jindanil T, Fontenele RC, Jacobs R, Freitas DQ

PubMed · Jun 7 2025
Identification of the mandibular incisive canal (MIC) prior to anterior implant placement is often challenging. The present study aimed to validate an enhanced artificial intelligence (AI)-driven model dedicated to automated segmentation of the MIC on cone beam computed tomography (CBCT) scans and to compare its accuracy and time efficiency with simultaneous segmentation of both the mandibular canal (MC) and MIC by either human experts or a previously trained AI model. The enhanced AI model was developed based on 100 CBCT scans using expert-optimized MIC segmentations within the Virtual Patient Creator platform. Its performance was tested against human experts and a previously trained AI model using another 40 CBCT scans. Performance metrics included intersection over union (IoU), Dice similarity coefficient (DSC), recall, precision, accuracy, and root mean square error (RMSE). Time efficiency was also evaluated. The enhanced AI model achieved an IoU of 93%, DSC of 93%, recall of 94%, precision of 93%, accuracy of 99%, and RMSE of 0.23 mm. These values were significantly higher than those of the previously trained AI model for all metrics, and higher than manual segmentation for IoU, DSC, recall, and accuracy (p < 0.0001). The enhanced AI model was also markedly time efficient, completing segmentation in 17.6 s, 125 times faster than manual segmentation (p < 0.0001). In conclusion, the enhanced AI model enabled accurate automated MIC segmentation with high time efficiency, outperforming both human expert segmentation and the previously trained AI model.
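
As a side note for readers, the overlap metrics reported above (IoU and DSC) are straightforward to compute from binary segmentation masks. The following is a minimal, generic sketch, not the study's evaluation code; the helper name is hypothetical.

```python
import numpy as np

def segmentation_overlap(pred: np.ndarray, gt: np.ndarray) -> dict:
    """IoU and Dice between two binary masks (hypothetical helper)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    total = pred.sum() + gt.sum()
    iou = intersection / union if union else 1.0
    dice = 2 * intersection / total if total else 1.0
    return {"IoU": iou, "DSC": dice}
```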

Estimation of tumor coverage after RF ablation of hepatocellular carcinoma using single 2D image slices.

Varble N, Li M, Saccenti L, Borde T, Arrichiello A, Christou A, Lee K, Hazen L, Xu S, Lencioni R, Wood BJ

PubMed · Jun 7 2025
To assess the technical success of radiofrequency ablation (RFA) in patients with hepatocellular carcinoma (HCC), an artificial intelligence (AI) model was developed to estimate tumor coverage without the need for segmentation or registration tools. A secondary retrospective analysis of 550 patients in the multicenter and multinational OPTIMA trial (3-7 cm solitary HCC lesions, randomized to RFA or RFA + LTLD) identified 182 patients with well-defined pre-RFA tumors and 1-month post-RFA devascularized ablation zones on enhanced CT. The ground truth, percent tumor coverage, was determined from semi-automatic 3D tumor and ablation zone segmentation followed by elastic registration. The isocenter of the tumor and ablation zone was isolated on 2D axial CT images, features were extracted, and classification and linear regression models were built. After image augmentation, 728 image pairs were used for training and testing, and the models' estimated percent tumor coverage was compared to ground truth. Validation was performed on eight patient cases from a separate institution where RFA was performed and pre- and post-ablation images were collected. In testing cohorts, the best results were obtained with classification under moderate data augmentation (AUC = 0.86, TPR = 0.59, TNR = 0.89, accuracy = 69%) and with random forest regression (RMSE = 12.6%, MAE = 9.8%). Validation at the separate institution did not achieve accuracy greater than random estimation. Visual review of training cases suggests that poor estimated tumor coverage may result from atypical ablation zone shrinkage 1 month post-RFA, which may not reflect clinical utilization. In summary, an AI model using single 2D image slices at the tumor center, acquired pre-ablation and 1 month post-ablation, can accurately estimate ablation tumor coverage; however, translation to separate validation cohorts could be challenging.
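
The ground-truth definition above, percent tumor coverage after 3D segmentation and elastic registration, can be illustrated with a minimal sketch. This assumes both masks are already registered to a common voxel grid and is not the study's code.

```python
import numpy as np

def percent_tumor_coverage(tumor_mask: np.ndarray, ablation_mask: np.ndarray) -> float:
    """Fraction of tumor voxels falling inside the ablation zone,
    assuming masks share one registered voxel grid (illustrative only)."""
    tumor = tumor_mask.astype(bool)
    covered = np.logical_and(tumor, ablation_mask.astype(bool)).sum()
    return 100.0 * covered / tumor.sum()
```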

De-identification of medical imaging data: a comprehensive tool for ensuring patient privacy.

Rempe M, Heine L, Seibold C, Hörst F, Kleesiek J

PubMed · Jun 7 2025
Medical imaging data employed in research frequently comprises sensitive Protected Health Information (PHI) and Personally Identifiable Information (PII), which is subject to rigorous legal frameworks such as the General Data Protection Regulation (GDPR) or the Health Insurance Portability and Accountability Act (HIPAA). Consequently, such data must be de-identified prior to use, which presents a significant challenge for many researchers. Given the vast array of medical imaging data, a variety of de-identification techniques must be employed. To facilitate this process, we have developed an open-source tool that de-identifies Digital Imaging and Communications in Medicine (DICOM) magnetic resonance images, computed tomography images, whole slide images, and magnetic resonance twix raw data. Furthermore, an integrated neural network enables the removal of text within the images. The proposed tool reaches results comparable to current state-of-the-art algorithms at reduced computational time (up to 265× faster). It also fully de-identifies image data of various types, such as Neuroimaging Informatics Technology Initiative (NIfTI) or Whole Slide Image (WSI) DICOMs. The tool automates an elaborate de-identification pipeline for multiple input types, reducing the need for additional de-identification tools. Question: How can researchers effectively de-identify sensitive medical imaging data while complying with legal frameworks to protect patient health information? Findings: We developed an open-source tool that automates the de-identification of various medical imaging formats, enhancing the efficiency of de-identification processes. Clinical relevance: This tool addresses the critical need for robust and user-friendly de-identification solutions in medical imaging, facilitating data exchange in research while safeguarding patient privacy.
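
The tag-scrubbing portion of such a DICOM de-identification pipeline can be sketched with pydicom. This minimal example covers header tags only (the authors' tool additionally removes burned-in text with a neural network), and the tag list is illustrative, not exhaustive.

```python
import pydicom

# Tags commonly containing PHI/PII; an illustrative, non-exhaustive list.
PHI_TAGS = ["PatientName", "PatientID", "PatientBirthDate",
            "PatientAddress", "ReferringPhysicianName", "InstitutionName"]

def deidentify(path_in: str, path_out: str) -> None:
    """Blank common PHI header tags and drop private elements (sketch only)."""
    ds = pydicom.dcmread(path_in)
    for tag in PHI_TAGS:
        if tag in ds:
            ds.data_element(tag).value = ""
    ds.remove_private_tags()  # vendor-specific elements may also carry PHI
    ds.save_as(path_out)
```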

Contribution of Labrum and Cartilage to Joint Surface in Different Hip Deformities: An Automatic Deep Learning-Based 3-Dimensional Magnetic Resonance Imaging Analysis.

Meier MK, Roshardt JA, Ruckli AC, Gerber N, Lerch TD, Jung B, Tannast M, Schmaranzer F, Steppacher SD

PubMed · Jun 7 2025
Multiple 2-dimensional magnetic resonance imaging (MRI) studies have indicated that the size of the labrum adjusts in response to altered joint loading; in patients with hip dysplasia, it tends to increase as a compensatory mechanism for inadequate acetabular coverage. The purpose of this study was to determine the differences in labral contribution to the joint surface among different hip deformities, as well as which radiographic parameters influence labral contribution to the joint surface, using a deep learning-based approach for automatic 3-dimensional (3D) segmentation of MRI. Cross-sectional study; Level of evidence, 4. This retrospective study was approved by the local ethics committee with a waiver for informed consent. A total of 98 patients (100 hips) with symptomatic hip deformities undergoing direct hip magnetic resonance arthrography (3 T) between January 2020 and October 2021 were consecutively selected (mean age, 30 ± 9 years; 64% female). The standard imaging protocol included proton density-weighted turbo spin echo images and an axial-oblique 3D T1-weighted MP2RAGE sequence. According to acetabular morphology, hips were divided into subgroups: dysplasia (lateral center-edge [LCE] angle <23°), normal coverage (LCE, 23°-33°), overcoverage (LCE, 33°-39°), severe overcoverage (LCE >39°), and retroversion (retroversion index >10% and all 3 retroversion signs positive). A previously validated deep learning approach for automatic segmentation and software for calculation of the joint surface were used. The labral contribution to the joint surface was defined as labrum surface area/(labrum surface area + cartilage surface area). One-way analysis of variance with Tukey correction for multiple comparisons and linear regression analyses were performed. The mean labral contribution to the joint surface in dysplastic hips was 26% ± 5% (95% CI, 24%-28%), higher than in all other hip deformities (P value range, .001-.036). Linear regression analysis identified LCE angle (β = -.002; P < .001) and femoral torsion (β = .001; P = .008) as independent predictors of labral contribution to the joint surface, with a goodness-of-fit R² of 0.35. The labral contribution to the joint surface differs among hip deformities and is influenced by lateral acetabular coverage and femoral torsion. This study paves the way for a more in-depth understanding of the underlying pathomechanism and a reliable 3D analysis of the hip joint that can inform surgical decision-making in patients with hip deformities.
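
The subgroup thresholds and the labral-contribution ratio defined above translate directly into code. The sketch below is a hypothetical illustration of those definitions, not the study's analysis software.

```python
def acetabular_subgroup(lce_angle: float, retroversion_index: float,
                        retroversion_signs: int) -> str:
    """Map radiographic parameters to the study's morphology subgroups
    (thresholds as stated in the abstract)."""
    if retroversion_index > 10 and retroversion_signs == 3:
        return "retroversion"
    if lce_angle < 23:
        return "dysplasia"
    if lce_angle <= 33:
        return "normal coverage"
    if lce_angle <= 39:
        return "overcoverage"
    return "severe overcoverage"

def labral_contribution(labrum_area: float, cartilage_area: float) -> float:
    """Labral contribution to the joint surface, per the abstract's definition."""
    return labrum_area / (labrum_area + cartilage_area)
```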

Diffusion-based image translation model from low-dose chest CT to calcium scoring CT with random point sampling.

Jung JH, Lee JE, Lee HS, Yang DH, Lee JG

PubMed · Jun 7 2025
Coronary artery calcium (CAC) scoring is an important method for cardiovascular risk assessment. While artificial intelligence (AI) has been applied to automate CAC scoring in calcium scoring computed tomography (CSCT), its application to low-dose computed tomography (LDCT) scans, typically used for lung cancer screening, remains challenging due to the lower image quality and higher noise levels of LDCT. This study presents a diffusion model-based method for converting LDCT to CSCT images with the aim of improving CAC scoring accuracy from LDCT scans. A conditional diffusion model was developed to generate CSCT images from LDCT by modifying the denoising diffusion implicit model (DDIM) sampling process. Two main modifications were introduced: (1) random pointing, a novel sampling technique that enhances the trajectory guidance methodology of DDIM using stochastic Gaussian noise to optimize domain adaptation, and (2) intermediate sampling, an advanced methodology that strategically injects minimal noise into LDCT images prior to sampling to maximize structural preservation. The model was trained on LDCT and CSCT images obtained from the same patients but acquired separately at different time points and patient positions, and validated on 37 test cases. The proposed method showed superior performance compared to widely used image-to-image models (CycleGAN, CUT, DCLGAN, NEGCUT) across several evaluation metrics, including peak signal-to-noise ratio (39.93 ± 0.44), Local Normalized Cross-Correlation (0.97 ± 0.01), structural similarity index (0.97 ± 0.01), and Dice similarity coefficient (0.73 ± 0.10). The modifications to the sampling process reduced the number of iterations from 1000 to 10 while maintaining image quality. Volumetric analysis indicated a stronger correlation between the calcium volumes in the enhanced CSCT images and expert-verified annotations, as compared to the original LDCT images. The proposed method effectively transforms LDCT images to CSCT images while preserving anatomical structures and calcium deposits. The reduction in sampling time and the improved preservation of calcium structures suggest that the method could be applicable for clinical use in cardiovascular risk assessment.
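
A hedged sketch of the two sampling modifications, as we read them from the abstract: a standard DDIM update in which sigma > 0 re-injects stochastic Gaussian noise (loosely analogous to "random pointing"), and a start state formed by lightly noising the LDCT input rather than sampling pure noise ("intermediate sampling"). The authors' exact formulation may differ.

```python
import math
import torch

def ddim_step(x_t: torch.Tensor, eps_pred: torch.Tensor,
              a_bar_t: float, a_bar_prev: float, sigma: float) -> torch.Tensor:
    """One DDIM update (Song et al., 2021). sigma > 0 re-injects Gaussian
    noise into the trajectory; sigma = 0 recovers deterministic DDIM."""
    # Predict x0 from the noise estimate (standard DDIM parameterization).
    x0 = (x_t - math.sqrt(1 - a_bar_t) * eps_pred) / math.sqrt(a_bar_t)
    # Direction pointing back toward x_t.
    dir_xt = math.sqrt(max(1 - a_bar_prev - sigma**2, 0.0)) * eps_pred
    return math.sqrt(a_bar_prev) * x0 + dir_xt + sigma * torch.randn_like(x_t)

def noised_start(ldct: torch.Tensor, a_bar_start: float) -> torch.Tensor:
    """Start sampling from a lightly noised LDCT instead of pure noise,
    preserving anatomical structure (our reading of 'intermediate sampling')."""
    return (math.sqrt(a_bar_start) * ldct
            + math.sqrt(1 - a_bar_start) * torch.randn_like(ldct))
```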

SCAI-Net: An AI-driven framework for optimized, fast, and resource-efficient skull implant generation for cranioplasty using CT images.

Juneja M, Poddar A, Kharbanda M, Sudhir A, Gupta S, Joshi P, Goel A, Fatma N, Gupta M, Tarkas S, Gupta V, Jindal P

PubMed · Jun 7 2025
Skull damage caused by craniectomy or trauma necessitates accurate and precise Patient-Specific Implant (PSI) design to restore the cranial cavity. Conventional Computer-Aided Design (CAD)-based methods for PSI design are highly infrastructure-intensive, require specialised skills, and are time-consuming, resulting in prolonged patient wait times. Recent advancements in Artificial Intelligence (AI) provide automated, faster, and scalable alternatives. This study introduces the Skull Completion using AI Network (SCAI-Net) framework, a deep-learning-based approach for automated cranial defect reconstruction using Computed Tomography (CT) images. The framework proposes two defect reconstruction variants: SCAI-Net-SDR (Subtraction-based Defect Reconstruction), which first reconstructs the full skull and then performs binary subtraction to obtain the reconstructed defect, and SCAI-Net-DDR (Direct Defect Reconstruction), which generates the reconstructed defect directly without requiring full-skull reconstruction. To enhance model robustness, SCAI-Net was trained on an augmented dataset of 2760 images, created by combining the MUG500+ and SkullFix datasets, featuring artificial defects across multiple cranial regions. Unlike subtraction-based SCAI-Net-SDR, which requires full-skull reconstruction before binary subtraction, and conventional CAD-based methods, which rely on interpolation or mirroring, SCAI-Net-DDR significantly reduces computational overhead: by eliminating the full-skull reconstruction step, DDR reduces training time by 66 % (85 min vs. 250 min for SDR) and cuts defect reconstruction time by 99.996 % compared to CAD (0.1 s vs. 2400 s). In the quantitative evaluation on the SkullFix test cases, SCAI-Net-DDR emerged as the leading model among all evaluated approaches, achieving the highest Dice Similarity Coefficient (DSC: 0.889), a low Hausdorff Distance (HD: 1.856 mm), and a superior Structural Similarity Index (SSIM: 0.897). Within the subset of subtraction-based reconstruction approaches, SCAI-Net-SDR demonstrated competitive performance, achieving the best HD (1.855 mm) and the highest SSIM (0.889), confirming its strong standing among methods using the subtraction paradigm. SCAI-Net's reconstructed defects undergo post-processing to ensure manufacturing readiness, including surface smoothing, thickness validation, and edge preparation for secure fixation and seamless digital-manufacturing compatibility. End-to-end implant generation time demonstrated a 96.68 % reduction for DDR (93.5 s) and a 96.64 % reduction for SDR (94.6 s), significantly outperforming CAD-based methods (2820 s). Finite Element Analysis (FEA) confirmed the robust load-bearing capacity of SCAI-Net-generated implants under extreme loading (1780 N) conditions, while edge gap analysis validated precise anatomical fit. Clinical validation further confirmed boundary accuracy, curvature alignment, and a secure fit within the cranial cavity. These results position SCAI-Net as a transformative, time-efficient, and resource-optimized solution for AI-driven cranial defect reconstruction and implant generation.
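
The binary-subtraction step that distinguishes SCAI-Net-SDR from SCAI-Net-DDR can be illustrated in a few lines. This is a minimal sketch of the general idea, assuming voxelized binary skull masks, not the framework's implementation.

```python
import numpy as np

def defect_from_subtraction(full_skull: np.ndarray,
                            defective_skull: np.ndarray) -> np.ndarray:
    """SDR-style step: voxels present in the reconstructed full skull but
    absent from the defective input form the implant-shaped defect."""
    return np.logical_and(full_skull.astype(bool),
                          np.logical_not(defective_skull.astype(bool)))
```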

RARL: Improving Medical VLM Reasoning and Generalization with Reinforcement Learning and LoRA under Data and Hardware Constraints

Tan-Hanh Pham, Chris Ngo

arXiv preprint · Jun 7 2025
The growing integration of vision-language models (VLMs) in medical applications offers promising support for diagnostic reasoning. However, current medical VLMs often face limitations in generalization, transparency, and computational efficiency: barriers that hinder deployment in real-world, resource-constrained settings. To address these challenges, we propose a Reasoning-Aware Reinforcement Learning framework, RARL, that enhances the reasoning capabilities of medical VLMs while remaining efficient and adaptable to low-resource environments. Our approach fine-tunes a lightweight base model, Qwen2-VL-2B-Instruct, using Low-Rank Adaptation (LoRA) and custom reward functions that jointly consider diagnostic accuracy and reasoning quality. Training is performed on a single NVIDIA A100-PCIE-40GB GPU, demonstrating the feasibility of deploying such models in constrained environments. We evaluate the model using an LLM-as-judge framework that scores both correctness and explanation quality. Experimental results show that RARL significantly improves VLM performance in medical image analysis and clinical reasoning, outperforming supervised fine-tuning on reasoning-focused tasks by approximately 7.78%, while requiring fewer computational resources. Additionally, we demonstrate the generalization capabilities of our approach on unseen datasets, achieving around 27% improved performance compared to supervised fine-tuning and about 4% over traditional RL fine-tuning. Our experiments also illustrate that diversity prompting during training and reasoning prompting during inference are crucial for enhancing VLM performance. Our findings highlight the potential of reasoning-guided learning and reasoning prompting to steer medical VLMs toward more transparent, accurate, and resource-efficient clinical decision-making. Code and data are publicly available.
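
For readers unfamiliar with the setup, a LoRA fine-tuning configuration for Qwen2-VL-2B-Instruct might look like the sketch below (using Hugging Face peft). The hyperparameters, target modules, and reward weighting are assumptions for illustration, not the paper's reported values.

```python
from peft import LoraConfig, get_peft_model
from transformers import Qwen2VLForConditionalGeneration

# Assumed hyperparameters and target modules, not the paper's values.
lora_cfg = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = Qwen2VLForConditionalGeneration.from_pretrained("Qwen/Qwen2-VL-2B-Instruct")
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # only the low-rank adapters train

def reward(correct: bool, reasoning_score: float, w: float = 0.5) -> float:
    """Hypothetical scalar reward mixing diagnostic accuracy with an
    LLM-judged reasoning-quality score in [0, 1]."""
    return w * float(correct) + (1 - w) * reasoning_score
```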

NeXtBrain: Combining local and global feature learning for brain tumor classification.

Pacal I, Akhan O, Deveci RT, Deveci M

PubMed · Jun 7 2025
The accurate and timely diagnosis of brain tumors is of paramount clinical significance for effective treatment planning and improved patient outcomes. While deep learning has advanced medical image analysis, concurrently achieving high classification accuracy, robust generalization, and computational efficiency remains a formidable challenge. This is often due to the difficulty in optimally capturing both fine-grained local tumor features and their broader global contextual cues without incurring substantial computational costs. This paper introduces NeXtBrain, a novel hybrid architecture meticulously designed to overcome these limitations. NeXtBrain's core innovations, the NeXt Convolutional Block (NCB) and the NeXt Transformer Block (NTB), synergistically enhance feature learning: NCB leverages Multi-Head Convolutional Attention and a SwiGLU-based MLP to precisely extract subtle local tumor morphologies and detailed textures, while NTB integrates self-attention with convolutional attention and a SwiGLU MLP to effectively model long-range spatial dependencies and global contextual relationships, crucial for differentiating complex tumor characteristics. Evaluated on two publicly available benchmark datasets, Figshare and Kaggle, NeXtBrain was rigorously compared against 17 state-of-the-art (SOTA) models. On Figshare, it achieved 99.78 % accuracy and a 99.77 % F1-score. On Kaggle, it attained 99.78 % accuracy and a 99.81 % F1-score, surpassing leading SOTA ViT, CNN, and hybrid models. Critically, NeXtBrain demonstrates exceptional computational efficiency, achieving these SOTA results with only 23.91 million parameters, requiring just 10.32 GFLOPs, and exhibiting a rapid inference time of 0.007 ms. This efficiency allows it to outperform significantly larger models such as DeiT3-Base (85.82 M parameters) and Swin-Base (86.75 M parameters) in both accuracy and computational demand.
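
The SwiGLU-based MLP used in both NCB and NTB follows a well-known gated feed-forward pattern. Below is a generic PyTorch sketch of a SwiGLU block, with dimensions chosen for illustration rather than taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SwiGLU(nn.Module):
    """Generic SwiGLU feed-forward block (Shazeer, 2020): the gate path is
    passed through SiLU and multiplies the up-projection elementwise."""
    def __init__(self, dim: int, hidden: int):
        super().__init__()
        self.w_gate = nn.Linear(dim, hidden, bias=False)
        self.w_up = nn.Linear(dim, hidden, bias=False)
        self.w_down = nn.Linear(hidden, dim, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.w_down(F.silu(self.w_gate(x)) * self.w_up(x))

# Usage: project 256-dim tokens through a 512-dim gated hidden layer.
mlp = SwiGLU(dim=256, hidden=512)
out = mlp(torch.randn(4, 196, 256))
```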

Physics-informed neural networks for denoising high b-value diffusion-weighted images.

Lin Q, Yang F, Yan Y, Zhang H, Xie Q, Zheng J, Yang W, Qian L, Liu S, Yao W, Qu X

PubMed · Jun 7 2025
Diffusion-weighted imaging (DWI) is widely applied in tumor diagnosis by measuring the diffusion of water molecules. To increase sensitivity for tumor identification, faithful high b-value DWI images are desired, which are acquired by applying a stronger gradient field in magnetic resonance imaging (MRI). However, high b-value DWI images suffer from a reduced signal-to-noise ratio due to the exponential decay of signal intensity, making noise removal important. Here, we propose a Physics-Informed neural Network for high b-value DWI image Denoising (PIND) that leverages a physics-informed loss and prior information from low b-value DWI images with high signal-to-noise ratio. Experiments are conducted on a prostate DWI dataset of 125 subjects. Compared with the original noisy images, PIND improves the peak signal-to-noise ratio from 31.25 dB to 36.28 dB and the structural similarity index measure from 0.77 to 0.92. Our scheme can save 83% of data acquisition time, since fewer averages of high b-value DWI images need to be acquired, while maintaining 98% accuracy of the apparent diffusion coefficient (ADC) value, suggesting its potential effectiveness in preserving essential diffusion characteristics. A reader study by 4 radiologists (3, 6, 13, and 18 years of experience) indicates PIND's promising performance on overall quality, signal-to-noise ratio, artifact suppression, and lesion conspicuity, showing potential for improving clinical DWI applications.
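
The physics such a loss builds on is the mono-exponential DWI decay S(b) = S0·exp(-b·ADC). Below is a minimal sketch of an ADC map from two b-values and a hypothetical physics-consistency term, as an illustration of the idea rather than PIND's actual loss.

```python
import torch

def adc_map(s_low: torch.Tensor, s_high: torch.Tensor,
            b_low: float, b_high: float, eps: float = 1e-6) -> torch.Tensor:
    """ADC from two b-value images via S(b) = S0 * exp(-b * ADC):
    ADC = ln(S_low / S_high) / (b_high - b_low)."""
    return torch.log((s_low + eps) / (s_high + eps)) / (b_high - b_low)

def physics_loss(denoised_high: torch.Tensor, s_low: torch.Tensor,
                 b_low: float, b_high: float, adc: torch.Tensor) -> torch.Tensor:
    """Hypothetical consistency term: the denoised high-b image should match
    the decay predicted from the low-b image and the ADC map."""
    predicted = s_low * torch.exp(-(b_high - b_low) * adc)
    return torch.mean((denoised_high - predicted) ** 2)
```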