
Intelligent Virtual Dental Implant Placement via 3D Segmentation Strategy.

Cai G, Wen B, Gong Z, Lin Y, Liu H, Zeng P, Shi M, Wang R, Chen Z

PubMed · Jun 23 2025
Virtual dental implant placement in cone-beam computed tomography (CBCT) is a prerequisite for digital implant surgery and carries clear clinical significance. However, manual placement is a complex process that must meet essential clinical requirements of restoration orientation, bone adaptation, and anatomical safety. This complexity makes it challenging to balance multiple considerations comprehensively and to automate the entire workflow efficiently. This study aims to achieve intelligent virtual dental implant placement through a 3-dimensional (3D) segmentation strategy. Focusing on missing mandibular first molars, we developed a segmentation module based on nnU-Net to generate the virtual implant from the edentulous region of CBCT and employed an approximation module for mathematical optimization. The generated virtual implant was integrated with the original CBCT to meet clinical requirements. A total of 190 CBCT scans from 4 centers were collected for model development and testing. The tool segmented the virtual implant with surface Dice coefficients (sDice) of 0.903 and 0.884 on the internal and external testing sets, respectively. Compared to the ground truth, the average deviations of the implant platform, implant apex, and angle were 0.850 ± 0.554 mm, 1.442 ± 0.539 mm, and 4.927 ± 3.804° on the internal testing set and 0.822 ± 0.353 mm, 1.467 ± 0.560 mm, and 5.517 ± 2.850° on the external testing set, respectively. The 3D segmentation-based artificial intelligence tool demonstrated good performance in predicting both the dimensions and position of virtual implants, showing significant potential for clinical application in implant planning.
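
The reported sDice has a standard form: the fraction of each mask's boundary surface lying within a distance tolerance of the other's boundary, averaged over both surfaces. A minimal sketch under that definition follows; the abstract does not give the tolerance or the surface extraction used, so the voxel-shell boundary and spacing handling here are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def surface_dice(pred, gt, tol_mm, spacing=(1.0, 1.0, 1.0)):
    """Fraction of boundary voxels of each mask lying within tol_mm of the
    other mask's boundary, averaged over both surfaces."""
    def boundary(mask):
        # One-voxel surface shell: mask minus its erosion.
        return mask & ~ndimage.binary_erosion(mask)

    bp, bg = boundary(pred.astype(bool)), boundary(gt.astype(bool))
    # Distance (in mm) from every voxel to the nearest boundary voxel.
    dist_to_gt = ndimage.distance_transform_edt(~bg, sampling=spacing)
    dist_to_pred = ndimage.distance_transform_edt(~bp, sampling=spacing)
    hits = (dist_to_gt[bp] <= tol_mm).sum() + (dist_to_pred[bg] <= tol_mm).sum()
    return hits / max(bp.sum() + bg.sum(), 1)
```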

Development and validation of a SOTA-based system for biliopancreatic segmentation and station recognition in EUS.

Zhang J, Zhang J, Chen H, Tian F, Zhang Y, Zhou Y, Jiang Z

PubMed · Jun 23 2025
Endoscopic ultrasound (EUS) is a vital tool for diagnosing biliopancreatic disease, offering detailed imaging to identify key abnormalities. Its interpretation demands expertise, which limits its accessibility for less trained practitioners, so tools that assist in interpreting EUS images are crucial for improving diagnostic accuracy and efficiency. This work develops an AI-assisted EUS system for accurate pancreatic and biliopancreatic duct segmentation and evaluates its impact on endoscopists' ability to identify biliopancreatic diseases during segmentation and anatomical localization. The EUS-AI system was designed to perform station positioning and anatomical structure segmentation. A total of 45,737 EUS images from 1852 patients were used for model training; 2881 images were held out for internal testing, and 2747 images from 208 patients were used for external validation. An additional 340 images formed a man-machine competition test set. Several state-of-the-art (SOTA) deep learning algorithms were compared during development. In the station recognition (classification) task, the Mean Teacher algorithm achieved the highest accuracy against ResNet-50 and YOLOv8-CLS, averaging 95.60% (92.07%-99.12%) on the internal test set and 92.72% (88.30%-97.15%) on the external test set. For segmentation, the U-Net v2 algorithm was optimal compared to UNet++ and YOLOv8. The final EUS-AI system was constructed from the optimal models for the two tasks, and a man-machine competition experiment was conducted. The results demonstrated that the EUS-AI system significantly outperformed mid-level endoscopists in both station recognition (p < 0.001) and pancreas and biliopancreatic duct segmentation (p < 0.001, p = 0.004). The EUS-AI system is expected to significantly shorten the learning curve for pancreatic EUS examination and enhance procedural standardization.
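
The winning Mean Teacher algorithm is a standard semi-supervised method: a student classifier is trained on labeled images plus a consistency loss against a teacher whose weights are an exponential moving average (EMA) of the student's. A minimal PyTorch sketch of the two core pieces, with the backbone, batching, and augmentation pipeline left abstract:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def ema_update(teacher, student, alpha=0.99):
    # Teacher weights track the student as an exponential moving average.
    for t_p, s_p in zip(teacher.parameters(), student.parameters()):
        t_p.mul_(alpha).add_(s_p, alpha=1.0 - alpha)

def mean_teacher_loss(student, teacher, x_lab, y_lab, x_unlab, w_cons=1.0):
    sup = F.cross_entropy(student(x_lab), y_lab)        # supervised term
    with torch.no_grad():
        # In practice the teacher and student see differently augmented
        # views of the same unlabeled images.
        t_prob = F.softmax(teacher(x_unlab), dim=1)     # teacher targets
    s_prob = F.softmax(student(x_unlab), dim=1)
    cons = F.mse_loss(s_prob, t_prob)                   # consistency term
    return sup + w_cons * cons
```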

Chest X-ray Foundation Model with Global and Local Representations Integration.

Yang Z, Xu X, Zhang J, Wang G, Kalra MK, Yan P

PubMed · Jun 23 2025
Chest X-ray (CXR) is the most frequently ordered imaging test, supporting diverse clinical tasks from thoracic disease detection to postoperative monitoring. However, task-specific classification models are limited in scope, require costly labeled data, and lack generalizability to out-of-distribution datasets. To address these challenges, we introduce CheXFound, a self-supervised vision foundation model that learns robust CXR representations and generalizes effectively across a wide range of downstream tasks. We pretrained CheXFound on the curated CXR-987K dataset, comprising approximately 987K unique CXRs from 12 publicly available sources. We propose a Global and Local Representations Integration (GLoRI) head for downstream adaptation, which combines fine- and coarse-grained disease-specific local features with global image features to enhance multilabel classification. In our experiments, CheXFound outperformed state-of-the-art models in classifying 40 disease findings across different prevalence levels on the CXR-LT 24 dataset and exhibited superior label efficiency on downstream tasks with limited training data. CheXFound also achieved significant improvements on out-of-distribution downstream tasks, including opportunistic cardiovascular disease risk estimation, mortality prediction, malpositioned tube detection, and anatomical structure segmentation. These results demonstrate CheXFound's strong generalization capabilities, which should enable diverse downstream adaptations with improved label efficiency. The project source code is publicly available at https://github.com/RPIDIAL/CheXFound.
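
The abstract describes GLoRI only at a high level (disease-specific local features fused with global image features for multilabel classification), so the following is a speculative sketch of one plausible design, not the authors' implementation: learnable per-disease queries cross-attend over patch tokens, and each pooled local feature is concatenated with the global feature to produce that disease's logit. All names and shapes are assumptions.

```python
import torch
import torch.nn as nn

class GLoRIHeadSketch(nn.Module):
    def __init__(self, dim: int, num_diseases: int):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_diseases, dim))
        self.attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        self.classifier = nn.Linear(2 * dim, 1)  # local + global per disease

    def forward(self, patch_tokens, global_feat):
        # patch_tokens: (B, N, D) local features; global_feat: (B, D).
        B = patch_tokens.size(0)
        q = self.queries.unsqueeze(0).expand(B, -1, -1)      # (B, C, D)
        local, _ = self.attn(q, patch_tokens, patch_tokens)  # (B, C, D)
        g = global_feat.unsqueeze(1).expand_as(local)        # (B, C, D)
        # One logit per disease from the fused local + global feature.
        return self.classifier(torch.cat([local, g], -1)).squeeze(-1)
```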

DCLNet: Double Collaborative Learning Network on Stationary-Dynamic Functional Brain Network for Brain Disease Classification.

Zhou J, Jie B, Wang Z, Zhang Z, Bian W, Yang Y, Li H, Sun F, Liu M

PubMed · Jun 23 2025
Stationary functional brain networks (sFBNs) and dynamic functional brain networks (dFBNs) derived from resting-state functional MRI (rs-fMRI) characterize the complex interactions of the human brain from different aspects and offer complementary information for brain disease analysis. Most current studies focus on either sFBN or dFBN analysis alone, limiting the performance of brain network analysis. A few works have explored integrating sFBNs and dFBNs to identify brain diseases and achieved better performance than conventional methods, but they still ignore valuable discriminative information, such as the distribution of subjects between and within categories. This paper presents a Double Collaborative Learning Network (DCLNet), which takes advantage of both a collaborative encoder and collaborative contrastive learning to learn the complementary information of sFBNs and dFBNs and the distribution of subjects between and within categories for brain disease classification. Specifically, we first construct the sFBN and dFBN from rs-fMRI data using traditional correlation-based methods. We then build a collaborative encoder to extract brain network features at different levels (i.e., connectivity-based, brain-region-based, and brain-network-based features), and design a prune-graft transformer module to embed the complementary information between the two kinds of FBNs at each feature level. We also develop a collaborative contrastive learning module to capture the distribution of subjects between and within categories, thereby learning more discriminative brain network features. We evaluate DCLNet on two real brain disease datasets with rs-fMRI data, with experimental results demonstrating the superiority of the proposed method.
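
The sFBN and dFBN construction the paper starts from is described as "traditional correlation-based methods"; a minimal sketch under that description uses full-scan Pearson correlation for the stationary network and sliding-window correlations for the dynamic one. The window length and stride below are illustrative assumptions.

```python
import numpy as np

def stationary_fbn(ts: np.ndarray) -> np.ndarray:
    """ts: (T, R) rs-fMRI time series, T time points x R brain regions."""
    return np.corrcoef(ts.T)                    # (R, R) static network

def dynamic_fbn(ts: np.ndarray, win: int = 30, stride: int = 5) -> np.ndarray:
    """One correlation network per sliding window over the time series."""
    T = ts.shape[0]
    mats = [np.corrcoef(ts[s:s + win].T)
            for s in range(0, T - win + 1, stride)]
    return np.stack(mats)                       # (windows, R, R)
```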

Self-Supervised Optimization of RF Data Coherence for Improving Breast Reflection UCT Reconstruction.

He L, Liu Z, Cai Y, Zhang Q, Zhou L, Yuan J, Xu Y, Ding M, Yuchi M, Qiu W

PubMed · Jun 23 2025
Reflection Ultrasound Computed Tomography (UCT) is gaining prominence as an essential instrument for breast cancer screening. However, reflection UCT quality is often compromised by the variability of sound speed across breast tissue. Reflection UCT traditionally uses the Delay and Sum (DAS) algorithm, in which the Time of Flight, conventionally computed under an oversimplified assumption of uniform sound speed, significantly affects the coherence of the reflected radio frequency (RF) data. This study introduces three carefully engineered modules that leverage the spatial correlation of the receiving arrays to improve the coherence of RF data and enable more effective summation: a self-supervised blind RF data segment block (BSegB), a state-space-model-based strong reflection prediction block (SSM-SRP), and a polarity-based adaptive replacing refinement (PARR) strategy that suppresses sidelobe noise caused by aperture narrowing. We assessed the method with standard image quality metrics, including Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index Measure (SSIM), and Root Mean Squared Error (RMSE); the coherence factor (CF) and variance (Var) were additionally employed to verify the method's ability to enhance signal coherence at the RF data level. The approach substantially improves performance, achieving an average PSNR of 19.64 dB, an average SSIM of 0.71, and an average RMSE of 0.10, notably under sparse-transmission conditions. Experimental analyses confirm the superior performance of our framework compared to alternative enhancement strategies, including adaptive beamforming methods and deep learning-based beamforming approaches.
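
Since the evaluation rests on the DAS beamformer and the coherence factor, a minimal sketch of both may help. It assumes per-channel RF samples already aligned by time-of-flight delays (the step the proposed modules aim to improve) and uses the standard definition CF = |Σ s_i|² / (N · Σ |s_i|²), which equals 1 for perfectly coherent channel data.

```python
import numpy as np

def das_and_cf(aligned: np.ndarray):
    """aligned: (N,) per-channel RF samples for one pixel, after applying
    time-of-flight delays. Returns the DAS value and the coherence factor."""
    n = aligned.size
    das = aligned.sum()                                   # delay-and-sum
    cf = np.abs(das) ** 2 / (n * np.sum(np.abs(aligned) ** 2) + 1e-12)
    return das, cf
```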

Evaluation of deep learning reconstruction in accelerated knee MRI: comparison of visual and diagnostic performance metrics.

Wen S, Xu Y, Yang G, Huang F, Zeng Z

PubMed · Jun 23 2025
To investigate the clinical value of deep learning reconstruction (DLR) in accelerated magnetic resonance imaging (MRI) of the knee, and to compare its visual quality and diagnostic performance metrics with conventional fast spin-echo T2-weighted imaging with fat suppression (FSE-T2WI-FS). This prospective study included 116 patients with knee injuries. All patients underwent both conventional FSE-T2WI-FS and DLR-accelerated FSE-T2WI-FS scans on a 1.5-T MRI scanner. Two radiologists independently evaluated overall image quality, artifacts, and image sharpness using a 5-point Likert scale. The signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) of lesion regions were measured. Subjective scores were compared using the Wilcoxon signed-rank test, SNR/CNR differences were analyzed via paired t tests, and inter-reader agreement was assessed using Cohen's kappa. The DLR-accelerated sequences achieved a 36% reduction in total scan time compared to conventional sequences (p < 0.05), shortening acquisition from 9 min 50 s to 6 min 15 s. Moreover, DLR demonstrated superior artifact suppression and enhanced quantitative image quality, with significantly higher SNR and CNR (p < 0.001). Despite these improvements, diagnostic equivalence was maintained: no significant differences were observed in overall image quality, sharpness (p > 0.05), or lesion detection rates. Inter-reader agreement was good (κ > 0.75), further validating the clinical reliability of the DLR technique. DLR-accelerated FSE-T2WI-FS reduces scan time, suppresses artifacts, and improves quantitative image quality while maintaining diagnostic accuracy comparable to conventional sequences. This technology holds promise for optimizing clinical workflows in knee MRI.
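
The SNR and CNR reported here are standard ROI-based measurements; a minimal sketch assuming the common definitions SNR = mean(lesion) / std(background noise) and CNR = |mean(lesion) − mean(reference tissue)| / std(background noise):

```python
import numpy as np

def snr_cnr(img: np.ndarray, lesion_mask, ref_mask, noise_mask):
    """Boolean masks select the lesion ROI, an adjacent normal-tissue ROI,
    and a background (noise) ROI on the same image."""
    sig = img[lesion_mask].mean()
    ref = img[ref_mask].mean()
    noise = img[noise_mask].std()
    return sig / noise, abs(sig - ref) / noise
```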

Clinical benefits of deep learning-assisted ultrasound in predicting lymph node metastasis in pancreatic cancer patients.

Wen DY, Chen JM, Tang ZP, Pang JS, Qin Q, Zhang L, He Y, Yang H

PubMed · Jun 23 2025
This study aimed to develop and validate a deep learning radiomics nomogram (DLRN) derived from ultrasound images to improve predictive accuracy for lymph node metastasis (LNM) in pancreatic cancer (PC) patients. A retrospective analysis of 249 histopathologically confirmed PC cases, including 78 with LNM, was conducted, with an 8:2 division into training and testing cohorts. Eight transfer learning models and a baseline logistic regression model incorporating handcrafted radiomic and clinicopathological features were developed to evaluate predictive performance. Diagnostic effectiveness was assessed for junior and senior ultrasound physicians, both with and without DLRN assistance. InceptionV3 showed the highest performance among the deep learning models (AUC = 0.844), while the DLRN model, integrating deep learning and radiomic features, demonstrated superior accuracy (AUC = 0.909), robust calibration, and significant clinical utility on decision curve analysis. DLRN assistance notably enhanced diagnostic performance, with AUC improvements of 0.238 (p = 0.006) for junior and 0.152 (p = 0.085) for senior physicians. The ultrasound-based DLRN model exhibits strong predictive capability for LNM in PC, offering a decision-support tool that bolsters diagnostic accuracy, especially among less experienced clinicians, thereby supporting more tailored therapeutic strategies for PC patients.
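
The baseline model fuses handcrafted radiomic and clinicopathological features in a logistic model, and the DLRN adds deep features on top; a minimal scikit-learn sketch of that kind of feature fusion follows. The cohort size, 8:2 split, and LNM prevalence mirror the abstract, while the feature arrays are synthetic stand-ins for the real extracted features.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 249                                       # cohort size from the abstract
deep_feats = rng.normal(size=(n, 64))         # synthetic stand-ins for the
radiomic_feats = rng.normal(size=(n, 20))     # real deep / radiomic /
clinical_feats = rng.normal(size=(n, 5))      # clinicopathological features
y = (rng.random(n) < 78 / 249).astype(int)    # ~78/249 LNM-positive

X = np.hstack([deep_feats, radiomic_feats, clinical_feats])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          stratify=y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("test AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```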

Enabling Early Identification of Malignant Vertebral Compression Fractures via 2.5D Convolutional Neural Network Model with CT Image Analysis.

Huang C, Li E, Hu J, Huang Y, Wu Y, Wu B, Tang J, Yang L

PubMed · Jun 23 2025
This study employed a retrospective data analysis approach combined with model development and validation. It introduces a 2.5D convolutional neural network (CNN) model leveraging CT imaging to facilitate the early detection of malignant vertebral compression fractures (MVCFs), potentially reducing reliance on invasive biopsies. Vertebral histopathological biopsy is the gold standard for differentiating between osteoporotic and malignant vertebral compression fractures (VCFs), but its invasive nature and high cost restrict its application, highlighting the need for alternative methods to identify MVCFs. The clinical, imaging, and pathological data of patients who underwent vertebral augmentation and biopsy at Institution 1 and Institution 2 were collected and analyzed. Based on the vertebral CT images of these patients, 2D, 2.5D, and 3D CNN models were developed to distinguish patients with osteoporotic vertebral compression fractures (OVCFs) from those with MVCFs. To verify the clinical application value of the CNN model, two rounds of reader studies were performed. The 2.5D CNN model performed well, identifying MVCF patients significantly better than the 2D and 3D CNN models: on the training dataset, its area under the receiver operating characteristic curve (AUC) was 0.996 with an F1 score of 0.915, and on the external test cohort, its AUC was 0.815 with an F1 score of 0.714. The 2.5D CNN model also enhanced clinicians' ability to identify MVCF patients: with its assistance, the AUC for senior clinicians was 0.882 with an F1 score of 0.774, and for junior clinicians the AUC was 0.784 with an F1 score of 0.667. The 2.5D CNN model marks a significant step towards non-invasive identification of MVCF patients and may assist clinicians in better identifying them.
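
A 2.5D CNN conventionally feeds a small stack of adjacent CT slices to a 2D network as input channels, capturing local 3D context at 2D cost; the abstract does not state the slice count, so the three-slice stack below is an illustrative assumption.

```python
import numpy as np

def make_25d_input(volume: np.ndarray, center: int, n_slices: int = 3):
    """volume: (Z, H, W) CT volume. Returns an (n_slices, H, W) stack
    centered on the fracture slice, clamped at the volume borders, which
    a 2D CNN consumes as its input channels."""
    half = n_slices // 2
    idx = np.clip(np.arange(center - half, center + half + 1),
                  0, volume.shape[0] - 1)
    return volume[idx]
```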

Physiological Response of Tissue-Engineered Vascular Grafts to Vasoactive Agents in an Ovine Model.

Guo M, Villarreal D, Watanabe T, Wiet M, Ulziibayar A, Morrison A, Nelson K, Yuhara S, Hussaini SF, Shinoka T, Breuer C

PubMed · Jun 23 2025
Tissue-engineered vascular grafts (TEVGs) are emerging as promising alternatives to synthetic grafts, particularly in pediatric cardiovascular surgery. While TEVGs have demonstrated growth potential, compliance, and resistance to calcification, their functional integration into the circulation, especially their ability to respond to physiological stimuli, remains underexplored. Vasoreactivity, the dynamic contraction or dilation of blood vessels in response to vasoactive agents, is a key property of native vessels that affects systemic hemodynamics and long-term vascular function. This study aimed to develop and validate an in vivo protocol to assess the vasoreactive capacity of TEVGs implanted as inferior vena cava (IVC) interposition grafts in a large animal model. Bone marrow-seeded TEVGs were implanted in the thoracic IVC of Dorset sheep. A combination of intravascular ultrasound (IVUS) imaging and invasive hemodynamic monitoring was used to evaluate vessel response to norepinephrine (NE) and sodium nitroprusside (SNP). Cross-sectional luminal area changes were measured using a custom Python-based software package (VIVUS) that leverages deep learning for IVUS image segmentation. Physiological parameters including blood pressure, heart rate, and cardiac output were continuously recorded. NE injections induced significant, dose-dependent vasoconstriction of the TEVGs, with peak reductions in luminal area averaging ~15% and corresponding increases in heart rate and mean arterial pressure. Conversely, SNP did not elicit measurable vasodilation in the TEVGs, likely owing to structural differences in venous tissue, the low-pressure environment of the thoracic IVC, and systemic confounders. Overall, the TEVGs demonstrated active, rapid, and reversible vasoconstrictive behavior in response to pharmacologic stimuli. This study presents a novel in vivo method for assessing TEVG vasoreactivity using real-time imaging and hemodynamic data. TEVGs possess functional vasoactivity, suggesting they may play an active role in modulating venous return and systemic hemodynamics. These findings are particularly relevant for Fontan patients and other scenarios where dynamic venous regulation is critical. Future work will compare TEVG vasoreactivity with native veins and synthetic grafts to further characterize their physiological integration and potential clinical benefits.
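
Once a deep-learning lumen mask is available from VIVUS-style IVUS segmentation, the cross-sectional luminal area measurement reduces to pixel counting. A minimal sketch, with pixel spacing as an assumed acquisition parameter:

```python
import numpy as np

def luminal_area_mm2(mask: np.ndarray, px_mm: float) -> float:
    """mask: (H, W) binary lumen segmentation; px_mm: pixel size in mm."""
    return float(mask.sum()) * px_mm ** 2

def pct_constriction(baseline_mm2: float, peak_mm2: float) -> float:
    # E.g., the ~15% peak luminal reduction reported after norepinephrine.
    return 100.0 * (baseline_mm2 - peak_mm2) / baseline_mm2
```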

GPT-4o and Specialized AI in Breast Ultrasound Imaging: A Comparative Study on Accuracy, Agreement, Limitations, and Diagnostic Potential.

Sanli DET, Sanli AN, Buyukdereli Atadag Y, Kurt A, Esmerer E

PubMed · Jun 23 2025
This study aimed to evaluate the ability of ChatGPT and Breast Ultrasound Helper (BUH), a ChatGPT-based subprogram trained on ultrasound image analysis, to analyze and differentiate benign and malignant breast lesions on ultrasound images. Ultrasound images of patients with histopathologically confirmed breast cancer or fibroadenoma were read by GPT-4o (the latest ChatGPT version) and by BUH, a tool from the "Explore" section of ChatGPT. Both were prompted in English using ACR BI-RADS Breast Ultrasound Lexicon criteria: lesion shape, orientation, margin, internal echo pattern, echogenicity, posterior acoustic features, microcalcifications or hyperechoic foci, perilesional hyperechoic rim, edema or architectural distortion, lesion size, and BI-RADS category. Two experienced radiologists evaluated the images and the programs' responses in consensus. The outputs, BI-RADS category agreement, and benign/malignant discrimination were statistically compared. A total of 232 ultrasound images were analyzed, of which 133 (57.3%) were malignant and 99 (42.7%) benign. In the comparative analysis, BUH showed superior performance overall, with higher kappa values and statistically significant results across multiple features (P < .001). However, the overall level of agreement with the radiologists' consensus across all features was similar for BUH (κ: 0.387-0.755) and GPT-4o (κ: 0.317-0.803). BI-RADS category agreement was slightly higher for GPT-4o than for BUH (69.4% versus 65.9%), but BUH was slightly more successful at distinguishing benign from malignant lesions (67.7% versus 65.9%). Although both AI tools show moderate-to-good performance in ultrasound image analysis, their limited agreement with radiologists' evaluations and BI-RADS categorization suggests that their clinical application in breast ultrasound interpretation remains premature and unreliable.
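
Agreement with the radiologists' consensus is reported per lexicon feature as Cohen's kappa; a minimal sketch of that computation with scikit-learn on toy labels:

```python
from sklearn.metrics import cohen_kappa_score

# Toy example: one lexicon feature (lesion shape) rated by the AI tool
# and by the radiologists' consensus across four images.
ai_labels = ["oval", "irregular", "oval", "round"]
consensus = ["oval", "irregular", "round", "round"]
print(cohen_kappa_score(ai_labels, consensus))  # 1.0 = perfect agreement
```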