
Fully-Guided Placement of Dental Implants Utilizing Nasopalatine Canal Fixation in a Novel Rotational Path Surgical Template Design: A Retrospective Case Series.

Ganz SD

PubMed · Sep 3, 2025
Precise implant placement in the anterior and posterior maxilla often presents challenges due to variable bone and soft tissue anatomy. Many clinicians elect a freehand surgical approach because conventional surgical guides may not always be easy to design, fabricate, or utilize. Guided surgery has proven advantages over freehand surgical protocols; therefore, the present study proposed utilizing the nasopalatine canal (NPC) as an anatomical reference and point of fixation for a novel rotational path surgical template during computer-aided implant surgery (CAIS). The digital workflow combined artificial intelligence (AI)-facilitated cone beam computed tomography (CBCT) software bone segmentation of the maxillary arch to assess the NPC and surrounding hard tissues, and to design and fabricate static surgical guides for precise implant placement. After rotational engagement of the maxillary buccal undercuts, each novel surgical guide incorporated the NPC for fixation with a single pin to achieve initial stability. Twenty-two consecutive patients requiring maxillary reconstruction (7 fully and 15 partially edentulous) received 123 implants under a fully guided surgical protocol to complete 4 overdenture and 18 full-arch fixed restorations. Twelve patients required extensive maxillary bone augmentation before implant placement. Thirteen patients required delayed loading based on bone density, and 9 patients were restoratively loaded within 24 to 96 hours post-surgery, accomplished with the use of photogrammetry for the fabrication of 3D-printed restorations. The initial implant success rate was 98.37%, and the initial prosthetic success rate was 100%. The use of the NPC for fixation of surgical guides did not result in any neurovascular post-operative complications. The novel template concept can improve surgical outcomes using a bone-borne template design for implant-supported rehabilitation of the partially and fully edentulous maxillary arch. This preliminary case series confirmed controlled placement accuracy with limited risk of neurovascular complications for full-arch overdenture and fixed restorations. The NPC is a vital maxillary anatomic landmark for implant planning, with an expanded role in the stabilization of novel surgical guide designs due to advancements in AI bone segmentation.

Multi-task deep learning for automatic image segmentation and treatment response assessment in metastatic ovarian cancer.

Drury B, Machado IP, Gao Z, Buddenkotte T, Mahani G, Funingana G, Reinius M, McCague C, Woitek R, Sahdev A, Sala E, Brenton JD, Crispin-Ortuzar M

PubMed · Sep 3, 2025
Background: High-grade serous ovarian carcinoma (HGSOC) is characterised by significant spatial and temporal heterogeneity, often presenting at an advanced metastatic stage. One of the most common treatment approaches involves neoadjuvant chemotherapy (NACT), followed by surgery. However, the multi-scale complexity of HGSOC poses a major challenge in evaluating response to NACT. Methods: Here, we present a multi-task deep learning approach that facilitates simultaneous segmentation of pelvic/ovarian and omental lesions in contrast-enhanced computerised tomography (CE-CT) scans, as well as treatment response assessment in metastatic ovarian cancer. The model combines multi-scale feature representations from two identical U-Net architectures, allowing for an in-depth comparison of CE-CT scans acquired before and after treatment. The network was trained using 198 CE-CT images of 99 ovarian cancer patients for predicting segmentation masks and evaluating treatment response. Results: It achieves an AUC of 0.78 (95% CI [0.70-0.91]) in an independent cohort of 98 scans of 49 ovarian cancer patients from a different institution. In addition to the classification performance, the segmentation Dice scores are only slightly lower than the current state-of-the-art for HGSOC segmentation. Conclusions: This work is the first to demonstrate the feasibility of a multi-task deep learning approach in assessing chemotherapy-induced tumour changes across the main disease burden of patients with complex multi-site HGSOC, which could be used for treatment response evaluation and disease monitoring.
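As a concrete illustration of the dual-input design this abstract describes, the sketch below pairs two weight-shared U-Net-style encoders over pre- and post-treatment scans and classifies response from their pooled features. It is a minimal PyTorch sketch under stated assumptions (2D inputs, small layer widths, fusion by feature difference), not the authors' implementation.

```python
# Minimal sketch of a dual-encoder response model; sizes are illustrative.
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU(),
        nn.Conv2d(cout, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU(),
    )

class Encoder(nn.Module):
    """U-Net-style encoder returning feature maps at every scale."""
    def __init__(self, channels=(1, 16, 32, 64)):
        super().__init__()
        self.blocks = nn.ModuleList(
            conv_block(a, b) for a, b in zip(channels[:-1], channels[1:])
        )
        self.pool = nn.MaxPool2d(2)

    def forward(self, x):
        feats = []
        for block in self.blocks:
            x = block(x)
            feats.append(x)
            x = self.pool(x)
        return feats  # multi-scale features

class DualScanResponseNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = Encoder()  # shared weights for both time points
        self.head = nn.Sequential(nn.Linear(64 * 2, 32), nn.ReLU(),
                                  nn.Linear(32, 1))

    def forward(self, pre, post):
        f_pre, f_post = self.encoder(pre), self.encoder(post)
        gap = lambda f: f.mean(dim=(2, 3))  # global average pool
        # Concatenate deepest pre-treatment features with the change signal.
        z = torch.cat([gap(f_pre[-1]), gap(f_post[-1]) - gap(f_pre[-1])], 1)
        return self.head(z)  # response logit

model = DualScanResponseNet()
logit = model(torch.randn(2, 1, 128, 128), torch.randn(2, 1, 128, 128))
print(logit.shape)  # torch.Size([2, 1])
```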

Deep Self-knowledge Distillation: A hierarchical supervised learning for coronary artery segmentation

Mingfeng Lin

arXiv preprint · Sep 3, 2025
Coronary artery disease is a leading cause of mortality, underscoring the critical importance of precise diagnosis through X-ray angiography. Manual coronary artery segmentation from these images is time-consuming and inefficient, prompting the development of automated models. However, existing methods, whether rule-based or deep learning models, struggle with poor performance and limited generalizability. Moreover, current knowledge distillation methods applied in this field have not fully exploited the hierarchical knowledge of the model, leaving useful information unexploited and limiting segmentation performance. To address these issues, this paper introduces Deep Self-knowledge Distillation, a novel approach for coronary artery segmentation that leverages hierarchical outputs for supervision. By combining a Deep Distribution Loss and a Pixel-wise Self-knowledge Distillation Loss, our method enhances the student model's segmentation performance through a hierarchical learning strategy, effectively transferring knowledge from the teacher model. Our method combines a loosely constrained probabilistic distribution vector with tightly constrained pixel-wise supervision, providing dual regularization for the segmentation model while also enhancing its generalization and robustness. Extensive experiments on the XCAD and DCA1 datasets demonstrate that our approach outperforms competing models in Dice coefficient, accuracy, sensitivity, and IoU.
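A hedged sketch of the two loss terms named in the abstract, assuming the common softened-KL formulation of such losses: a KL divergence between image-level class distributions (the loosely constrained term) plus a per-pixel KL between teacher and student probability maps (the tightly constrained term). Temperatures, pooling, and the pairing of hierarchical outputs are illustrative choices, not the paper's exact definitions.

```python
import torch
import torch.nn.functional as F

def deep_distribution_loss(student_logits, teacher_logits, tau=2.0):
    """Loosely constrained: match pooled, image-level class distributions."""
    s = student_logits.mean(dim=(2, 3)) / tau  # pool pixel logits per image
    t = teacher_logits.mean(dim=(2, 3)) / tau
    return F.kl_div(F.log_softmax(s, 1), F.softmax(t, 1),
                    reduction="batchmean") * tau ** 2

def pixelwise_skd_loss(student_logits, teacher_logits, tau=1.0):
    """Tightly constrained: match per-pixel probability maps."""
    s = F.log_softmax(student_logits / tau, dim=1)
    t = F.softmax(teacher_logits / tau, dim=1)
    return F.kl_div(s, t, reduction="batchmean") * tau ** 2

# Hierarchical self-distillation: a deeper head of the same network acts
# as teacher for a shallower head, so no separate teacher model is needed.
student = torch.randn(4, 2, 64, 64)           # shallow-head logits
teacher = torch.randn(4, 2, 64, 64).detach()  # deep-head logits, no grad
loss = deep_distribution_loss(student, teacher) + pixelwise_skd_loss(student, teacher)
print(loss.item())
```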

Coronary Plaque Volume in an Asymptomatic Population: Miami Heart Study at Baptist Health South Florida.

Ichikawa K, Ronen S, Bishay R, Krishnan S, Benzing T, Kianoush S, Aldana-Bitar J, Cainzos-Achirica M, Feldman T, Fialkow J, Budoff MJ, Nasir K

PubMed · Sep 3, 2025
Coronary computed tomography angiography (CTA)-derived plaque burden is associated with the risk of cardiovascular events and is expected to be used in clinical practice. Understanding the normative values of computed tomography-based quantitative plaque volume in the general population is clinically important for determining patient management. This study aimed to investigate the distribution of plaque volume in the general population and to develop nomograms using MiHEART (Miami Heart Study) at Baptist Health South Florida, a large community-based cohort study. The study included 2,301 asymptomatic subjects without cardiovascular disease enrolled in MiHEART. Quantitative assessment of plaque volume was performed by using artificial intelligence-guided quantitative coronary computed tomography angiography (AI-QCT) analysis. The percentiles of the plaque distribution were estimated with nonparametric techniques. Mean age of the participants was 53.5 years, and 50.4% were male. The median total plaque volume was 54 mm³ (Q1-Q3: 16-126 mm³) and increased with age. Male subjects had greater median total plaque volume than female subjects (80 mm³ [Q1-Q3: 31-181 mm³] vs 34 mm³ [Q1-Q3: 9-85 mm³]; P < 0.001); there was no difference according to race/ethnicity (Hispanic 53 mm³ [Q1-Q3: 14-119 mm³] vs non-Hispanic 54 mm³ [Q1-Q3: 17-127 mm³]; P = 0.756). The prevalence of subjects with total plaque volume ≥20 mm³ was 81.5% in male subjects and 61.9% in female subjects. Younger individuals had a greater percentage of noncalcified plaque. The large majority of study subjects had plaque detected by using AI-QCT. Furthermore, age- and sex-specific nomograms provided information on the plaque volume distribution in an asymptomatic population. (Miami Heart Study [MiHEART] at Baptist Health South Florida; NCT02508454).
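For readers wanting the flavor of how such nomograms are derived, the sketch below computes plain empirical (nonparametric) percentiles of total plaque volume per age band and sex on synthetic data. Column names, bin edges, and the lognormal toy distribution are assumptions for illustration only; MiHEART data are not reproduced here.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "age": rng.integers(40, 66, 2000),
    "sex": rng.choice(["M", "F"], 2000),
    "plaque_mm3": rng.lognormal(3.5, 1.2, 2000),  # synthetic, right-skewed
})
df["age_band"] = pd.cut(df["age"], bins=[39, 49, 59, 65],
                        labels=["40-49", "50-59", "60-65"])

# Empirical 25th/50th/75th/90th percentiles per sex and age band.
nomogram = (df.groupby(["sex", "age_band"], observed=True)["plaque_mm3"]
              .quantile([0.25, 0.50, 0.75, 0.90])
              .unstack())
print(nomogram.round(0))
```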

Advancing Positron Emission Tomography Image Quantification: Artificial Intelligence-Driven Methods, Clinical Challenges, and Emerging Opportunities in Long-Axial Field-of-View Positron Emission Tomography/Computed Tomography Imaging

Fereshteh Yousefirizi, Movindu Dassanayake, Alejandro Lopez, Andrew Reader, Gary J. R. Cook, Clemens Mingels, Arman Rahmim, Robert Seifert, Ian Alberts

arXiv preprint · Sep 3, 2025
Metabolic tumor volume (MTV) is increasingly recognized as an accurate estimate of disease burden with prognostic value, but its implementation has been hindered by the time-consuming need for manual segmentation of images. Automated quantitation using AI-driven approaches is promising: it significantly reduces labor-intensive manual segmentation, improving consistency, reproducibility, and feasibility for routine clinical practice. AI-enhanced radiomics provides comprehensive characterization of tumor biology, capturing intratumoral and intertumoral heterogeneity beyond what conventional volumetric metrics alone offer, and supporting improved patient stratification and therapy planning. AI-driven segmentation of normal organs improves radioligand therapy planning by enabling accurate dose predictions and comprehensive organ-based radiomics analysis, further refining personalized patient management.
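The core quantity here is easy to state: MTV is the summed volume of segmented lesion voxels, and its standard companion metric, total lesion glycolysis (TLG), weights MTV by mean SUV. A minimal NumPy sketch, assuming an SUV volume and a binary AI-derived lesion mask are already in hand:

```python
import numpy as np

def mtv_and_tlg(suv, mask, voxel_spacing_mm):
    """suv: 3D SUV array; mask: same-shape boolean lesion mask."""
    voxel_ml = np.prod(voxel_spacing_mm) / 1000.0   # mm^3 -> mL
    mtv_ml = mask.sum() * voxel_ml                  # metabolic tumor volume
    suv_mean = suv[mask].mean() if mask.any() else 0.0
    return mtv_ml, mtv_ml * suv_mean                # TLG = MTV x SUVmean

suv = np.random.rand(64, 64, 64) * 10
mask = suv > 8.0                                    # toy "segmentation"
print(mtv_and_tlg(suv, mask, (2.0, 2.0, 2.0)))
```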

Automated Kidney Tumor Segmentation in CT Images Using Deep Learning: A Multi-Stage Approach.

Kan HC, Fan GM, Wei MH, Lin PH, Shao IH, Yu KJ, Chien TH, Pang ST, Wu CT, Peng SJ

PubMed · Sep 3, 2025
Computed tomography (CT) remains the primary modality for assessing renal tumors; however, tumor identification and segmentation rely heavily on manual interpretation by clinicians, which is time-consuming and subject to inter-observer variability. The heterogeneity of tumor appearance and indistinct margins further complicate accurate delineation, impacting histopathological classification, treatment planning, and prognostic assessment. There is a pressing clinical need for an automated segmentation tool to enhance diagnostic workflows and support clinical decision-making with results that are reliable, accurate, and reproducible. This study developed a fully automated pipeline based on the DeepMedic 3D convolutional neural network for the segmentation of kidneys and renal tumors through multi-scale feature extraction. The model was trained and evaluated using 5-fold cross-validation on a dataset of 382 contrast-enhanced CT scans manually annotated by experienced physicians. Image preprocessing included Hounsfield unit conversion, windowing, 3D reconstruction, and voxel resampling. Post-processing was also employed to refine output masks and improve model generalizability. The proposed model achieved high performance in kidney segmentation, with an average Dice coefficient of 93.82 ± 1.38%, precision of 94.86 ± 1.59%, and recall of 93.66 ± 1.77%. In renal tumor segmentation, the model attained a Dice coefficient of 88.19 ± 1.24%, precision of 90.36 ± 1.90%, and recall of 88.23 ± 2.02%. Visual comparisons with ground truth annotations confirmed the clinical relevance and accuracy of the predictions. The proposed DeepMedic-based framework demonstrates robust, accurate segmentation of kidneys and renal tumors on CT images. With its potential for real-time application, this model could enhance diagnostic efficiency and treatment planning in renal oncology.
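The preprocessing steps listed (Hounsfield unit conversion, windowing, voxel resampling) follow a common CT pattern; the sketch below shows one way to implement them with SimpleITK. The window limits, target spacing, and [0, 1] rescaling are typical defaults, not the paper's exact parameters.

```python
import numpy as np
import SimpleITK as sitk

def preprocess_ct(path, window=(-135, 215), spacing=(1.0, 1.0, 1.0)):
    img = sitk.ReadImage(path)  # NIfTI or single-file DICOM volume

    # Resample to isotropic voxels with linear interpolation.
    orig_size = np.array(img.GetSize())
    orig_spc = np.array(img.GetSpacing())
    new_size = np.round(orig_size * orig_spc / np.array(spacing)).astype(int)
    img = sitk.Resample(img, [int(s) for s in new_size], sitk.Transform(),
                        sitk.sitkLinear, img.GetOrigin(), spacing,
                        img.GetDirection(), 0.0, img.GetPixelID())

    # Window the HU range and rescale to [0, 1] for the network.
    arr = sitk.GetArrayFromImage(img).astype(np.float32)  # values in HU
    lo, hi = window
    arr = np.clip(arr, lo, hi)
    return (arr - lo) / (hi - lo)
```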

A review of image processing and analysis of computed tomography images using deep learning methods.

Anderson D, Ramachandran P, Trapp J, Fielding A

PubMed · Sep 3, 2025
The use of machine learning has seen extraordinary growth since the development of deep learning techniques, notably the deep artificial neural network. Deep learning methodology excels at complicated problems such as image classification, object detection, and natural language processing. A key feature of these networks is their capability to extract useful patterns from vast quantities of complex data, including images. As many branches of healthcare revolve around the generation, processing, and analysis of images, these techniques have become increasingly commonplace. This is especially true for radiotherapy, which relies on anatomical and functional images from a range of imaging modalities, such as computed tomography (CT). The aim of this review is to provide an understanding of deep learning methodologies, including neural network types and structures, and to link these general concepts to medical CT image processing for radiotherapy. Specifically, it focuses on the stages of enhancement and analysis, incorporating image denoising, super-resolution, generation, registration, and segmentation, supported by examples from recent literature.

MedLiteNet: Lightweight Hybrid Medical Image Segmentation Model

Pengyang Yu, Haoquan Wang, Gerard Marks, Tahar Kechadi, Laurence T. Yang, Sahraoui Dhelim, Nyothiri Aung

arXiv preprint · Sep 3, 2025
Accurate skin-lesion segmentation remains a key technical challenge for computer-aided diagnosis of skin cancer. Convolutional neural networks, while effective, are constrained by limited receptive fields and thus struggle to model long-range dependencies. Vision Transformers capture global context, yet their quadratic complexity and large parameter budgets hinder use on the small-sample medical datasets common in dermatology. We introduce MedLiteNet, a lightweight CNN-Transformer hybrid tailored for dermoscopic segmentation that achieves high precision through hierarchical feature extraction and multi-scale context aggregation. The encoder stacks depth-wise Mobile Inverted Bottleneck blocks to curb computation, inserts a bottleneck-level cross-scale token-mixing unit to exchange information between resolutions, and embeds a boundary-aware self-attention module to sharpen lesion contours.
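The depth-wise Mobile Inverted Bottleneck (MBConv) block that anchors the encoder can be sketched compactly in PyTorch; the expansion ratio and the omission of squeeze-and-excitation below are simplifying assumptions. The point is the cheap expand → depthwise → project pattern that keeps parameter counts low.

```python
import torch
import torch.nn as nn

class MBConv(nn.Module):
    def __init__(self, cin, cout, expand=4, stride=1):
        super().__init__()
        mid = cin * expand
        self.use_residual = stride == 1 and cin == cout
        self.block = nn.Sequential(
            nn.Conv2d(cin, mid, 1, bias=False),       # expand (pointwise)
            nn.BatchNorm2d(mid), nn.SiLU(),
            nn.Conv2d(mid, mid, 3, stride=stride, padding=1,
                      groups=mid, bias=False),        # depth-wise conv
            nn.BatchNorm2d(mid), nn.SiLU(),
            nn.Conv2d(mid, cout, 1, bias=False),      # project (pointwise)
            nn.BatchNorm2d(cout),
        )

    def forward(self, x):
        out = self.block(x)
        return x + out if self.use_residual else out

x = torch.randn(1, 32, 64, 64)
print(MBConv(32, 32)(x).shape)  # torch.Size([1, 32, 64, 64])
```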

From Noisy Labels to Intrinsic Structure: A Geometric-Structural Dual-Guided Framework for Noise-Robust Medical Image Segmentation

Tao Wang, Zhenxuan Zhang, Yuanbo Zhou, Xinlin Zhang, Yuanbin Chen, Tao Tan, Guang Yang, Tong Tong

arXiv preprint · Sep 2, 2025
The effectiveness of convolutional neural networks in medical image segmentation relies on large-scale, high-quality annotations, which are costly and time-consuming to obtain. Even expert-labeled datasets inevitably contain noise arising from subjectivity and coarse delineations, which disrupts feature learning and adversely impacts model performance. To address these challenges, this study proposes a Geometric-Structural Dual-Guided Network (GSD-Net), which integrates geometric and structural cues to improve robustness against noisy annotations. It incorporates a Geometric Distance-Aware module that dynamically adjusts pixel-level weights using geometric features, thereby strengthening supervision in reliable regions while suppressing noise. A Structure-Guided Label Refinement module further refines labels with structural priors, and a Knowledge Transfer module enriches supervision and improves sensitivity to local details. To comprehensively assess its effectiveness, we evaluated GSD-Net on six publicly available datasets: four containing three types of simulated label noise, and two with multi-expert annotations that reflect real-world subjectivity and labeling inconsistencies. Experimental results demonstrate that GSD-Net achieves state-of-the-art performance under noisy annotations, with improvements of 2.52% on Kvasir, 22.76% on Shenzhen, 8.87% on BU-SUC, and 4.59% on BraTS2020 under SR simulated noise. The code of this study is available at https://github.com/ortonwang/GSD-Net.
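One plausible reading of the geometric distance-aware weighting is shown below: each pixel's loss is weighted by its distance to the annotated boundary, trusting labels far from edges (where coarse delineations rarely err) more than those near them. The weighting function, its scale, and the binary-cross-entropy base loss are assumptions, not the paper's definition.

```python
import numpy as np
import torch
import torch.nn.functional as F
from scipy.ndimage import distance_transform_edt

def distance_weights(label, sigma=5.0):
    """label: 2D binary numpy mask -> per-pixel confidence weights."""
    # Distance of every pixel to the label boundary (fg EDT + bg EDT:
    # exactly one term is nonzero at each pixel).
    d = distance_transform_edt(label) + distance_transform_edt(1 - label)
    return 1.0 - np.exp(-d / sigma)   # ~0 at the boundary, -> 1 far away

def weighted_bce(logits, label_np):
    w = torch.from_numpy(distance_weights(label_np)).float()
    target = torch.from_numpy(label_np).float()
    loss = F.binary_cross_entropy_with_logits(logits, target, reduction="none")
    return (w * loss).sum() / w.sum().clamp(min=1e-6)

label = np.zeros((64, 64), dtype=np.uint8); label[20:44, 20:44] = 1
print(weighted_bce(torch.randn(64, 64), label).item())
```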

RegGAN-based contrast-free CT enhances esophageal cancer assessment: multicenter validation of automated tumor segmentation and T-staging.

Huang X, Li W, Wang Y, Wu Q, Li P, Xu K, Huang Y

PubMed · Sep 2, 2025
This study aimed to develop a deep learning (DL) framework using registration-guided generative adversarial networks (RegGAN) to synthesize contrast-enhanced CT (Syn-CECT) from non-contrast CT (NCCT), enabling iodine-free esophageal cancer (EC) T-staging. A retrospective multicenter analysis included 1,092 EC patients (2013-2024) divided into training (N = 313), internal (N = 117), and external test cohorts (N = 116 and N = 546). RegGAN synthesized Syn-CECT by integrating registration and adversarial training to address NCCT-CECT misalignment. Tumor segmentation used CSSNet with hierarchical feature fusion, while T-staging employed a dual-path DL model combining radiomic features (from NCCT/Syn-CECT) and Vision Transformer-derived deep features. Performance was validated via quantitative metrics (NMAE, PSNR, SSIM), Dice scores, AUC, and reader studies comparing six clinicians with/without model assistance. RegGAN achieved Syn-CECT quality comparable to real CECT (NMAE = 0.1903, SSIM = 0.7723; visual scores: p ≥ 0.12). CSSNet produced accurate tumor segmentation (Dice = 0.89, 95% HD = 2.27 in external tests). The DL staging model outperformed machine learning (AUC = 0.7893-0.8360 vs. ≤ 0.8323), surpassing early-career clinicians (AUC = 0.641-0.757) and matching experts (AUC = 0.840). Syn-CECT-assisted clinicians improved diagnostic accuracy (AUC increase: ~ 0.1, p < 0.01), with decision curve analysis confirming clinical utility at > 35% risk threshold. The RegGAN-based framework eliminates contrast agents while maintaining diagnostic accuracy for EC segmentation (Dice > 0.88) and T-staging (AUC > 0.78). It offers a safe, cost-effective alternative for patients with iodine allergies or renal impairment and enhances diagnostic consistency across clinician experience levels. This approach addresses limitations of invasive staging and repeated contrast exposure, demonstrating transformative potential for resource-limited settings.
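The registration-guided idea at the heart of RegGAN can be sketched briefly: the generator's synthetic CECT is warped by a registration network before the L1 comparison, so NCCT-CECT misalignment is absorbed by the deformation field rather than blurring the generator. The tiny conv stacks and loss weights below are stand-ins under stated assumptions (2D, non-saturating GAN loss), not the study's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

tiny = lambda cin, cout: nn.Sequential(
    nn.Conv2d(cin, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, cout, 3, padding=1))

G = tiny(1, 1)      # NCCT -> synthetic CECT
R = tiny(2, 2)      # (syn, real) -> dense 2D deformation field
D = tiny(1, 1)      # patch discriminator

def warp(img, flow):
    # Build a normalized sampling grid displaced by the predicted flow.
    b, _, h, w = img.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                            torch.linspace(-1, 1, w), indexing="ij")
    base = torch.stack((xs, ys), -1).expand(b, h, w, 2)
    return F.grid_sample(img, base + flow.permute(0, 2, 3, 1),
                         align_corners=True)

ncct = torch.randn(2, 1, 64, 64)
cect = torch.randn(2, 1, 64, 64)     # misaligned ground truth

syn = G(ncct)
flow = R(torch.cat([syn, cect], dim=1))
corr_loss = F.l1_loss(warp(syn, flow), cect)          # correction loss
smooth = flow.diff(dim=2).abs().mean() + flow.diff(dim=3).abs().mean()
adv = F.softplus(-D(syn)).mean()                      # non-saturating GAN
loss_G = corr_loss + 10.0 * smooth + 0.1 * adv
loss_G.backward()
```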