
Quantitative Evaluation of AI-based Organ Segmentation Across Multiple Anatomical Sites Using Eight Commercial Software Platforms.

Yuan L, Chen Q, Al-Hallaq H, Yang J, Yang X, Geng H, Latifi K, Cai B, Wu QJ, Xiao Y, Benedict SH, Rong Y, Buchsbaum J, Qi XS

PubMed · Aug 23, 2025
To evaluate organs-at-risk (OAR) segmentation variability across eight commercial AI-based segmentation software platforms using independent multi-institutional datasets, and to provide recommendations for clinical practices utilizing AI segmentation. A total of 160 planning CT image sets from four anatomical sites (head-and-neck, thorax, abdomen, and pelvis) were retrospectively pooled from three institutions. Contours for 31 OARs generated by the software were compared to clinical contours using multiple accuracy metrics, including the Dice similarity coefficient (DSC), the 95th percentile of the Hausdorff distance (HD95), and surface DSC, as well as relative added path length (RAPL) as an efficiency metric. A two-factor analysis of variance was used to quantify variability in contouring accuracy across software platforms (inter-software) and patients (inter-patient). Pairwise comparisons were performed to categorize the software into different performance groups, and inter-software variations (ISV) were calculated as the average performance differences between the groups. Significant inter-software and inter-patient contouring accuracy variations (p < 0.05) were observed for most OARs. The largest ISV in DSC in each anatomical region was observed for the cervical esophagus (0.41), trachea (0.10), spinal cord (0.13), and prostate (0.17). Among the organs evaluated, 7 had mean DSC > 0.9 (e.g., heart, liver), 15 had DSC ranging from 0.7 to 0.89 (e.g., parotid, esophagus), and the remaining organs (e.g., optic nerves, seminal vesicle) had DSC < 0.7. Sixteen of the 31 organs (52%) had RAPL less than 0.1. Our results reveal significant inter-software and inter-patient variability in the performance of AI segmentation software. These findings highlight the need for thorough software commissioning, testing, and quality assurance across disease sites, patient-specific anatomies, and image acquisition protocols.
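
As an illustration of the accuracy metrics used above, the sketch below computes DSC and HD95 between two binary masks with NumPy and SciPy. The function names and voxel-spacing handling are our own assumptions for illustration, not the study's implementation.

```python
import numpy as np
from scipy import ndimage

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hd95(a: np.ndarray, b: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """95th percentile of the symmetric Hausdorff distance (in mm,
    given the voxel spacing) between the surfaces of two binary masks."""
    def surface(m):
        # surface = mask voxels removed by one erosion step
        return m ^ ndimage.binary_erosion(m)
    sa, sb = surface(a.astype(bool)), surface(b.astype(bool))
    # distance from each surface voxel of one mask to the other surface
    da = ndimage.distance_transform_edt(~sb, sampling=spacing)[sa]
    db = ndimage.distance_transform_edt(~sa, sampling=spacing)[sb]
    return float(np.percentile(np.hstack([da, db]), 95))
```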

An Efficient Dual-Line Decoder Network with Multi-Scale Convolutional Attention for Multi-organ Segmentation

Riad Hassan, M. Rubaiyat Hossain Mondal, Sheikh Iqbal Ahamed, Fahad Mostafa, Md Mostafijur Rahman

arXiv preprint · Aug 23, 2025
Proper segmentation of organs-at-risk is important for radiation therapy, surgical planning, and diagnostic decision-making in medical image analysis. While deep learning-based segmentation architectures have made significant progress, they often fail to balance segmentation accuracy with computational efficiency: most current state-of-the-art methods either prioritize performance at the cost of high computational complexity or compromise accuracy for efficiency. This paper addresses this gap by introducing an efficient dual-line decoder segmentation network (EDLDNet). The proposed method features a noisy decoder that learns to incorporate structured perturbation at training time for better model robustness; at inference time, only the noise-free decoder is executed, leading to lower computational cost. Multi-Scale Convolutional Attention Modules (MSCAMs), Attention Gates (AGs), and Up-Convolution Blocks (UCBs) are further utilized to optimize feature representation and boost segmentation performance. By leveraging multi-scale segmentation masks from both decoders, we also utilize a mutation-based loss function to enhance the model's generalization. Our approach outperforms SOTA segmentation architectures on four publicly available medical imaging datasets. EDLDNet achieves SOTA performance with an 84.00% Dice score on the Synapse dataset, surpassing baseline models such as UNet by 13.89% in Dice score while reducing Multiply-Accumulate Operations (MACs) by 89.7%. Compared to recent approaches such as EMCAD, EDLDNet not only achieves a higher Dice score but also maintains comparable computational efficiency. The strong performance across diverse datasets establishes EDLDNet's generalization, computational efficiency, and robustness. The source code, pre-processed data, and pre-trained weights will be available at https://github.com/riadhassan/EDLDNet .
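
A minimal PyTorch sketch of the dual-decoder training/inference asymmetry described above; the actual MSCAM, AG, and UCB modules are not reproduced here, and the layer sizes and Gaussian noise model are illustrative assumptions:

```python
import torch
import torch.nn as nn

class DualLineDecoderNet(nn.Module):
    """Toy sketch of the dual-line decoder idea: a noisy decoder
    regularizes training, but only the noise-free decoder runs at
    inference, so the noisy line adds no inference cost."""
    def __init__(self, ch=16, n_classes=2, noise_std=0.1):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(1, ch, 3, padding=1), nn.ReLU())
        self.clean_dec = nn.Conv2d(ch, n_classes, 1)
        self.noisy_dec = nn.Conv2d(ch, n_classes, 1)
        self.noise_std = noise_std

    def forward(self, x):
        f = self.encoder(x)
        if self.training:
            # structured perturbation on shared features for the noisy line
            f_noisy = f + self.noise_std * torch.randn_like(f)
            return self.clean_dec(f), self.noisy_dec(f_noisy)
        return self.clean_dec(f)  # inference: noise-free path only
```

During training, both outputs can be supervised jointly (e.g., with the paper's mutation-based loss); at inference, the noisy decoder is simply never executed.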

Epicardial and paracardial adipose tissue quantification in short-axis cardiac cine MRI using deep learning.

Zhang R, Wang X, Zhou Z, Ni L, Jiang M, Hu P

PubMed · Aug 23, 2025
Epicardial and paracardial adipose tissue (EAT and PAT) are two types of fat depots around the heart with important roles in cardiac physiology. Manual quantification of EAT and PAT from cardiac MR (CMR) is time-consuming and prone to human bias. Leveraging cardiac motion, we aimed to develop deep learning networks for automated segmentation and quantification of EAT and PAT in short-axis cine CMR. A modified U-Net equipped with modules for multi-resolution convolution, motion information extraction, feature fusion, and dual attention was developed. Ablation studies were performed to verify the efficacy of each module, and the performance of different networks was compared. The final network incorporating all modules achieved segmentation Dice indices of 77.72% ± 2.53% and 77.18% ± 3.54% for EAT and PAT, respectively, significantly higher than the baseline U-Net and the highest among the compared networks. With our model, the coefficients of determination (R²) between predicted EAT and PAT volumes and the reference were 0.8550 and 0.8025, respectively. Our proposed network can provide accurate and fast quantification of EAT and PAT on routine short-axis cine CMR, which can potentially aid cardiologists in clinical settings.
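
For reference, a coefficient of determination like those reported above can be computed as in the generic sketch below; the per-patient volumes are hypothetical stand-ins, not the study's data, and the paper may use a regression-based R² variant.

```python
import numpy as np

def r_squared(pred: np.ndarray, ref: np.ndarray) -> float:
    """Coefficient of determination (1 - SS_res / SS_tot) between
    predicted and reference volumes."""
    ss_res = np.sum((ref - pred) ** 2)
    ss_tot = np.sum((ref - ref.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# hypothetical per-patient EAT volumes (mL) for illustration only
eat_pred = np.array([45.1, 60.3, 38.7, 52.0])
eat_ref  = np.array([47.0, 58.9, 40.2, 50.5])
print(f"EAT volume R^2: {r_squared(eat_pred, eat_ref):.4f}")
```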

ConvTNet fusion: A robust transformer-CNN framework for multi-class classification, multimodal feature fusion, and tissue heterogeneity handling.

Mahmood T, Saba T, Rehman A, Alamri FS

PubMed · Aug 22, 2025
Medical imaging is crucial for clinical practice, providing insight into organ structure and function. Advances in imaging technologies enable automated image segmentation, which is essential for accurate diagnosis and treatment planning. However, challenges such as class imbalance, tissue boundary delineation, and complex tissue interactions persist. This study introduces ConvTNet, a hybrid model that combines Transformer and CNN features to improve renal CT image segmentation, using attention mechanisms and feature fusion techniques to enhance precision. The KC module focuses on critical image regions, enabling precise tissue delineation even where boundaries are noisy and ambiguous. The Mix-KFCA module enhances feature fusion by combining multi-scale features and distinguishing healthy kidney tissue from surrounding structures. The study further proposes preprocessing strategies, including noise reduction, data augmentation, and image normalization, that optimize image quality and ensure reliable inputs for accurate segmentation. ConvTNet employs transfer learning, fine-tuning five pre-trained models to further bolster performance and leverage a broad range of feature extraction techniques. Empirical evaluations demonstrate that ConvTNet performs exceptionally well in multi-label classification and lesion segmentation, with an AUC of 0.9970, sensitivity of 0.9942, DSC of 0.9533, and accuracy of 0.9921, proving its efficacy for precise renal cancer diagnosis.
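
The sketch below illustrates the general idea of fusing CNN and Transformer features with a learned mixing layer; it is a hypothetical stand-in for intuition only, not the published KC or Mix-KFCA module.

```python
import torch
import torch.nn as nn

class SimpleFusion(nn.Module):
    """Illustrative CNN-Transformer feature fusion: reshape transformer
    tokens to a spatial map, concatenate with the CNN feature map, and
    mix with a 1x1 convolution. Channel sizes are assumptions."""
    def __init__(self, cnn_ch=64, trans_ch=64, out_ch=64):
        super().__init__()
        self.mix = nn.Sequential(
            nn.Conv2d(cnn_ch + trans_ch, out_ch, kernel_size=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, cnn_feat, trans_tokens):
        # trans_tokens: (B, N, C) with N a square token grid (assumed)
        B, N, C = trans_tokens.shape
        H = W = int(N ** 0.5)
        trans_feat = trans_tokens.transpose(1, 2).reshape(B, C, H, W)
        return self.mix(torch.cat([cnn_feat, trans_feat], dim=1))
```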

Advancements in deep learning for image-guided tumor ablation therapies: a comprehensive review.

Zhao Z, Hu Y, Xu LX, Sun J

PubMed · Aug 22, 2025
Image-guided tumor ablation (IGTA) has revolutionized modern oncological treatment by providing minimally invasive options that ensure precise tumor eradication with minimal patient discomfort. Traditional techniques such as ultrasound (US), computed tomography (CT), and magnetic resonance imaging (MRI) have been instrumental in the planning, execution, and evaluation of ablation therapies. However, these methods often face limitations, including poor contrast, susceptibility to artifacts, and variability in operator expertise, which can undermine the accuracy of tumor targeting and therapeutic outcomes. Incorporating deep learning (DL) into IGTA represents a significant advancement that addresses these challenges. This review explores the role and potential of DL in the different phases of tumor ablation therapy: preoperative, intraoperative, and postoperative. In the preoperative stage, DL excels in advanced image segmentation, enhancement, and synthesis, facilitating precise surgical planning and optimized treatment strategies. During the intraoperative phase, DL supports image registration and fusion and real-time surgical planning, enhancing navigation accuracy and ensuring precise ablation while safeguarding surrounding healthy tissue. In the postoperative phase, DL is pivotal in automating the monitoring of treatment response and in the early detection of recurrence through detailed analysis of follow-up imaging. This review highlights the essential role of deep learning in modernizing IGTA, showcasing its significant implications for procedural safety, efficacy, and patient outcomes in oncology. As deep learning technologies continue to evolve, they are poised to redefine the standards of care in tumor ablation therapies, making treatments more accurate, personalized, and patient-friendly.

Deep Learning-based Automated Coronary Plaque Quantification: First Demonstration With Ultra-high Resolution Photon-counting Detector CT at Different Temporal Resolutions.

Klambauer K, Burger SD, Demmert TT, Mergen V, Moser LJ, Gulsun MA, Schöbinger M, Schwemmer C, Wels M, Allmendinger T, Eberhard M, Alkadhi H, Schmidt B

PubMed · Aug 22, 2025
The aim of this study was to evaluate the feasibility and reproducibility of a novel deep learning (DL)-based coronary plaque quantification tool with automatic case preparation in patients undergoing ultra-high resolution (UHR) photon-counting detector CT coronary angiography (CCTA), and to assess the influence of temporal resolution on plaque quantification. In this retrospective single-center study, 45 patients undergoing clinically indicated UHR CCTA were included. For each scan, two image datasets were reconstructed: one in the dual-source mode with 66 ms temporal resolution and one simulating a single-source mode with 125 ms temporal resolution. A novel DL-based algorithm for fully automated coronary segmentation and intensity-based plaque quantification was applied to both datasets in each patient. Plaque volume was quantified at the vessel level for the entire left anterior descending artery (LAD), left circumflex artery (CX), and right coronary artery (RCA), and at the lesion level for the largest coronary plaque in each vessel. Diameter stenosis grade was quantified for the coronary lesion with the greatest longitudinal extent in each vessel. To assess reproducibility, the algorithm was rerun 3 times in 10 randomly selected patients, and all outputs were visually reviewed and confirmed by an expert reader. Paired Wilcoxon signed-rank tests with Benjamini-Hochberg correction were used for statistical comparisons. One hundred nineteen of 135 (88.1%) coronary arteries showed atherosclerotic plaques and were included in the analysis. In the reproducibility analysis, repeated runs of the algorithm yielded identical results across all plaque and lumen measurements (P > 0.999). All outputs were confirmed to be anatomically correct and visually consistent, and none required manual correction. At the vessel level, total plaque volumes were higher in the 125 ms reconstructions than in the 66 ms reconstructions in 28 of 45 patients (62%), with calcified and noncalcified plaque volumes being higher in 32 (71%) and 28 (62%) patients, respectively. Total plaque volumes in the LAD, CX, and RCA were significantly higher in the 125 ms reconstructions (681.3 vs. 647.8 mm³, P < 0.05). At the lesion level, total plaque volumes were higher in the 125 ms reconstructions in 44 of 45 patients (98%; 447.3 vs. 414.9 mm³, P < 0.001), with both calcified and noncalcified plaque volumes being higher in 42 of 45 patients (93%). The median diameter stenosis grades for all vessels were significantly higher in the 125 ms reconstructions (35.4% vs. 28.1%, P < 0.01). This study evaluated a novel DL-based tool with automatic case preparation for quantitative coronary plaque assessment in UHR CCTA datasets. The algorithm was technically robust and reproducible, delivering anatomically consistent outputs that required no manual correction. Reconstructions with lower temporal resolution (125 ms) systematically overestimated plaque burden compared with higher temporal resolution (66 ms), underscoring that protocol standardization is essential for reliable DL-based plaque quantification.
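
The statistical comparison described above (paired Wilcoxon signed-rank tests with Benjamini-Hochberg correction) can be sketched as follows; the paired volumes here are simulated stand-ins for the per-patient measurements, not the study's data.

```python
import numpy as np
from scipy.stats import wilcoxon
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
# hypothetical paired per-patient plaque volumes (mm^3) for 45 patients
vol_66ms  = rng.normal(650, 120, size=45)
vol_125ms = vol_66ms + rng.normal(30, 40, size=45)  # assumed systematic offset

# one paired test per endpoint (total volume shown; extend the tuple with
# calcified / noncalcified volumes, stenosis grade, etc.)
p_values = []
for delta in (vol_125ms - vol_66ms,):
    stat, p = wilcoxon(delta)  # signed-rank test on paired differences
    p_values.append(p)

# Benjamini-Hochberg false-discovery-rate correction across endpoints
reject, p_adj, _, _ = multipletests(p_values, alpha=0.05, method="fdr_bh")
print(p_adj, reject)
```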

Linking morphometric variations in human cranial bone to mechanical behavior using machine learning.

Guo W, Bhagavathula KB, Adanty K, Rabey KN, Ouellet S, Romanyk DL, Westover L, Hogan JD

PubMed · Aug 22, 2025
With the development of increasingly detailed imaging techniques, there is a need to update the methodology and evaluation criteria for bone analysis in order to understand the influence of bone microarchitecture on mechanical response. The present study aims to develop a machine learning-based approach to investigate the link between the morphology of the human calvarium and its mechanical response under quasi-static uniaxial compression. Micro-computed tomography at a resolution of 18 μm is used to capture the microstructure of male (n=5) and female (n=5) formalin-fixed calvarium specimens from the frontal and parietal regions. Image-processing-based machine learning methods using convolutional neural networks are developed to isolate and calculate specific morphometric properties, such as porosity, trabecular thickness, and trabecular spacing. An ensemble method using gradient-boosted decision trees (XGBoost) is then used to predict mechanical strength from the morphological results; it identifies mean and minimum porosity of the diploë as the most relevant factors for the mechanical strength of cranial bone under the studied conditions. Overall, this study provides new tools that can predict the mechanical response of the human calvarium a priori. In addition, the quantitative morphology of the human calvarium can serve as input data for finite element models and contribute to the development of cranial simulant materials.
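
A hedged sketch of the second stage (gradient-boosted regression from morphometrics to strength, with feature importances); the feature names, value ranges, and synthetic strength relationship are assumptions for illustration only.

```python
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(42)
# hypothetical morphometric features per specimen:
# [mean porosity, min porosity, trabecular thickness (mm), trabecular spacing (mm)]
X = rng.uniform([0.3, 0.1, 0.2, 0.4], [0.8, 0.5, 0.6, 1.2], size=(40, 4))
# synthetic strength (MPa) dominated by porosity, plus noise (assumed)
y = 120 - 90 * X[:, 0] - 60 * X[:, 1] + rng.normal(0, 5, 40)

model = xgb.XGBRegressor(n_estimators=200, max_depth=3, learning_rate=0.05)
model.fit(X, y)

# importances indicate which morphometrics drive the strength prediction
names = ["mean_porosity", "min_porosity", "trab_thickness", "trab_spacing"]
for name, imp in zip(names, model.feature_importances_):
    print(f"{name}: {imp:.3f}")
```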

Edge-Aware Diffusion Segmentation Model with Hessian Priors for Automated Diaphragm Thickness Measurement in Ultrasound Imaging.

Miao CL, He Y, Shi B, Bian Z, Yu W, Chen Y, Zhou GQ

PubMed · Aug 22, 2025
The thickness of the diaphragm serves as a crucial biometric indicator, particularly in assessing rehabilitation and respiratory dysfunction. However, measuring diaphragm thickness from ultrasound images mainly depends on manual delineation of the fascia, which is subjective, time-consuming, and sensitive to the inherent speckle noise. In this study, we introduce an edge-aware diffusion segmentation model (ESADiff), which incorporates prior structural knowledge of the fascia to improve the accuracy and reliability of diaphragm thickness measurements in ultrasound imaging. We first apply a diffusion model, guided by annotations, to learn the image features while preserving edge details through an iterative denoising process. Specifically, we design an anisotropic edge-sensitive annotation refinement module that corrects inaccurate labels by integrating Hessian geometric priors with a backtracking shortest-path connection algorithm, further enhancing model accuracy. Moreover, a curvature-aware deformable convolution and an edge-prior ranking loss function are proposed to leverage shape priors of the fascia, allowing the model to focus selectively on relevant linear structures while mitigating the influence of noise on feature extraction. We evaluated the proposed model on an in-house diaphragm ultrasound dataset, a public calf muscle dataset, and an internal tongue muscle dataset to demonstrate robust generalization. Extensive experimental results demonstrate that our method achieves finer fascia segmentation and significantly improves the accuracy of thickness measurements compared with other state-of-the-art techniques, highlighting its potential for clinical applications.
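
To make the Hessian prior concrete: a thin, bright, line-like structure such as the fascia has one strongly negative Hessian eigenvalue across the line. A minimal ridge-response sketch under that assumption (not the authors' refinement module) could look like this:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ridge_map(img: np.ndarray, sigma: float = 2.0) -> np.ndarray:
    """Hessian-based ridge response for bright lines: the most negative
    eigenvalue of the scale-normalized 2x2 Hessian, sign-flipped and
    clipped so ridges give a positive response."""
    # scale-normalized second derivatives (x = axis 1, y = axis 0)
    Hxx = sigma**2 * gaussian_filter(img, sigma, order=(0, 2))
    Hyy = sigma**2 * gaussian_filter(img, sigma, order=(2, 0))
    Hxy = sigma**2 * gaussian_filter(img, sigma, order=(1, 1))
    # closed-form eigenvalues of the symmetric 2x2 Hessian per pixel
    mean = (Hxx + Hyy) / 2
    diff = np.sqrt(((Hxx - Hyy) / 2) ** 2 + Hxy ** 2)
    l_small = mean - diff  # most negative across a bright ridge
    return np.clip(-l_small, 0, None)

# usage on a synthetic bright horizontal line
img = np.zeros((64, 64)); img[32, :] = 1.0
resp = ridge_map(img)  # peaks along row 32
```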

DPGNet: A Boundary-Aware Medical Image Segmentation Framework Via Uncertainty Perception.

Wang H, Qi Y, Liu W, Guo K, Lv W, Liang Z

PubMed · Aug 22, 2025
Addressing the critical challenge of precise boundary delineation in medical image segmentation, we introduce DPGNet, an adaptive deep learning model engineered to emulate expert perception of intricate anatomical edges. Its key innovations drive superior performance and clinical utility: 1) a three-stage progressive refinement strategy that establishes global context, performs hierarchical feature enhancement, and precisely delineates local boundaries; 2) a novel Edge Difference Attention (EDA) module that implicitly learns and quantifies boundary uncertainty without requiring explicit ground-truth supervision; and 3) a lightweight, transformer-based architecture ensuring an exceptional balance between performance and computational efficiency. Extensive experiments across diverse and challenging medical image datasets demonstrate DPGNet's consistent superiority over state-of-the-art methods, achieved with significantly lower computational overhead (25.51M parameters). Its boundary refinement is validated through comprehensive metrics (Boundary-IoU, HD95) and confirmed by rigorous clinical expert evaluations. Crucially, DPGNet generates an explicit boundary uncertainty map, providing clinicians with actionable insight into ambiguous regions, thereby enhancing diagnostic precision and facilitating more accurate clinical segmentation outcomes. Our code is available at: https://github.fangnengwuyou/DPGNet.
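
One loose interpretation of edge-difference attention, in which the gap between a feature map and its smoothed version localizes boundaries that then re-weight the features; this is an assumed simplification for intuition, not the published EDA module.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EdgeDifferenceAttention(nn.Module):
    """Sketch: |feature - smoothed feature| is large near boundaries;
    a learned projection turns that edge signal into an attention map
    that emphasizes uncertain boundary regions."""
    def __init__(self, ch):
        super().__init__()
        self.proj = nn.Conv2d(ch, ch, kernel_size=1)

    def forward(self, x):
        smooth = F.avg_pool2d(x, 3, stride=1, padding=1)
        edge = torch.abs(x - smooth)           # high where boundaries are
        attn = torch.sigmoid(self.proj(edge))  # learned boundary weighting
        return x * (1 + attn)                  # re-weight toward edges
```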

Development and verification of a convolutional neural network-based model for automatic mandibular canal localization on multicenter CBCT images.

Pan X, Wang C, Luo X, Dong Q, Sun H, Zhang W, Qu H, Deng R, Lin Z

PubMed · Aug 21, 2025
To develop and verify a convolutional neural network (CNN)-based deep learning (DL) model for mandibular canal (MC) localization on multicenter cone beam computed tomography (CBCT) images. A total of 1056 CBCT scans from multiple centers were collected. Of these, 836 scans from one manufacturer were used for model development (training set : validation set : internal testing set = 640 : 360 : 36), and an external testing dataset of 220 scans from four other manufacturers was used for testing. The convolution module was built as a stack of Conv + InstanceNorm + LeakyReLU layers. Average symmetric surface distance (ASSD) and symmetric mean curve distance (SMCD) were used for quantitative evaluation of the model on both the internal testing data and part of the external testing data. Visual scoring (1-5 points) was performed to evaluate the accuracy and generalizability of MC localization for all external testing data, and differences in ASSD, SMCD, and visual scores among the four manufacturers were compared. The times for manual and automatic MC localization were recorded. For the internal testing dataset, the average ASSD and SMCD were 0.486 mm and 0.298 mm, respectively. For the external testing dataset, 86.8% of CBCT scans had visual scores ≥ 4 points; the average ASSD and SMCD of 40 CBCT scans with visual scores ≥ 4 points were 0.438 mm and 0.185 mm, respectively; and ASSD, SMCD, and visual scores differed significantly among the four manufacturers (p < 0.05). The time for bilateral automatic MC localization was 8.52 s (± 0.97 s). In this study, a CNN model was developed for automatic MC localization; large-sample external testing on multicenter CBCT images showed excellent potential for clinical application.
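
The described convolution module (Conv + InstanceNorm + LeakyReLU) maps directly to a small PyTorch block; the kernel size and the use of 3D operations are assumptions here, since CBCT volumes are three-dimensional.

```python
import torch.nn as nn

def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
    """One Conv + InstanceNorm + LeakyReLU stage, as described above.
    Kernel size 3 and 3D ops are assumptions for volumetric CBCT input."""
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.InstanceNorm3d(out_ch),
        nn.LeakyReLU(inplace=True),
    )

# e.g., two stacked stages forming one encoder level (illustrative)
encoder_level = nn.Sequential(conv_block(1, 16), conv_block(16, 32))
```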