Page 62 of 1421416 results

Deep learning-based automatic detection of pancreatic ductal adenocarcinoma ≤ 2 cm with high-resolution computed tomography: impact of the combination of tumor mass detection and indirect indicator evaluation.

Ozawa M, Sone M, Hijioka S, Hara H, Wakatsuki Y, Ishihara T, Hattori C, Hirano R, Ambo S, Esaki M, Kusumoto M, Matsui Y

PubMed · Jul 18 2025
Detecting small pancreatic ductal adenocarcinomas (PDAC) is challenging because they are difficult to identify as distinct tumor masses. This study assesses the diagnostic performance of a three-dimensional convolutional neural network for the automatic detection of small PDAC using both automatic tumor mass detection and indirect indicator evaluation. High-resolution contrast-enhanced computed tomography (CT) scans from 181 patients diagnosed with PDAC (diameter ≤ 2 cm) between January 2018 and December 2023 were analyzed. The D/P ratio, defined as the ratio of the cross-sectional area of the main pancreatic duct (MPD) to that of the pancreatic parenchyma, was used as an indirect indicator. A total of 204 patient data sets, including 104 normal controls, were analyzed for automatic tumor mass detection and D/P ratio evaluation. Sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were evaluated for tumor mass detection. The sensitivity of the software for PDAC detection was compared with that of radiologists, and tumor localization accuracy was validated against endoscopic ultrasonography (EUS) findings. The sensitivity, specificity, PPV, and NPV were 77.0%, 76.0%, 75.5%, and 77.5% for tumor mass detection; 87.0%, 94.2%, 93.5%, and 88.3% for D/P ratio detection; and 96.0%, 70.2%, 75.6%, and 94.8% for combined tumor mass and D/P ratio detection. No significant difference was observed between the sensitivity of the software and that of the radiologists' reports (software, 96.0%; radiologists, 96.0%; p = 1). The concordance rate between software findings and EUS was 96.0%. Combining indirect indicator evaluation with tumor mass detection may improve small PDAC detection accuracy.
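The D/P ratio described in this abstract is a simple areal ratio; a minimal sketch (hypothetical function and mask names, not the study's software) of computing it from binary masks on a single axial slice:

```python
import numpy as np

def dp_ratio(duct_mask: np.ndarray, parenchyma_mask: np.ndarray,
             pixel_area_mm2: float = 1.0) -> float:
    """Ratio of the main pancreatic duct (MPD) cross-sectional area
    to that of the pancreatic parenchyma on one axial CT slice.
    Both masks are 2D boolean arrays; pixel_area_mm2 cancels out in
    the ratio but is kept for clarity."""
    duct_area = duct_mask.sum() * pixel_area_mm2
    parenchyma_area = parenchyma_mask.sum() * pixel_area_mm2
    if parenchyma_area == 0:
        raise ValueError("empty parenchyma mask")
    return duct_area / parenchyma_area
```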

Deep learning reconstruction enhances image quality in contrast-enhanced CT venography for deep vein thrombosis.

Asari Y, Yasaka K, Kurashima J, Katayama A, Kurokawa M, Abe O

PubMed · Jul 18 2025
This study aimed to evaluate and compare the diagnostic performance and image quality of deep learning reconstruction (DLR) with hybrid iterative reconstruction (Hybrid IR) and filtered back projection (FBP) in contrast-enhanced CT venography for deep vein thrombosis (DVT). A retrospective analysis was conducted on 51 patients who underwent lower limb CT venography, including 20 with DVT lesions and 31 without DVT lesions. CT images were reconstructed using DLR, Hybrid IR, and FBP. Quantitative image quality metrics, such as contrast-to-noise ratio (CNR) and image noise, were measured. Three radiologists independently assessed DVT lesion detection, depiction of DVT lesions and normal structures, subjective image noise, artifacts, and overall image quality using scoring systems. Diagnostic performance was evaluated using sensitivity and area under the receiver operating characteristic curve (AUC). The paired t-test and Wilcoxon signed-rank test compared the results for continuous variables and ordinal scales, respectively, between DLR and Hybrid IR as well as between DLR and FBP. DLR significantly improved CNR and reduced image noise compared to Hybrid IR and FBP (p < 0.001). AUC and sensitivity for DVT detection were not statistically different across reconstruction methods. Two readers reported improved lesion visualization with DLR. DLR was also rated superior in image quality, normal structure depiction, and noise suppression by all readers (p < 0.001). DLR enhances image quality and anatomical clarity in CT venography. These findings support the utility of DLR in improving diagnostic confidence and image interpretability in DVT assessment.
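The contrast-to-noise ratio compared across reconstructions above can be computed in several ways; a common CT definition (mean attenuation difference over background noise; the paper's exact ROI placement is not specified here) can be sketched as:

```python
import statistics

def cnr(roi_signal, roi_background):
    """Contrast-to-noise ratio: absolute difference of mean ROI
    attenuations divided by the standard deviation of the background
    ROI. One common CT definition; published variants differ."""
    noise = statistics.pstdev(roi_background)
    if noise == 0:
        raise ValueError("background ROI has zero variance")
    contrast = statistics.fmean(roi_signal) - statistics.fmean(roi_background)
    return abs(contrast) / noise
```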

Explainable CT-based deep learning model for predicting hematoma expansion including intraventricular hemorrhage growth.

Zhao X, Zhang Z, Shui J, Xu H, Yang Y, Zhu L, Chen L, Chang S, Du C, Yao Z, Fang X, Shi L

PubMed · Jul 18 2025
Hematoma expansion (HE), including intraventricular hemorrhage (IVH) growth, significantly affects outcomes in patients with intracerebral hemorrhage (ICH). This study aimed to develop, validate, and interpret a deep learning model, HENet, for predicting three definitions of HE. Using CT scans and clinical data from 718 ICH patients across three hospitals, the multicenter retrospective study focused on revised hematoma expansion (RHE) definitions 1 and 2, and conventional HE (CHE). HENet's performance was compared with 2D models and physician predictions using two external validation sets. Results showed that HENet achieved high AUC values for RHE1, RHE2, and CHE predictions, surpassing physicians' predictions and 2D models in net reclassification index and integrated discrimination index for RHE1 and RHE2 outcomes. The Grad-CAM technique provided visual insights into the model's decision-making process. These findings suggest that integrating HENet into clinical practice could improve prediction accuracy and patient outcomes in ICH cases.
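The net reclassification index used above to compare HENet against 2D models and physicians can be illustrated as follows; this is the category-free (continuous) NRI, which may differ from the exact variant used in the study:

```python
def net_reclassification_index(old_risk, new_risk, events):
    """Continuous NRI: for events, credit upward risk moves under the
    new model; for non-events, credit downward moves. Ranges from -2
    to +2; positive values favor the new model. Sketch only."""
    up_e = down_e = up_n = down_n = n_e = n_n = 0
    for old, new, is_event in zip(old_risk, new_risk, events):
        if is_event:
            n_e += 1
            up_e += new > old
            down_e += new < old
        else:
            n_n += 1
            up_n += new > old
            down_n += new < old
    return (up_e - down_e) / n_e + (down_n - up_n) / n_n
```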

Software architecture and manual for novel versatile CT image analysis toolbox -- AnatomyArchive

Lei Xu, Torkel B Brismar

arXiv preprint · Jul 18 2025
We have developed a novel CT image analysis package named AnatomyArchive, built on top of the recent full-body segmentation model TotalSegmentator. It provides automatic target volume selection and deselection according to user-configured anatomies for volumetric upper and lower bounds. It includes a knowledge-graph-based, time-efficient tool for anatomy segmentation mask management and medical image database maintenance. AnatomyArchive enables automatic body volume cropping, as well as automatic arm detection and exclusion, for more precise body composition analysis in both 2D and 3D formats. It provides robust voxel-based radiomic feature extraction, feature visualization, and an integrated toolchain for statistical tests and analysis. A Python-based, GPU-accelerated, nearly photo-realistic, segmentation-integrated composite cinematic rendering tool is also included. We present its software architecture, illustrate its workflow and the working principles of its algorithms, and provide a few examples of how the software can be used to assist the development of modern machine learning models. Open-source code will be released at https://github.com/lxu-medai/AnatomyArchive for research and educational purposes only.
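As an illustration of the body-volume-cropping idea mentioned above (a generic mask-driven bounding-box crop, not the AnatomyArchive API), one might write:

```python
import numpy as np

def crop_to_mask(volume: np.ndarray, mask: np.ndarray, margin: int = 0):
    """Crop a CT volume to the bounding box of a binary body mask,
    with an optional voxel margin clamped to the volume extent.
    Returns the cropped volume and the slice tuple used."""
    coords = np.argwhere(mask)
    if coords.size == 0:
        raise ValueError("empty mask")
    lo = np.maximum(coords.min(axis=0) - margin, 0)
    hi = np.minimum(coords.max(axis=0) + 1 + margin, mask.shape)
    slices = tuple(slice(a, b) for a, b in zip(lo, hi))
    return volume[slices], slices
```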

Task-based evaluation of sparse-view CT reconstruction techniques for intracranial hemorrhage diagnosis using an AI observer model.

Tivnan M, Kikkert ID, Wu D, Yang K, Wolterink JM, Li Q, Gupta R

PubMed · Jul 17 2025
Sparse-view computed tomography (CT) holds promise for reducing radiation exposure and enabling novel system designs. Traditional reconstruction algorithms, including Filtered Backprojection (FBP) and Model-Based Iterative Reconstruction (MBIR), often produce artifacts in sparse-view data. Deep Learning Reconstruction (DLR) offers potential improvements, but task-based evaluations of DLR in sparse-view CT remain limited. This study employs an Artificial Intelligence (AI) observer to evaluate the diagnostic accuracy of FBP, MBIR, and DLR for intracranial hemorrhage detection and classification, offering a cost-effective alternative to human radiologist studies. A public brain CT dataset with labeled intracranial hemorrhages was used to train an AI observer model. Sparse-view CT data were simulated, with reconstructions performed using FBP, MBIR, and DLR. Reconstruction quality was assessed using metrics such as Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index Measure (SSIM), and Learned Perceptual Image Patch Similarity (LPIPS). Diagnostic utility was evaluated using Receiver Operating Characteristic (ROC) analysis and Area Under the Curve (AUC) values for One-vs-Rest and One-vs-One classification tasks. DLR outperformed FBP and MBIR in all quality metrics, demonstrating reduced noise, improved structural similarity, and fewer artifacts. The AI observer achieved the highest classification accuracy with DLR, while FBP surpassed MBIR in task-based accuracy despite inferior image quality metrics, emphasizing the value of task-based evaluations. DLR provides an effective balance of artifact reduction and anatomical detail in sparse-view CT brain imaging. This proof-of-concept study highlights AI observer models as a viable, cost-effective alternative for evaluating CT reconstruction techniques.
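Of the image-quality metrics listed above, PSNR is the most direct to compute; a minimal sketch (standard definition, not the study's evaluation code):

```python
import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray, data_range: float) -> float:
    """Peak signal-to-noise ratio in dB; data_range is the maximum
    possible pixel-value span (e.g. the HU window width)."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(data_range ** 2 / mse)
```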

Integrative radiomics of intra- and peri-tumoral features for enhanced risk prediction in thymic tumors: a multimodal analysis of tumor microenvironment contributions.

Zhu L, Li J, Wang X, He Y, Li S, He S, Deng B

PubMed · Jul 17 2025
This study aims to explore the role of intra- and peri-tumoral radiomics features in tumor risk prediction, with a particular focus on the impact of peri-tumoral characteristics on the tumor microenvironment. A total of 133 patients, including 128 with thymomas and 5 with thymic carcinomas, were enrolled. Based on the high- and low-risk classification, the cohort was divided into a training set (n = 93) and a testing set (n = 40) for subsequent analysis. Based on imaging data from these 133 patients, multiple radiomics prediction models integrating intra- and peri-tumoral features were developed. The data were sourced from patients treated at the Affiliated Hospital of Guangdong Medical University between 2015 and 2023, with all imaging obtained through preoperative CT scans. Radiomics feature extraction involved three primary categories: first-order features, shape features, and higher-order features. Initially, the tumor's region of interest (ROI) was manually delineated using ITK-SNAP software. A custom Python algorithm was then used to automatically expand the peri-tumoral area, extracting features within 1 mm, 2 mm, and 3 mm zones surrounding the tumor. Additionally, considering the multimodal nature of the imaging data, image fusion techniques were incorporated to further enhance the model's ability to capture the tumor microenvironment. To build the radiomics models, selected features were first standardized using z-scores. Initial feature selection was performed using a t-test (p < 0.05), followed by Spearman correlation analysis to remove redundancy by retaining only one feature from each pair with a correlation coefficient ≥ 0.90. Subsequently, hierarchical clustering and the LASSO algorithm were applied to identify the most predictive features. These selected features were then used to train machine learning models, which were optimized on the training dataset and assessed for predictive performance.
To further evaluate the effectiveness of these models, various statistical methods were applied, including DeLong's test, the net reclassification index (NRI), and the integrated discrimination index (IDI), to compare predictive differences among models. Decision curve analysis (DCA) was also conducted to assess the clinical applicability of the models. The results indicate that the IntraPeri1mm model performed best, achieving an AUC of 0.837, with sensitivity and specificity of 0.846 and 0.84, respectively, significantly outperforming the other models. SHAP value analysis identified several key features, such as peri_log-sigma-2-0-mm-3D_firstorder_RootMeanSquared and intra_wavelet-LLL_firstorder_Skewness, which made substantial contributions to the model's predictive accuracy. NRI and IDI analyses further confirmed the model's superior clinical applicability, and the DCA curve demonstrated robust performance across different thresholds. DeLong's test highlighted the statistical significance of the IntraPeri1mm model, underscoring its potential utility in radiomics research. Overall, this study provides a new perspective on tumor risk assessment, highlighting the importance of peri-tumoral features in the analysis of the tumor microenvironment, and aims to offer valuable insights for the development of personalized treatment plans.
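The Spearman redundancy-removal step described in this abstract (drop one feature from every pair with correlation ≥ 0.90) can be sketched as follows; a simplified illustration that keeps the earlier column of each redundant pair and handles rank ties crudely:

```python
import numpy as np

def spearman_redundancy_filter(X: np.ndarray, threshold: float = 0.90):
    """Return indices of columns to keep after dropping one feature
    from every pair whose absolute Spearman correlation >= threshold.
    X is (samples x features). Ties are ranked by argsort order."""
    # Pearson correlation of ranks equals Spearman correlation.
    ranks = np.argsort(np.argsort(X, axis=0), axis=0).astype(float)
    corr = np.corrcoef(ranks, rowvar=False)
    keep = []
    for j in range(X.shape[1]):
        if all(abs(corr[j, k]) < threshold for k in keep):
            keep.append(j)
    return keep
```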

From Variability To Accuracy: Conditional Bernoulli Diffusion Models with Consensus-Driven Correction for Thin Structure Segmentation

Jinseo An, Min Jin Lee, Kyu Won Shim, Helen Hong

arXiv preprint · Jul 17 2025
Accurate segmentation of orbital bones in facial computed tomography (CT) images is essential for creating customized implants to reconstruct defective orbital bones, but it is particularly challenging due to ambiguous boundaries and thin structures such as the orbital medial wall and orbital floor. In these ambiguous regions, existing segmentation approaches often produce disconnected or under-segmented results. We propose a novel framework that corrects segmentation results by leveraging consensus from multiple diffusion model outputs. Our approach employs a conditional Bernoulli diffusion model trained on diverse annotation patterns per image to generate multiple plausible segmentations, followed by a consensus-driven correction that incorporates position proximity, consensus level, and gradient direction similarity to correct challenging regions. Experimental results demonstrate that our method outperforms existing methods, significantly improving recall in ambiguous regions while preserving the continuity of thin structures. Furthermore, our method automates the manual correction of segmentation results and can be applied to image-guided surgical planning and surgery.
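The consensus level at the heart of the correction step above can be illustrated with a simplified sketch; this majority-vote version omits the paper's position-proximity and gradient-direction terms:

```python
import numpy as np

def consensus_correction(samples: np.ndarray, accept: float = 0.5):
    """Given N binary segmentations sampled from a diffusion model
    (shape N x H x W), compute the per-pixel consensus level (the
    fraction of samples voting foreground) and a majority-vote mask."""
    consensus = samples.mean(axis=0)
    majority = consensus > accept
    return majority, consensus
```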

DiffOSeg: Omni Medical Image Segmentation via Multi-Expert Collaboration Diffusion Model

Han Zhang, Xiangde Luo, Yong Chen, Kang Li

arXiv preprint · Jul 17 2025
Annotation variability remains a substantial challenge in medical image segmentation, stemming from ambiguous imaging boundaries and diverse clinical expertise. Traditional deep learning methods producing single deterministic segmentation predictions often fail to capture these annotator biases. Although recent studies have explored multi-rater segmentation, existing methods typically focus on a single perspective -- either generating a probabilistic ``gold standard'' consensus or preserving expert-specific preferences -- thus struggling to provide a more omni view. In this study, we propose DiffOSeg, a two-stage diffusion-based framework, which aims to simultaneously achieve both consensus-driven (combining all experts' opinions) and preference-driven (reflecting experts' individual assessments) segmentation. Stage I establishes population consensus through a probabilistic consensus strategy, while Stage II captures expert-specific preference via adaptive prompts. Demonstrated on two public datasets (LIDC-IDRI and NPC-170), our model outperforms existing state-of-the-art methods across all evaluated metrics. Source code is available at https://github.com/string-ellipses/DiffOSeg .

FSS-ULivR: a clinically-inspired few-shot segmentation framework for liver imaging using unified representations and attention mechanisms.

Debnath RK, Rahman MA, Azam S, Zhang Y, Jonkman M

PubMed · Jul 17 2025
Precise liver segmentation is critical for accurate diagnosis and effective treatment planning, serving as a foundation for medical image analysis. However, existing methods struggle with limited labeled data, poor generalizability, and insufficient integration of anatomical and clinical features. To address these limitations, we propose a novel Few-Shot Segmentation model with Unified Liver Representation (FSS-ULivR), which employs a ResNet-based encoder enhanced with Squeeze-and-Excitation modules to improve feature learning, an enhanced prototype module that utilizes a transformer block and channel attention for dynamic feature refinement, and a decoder with improved attention gates and residual refinement strategies to recover spatial details from encoder skip connections. Through extensive experiments, our FSS-ULivR model achieved an outstanding Dice coefficient of 98.94%, Intersection over Union (IoU) of 97.44% and a specificity of 93.78% on the Liver Tumor Segmentation Challenge dataset. Cross-dataset evaluations further demonstrated its generalizability, with Dice scores of 95.43%, 92.98%, 90.72%, and 94.05% on 3DIRCADB01, Colorectal Liver Metastases, Computed Tomography Organs (CT-ORG), and Medical Segmentation Decathlon Task 3: Liver datasets, respectively. In multi-organ segmentation on CT-ORG, it delivered Dice scores ranging from 85.93% to 94.26% across bladder, bones, kidneys, and lungs. For brain tumor segmentation on BraTS 2019 and 2020 datasets, average Dice scores were 90.64% and 89.36% across whole tumor, tumor core, and enhancing tumor regions. These results emphasize the clinical importance of our model by demonstrating its ability to deliver precise and reliable segmentation through artificial intelligence techniques and engineering solutions, even in scenarios with scarce annotated data.
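The Dice coefficient and IoU reported throughout this abstract are standard overlap metrics; a minimal reference implementation for binary masks:

```python
def dice_and_iou(pred, target):
    """Dice coefficient and intersection-over-union for binary masks
    given as flat sequences of 0/1 values. Empty-mask pairs score 1."""
    inter = sum(p and t for p, t in zip(pred, target))
    p_sum, t_sum = sum(pred), sum(target)
    union = p_sum + t_sum - inter
    dice = 2 * inter / (p_sum + t_sum) if (p_sum + t_sum) else 1.0
    iou = inter / union if union else 1.0
    return dice, iou
```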

A multi-stage training and deep supervision based segmentation approach for 3D abdominal multi-organ segmentation.

Wu P, An P, Zhao Z, Guo R, Ma X, Qu Y, Xu Y, Yu H

PubMed · Jul 17 2025
Accurate X-ray computed tomography (CT) segmentation of the abdominal organs is fundamental for diagnosing abdominal diseases, planning cancer treatment, and formulating radiotherapy strategies. However, existing deep learning models for three-dimensional (3D) abdominal multi-organ CT segmentation face challenges including complex organ distribution, scarcity of labeled data, and diversity of organ structures, leading to difficulties in model training and convergence and low segmentation accuracy. To address these issues, a novel segmentation approach based on multi-stage training and a deep supervision model is proposed. It primarily integrates multi-stage training, a pseudo-labeling technique, and a deep supervision model with an attention mechanism (DLAU-Net), specifically designed for 3D abdominal multi-organ segmentation. The DLAU-Net enhances segmentation performance and model adaptability through an improved network architecture. The multi-stage training strategy accelerates model convergence and enhances generalizability, effectively addressing the diversity of abdominal organ structures. The introduction of pseudo-labeling alleviates the bottleneck of labeled-data scarcity and further improves the model's generalization performance and training efficiency. Experiments were conducted on a large dataset provided by the FLARE 2023 Challenge. Comprehensive ablation studies and comparative experiments were conducted to validate the effectiveness of the proposed method. Our method achieves an average organ accuracy (AVG) of 90.5% and a Dice Similarity Coefficient (DSC) of 89.05% and exhibits exceptional performance in terms of training speed and handling of data diversity, particularly in the segmentation of critical abdominal organs such as the liver, spleen, and kidneys, significantly outperforming existing comparative methods.
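Deep supervision, as used in this abstract, typically combines auxiliary losses from progressively coarser decoder stages; a common weighting scheme (assumed here, not necessarily DLAU-Net's exact formulation) can be sketched as:

```python
def deep_supervision_loss(stage_losses, decay: float = 0.5):
    """Combine per-stage losses (finest decoder stage first) with
    geometrically decaying weights, normalized to sum to 1. A common
    deep-supervision scheme; the decay factor is a design choice."""
    weights = [decay ** i for i in range(len(stage_losses))]
    total_w = sum(weights)
    return sum(w * loss for w, loss in zip(weights, stage_losses)) / total_w
```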