
Artificial Intelligence-Assisted Segmentation of Prostate Tumors and Neurovascular Bundles: Applications in Precision Surgery for Prostate Cancer.

Mei H, Yang R, Huang J, Jiao P, Liu X, Chen Z, Chen H, Zheng Q

PubMed · Jun 18 2025
The aim of this study was to guide prostatectomy by employing artificial intelligence for the segmentation of the gross tumor volume (GTV) and neurovascular bundles (NVB). The preservation and dissection of the NVB differ between intrafascial and extrafascial robot-assisted radical prostatectomy (RARP), impacting postoperative urinary control. We trained the nnU-Net v2 neural network using data from 220 patients in the PI-CAI cohort for the segmentation of the prostate GTV and NVB in biparametric magnetic resonance imaging (bpMRI). The model was then validated in an external cohort of 209 patients from Renmin Hospital of Wuhan University (RHWU). Utilizing three-dimensional reconstruction and point cloud analysis, we explored the spatial distribution of the GTV and NVB in relation to intrafascial and extrafascial approaches. We also prospectively included 40 patients undergoing intrafascial and extrafascial RARP, applying the aforementioned procedure to classify the surgical approach. Additionally, 3D printing was employed to guide surgery, and follow-ups on short- and long-term urinary function were conducted. The nnU-Net v2 neural network demonstrated precise segmentation of the GTV, NVB, and prostate, achieving Dice scores of 0.5573 ± 0.0428, 0.7679 ± 0.0178, and 0.7483 ± 0.0290, respectively. By establishing the distance from the GTV to the NVB, we successfully predicted the surgical approach. Urinary control analysis revealed that the extrafascial approach yielded better postoperative urinary function, facilitating more refined management of patients with prostate cancer and personalized medical care. Artificial intelligence technology can accurately identify the GTV and NVB in preoperative bpMRI of patients with prostate cancer and guide the choice between intrafascial and extrafascial RARP. Patients undergoing intrafascial RARP with preserved NVB demonstrate improved postoperative urinary control.
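The Dice scores reported above measure voxel-wise overlap between a predicted mask and the reference mask. A minimal illustrative sketch (the toy 2D masks below are invented for demonstration, not taken from the study):

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / denom

# Two toy 4x4 masks sharing 2 of their 3 foreground voxels
a = np.zeros((4, 4)); a[0, 0:3] = 1
b = np.zeros((4, 4)); b[0, 1:4] = 1
print(dice_score(a, b))  # 2*2 / (3+3) = 0.666...
```

The same formula applies unchanged to the 3D GTV and NVB masks; the study's scores were computed per structure and averaged across patients.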

Comparison of publicly available artificial intelligence models for pancreatic segmentation on T1-weighted Dixon images.

Sonoda Y, Fujisawa S, Kurokawa M, Gonoi W, Hanaoka S, Yoshikawa T, Abe O

PubMed · Jun 18 2025
This study aimed to compare three publicly available deep learning models (TotalSegmentator, TotalVibeSegmentator, and PanSegNet) for automated pancreatic segmentation on magnetic resonance images and to evaluate their performance against human annotations in terms of segmentation accuracy, volumetric measurement, and intrapancreatic fat fraction (IPFF) assessment. Twenty upper abdominal T1-weighted magnetic resonance series acquired using the two-point Dixon method were randomly selected. Three radiologists manually segmented the pancreas, and a ground-truth mask was constructed through a majority vote per voxel. Pancreatic segmentation was also performed using the three artificial intelligence models. Performance was evaluated using the Dice similarity coefficient (DSC), 95th-percentile Hausdorff distance, average symmetric surface distance, positive predictive value, sensitivity, Bland-Altman plots, and concordance correlation coefficient (CCC) for pancreatic volume and IPFF. PanSegNet achieved the highest DSC (mean ± standard deviation, 0.883 ± 0.095) and showed no statistically significant difference from the human interobserver DSC (0.896 ± 0.068; p = 0.24). In contrast, TotalVibeSegmentator (0.731 ± 0.105) and TotalSegmentator (0.707 ± 0.142) had significantly lower DSC values compared with the human interobserver average (p < 0.001). For pancreatic volume and IPFF, PanSegNet demonstrated the best agreement with the ground truth (CCC values of 0.958 and 0.993, respectively), followed by TotalSegmentator (0.834 and 0.980) and TotalVibeSegmentator (0.720 and 0.672). PanSegNet demonstrated the highest segmentation accuracy and the best agreement with human measurements for both pancreatic volume and IPFF on T1-weighted Dixon images. This model appears to be the most suitable for large-scale studies requiring automated pancreatic segmentation and intrapancreatic fat evaluation.
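The concordance correlation coefficient (CCC) used above to compare model-derived and ground-truth volumes rewards both correlation and agreement in scale and location. A sketch of Lin's CCC, with hypothetical volume values for illustration only:

```python
import numpy as np

def concordance_cc(x: np.ndarray, y: np.ndarray) -> float:
    """Lin's concordance correlation coefficient between paired measurements:
    2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()            # population variances
    cov = ((x - mx) * (y - my)).mean()
    return float(2.0 * cov / (vx + vy + (mx - my) ** 2))

# Hypothetical pancreatic volumes (mL): model output vs. majority-vote mask
model_vol = np.array([71.2, 65.8, 80.1, 55.3, 62.0])
truth_vol = np.array([70.0, 66.5, 79.0, 56.1, 61.2])
print(round(concordance_cc(model_vol, truth_vol), 3))
```

Unlike the Pearson correlation, CCC penalizes a model that is well correlated with the reference but systematically over- or under-estimates volume, which is why it suits this agreement analysis.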

Pediatric Pancreas Segmentation from MRI Scans with Deep Learning

Elif Keles, Merve Yazol, Gorkem Durak, Ziliang Hong, Halil Ertugrul Aktas, Zheyuan Zhang, Linkai Peng, Onkar Susladkar, Necati Guzelyel, Oznur Leman Boyunaga, Cemal Yazici, Mark Lowe, Aliye Uc, Ulas Bagci

arXiv preprint · Jun 18 2025
Objective: Our study aimed to evaluate and validate PanSegNet, a deep learning (DL) algorithm for pediatric pancreas segmentation on MRI in children with acute pancreatitis (AP), chronic pancreatitis (CP), and healthy controls. Methods: With IRB approval, we retrospectively collected 84 MRI scans (1.5T/3T Siemens Aera/Verio) from children aged 2-19 years at Gazi University (2015-2024). The dataset includes healthy children as well as patients diagnosed with AP or CP based on clinical criteria. Pediatric and general radiologists manually segmented the pancreas, and the segmentations were then confirmed by a senior pediatric radiologist. PanSegNet-generated segmentations were assessed using Dice Similarity Coefficient (DSC) and 95th percentile Hausdorff distance (HD95). Cohen's kappa measured observer agreement. Results: Pancreas MRI T2W scans were obtained from 42 children with AP/CP (mean age: 11.73 +/- 3.9 years) and 42 healthy children (mean age: 11.19 +/- 4.88 years). PanSegNet achieved DSC scores of 88% (controls), 81% (AP), and 80% (CP), with HD95 values of 3.98 mm (controls), 9.85 mm (AP), and 15.67 mm (CP). Inter-observer kappa was 0.86 (controls) and 0.82 (pancreatitis), and intra-observer agreement reached 0.88 and 0.81. Strong agreement was observed between automated and manual volumes (R^2 = 0.85 in controls, 0.77 in diseased), demonstrating clinical reliability. Conclusion: PanSegNet represents the first validated deep learning solution for pediatric pancreatic MRI segmentation, achieving expert-level performance across healthy and diseased states. This tool and algorithm, along with our annotated dataset, are freely available on GitHub and OSF, advancing accessible, radiation-free pediatric pancreatic imaging and fostering collaborative research in this underserved domain.
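The HD95 metric above is the 95th percentile of symmetric surface distances, which is less sensitive to single outlier voxels than the maximum Hausdorff distance. A brute-force sketch over small point sets (the coordinates are toy values; real use would extract boundary voxels from the masks):

```python
import numpy as np

def hd95(a_pts: np.ndarray, b_pts: np.ndarray) -> float:
    """95th-percentile symmetric Hausdorff distance between two point sets
    (e.g. boundary voxel coordinates of two segmentation masks)."""
    # pairwise Euclidean distances, shape (len(a), len(b))
    d = np.linalg.norm(a_pts[:, None, :] - b_pts[None, :, :], axis=-1)
    d_ab = d.min(axis=1)   # each point in A to its nearest point in B
    d_ba = d.min(axis=0)   # each point in B to its nearest point in A
    return float(np.percentile(np.concatenate([d_ab, d_ba]), 95))

# Two parallel toy contours one voxel apart
a = np.array([[0, 0], [0, 1], [0, 2]], dtype=float)
b = np.array([[1, 0], [1, 1], [1, 2]], dtype=float)
print(hd95(a, b))  # every nearest-neighbour distance is 1.0
```

The O(|A|·|B|) pairwise matrix is fine for contours of a few thousand points; production pipelines typically use a KD-tree instead.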

NERO: Explainable Out-of-Distribution Detection with Neuron-level Relevance

Anju Chhetri, Jari Korhonen, Prashnna Gyawali, Binod Bhattarai

arXiv preprint · Jun 18 2025
Ensuring reliability is paramount in deep learning, particularly within the domain of medical imaging, where diagnostic decisions often hinge on model outputs. The capacity to separate out-of-distribution (OOD) samples has proven to be a valuable indicator of a model's reliability in research. In medical imaging, this is especially critical, as identifying OOD inputs can help flag potential anomalies that might otherwise go undetected. While many OOD detection methods rely on feature or logit space representations, recent works suggest these approaches may not fully capture OOD diversity. To address this, we propose a novel OOD scoring mechanism, called NERO, that leverages neuron-level relevance at the feature layer. Specifically, we cluster neuron-level relevance for each in-distribution (ID) class to form representative centroids and introduce a relevance distance metric to quantify a new sample's deviation from these centroids, enhancing OOD separability. Additionally, we refine performance by incorporating scaled relevance in the bias term and combining feature norms. Our framework also enables explainable OOD detection. We validate its effectiveness across multiple deep learning architectures on the gastrointestinal imaging benchmarks Kvasir and GastroVision, achieving improvements over state-of-the-art OOD detection methods.
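The core scoring idea described above, per-class centroids of relevance vectors plus a distance-based OOD score, can be sketched as follows. This is a simplified stand-in, not the authors' implementation: the relevance vectors here are synthetic Gaussian clusters, and the refinements the paper mentions (scaled relevance in the bias term, feature-norm combination) are omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical neuron-level relevance vectors for two in-distribution classes
class0 = rng.normal(loc=0.0, scale=0.1, size=(50, 8))
class1 = rng.normal(loc=1.0, scale=0.1, size=(50, 8))
centroids = np.stack([class0.mean(axis=0), class1.mean(axis=0)])

def relevance_distance(r: np.ndarray) -> float:
    """OOD score: distance from a sample's relevance vector to the
    nearest in-distribution class centroid (higher = more OOD-like)."""
    return float(np.linalg.norm(centroids - r, axis=1).min())

id_sample = rng.normal(0.0, 0.1, size=8)    # resembles class 0
ood_sample = rng.normal(5.0, 0.1, size=8)   # far from both centroids
print(relevance_distance(id_sample), relevance_distance(ood_sample))
```

Because the score lives in relevance space, the per-neuron contributions to the distance can themselves be inspected, which is what makes the detection explainable.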

A Digital Twin Framework for Adaptive Treatment Planning in Radiotherapy

Chih-Wei Chang, Sri Akkineni, Mingzhe Hu, Keyur D. Shah, Jun Zhou, Xiaofeng Yang

arXiv preprint · Jun 17 2025
This study aims to develop and evaluate a digital twin (DT) framework to enhance adaptive proton therapy for prostate stereotactic body radiotherapy (SBRT), focusing on improving treatment precision for dominant intraprostatic lesions (DILs) while minimizing organ-at-risk (OAR) toxicity. We propose a digital twin framework combining deep learning (DL)-based deformable image registration (DIR) with a prior treatment database to generate synthetic CTs (sCTs) for predicting interfractional anatomical changes. Using daily CBCT from five prostate SBRT patients with DILs, the framework precomputes multiple plans with high (DT-H) and low (DT-L) similarity sCTs. Plan optimization is performed in RayStation 2023B, assuming a constant RBE of 1.1 and robustly accounting for positional and range uncertainties. Plan quality is evaluated via a modified ProKnow score across two fractions, with reoptimization limited to 10 minutes. Daily CBCT evaluation showed clinical plans often violated OAR constraints (e.g., bladder V20.8Gy, rectum V23Gy), with DIL V100 < 90% in 2 patients, indicating SIFB failure. DT-H plans, using high-similarity sCTs, achieved better or comparable DIL/CTV coverage and lower OAR doses, with reoptimization completed within 10 min (e.g., DT-H-REopt-A score: 154.3-165.9). DT-L plans showed variable outcomes; lower similarity correlated with reduced DIL coverage (e.g., Patient 4: 84.7%). DT-H consistently outperformed clinical plans within time limits, while extended optimization brought DT-L and clinical plans closer to DT-H quality. This DT framework enables rapid, personalized adaptive proton therapy, improving DIL targeting and reducing toxicity. By addressing geometric uncertainties, it supports outcome gains in ultra-hypofractionated prostate RT and lays groundwork for future multimodal anatomical prediction.

Deep learning based colorectal cancer detection in medical images: A comprehensive analysis of datasets, methods, and future directions.

Gülmez B

PubMed · Jun 17 2025
This comprehensive review examines the current state and evolution of artificial intelligence applications in colorectal cancer detection through medical imaging from 2019 to 2025. The study presents a quantitative analysis of 110 high-quality publications and 9 publicly accessible medical image datasets used for training and validation. Various convolutional neural network architectures, including ResNet (40 implementations), VGG (18 implementations), and emerging transformer-based models (12 implementations), are systematically categorized and evaluated for classification, object detection, and segmentation tasks. The investigation encompasses hyperparameter optimization techniques utilized to enhance model performance, with particular focus on genetic algorithms and particle swarm optimization approaches. The role of explainable AI methods in medical diagnosis interpretation is analyzed through visualization techniques such as Grad-CAM and SHAP. Technical limitations, including dataset scarcity, computational constraints, and standardization challenges, are identified through trend analysis. Research gaps in current methodologies are highlighted through comparative assessment of performance metrics across different architectural implementations. Potential future research directions, including multimodal learning and federated learning approaches, are proposed based on publication trend analysis. This review serves as a comprehensive reference for researchers in medical image analysis and clinical practitioners implementing AI-based colorectal cancer detection systems.

2nd trimester ultrasound (anomaly).

Carocha A, Vicente M, Bernardeco J, Rijo C, Cohen Á, Cruz J

PubMed · Jun 17 2025
The second-trimester ultrasound is a crucial tool in prenatal care, typically conducted between 18 and 24 weeks of gestation to evaluate fetal anatomy and growth and to perform mid-trimester screening. This article provides a comprehensive overview of the best practices and guidelines for performing this examination, with a focus on detecting fetal anomalies. The ultrasound assesses key structures and evaluates fetal growth by measuring biometric parameters, which are essential for estimating fetal weight. Additionally, the article discusses the importance of placental evaluation, measurement of amniotic fluid levels, and assessment of the risk of preterm birth through cervical length measurement. Factors that can affect the accuracy of the scan, such as the skill of the operator, the quality of the equipment, and maternal conditions such as obesity, are discussed. The article also addresses the limitations of the procedure, including variability in detection rates. Despite these challenges, the second-trimester ultrasound remains a valuable screening and diagnostic tool, providing essential information for managing pregnancies, especially in high-risk cases. Future directions include improving imaging technology, integrating artificial intelligence for anomaly detection, and standardizing ultrasound protocols to enhance diagnostic accuracy and ensure consistent prenatal care.

Effects of patient and imaging factors on small bowel motility scores derived from deep learning-based segmentation of cine MRI.

Heo S, Yun J, Kim DW, Park SY, Choi SH, Kim K, Jung KW, Myung SJ, Park SH

PubMed · Jun 17 2025
Small bowel motility can be quantified using cine MRI, but the influence of patient and imaging factors on motility scores remains unclear. This study evaluated whether patient and imaging factors affect motility scores derived from deep learning-based segmentation of cine MRI. Fifty-four patients (mean age 53.6 ± 16.4 years; 34 women) with chronic constipation or suspected colonic pseudo-obstruction who underwent cine MRI covering the entire small bowel between 2022 and 2023 were included. A deep learning algorithm was developed to segment small bowel regions, and motility was quantified with an optical flow-based algorithm, producing a motility score for each slice. Associations of motility scores with patient factors (age, sex, body mass index, symptoms, and bowel distension) and MRI slice-related factors (anatomical location, bowel area, and anteroposterior position) were analyzed using linear mixed models. Deep learning-based small bowel segmentation achieved a mean volumetric Dice similarity coefficient of 75.4 ± 18.9%, with a manual correction time of 26.5 ± 13.5 s. Median motility scores per patient ranged from 26.4 to 64.4, with an interquartile range of 3.1-26.6. Multivariable analysis revealed that MRI slice-related factors, including anatomical location with mixed ileum and jejunum (β = -4.9; p = 0.01, compared with ileum dominant), bowel area (first order β = -0.2, p < 0.001; second order β = 5.7 × 10⁻⁴, p < 0.001), and anteroposterior position (first order β = -51.5, p < 0.001; second order β = 28.8, p = 0.004) were significantly associated with motility scores. Patient factors showed no association with motility scores. Small bowel motility scores were significantly associated with MRI slice-related factors. Determining global motility without adjusting for these factors may be limited.
Question: Global small bowel motility can be quantified from cine MRI; however, the confounding factors affecting motility scores remain unclear.
Findings: Motility scores were significantly influenced by MRI slice-related factors, including anatomical location, bowel area, and anteroposterior position.
Clinical relevance: Adjusting for slice-related factors is essential for accurate interpretation of small bowel motility scores on cine MRI.
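The per-slice motility score reduces a segmented cine series to a single number. The study uses an optical flow-based algorithm; the sketch below substitutes a much cruder proxy (mean absolute frame-to-frame intensity change inside the bowel mask) purely to illustrate the shape of the computation, with synthetic data in place of real cine MRI:

```python
import numpy as np

def motility_score(cine: np.ndarray, mask: np.ndarray) -> float:
    """Crude per-slice motility proxy: mean absolute frame-to-frame
    intensity change inside the segmented small-bowel mask.
    cine: (T, H, W) image series; mask: (H, W) boolean segmentation."""
    diffs = np.abs(np.diff(cine, axis=0))   # (T-1, H, W) temporal changes
    return float(diffs[:, mask].mean())     # average over time and mask

t = np.linspace(0, 2 * np.pi, 20)
still = np.ones((20, 8, 8))                                   # no motion
moving = 1 + 0.5 * np.sin(t)[:, None, None] * np.ones((20, 8, 8))
mask = np.ones((8, 8), dtype=bool)
print(motility_score(still, mask), motility_score(moving, mask))
```

Restricting the average to the mask is what ties the score to the deep learning segmentation step, and also why slice-related factors such as bowel area can confound it.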

Latent Anomaly Detection: Masked VQ-GAN for Unsupervised Segmentation in Medical CBCT

Pengwei Wang

arXiv preprint · Jun 17 2025
Advances in treatment technology now allow for the use of customizable 3D-printed hydrogel wound dressings for patients with osteoradionecrosis of the jaw (ORN). Meanwhile, deep learning has enabled precise segmentation of 3D medical images using tools like nnUNet. However, the scarcity of labeled data in ORN imaging makes supervised training impractical. This study aims to develop an unsupervised training approach for automatically identifying anomalies in imaging scans. We propose a novel two-stage training pipeline. In the first stage, a VQ-GAN is trained to accurately reconstruct normal subjects. In the second stage, random cube masking and ORN-specific masking are applied to train a new encoder capable of recovering the masked data. The proposed method achieves successful segmentation on both simulated and real patient data. This approach provides a fast initial segmentation solution, reducing the burden of manual labeling. Additionally, it has the potential to be directly used for 3D printing when combined with hand-tuned post-processing.
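The random cube masking used in the second training stage hides cubic sub-volumes so the encoder must learn to reconstruct them. A minimal sketch of that masking step on a synthetic volume (cube size and count are arbitrary illustrative choices, not the paper's settings):

```python
import numpy as np

def random_cube_mask(volume: np.ndarray, cube: int, n_cubes: int,
                     rng: np.random.Generator) -> np.ndarray:
    """Zero out n_cubes random cubic regions of a 3D volume, as in
    masked-reconstruction training: the network must recover the
    hidden voxels from the surrounding context."""
    masked = volume.copy()
    for _ in range(n_cubes):
        z, y, x = (int(rng.integers(0, s - cube + 1)) for s in volume.shape)
        masked[z:z + cube, y:y + cube, x:x + cube] = 0.0
    return masked

rng = np.random.default_rng(42)
vol = np.ones((16, 16, 16))
out = random_cube_mask(vol, cube=4, n_cubes=3, rng=rng)
print(vol.sum(), out.sum())  # masking removes up to 3 * 4**3 voxels
```

The anomaly-specific variant would draw mask shapes matching expected lesion geometry instead of uniform cubes; the reconstruction error under such masks then serves as the anomaly signal.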
