
Artificial Intelligence-Assisted Segmentation of Prostate Tumors and Neurovascular Bundles: Applications in Precision Surgery for Prostate Cancer.

Mei H, Yang R, Huang J, Jiao P, Liu X, Chen Z, Chen H, Zheng Q

PubMed · Jun 18 2025
The aim of this study was to guide prostatectomy by employing artificial intelligence to segment the gross tumor volume (GTV) and neurovascular bundles (NVB). The preservation and dissection of the NVB differ between intrafascial and extrafascial robot-assisted radical prostatectomy (RARP), which affects postoperative urinary control. We trained the nnU-Net v2 neural network on data from 220 patients in the PI-CAI cohort to segment the prostate GTV and NVB in biparametric magnetic resonance imaging (bpMRI). The model was then validated in an external cohort of 209 patients from Renmin Hospital of Wuhan University (RHWU). Using three-dimensional reconstruction and point cloud analysis, we explored the spatial distribution of the GTV and NVB in relation to the intrafascial and extrafascial approaches. We also prospectively included 40 patients undergoing intrafascial or extrafascial RARP, applying the aforementioned procedure to classify the surgical approach. Additionally, 3D printing was employed to guide surgery, and short- and long-term follow-up of urinary function was conducted. The nnU-Net v2 network segmented the GTV, NVB, and prostate precisely, achieving Dice scores of 0.5573 ± 0.0428, 0.7679 ± 0.0178, and 0.7483 ± 0.0290, respectively. By measuring the distance from the GTV to the NVB, we successfully predicted the surgical approach. Urinary control analysis revealed that the intrafascial approach yielded better postoperative urinary function, facilitating more refined management of patients with prostate cancer and personalized medical care. Artificial intelligence can accurately identify the GTV and NVB in preoperative bpMRI of patients with prostate cancer and guide the choice between intrafascial and extrafascial RARP. Patients undergoing intrafascial RARP with a preserved NVB demonstrate improved postoperative urinary control.
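The decision rule above hinges on the distance from the GTV to the NVB. As a rough illustration (not the authors' code), the minimum gap between the two structures can be estimated directly from binary masks with a SciPy distance transform; the function name and voxel-spacing handling below are our own assumptions.

```python
# Minimal sketch: estimating the minimum GTV-to-NVB distance from two
# binary masks. Illustrative only; not the study's implementation.
import numpy as np
from scipy.ndimage import distance_transform_edt

def min_gtv_to_nvb_distance(gtv_mask: np.ndarray,
                            nvb_mask: np.ndarray,
                            spacing=(1.0, 1.0, 1.0)) -> float:
    """Smallest Euclidean distance (in mm) from any GTV voxel to the NVB.

    Both inputs are non-empty boolean 3D arrays on the same grid;
    `spacing` is the voxel size, so the result is in physical units.
    """
    # Distance from every voxel to the nearest NVB voxel.
    dist_to_nvb = distance_transform_edt(~nvb_mask, sampling=spacing)
    # The minimum of that field over the GTV is the gap between the structures.
    return float(dist_to_nvb[gtv_mask].min())
```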

A Deep Learning Lung Cancer Segmentation Pipeline to Facilitate CT-based Radiomics

So, A. C. P., Cheng, D., Aslani, S., Azimbagirad, M., Yamada, D., Dunn, R., Josephides, E., McDowall, E., Henry, A.-R., Bille, A., Sivarasan, N., Karapanagiotou, E., Jacob, J., Pennycuick, A.

medRxiv preprint · Jun 18 2025
Background: CT-based radio-biomarkers could provide non-invasive insights into tumour biology to risk-stratify patients. One limitation is the laborious manual segmentation of regions of interest (ROIs). We present a deep learning auto-segmentation pipeline for radiomic analysis. Patients and Methods: 153 patients with resected stage 2A-3B non-small cell lung cancers (NSCLCs) had tumours segmented using nnU-Net, with review by two clinicians. The nnU-Net was pretrained with anatomical priors on non-cancerous lungs and fine-tuned on NSCLCs. Three ROIs were segmented: intra-tumoural, peri-tumoural, and whole lung. 1967 features were extracted using PyRadiomics. Feature reproducibility was tested using segmentation perturbations. Features were selected using minimum-redundancy maximum-relevance with Random Forest recursive feature elimination, nested in 500 bootstraps. Results: Auto-segmentation time was ~36 seconds per series. Mean volumetric and surface Dice-Sørensen coefficient (DSC) scores were 0.84 (±0.28) and 0.79 (±0.34), respectively. DSC was significantly correlated with tumour shape (sphericity, diameter) and location (worse with chest wall adherence), but not with batch effects (e.g. contrast, reconstruction kernel). 6.5% of cases had missed segmentations and 6.5% required major changes. Pre-training on anatomical priors produced better segmentations than training on tumour labels alone (p<0.001) or on tumour with anatomical labels (p<0.001). Most radiomic features were not reproducible under perturbation and resampling. Adding radiomic features, however, did not significantly improve the clinical model's prediction of 2-year disease-free survival: AUCs 0.67 (95% CI 0.59-0.75) vs 0.63 (95% CI 0.54-0.71), respectively (p=0.28). Conclusion: Our study demonstrates that integrating auto-segmentation into radio-biomarker discovery is feasible with high efficiency and accuracy. Whilst the radiomic analysis showed limited reproducibility, our auto-segmentation may enable more robust radio-biomarker analysis using deep learning features.
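For readers unfamiliar with the extraction step, a minimal PyRadiomics sketch in the spirit of the pipeline above follows; the file paths and the all-features setting are illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch of ROI feature extraction with PyRadiomics.
from radiomics import featureextractor

extractor = featureextractor.RadiomicsFeatureExtractor()
extractor.enableAllFeatures()  # the study extracted 1967 features across its ROIs

# One (image, mask) pair per ROI: intra-tumoural, peri-tumoural, whole lung.
features = extractor.execute("ct_series.nii.gz", "intratumoural_mask.nii.gz")

# Keep the radiomic values and drop PyRadiomics' diagnostic metadata.
radiomic_values = {k: v for k, v in features.items()
                   if not k.startswith("diagnostics_")}
```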

Comparison of publicly available artificial intelligence models for pancreatic segmentation on T1-weighted Dixon images.

Sonoda Y, Fujisawa S, Kurokawa M, Gonoi W, Hanaoka S, Yoshikawa T, Abe O

PubMed · Jun 18 2025
This study aimed to compare three publicly available deep learning models (TotalSegmentator, TotalVibeSegmentator, and PanSegNet) for automated pancreatic segmentation on magnetic resonance images and to evaluate their performance against human annotations in terms of segmentation accuracy, volumetric measurement, and intrapancreatic fat fraction (IPFF) assessment. Twenty upper abdominal T1-weighted magnetic resonance series acquired using the two-point Dixon method were randomly selected. Three radiologists manually segmented the pancreas, and a ground-truth mask was constructed through a majority vote per voxel. Pancreatic segmentation was also performed using the three artificial intelligence models. Performance was evaluated using the Dice similarity coefficient (DSC), 95th-percentile Hausdorff distance, average symmetric surface distance, positive predictive value, sensitivity, Bland-Altman plots, and concordance correlation coefficient (CCC) for pancreatic volume and IPFF. PanSegNet achieved the highest DSC (mean ± standard deviation, 0.883 ± 0.095) and showed no statistically significant difference from the human interobserver DSC (0.896 ± 0.068; p = 0.24). In contrast, TotalVibeSegmentator (0.731 ± 0.105) and TotalSegmentator (0.707 ± 0.142) had significantly lower DSC values compared with the human interobserver average (p < 0.001). For pancreatic volume and IPFF, PanSegNet demonstrated the best agreement with the ground truth (CCC values of 0.958 and 0.993, respectively), followed by TotalSegmentator (0.834 and 0.980) and TotalVibeSegmentator (0.720 and 0.672). PanSegNet demonstrated the highest segmentation accuracy and the best agreement with human measurements for both pancreatic volume and IPFF on T1-weighted Dixon images. This model appears to be the most suitable for large-scale studies requiring automated pancreatic segmentation and intrapancreatic fat evaluation.
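Lin's concordance correlation coefficient (CCC) used above has a simple closed form; a minimal sketch of it (our illustration, not the paper's code) follows.

```python
# Lin's concordance correlation coefficient for paired measurements,
# e.g. model-derived vs ground-truth pancreatic volume or IPFF.
import numpy as np

def concordance_ccc(x: np.ndarray, y: np.ndarray) -> float:
    """Lin's CCC between paired measurement vectors x and y."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()              # population variances
    cov = ((x - mx) * (y - my)).mean()     # population covariance
    return 2 * cov / (vx + vy + (mx - my) ** 2)
```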

MDEANet: A multi-scale deep enhanced attention net for popliteal fossa segmentation in ultrasound images.

Chen F, Fang W, Wu Q, Zhou M, Guo W, Lin L, Chen Z, Zou Z

PubMed · Jun 18 2025
Popliteal sciatic nerve block is a widely used technique for lower limb anesthesia. However, despite ultrasound guidance, the complex anatomical structures of the popliteal fossa can present challenges, potentially leading to complications. To accurately identify the bifurcation of the sciatic nerve for nerve blockade, we propose MDEANet, a deep learning-based segmentation network designed for the precise localization of nerves, muscles, and arteries in ultrasound images of the popliteal region. MDEANet incorporates Cascaded Multi-scale Atrous Convolutions (CMAC) to enhance multi-scale feature extraction, Enhanced Spatial Attention Mechanism (ESAM) to focus on key anatomical regions, and Cross-level Feature Fusion (CLFF) to improve contextual representation. This integration markedly improves segmentation of nerves, muscles, and arteries. Experimental results demonstrate that MDEANet achieves an average Intersection over Union (IoU) of 88.60% and a Dice coefficient of 93.95% across all target structures, outperforming state-of-the-art models by 1.68% in IoU and 1.66% in Dice coefficient. Specifically, for nerve segmentation, the Dice coefficient reaches 93.31%, underscoring the effectiveness of our approach. MDEANet has the potential to provide decision-support assistance for anesthesiologists, thereby enhancing the accuracy and efficiency of ultrasound-guided nerve blockade procedures.
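The abstract names a Cascaded Multi-scale Atrous Convolutions (CMAC) module without giving its design; the PyTorch sketch below shows one generic way such a cascaded atrous block could be wired, with the dilation rates and residual fusion chosen as assumptions.

```python
# Hedged sketch of a cascaded multi-scale atrous convolution block;
# the real CMAC design in MDEANet may differ.
import torch
import torch.nn as nn

class CascadedAtrousBlock(nn.Module):
    """Cascade of 3x3 atrous convolutions with multi-scale fusion."""

    def __init__(self, channels: int, dilations=(1, 2, 4)):
        super().__init__()
        self.stages = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=d, dilation=d),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        ])
        self.fuse = nn.Conv2d(channels * len(dilations), channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        outs, feat = [], x
        for stage in self.stages:   # cascade: each stage refines the previous output
            feat = stage(feat)
            outs.append(feat)
        # Concatenate the multi-scale responses and fuse with a residual connection.
        return self.fuse(torch.cat(outs, dim=1)) + x
```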

Image-based AI tools in peripheral nerves assessment: Current status and integration strategies - A narrative review.

Martín-Noguerol T, Díaz-Angulo C, Luna A, Segovia F, Gómez-Río M, Górriz JM

PubMed · Jun 18 2025
Peripheral nerves (PNs) are traditionally evaluated using US or MRI, allowing radiologists to identify them and classify them as normal or pathological based on imaging findings, symptoms, and electrophysiological tests. However, the anatomical complexity of PNs, coupled with their proximity to surrounding structures like vessels and muscles, presents significant challenges. Advanced imaging techniques, including MR neurography and diffusion-weighted imaging (DWI) neurography, have shown promise but are hindered by steep learning curves, operator dependency, and limited accessibility. Discrepancies between imaging findings and patient symptoms further complicate the evaluation of PNs, particularly in cases where imaging appears normal despite clinical indications of pathology. Additionally, demographic and clinical factors such as age, sex, comorbidities, and physical activity influence PN health but remain unquantifiable with current imaging methods. Artificial intelligence (AI) solutions have emerged as a transformative tool in PN evaluation. AI-based algorithms offer the potential to transition from qualitative to quantitative assessment, enabling precise segmentation, characterization, and threshold determination to distinguish healthy from pathological nerves. These advances could improve diagnostic accuracy and treatment monitoring. This review highlights the latest advances in AI applications for PN imaging, discussing their potential to overcome current limitations and the opportunities for integrating them into routine radiological practice.

Interactive prototype learning and self-learning for few-shot medical image segmentation.

Song Y, Xu C, Wang B, Du X, Chen J, Zhang Y, Li S

PubMed · Jun 18 2025
Few-shot learning alleviates the heavy dependence of medical image segmentation on large-scale labeled data, but it still shows a substantial performance gap on new tasks compared with traditional deep learning. Existing methods mainly learn class knowledge from a few known (support) samples and extend it to unknown (query) samples. However, large distribution differences between the support and query images cause serious deviations in the transfer of class knowledge, which can be summarized as two segmentation challenges: intra-class inconsistency with inter-class similarity, and blurred, confused boundaries. In this paper, we propose a new interactive prototype learning and self-learning network to address these challenges. First, we propose a deep encoding-decoding module that learns high-level features of the support and query images to build peak prototypes carrying the richest semantic information and to provide semantic guidance for segmentation. Then, we propose an interactive prototype learning module that improves intra-class feature consistency and reduces inter-class feature similarity by performing mean-prototype interaction on mid-level features and peak-prototype interaction on high-level features. Finally, we propose a query-features-guided self-learning module that separates foreground and background at the feature level and combines low-level feature maps to complement boundary information. Our model achieves competitive segmentation performance on benchmark datasets and shows substantial improvement in generalization ability.
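The prototype machinery the paper builds on is standard in few-shot segmentation; below is a hedged sketch of the generic mechanism (masked average pooling to form a class prototype, then cosine-similarity scoring of query pixels). It illustrates the baseline idea, not the paper's interactive-prototype design.

```python
# Generic prototype-based few-shot segmentation primitives (PyTorch).
import torch
import torch.nn.functional as F

def masked_average_prototype(feat: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """feat: (B, C, H, W) support features; mask: (B, 1, h, w) float binary labels."""
    mask = F.interpolate(mask, size=feat.shape[-2:], mode="nearest")
    proto = (feat * mask).sum(dim=(2, 3)) / mask.sum(dim=(2, 3)).clamp(min=1e-6)
    return proto.mean(dim=0)  # (C,): one prototype for the class

def query_similarity(query_feat: torch.Tensor, proto: torch.Tensor) -> torch.Tensor:
    """Cosine-similarity map between query features (B, C, H, W) and the prototype."""
    # Broadcasting over batch and spatial dims yields a (B, H, W) score map.
    return F.cosine_similarity(query_feat, proto[None, :, None, None], dim=1)
```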

Pediatric Pancreas Segmentation from MRI Scans with Deep Learning

Elif Keles, Merve Yazol, Gorkem Durak, Ziliang Hong, Halil Ertugrul Aktas, Zheyuan Zhang, Linkai Peng, Onkar Susladkar, Necati Guzelyel, Oznur Leman Boyunaga, Cemal Yazici, Mark Lowe, Aliye Uc, Ulas Bagci

arXiv preprint · Jun 18 2025
Objective: Our study aimed to evaluate and validate PanSegNet, a deep learning (DL) algorithm for pediatric pancreas segmentation on MRI in children with acute pancreatitis (AP), chronic pancreatitis (CP), and healthy controls. Methods: With IRB approval, we retrospectively collected 84 MRI scans (1.5T/3T Siemens Aera/Verio) from children aged 2-19 years at Gazi University (2015-2024). The dataset includes healthy children as well as patients diagnosed with AP or CP based on clinical criteria. Pediatric and general radiologists manually segmented the pancreas, and the segmentations were then confirmed by a senior pediatric radiologist. PanSegNet-generated segmentations were assessed using the Dice similarity coefficient (DSC) and 95th percentile Hausdorff distance (HD95). Cohen's kappa measured observer agreement. Results: Pancreas MRI T2W scans were obtained from 42 children with AP/CP (mean age: 11.73 ± 3.9 years) and 42 healthy children (mean age: 11.19 ± 4.88 years). PanSegNet achieved DSC scores of 88% (controls), 81% (AP), and 80% (CP), with HD95 values of 3.98 mm (controls), 9.85 mm (AP), and 15.67 mm (CP). Inter-observer kappa was 0.86 (controls) and 0.82 (pancreatitis); intra-observer agreement reached 0.88 and 0.81. Strong agreement was observed between automated and manual volumes (R^2 = 0.85 in controls, 0.77 in diseased), demonstrating clinical reliability. Conclusion: PanSegNet represents the first validated deep learning solution for pancreatic MRI segmentation, achieving expert-level performance across healthy and diseased states. The tool and algorithm, along with our annotated dataset, are freely available on GitHub and OSF, advancing accessible, radiation-free pediatric pancreatic imaging and fostering collaborative research in this underserved domain.
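For reference, the HD95 metric reported above can be computed from mask surfaces with SciPy distance transforms; the sketch below is our assumption of a standard implementation, not the study's code.

```python
# Symmetric 95th percentile Hausdorff distance between two binary masks.
import numpy as np
from scipy.ndimage import distance_transform_edt, binary_erosion

def surface(mask: np.ndarray) -> np.ndarray:
    """Boundary voxels of a boolean mask (mask minus its erosion)."""
    return mask & ~binary_erosion(mask)

def hd95(pred: np.ndarray, gt: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """HD95 in physical units for non-empty boolean masks on the same grid."""
    d_to_gt = distance_transform_edt(~surface(gt), sampling=spacing)
    d_to_pred = distance_transform_edt(~surface(pred), sampling=spacing)
    # Pool surface-to-surface distances in both directions, then take the
    # 95th percentile for robustness to outlier voxels.
    distances = np.concatenate([d_to_gt[surface(pred)], d_to_pred[surface(gt)]])
    return float(np.percentile(distances, 95))
```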

Pixel-wise Modulated Dice Loss for Medical Image Segmentation

Seyed Mohsen Hosseini

arXiv preprint · Jun 17 2025
Class imbalance and difficulty imbalance are the two types of data imbalance that affect the performance of neural networks in medical segmentation tasks. With class imbalance the loss is dominated by the majority classes, and with difficulty imbalance the loss is dominated by easy-to-classify pixels; both lead to ineffective training. Dice loss, which is based on a geometric metric, is very effective at addressing class imbalance compared with cross-entropy (CE) loss, which is adopted directly from classification tasks. To address difficulty imbalance, the common approach is a re-weighted CE loss or a modified Dice loss that focuses training on difficult-to-classify areas. Existing modification methods are computationally costly and have had limited success. In this study we propose a simple modification to the Dice loss with minimal computational cost. With a pixel-level modulating term, we exploit the effectiveness of Dice loss in handling class imbalance to also handle difficulty imbalance. Results on three commonly used medical segmentation tasks show that the proposed Pixel-wise Modulated Dice loss (PM Dice loss) outperforms other methods designed to tackle the difficulty-imbalance problem.
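The abstract does not spell out the modulating term, so the PyTorch sketch below uses a focal-style per-pixel weight |y - p|^gamma as one plausible reading; treat it as an illustration of the idea rather than the paper's exact PM Dice loss.

```python
# Hedged sketch of a pixel-wise modulated soft Dice loss (binary case).
import torch

def pm_dice_loss(logits: torch.Tensor, target: torch.Tensor,
                 gamma: float = 2.0, eps: float = 1e-6) -> torch.Tensor:
    """logits: raw scores; target: float binary mask of the same shape."""
    p = torch.sigmoid(logits)
    # Harder pixels (large |y - p|) receive larger weight; detached so the
    # modulation reweights the loss without adding gradient paths.
    w = (target - p).abs().pow(gamma).detach()
    inter = (w * p * target).sum()
    denom = (w * (p + target)).sum()
    return 1.0 - (2.0 * inter + eps) / (denom + eps)
```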

BRISC: Annotated Dataset for Brain Tumor Segmentation and Classification with Swin-HAFNet

Amirreza Fateh, Yasin Rezvani, Sara Moayedi, Sadjad Rezvani, Fatemeh Fateh, Mansoor Fateh

arXiv preprint · Jun 17 2025
Accurate segmentation and classification of brain tumors from Magnetic Resonance Imaging (MRI) remain key challenges in medical image analysis, largely due to the lack of high-quality, balanced, and diverse datasets. In this work, we present a new curated MRI dataset designed specifically for brain tumor segmentation and classification tasks. The dataset comprises 6,000 contrast-enhanced T1-weighted MRI scans annotated by certified radiologists and physicians, spanning three major tumor types (glioma, meningioma, and pituitary) as well as non-tumorous cases. Each sample includes high-resolution labels and is categorized across axial, sagittal, and coronal imaging planes to facilitate robust model development and cross-view generalization. To demonstrate the utility of the dataset, we propose a transformer-based segmentation model and benchmark it against established baselines. Our method achieves the highest weighted mean Intersection-over-Union (IoU) of 82.3%, with improvements observed across all tumor categories. Importantly, this study serves primarily as an introduction to the dataset, establishing foundational benchmarks for future research. We envision this dataset as a valuable resource for advancing machine learning applications in neuro-oncology, supporting both academic research and clinical decision-support development. Dataset: https://www.kaggle.com/datasets/briscdataset/brisc2025/
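The headline metric is a weighted mean IoU; assuming class-frequency weights (our reading, since the weighting scheme is not specified here), it can be computed as follows.

```python
# Weighted mean IoU with per-class weights proportional to ground-truth
# pixel frequency; an assumed definition, not the paper's exact metric.
import numpy as np

def weighted_mean_iou(pred: np.ndarray, gt: np.ndarray, num_classes: int) -> float:
    """pred, gt: integer label maps of the same shape."""
    ious, weights = [], []
    for c in range(num_classes):
        p, g = pred == c, gt == c
        union = (p | g).sum()
        if union == 0:
            continue  # class absent in both maps; skip it
        ious.append((p & g).sum() / union)
        weights.append(g.sum())
    return float(np.average(ious, weights=weights))
```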

Transformer-augmented lightweight U-Net (UAAC-Net) for accurate MRI brain tumor segmentation.

Varghese NE, John A, C UDA, Pillai MJ

PubMed · Jun 17 2025
Accurate segmentation of brain tumor images, particularly gliomas in MRI scans, is crucial for early diagnosis, monitoring progression, and evaluating tumor structure and therapeutic response. This work proposes a novel lightweight, transformer-based U-Net model for brain tumor segmentation that integrates attention mechanisms and multi-layer feature extraction via atrous convolution to capture long-range relationships and contextual information across image regions. The model's performance is evaluated on the publicly accessible BraTS 2020 dataset using metrics such as the Dice coefficient, accuracy, mean Intersection over Union (IoU), sensitivity, and specificity. The proposed model outperforms many existing methods, such as MimicNet, Swin Transformer-based UNet, and hybrid multiresolution-based UNet, and handles a variety of segmentation challenges. The experimental results demonstrate that the proposed model achieves an accuracy of 98.23%, a Dice score of 0.9716, and a mean IoU of 0.8242 during training, compared with current state-of-the-art methods.
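As with the CMAC sketch earlier, the attention design here is not specified in the abstract; below is a hedged sketch of a standard additive attention gate on a U-Net skip connection, with channel sizes and wiring as assumptions rather than UAAC-Net's actual architecture.

```python
# Additive attention gate for a U-Net skip connection (Attention U-Net style).
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    def __init__(self, skip_ch: int, gate_ch: int, inter_ch: int):
        super().__init__()
        self.theta = nn.Conv2d(skip_ch, inter_ch, 1)  # project encoder skip features
        self.phi = nn.Conv2d(gate_ch, inter_ch, 1)    # project decoder gating features
        self.psi = nn.Conv2d(inter_ch, 1, 1)          # collapse to a scalar attention map

    def forward(self, skip: torch.Tensor, gate: torch.Tensor) -> torch.Tensor:
        # Assumes skip and gate have been brought to the same spatial size.
        att = torch.relu(self.theta(skip) + self.phi(gate))
        att = torch.sigmoid(self.psi(att))  # (B, 1, H, W) weights in [0, 1]
        return skip * att                   # suppress irrelevant skip regions
```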