
Automated deep U-Net model for ischemic stroke lesion segmentation in the sub-acute phase.

E R, Bevi AR

PubMed · Sep 29, 2025
Manual segmentation of sub-acute ischemic stroke lesions in fluid-attenuated inversion recovery magnetic resonance imaging (FLAIR MRI) is time-consuming and subject to inter-observer variability, limiting clinical workflow efficiency. We develop and validate an automated deep learning framework for accurate segmentation of sub-acute ischemic stroke lesions in FLAIR MRI using rigorous validation methodology. We propose a novel multi-path residual U-Net (U-shaped network) architecture with six parallel pathways per block (depths of 0-5 convolutional layers) and 2.34 million trainable parameters. Hyperparameters were systematically optimized using 5-fold cross-validation across 60 configurations. We addressed intensity inhomogeneity using N4 bias field correction and employed strict patient-level data partitioning (18 training, 5 validation, 5 test patients) to prevent data leakage. Statistical analysis utilized bias-corrected bootstrap confidence intervals and Bonferroni correction for multiple comparisons. Our model achieved a validation Dice similarity coefficient (DSC) of 0.85 ± 0.12 (95% CI: 0.79-0.91), a sensitivity of 0.82 ± 0.15, a specificity of 0.95 ± 0.04, and a Hausdorff distance of 14.1 ± 5.8 mm. Test set performance remained consistent (DSC: 0.89 ± 0.07), confirming generalizability. Computational efficiency was demonstrated with a 45 ms inference time per slice. The architecture demonstrated statistically significant improvements over DRANet (p = 0.003), 2D CNN (p = 0.001), and Attention U-Net (p = 0.001), while achieving performance comparable to CSNet (p = 0.68). The proposed framework demonstrates robust performance for automated stroke lesion segmentation with rigorous statistical validation. However, multi-site validation across diverse clinical environments remains essential before clinical implementation.
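
As a concrete reference for the headline metric, the sketch below computes the Dice similarity coefficient on binary masks with NumPy; the toy arrays are illustrative and not from the paper.

```python
# Minimal NumPy sketch of the Dice similarity coefficient (DSC) used to
# score lesion masks; array names and shapes are illustrative assumptions.
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection) / (pred.sum() + truth.sum() + eps)

# Example: two partially overlapping toy masks.
a = np.zeros((8, 8), dtype=bool); a[2:6, 2:6] = True
b = np.zeros((8, 8), dtype=bool); b[3:7, 3:7] = True
print(f"DSC = {dice_coefficient(a, b):.3f}")  # ≈ 0.562 for this toy pair
```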

An Enhanced Pyramid Feature Network Based on Long-Range Dependencies for Multi-Organ Medical Image Segmentation

Dayu Tan, Cheng Kong, Yansen Su, Hai Chen, Dongliang Yang, Junfeng Xia, Chunhou Zheng

arXiv preprint · Sep 29, 2025
In the field of multi-organ medical image segmentation, recent methods frequently employ Transformers to capture long-range dependencies from image features. However, these methods overlook the high computational cost of Transformers and their deficiencies in extracting local detailed information. To address these limitations, we reassess the design of feature extraction modules and propose a new deep-learning network called LamFormer for fine-grained segmentation tasks across multiple organs. LamFormer is a novel U-shaped network that employs Linear Attention Mamba (LAM) in an enhanced pyramid encoder to capture multi-scale long-range dependencies. We construct the Parallel Hierarchical Feature Aggregation (PHFA) module to aggregate features from different layers of the encoder, narrowing the semantic gap among features while filtering information. Finally, we design the Reduced Transformer (RT), which utilizes a distinct computational approach to globally model up-sampled features. The RT enhances the extraction of detailed local information and improves the network's capability to capture long-range dependencies. LamFormer outperforms existing segmentation methods on seven complex and diverse datasets, demonstrating exceptional performance while achieving a balance between model performance and model complexity.
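
For readers unfamiliar with why linear attention matters here, the following PyTorch sketch shows a generic linear-attention layer whose cost grows linearly in the number of tokens; it is a textbook construction under an elu+1 kernel assumption, not the authors' LAM block, whose exact design is not given in the abstract.

```python
# A generic linear-attention layer: by associating (k^T v) first, the cost is
# O(N * C^2) in sequence length N rather than the O(N^2 * C) of softmax attention.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LinearAttention(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.to_qkv = nn.Linear(dim, dim * 3, bias=False)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, N, C)
        q, k, v = self.to_qkv(x).chunk(3, dim=-1)
        q = F.elu(q) + 1.0  # positive kernel feature map
        k = F.elu(k) + 1.0
        kv = torch.einsum("bnc,bnd->bcd", k, v)           # (B, C, C) summary
        z = 1.0 / (torch.einsum("bnc,bc->bn", q, k.sum(dim=1)) + 1e-6)
        out = torch.einsum("bnc,bcd,bn->bnd", q, kv, z)   # normalized attention
        return self.proj(out)

tokens = torch.randn(2, 196, 64)
print(LinearAttention(64)(tokens).shape)  # torch.Size([2, 196, 64])
```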

BALR-SAM: Boundary-Aware Low-Rank Adaptation of SAM for Resource-Efficient Medical Image Segmentation

Zelin Liu, Sicheng Dong, Bocheng Li, Yixuan Yang, Jiacheng Ruan, Chenxu Zhou, Suncheng Xiang

arXiv preprint · Sep 29, 2025
Vision foundation models like the Segment Anything Model (SAM), pretrained on large-scale natural image datasets, often struggle in medical image segmentation due to a lack of domain-specific adaptation. In clinical practice, fine-tuning such models efficiently for medical downstream tasks with minimal resource demands, while maintaining strong performance, is challenging. To address these issues, we propose BALR-SAM, a boundary-aware low-rank adaptation framework that enhances SAM for medical imaging. It combines three tailored components: (1) a Complementary Detail Enhancement Network (CDEN) using depthwise separable convolutions and multi-scale fusion to capture boundary-sensitive features essential for accurate segmentation; (2) low-rank adapters integrated into SAM's Vision Transformer blocks to optimize feature representation and attention for medical contexts, while substantially reducing the parameter space; and (3) a low-rank tensor attention mechanism in the mask decoder, cutting memory usage by 75% and boosting inference speed. Experiments on standard medical segmentation datasets show that BALR-SAM, without requiring prompts, outperforms several state-of-the-art (SOTA) methods, including fully fine-tuned MedSAM, while updating just 1.8% (11.7M) of its parameters.
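
The low-rank adapter idea at the core of BALR-SAM can be illustrated with a minimal LoRA-style wrapper around a frozen linear layer; the rank, scaling, and placement below are illustrative assumptions rather than the paper's configuration.

```python
# A minimal low-rank adapter (LoRA-style) around a frozen linear layer,
# sketching the general mechanism behind low-rank adaptation; rank and
# alpha are illustrative, not the paper's values.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():   # freeze the pretrained weights
            p.requires_grad = False
        self.down = nn.Linear(base.in_features, rank, bias=False)
        self.up = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.up.weight)     # adapter starts as a zero update
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * self.up(self.down(x))

layer = LoRALinear(nn.Linear(768, 768), rank=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(f"trainable params: {trainable}")  # 2 * 768 * 8 = 12288 (vs ~590k frozen)
```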

Evaluation of a commercial deep-learning-based contouring software for CT-based gynecological brachytherapy.

Yang HJ, Patrick J, Vickress J, D'Souza D, Velker V, Mendez L, Starling MM, Fenster A, Hoover D

PubMed · Sep 29, 2025
To evaluate commercial deep-learning-based auto-contouring software specifically trained for high-dose-rate gynecological brachytherapy. We collected CT images from 30 patients treated with gynecological brachytherapy (19.5-28 Gy in 3-4 fractions) at our institution from January 2018 to December 2022. Clinical and artificial intelligence (AI)-generated contours for bladder, bowel, rectum, and sigmoid were obtained. Five patients were randomly selected from the test set and manually re-contoured by 4 radiation oncologists. Contouring was repeated 2 weeks later using the AI contours as the starting point (the "AI-assisted" approach). Comparisons among clinical, AI, AI-assisted, and manual retrospective contours were made using various metrics, including the Dice similarity coefficient (DSC) and the unsigned D2cc difference. Between clinical and AI contours, the DSC was 0.92, 0.79, 0.62, and 0.66 for bladder, rectum, sigmoid, and bowel, respectively. Rectum and sigmoid had the lowest median unsigned D2cc differences of 0.20 and 0.21 Gy/fraction respectively between clinical and AI contours, while bowel had the largest median difference of 0.38 Gy/fraction. Agreement between fully automated AI and clinical contours was generally not different compared to agreement between AI-assisted and clinical contours. AI-assisted interobserver agreement was better than manual interobserver agreement for all organs and metrics. The median time to contour all organs for the manual and AI-assisted approaches was 14.8 and 6.9 minutes/patient (p < 0.001), respectively. The agreement between AI or AI-assisted contours and the clinical contours was similar to manual interobserver agreement. Implementation of the AI-assisted contouring approach could enhance clinical workflow by decreasing both contouring time and interobserver variability.
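
D2cc (the minimum dose received by the hottest 2 cc of an organ) can be computed directly from a dose grid and an organ mask; the sketch below is a simplified illustration with made-up inputs, not the clinical dosimetry pipeline used in the study.

```python
# A hedged sketch of the D2cc metric used to compare clinical vs. AI contours;
# the dose grid, mask, and voxel size here are illustrative assumptions.
import numpy as np

def d2cc(dose: np.ndarray, organ_mask: np.ndarray, voxel_cc: float) -> float:
    """Return the minimum dose (Gy) received by the hottest 2 cc of the organ."""
    organ_doses = np.sort(dose[organ_mask.astype(bool)])[::-1]  # hottest first
    n_voxels = int(np.ceil(2.0 / voxel_cc))                     # voxels in 2 cc
    if n_voxels > organ_doses.size:
        raise ValueError("organ smaller than 2 cc")
    return float(organ_doses[n_voxels - 1])

rng = np.random.default_rng(0)
dose = rng.gamma(shape=4.0, scale=1.5, size=(50, 50, 50))   # toy dose grid, Gy
mask = np.zeros_like(dose, dtype=bool); mask[10:40, 10:40, 10:40] = True
print(f"D2cc = {d2cc(dose, mask, voxel_cc=0.02):.2f} Gy")
```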

Towards population scale testis volume segmentation in DIXON MRI.

Ernsting J, Beeken PN, Ogoniak L, Kockwelp J, Roll W, Hahn T, Busch AS, Risse B

PubMed · Sep 29, 2025
Testis size is known to be one of the main predictors of male fertility, usually assessed in clinical workup via palpation or imaging. Despite its potential, population-level evaluation of testicular volume using imaging remains underexplored. Previous studies, limited by small and biased datasets, have demonstrated the feasibility of machine learning for testis volume segmentation. This paper presents an evaluation of segmentation methods for testicular volume using magnetic resonance imaging data from the UK Biobank. The best model achieves a median Dice score of 0.89, compared to a median Dice score of 0.85 for human inter-rater reliability on the same dataset, enabling annotation at population scale for the first time. Our overall aim is to provide a trained model, comparative baseline methods, and annotated training data to enhance accessibility and reproducibility in testis MRI segmentation research.
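
Once a segmentation is available, the volume estimate itself is just a voxel count scaled by voxel size; the NumPy sketch below assumes a binary mask and millimetre spacing (toy values, not UK Biobank data).

```python
# Converting a binary segmentation to a volume in millilitres: voxel count
# times voxel size; mask shape and spacing are illustrative assumptions.
import numpy as np

def mask_volume_ml(mask: np.ndarray, spacing_mm: tuple[float, float, float]) -> float:
    """Volume of a binary segmentation in mL (1 mL = 1000 mm^3)."""
    voxel_mm3 = spacing_mm[0] * spacing_mm[1] * spacing_mm[2]
    return mask.astype(bool).sum() * voxel_mm3 / 1000.0

mask = np.zeros((64, 64, 64), dtype=bool)
mask[28:36, 28:36, 28:36] = True          # toy 8^3-voxel "testis"
print(f"{mask_volume_ml(mask, (2.2, 2.2, 3.0)):.1f} mL")  # 512 voxels * 14.52 mm^3 ≈ 7.4 mL
```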

DCM-Net: dual-encoder CNN-Mamba network with cross-branch fusion for robust medical image segmentation.

Atabansi CC, Wang S, Li H, Nie J, Xiang L, Zhang C, Liu H, Zhou X, Li D

PubMed · Sep 29, 2025
Medical image segmentation is a critical task for the early detection and diagnosis of various conditions, such as skin cancer, polyps, thyroid nodules, and pancreatic tumors. Recently, deep learning architectures have achieved significant success in this field; however, they face a critical trade-off between local feature extraction and global context modeling. To address this limitation, we present DCM-Net, a dual-encoder architecture that integrates pretrained CNN layers with Visual State Space (VSS) blocks through a Cross-Branch Feature Fusion Module (CBFFM). A Decoder Feature Enhancement Module (DFEM) combines depth-wise separable convolutions with MLP-based semantic rectification to extract enhanced decoded features and improve segmentation performance. Additionally, we present a new 2D pancreas and pancreatic tumor dataset (CCH-PCT-CT) collected from Chongqing University Cancer Hospital, comprising 3,547 annotated CT slices, which is used to validate the proposed model. DCM-Net generates robust features for tumor and organ segmentation and achieves competitive performance across all datasets investigated in this study, significantly outperforming all baseline models with higher Dice Similarity Coefficient (DSC) and mean Intersection over Union (mIoU) scores. Its robustness confirms strong potential for clinical use.
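
The depth-wise separable convolution used in the DFEM factorizes a standard convolution into a per-channel spatial filter followed by a 1x1 pointwise mix; the PyTorch sketch below shows the generic building block only, not the authors' full module.

```python
# Generic depth-wise separable convolution: a grouped per-channel spatial
# convolution followed by a 1x1 pointwise convolution that mixes channels.
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 3):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.pointwise(self.depthwise(x))

x = torch.randn(1, 64, 128, 128)
print(DepthwiseSeparableConv(64, 128)(x).shape)  # torch.Size([1, 128, 128, 128])
```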

Artificial intelligence in carotid computed tomography angiography plaque detection: Decade of progress and future perspectives.

Wang DY, Yang T, Zhang CT, Zhan PC, Miao ZX, Li BL, Yang H

PubMed · Sep 28, 2025
The application of artificial intelligence (AI) in carotid atherosclerotic plaque detection via computed tomography angiography (CTA) has significantly advanced over the past decade. This mini-review consolidates recent innovations in deep learning architectures, domain adaptation techniques, and automated plaque characterization methodologies. Hybrid models, such as residual U-Net-Pyramid Scene Parsing Network, exhibit a remarkable precision of 80.49% in plaque segmentation, outperforming radiologists in diagnostic efficiency by reducing analysis time from minutes to mere seconds. Domain-adaptive frameworks, such as Lesion Assessment through Tracklet Evaluation, demonstrate robust performance across heterogeneous imaging datasets, achieving an area under the curve (AUC) greater than 0.88. Furthermore, novel approaches integrating U-Net and Efficient-Net architectures, enhanced by Bayesian optimization, have achieved impressive correlation coefficients (0.89) for plaque quantification. AI-powered CTA also enables high-precision three-dimensional vascular segmentation, with a Dice coefficient of 0.9119, and offers superior cardiovascular risk stratification compared to traditional Agatston scoring, yielding AUC values of 0.816 vs 0.729 at a 15-year follow-up. These breakthroughs address key challenges in plaque motion analysis, with systolic retractive motion biomarkers successfully identifying 80% of vulnerable plaques. Looking ahead, future directions focus on enhancing the interpretability of AI models through explainable AI and leveraging federated learning to mitigate data heterogeneity. This mini-review underscores the transformative potential of AI in carotid plaque assessment, offering substantial implications for stroke prevention and personalized cerebrovascular management strategies.

Advances in ultrasound-based imaging for diagnosis of endometrial cancer.

Tlais M, Hamze H, Hteit A, Haddad K, El Fassih I, Zalzali I, Mahmoud S, Karaki S, Jabbour D

PubMed · Sep 28, 2025
Endometrial cancer (EC) is the most common gynecological malignancy in high-income countries, with incidence rates rising globally. Early and accurate diagnosis is essential for improving outcomes. Transvaginal ultrasound (TVUS) remains a cost-effective first-line tool, and emerging techniques such as three-dimensional (3D) ultrasound (US), contrast-enhanced US (CEUS), elastography, and artificial intelligence (AI)-enhanced imaging may further improve diagnostic performance. We systematically review recent advances in US-based imaging techniques for the diagnosis and staging of EC and compare their performance with magnetic resonance imaging (MRI). A systematic search of PubMed, Scopus, Web of Science, and Google Scholar was performed to identify studies published between January 2010 and March 2025. Eligible studies evaluated TVUS, 3D-US, CEUS, elastography, or AI-enhanced US in EC diagnosis and staging. Methodological quality was assessed using the QUADAS-2 tool. Sensitivity, specificity, and area under the curve (AUC) were extracted where available, with narrative synthesis due to heterogeneity. Forty-one studies met the inclusion criteria. TVUS demonstrated high sensitivity (76%-96%) but moderate specificity (61%-86%), while MRI achieved higher specificity (84%-95%) and superior staging accuracy. 3D-US yielded accuracy comparable to MRI in selected early-stage cases. CEUS and elastography enhanced tissue characterization, and AI-enhanced US achieved pooled AUCs of up to 0.91 for risk prediction and lesion segmentation. Variability in performance was noted across modalities due to patient demographics, equipment differences, and operator experience. TVUS remains a highly sensitive initial screening tool, with MRI preferred for definitive staging. 3D-US, CEUS, elastography, and AI-enhanced techniques show promise as complementary or alternative approaches, particularly in low-resource settings. Standardization, multicenter validation, and integration of multi-modal imaging are needed to optimize diagnostic pathways for EC.

Adversarial Versus Federated: An Adversarial Learning based Multi-Modality Cross-Domain Federated Medical Segmentation

You Zhou, Lijiang Chen, Shuchang Lyu, Guangxia Cui, Wenpei Bai, Zheng Zhou, Meng Li, Guangliang Cheng, Huiyu Zhou, Qi Zhao

arXiv preprint · Sep 28, 2025
Federated learning enables collaborative training of machine learning models among different clients while ensuring data privacy, emerging as a mainstream approach to breaking data silos in the healthcare domain. However, imbalanced medical resources, data corruption, or improper data preservation may lead to situations where different clients possess medical images of different modalities. This heterogeneity poses a significant challenge for cross-domain medical image segmentation within the federated learning framework. To address this challenge, we propose a new Federated Domain Adaptation (FedDA) segmentation training framework. Specifically, we propose feature-level adversarial learning among clients, aligning feature maps across clients by embedding an adversarial training mechanism. This design enhances the model's generalization across multiple domains and alleviates the negative impact of domain shift. Comprehensive experiments on three medical image datasets demonstrate that FedDA achieves cross-domain federated aggregation, endowing single-modality clients with cross-modality processing capabilities, and consistently delivers robust performance compared to state-of-the-art federated aggregation algorithms in both objective and subjective assessments. Our code is available at https://github.com/GGbond-study/FedDA.
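
Feature-level adversarial alignment of the kind FedDA builds on is commonly implemented with a gradient-reversal layer feeding a domain discriminator; the PyTorch sketch below shows that generic mechanism only, with the federated plumbing and the paper's exact losses omitted (assumptions throughout).

```python
# A minimal gradient-reversal layer plus domain discriminator: the encoder is
# trained to make features indistinguishable across domains because its
# gradients from the domain loss are flipped in sign.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None  # flip gradients toward the encoder

class DomainDiscriminator(nn.Module):
    def __init__(self, feat_dim: int, n_domains: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU(),
                                 nn.Linear(256, n_domains))

    def forward(self, feats: torch.Tensor, lam: float = 1.0) -> torch.Tensor:
        return self.net(GradReverse.apply(feats, lam))

feats = torch.randn(8, 512, requires_grad=True)   # pooled encoder features
logits = DomainDiscriminator(512, n_domains=3)(feats)
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 3, (8,)))
loss.backward()  # encoder-side gradients are reversed -> domain-invariant features
```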

Evaluating the Impact of Radiographic Noise on Chest X-ray Semantic Segmentation and Disease Classification Using a Scalable Noise Injection Framework

Derek Jiu, Kiran Nijjer, Nishant Chinta, Ryan Bui, Ben Liu, Kevin Zhu

arXiv preprint · Sep 28, 2025
Deep learning models are increasingly used for radiographic analysis, but their reliability is challenged by the stochastic noise inherent in clinical imaging. A systematic, cross-task understanding of how different noise types impact these models is lacking. Here, we evaluate the robustness of state-of-the-art convolutional neural networks (CNNs) to simulated quantum (Poisson) and electronic (Gaussian) noise in two key chest X-ray tasks: semantic segmentation and pulmonary disease classification. Using a novel, scalable noise injection framework, we applied controlled, clinically motivated noise severities to common architectures (UNet, DeepLabV3, FPN; ResNet, DenseNet, EfficientNet) on public datasets (Landmark, ChestX-ray14). Our results reveal a stark dichotomy in task robustness. Semantic segmentation models proved highly vulnerable, with lung segmentation performance collapsing under severe electronic noise (Dice Similarity Coefficient drop of 0.843), signifying a near-total model failure. In contrast, classification tasks demonstrated greater overall resilience, but this robustness was not uniform. We discovered a differential vulnerability: certain tasks, such as distinguishing Pneumothorax from Atelectasis, failed catastrophically under quantum noise (AUROC drop of 0.355), while others were more susceptible to electronic noise. These findings demonstrate that while classification models possess a degree of inherent robustness, pixel-level segmentation tasks are far more brittle. The task- and noise-specific nature of model failure underscores the critical need for targeted validation and mitigation strategies before the safe clinical deployment of diagnostic AI.
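
The two simulated noise types are straightforward to reproduce: quantum noise as Poisson resampling at a given photon count and electronic noise as additive Gaussian noise; the NumPy sketch below uses illustrative severity parameters, not the paper's settings.

```python
# A small NumPy sketch of the two noise types: quantum (Poisson) noise scaled
# by a mean photon count, and additive electronic (Gaussian) noise.
import numpy as np

def add_poisson_noise(img: np.ndarray, photons: float = 1e4) -> np.ndarray:
    """Quantum noise: simulate photon counting at a given mean photon count."""
    img = np.clip(img, 0.0, 1.0)
    counts = np.random.poisson(img * photons)
    return np.clip(counts / photons, 0.0, 1.0)

def add_gaussian_noise(img: np.ndarray, sigma: float = 0.05) -> np.ndarray:
    """Electronic noise: zero-mean additive Gaussian with standard deviation sigma."""
    return np.clip(img + np.random.normal(0.0, sigma, img.shape), 0.0, 1.0)

xray = np.random.rand(256, 256)          # stand-in for a normalized chest film
noisy = add_gaussian_noise(add_poisson_noise(xray, photons=500), sigma=0.1)
print(noisy.min(), noisy.max())          # values remain clipped to [0, 1]
```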