Page 49 of 134 (1332 results)

Optimizing Federated Learning Configurations for MRI Prostate Segmentation and Cancer Detection: A Simulation Study

Ashkan Moradi, Fadila Zerka, Joeran S. Bosma, Mohammed R. S. Sunoqrot, Bendik S. Abrahamsen, Derya Yakar, Jeroen Geerdink, Henkjan Huisman, Tone Frost Bathen, Mattijs Elschot

arXiv preprint · Jul 30 2025
Purpose: To develop and optimize a federated learning (FL) framework across multiple clients for biparametric MRI prostate segmentation and clinically significant prostate cancer (csPCa) detection. Materials and Methods: A retrospective study was conducted using Flower FL to train an nnU-Net-based architecture for MRI prostate segmentation and csPCa detection, using data collected from January 2010 to August 2021. Model development included training and optimizing local epochs, federated rounds, and aggregation strategies for FL-based prostate segmentation on T2-weighted MRIs (four clients, 1294 patients) and csPCa detection using biparametric MRIs (three clients, 1440 patients). Performance was evaluated on independent test sets using the Dice score for segmentation and the Prostate Imaging: Cancer Artificial Intelligence (PI-CAI) score, defined as the average of the area under the receiver operating characteristic curve and the average precision, for csPCa detection. P values for performance differences were calculated using permutation testing. Results: The FL configurations were independently optimized for both tasks, showing improved performance with 1 local epoch and 300 rounds using FedMedian for prostate segmentation, and 5 local epochs and 200 rounds using FedAdagrad for csPCa detection. Compared with the average performance of the clients, the optimized FL model significantly improved performance in prostate segmentation and csPCa detection on the independent test set. The optimized FL model showed higher lesion detection performance than the FL-baseline model, but no evidence of a difference was observed for prostate segmentation. Conclusions: FL enhanced the performance and generalizability of MRI prostate segmentation and csPCa detection compared with local models, and optimizing its configuration further improved lesion detection performance.
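The PI-CAI score used above is defined as the average of the area under the ROC curve and the average precision. A minimal pure-Python sketch of that composite metric (the toy labels and scores below are illustrative, not from the study):

```python
def auroc(y_true, y_score):
    """AUROC as the probability that a random positive outscores a random negative."""
    pos = [s for y, s in zip(y_true, y_score) if y == 1]
    neg = [s for y, s in zip(y_true, y_score) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def average_precision(y_true, y_score):
    """Mean of the precision at each true positive, in score-descending order."""
    ranked = sorted(zip(y_score, y_true), reverse=True)
    tp, precisions = 0, []
    for i, (_, y) in enumerate(ranked, start=1):
        if y == 1:
            tp += 1
            precisions.append(tp / i)
    return sum(precisions) / tp

def pi_cai_score(y_true, y_score):
    """PI-CAI-style score: average of AUROC and average precision."""
    return 0.5 * (auroc(y_true, y_score) + average_precision(y_true, y_score))
```

Both components are rank-based, so the composite rewards case-level discrimination (AUROC) and lesion-level ranking quality (AP) equally.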

Automated Brain Tumor Segmentation using Hybrid YOLO and SAM.

M PJ, M SK

PubMed · Jul 30 2025
Early-stage brain tumor detection is critical for timely diagnosis and effective treatment. We propose a hybrid deep learning method: a convolutional neural network (CNN) integrated with YOLO (You Only Look Once) and SAM (Segment Anything Model) for diagnosing tumors. The framework combines a CNN with YOLOv11 for real-time object detection and SAM for precise segmentation. The CNN backbone is enhanced with deeper convolutional layers to enable robust feature extraction; YOLOv11 localizes tumor regions, while SAM refines the tumor boundaries through detailed mask generation. A dataset of 896 MRI brain images, including both tumor and healthy brains, is used for training, testing, and validating the model. The proposed model achieves a precision of 94.2%, a recall of 95.6%, and an mAP50(B) of 96.5%, highlighting the effectiveness of the approach for early-stage brain tumor diagnosis. Conclusion: The validation is demonstrated through a comprehensive ablation study, and the robustness of the system makes it suitable for clinical deployment.
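The precision and recall figures above are standard detection metrics, typically computed by matching predicted boxes to ground-truth boxes at an IoU threshold. A minimal sketch of that matching scheme (the greedy strategy, the 0.5 threshold, and the toy boxes are illustrative assumptions, not the paper's code):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def precision_recall(preds, gts, thr=0.5):
    """Greedy matching: each ground-truth box may be claimed by one prediction."""
    matched, tp = set(), 0
    for p in preds:
        best = max(range(len(gts)), key=lambda i: iou(p, gts[i]), default=None)
        if best is not None and best not in matched and iou(p, gts[best]) >= thr:
            matched.add(best)
            tp += 1
    fp = len(preds) - tp          # unmatched predictions
    fn = len(gts) - tp            # missed ground-truth boxes
    precision = tp / (tp + fp) if preds else 0.0
    recall = tp / (tp + fn) if gts else 0.0
    return precision, recall
```

mAP50(B) extends this idea by averaging precision over recall levels at the 0.5 IoU threshold, per class.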

Efficacy of image similarity as a metric for augmenting small dataset retinal image segmentation.

Wallace T, Heng IS, Subasic S, Messenger C

PubMed · Jul 30 2025
Synthetic images are an option for augmenting limited medical imaging datasets to improve the performance of various machine learning models. A common metric for evaluating synthetic image quality is the Fréchet Inception Distance (FID), which measures the similarity of two image datasets. In this study we evaluate the relationship between this metric and the improvement that synthetic images, generated by a Progressively Growing Generative Adversarial Network (PGGAN), provide when augmenting Diabetes-related Macular Edema (DME) intraretinal fluid segmentation performed by a U-Net model with limited amounts of training data. We find that the behaviour of augmenting with standard and synthetic images agrees with previously conducted experiments. Additionally, we show that dissimilar (high FID) datasets do not improve segmentation significantly. As FID between the training and augmenting datasets decreases, the augmentation datasets are shown to contribute to significant and robust improvements in image segmentation. Finally, we find significant evidence to suggest that synthetic and standard augmentations follow separate log-normal trends between FID and improvements in model performance, with synthetic data proving more effective than standard augmentation techniques. Our findings show that more similar datasets (lower FID) will be more effective at improving U-Net performance; however, the results also suggest that this improvement may only occur when images are sufficiently dissimilar.
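The FID referenced above is the Fréchet distance between two Gaussians fitted to Inception features of the two image sets: ||μ1 − μ2||² + Tr(Σ1 + Σ2 − 2(Σ1Σ2)^(1/2)). A numpy-only sketch of that closed form, assuming the feature means and covariances are already computed (the trace of the matrix square root is obtained from the eigenvalues of Σ1Σ2, which are real and nonnegative for PSD inputs):

```python
import numpy as np

def fid(mu1, sigma1, mu2, sigma2):
    """Fréchet distance between N(mu1, sigma1) and N(mu2, sigma2)."""
    diff = mu1 - mu2
    # Eigenvalues of sigma1 @ sigma2 are real and >= 0 when both are PSD,
    # so Tr((sigma1 sigma2)^(1/2)) is the sum of their square roots.
    eig = np.linalg.eigvals(sigma1 @ sigma2)
    cov_term = (np.trace(sigma1) + np.trace(sigma2)
                - 2 * np.sqrt(np.maximum(eig.real, 0)).sum())
    return float(diff @ diff + cov_term)
```

Identical distributions give FID = 0; larger values indicate less similar image sets, matching the "high FID = dissimilar" usage in the abstract.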

A Segmentation Framework for Accurate Diagnosis of Amyloid Positivity without Structural Images

Penghan Zhu, Shurui Mei, Shushan Chen, Xiaobo Chu, Shanbo He, Ziyi Liu

arXiv preprint · Jul 30 2025
This study proposes a deep learning-based framework for automated segmentation of brain regions and classification of amyloid positivity using positron emission tomography (PET) images alone, without the need for structural MRI or CT. A 3D U-Net architecture with four layers of depth was trained and validated on a dataset of 200 F18-florbetapir amyloid-PET scans, with a 130/20/50 train/validation/test split. Segmentation performance was evaluated using Dice similarity coefficients across 30 brain regions, with scores ranging from 0.45 to 0.88, demonstrating high anatomical accuracy, particularly in subcortical structures. Quantitative fidelity of PET uptake within clinically relevant regions (precuneus, prefrontal cortex, gyrus rectus, and lateral temporal cortex) was assessed using the normalized root mean square error, achieving values as low as 0.0011. Furthermore, the model achieved a classification accuracy of 0.98 for amyloid positivity based on regional uptake quantification, with an area under the ROC curve (AUC) of 0.99. These results highlight the model's potential for integration into PET-only diagnostic pipelines, particularly in settings where structural imaging is not available. This approach reduces dependence on coregistration and manual delineation, enabling scalable, reliable, and reproducible analysis in clinical and research applications. Future work will focus on clinical validation and extension to diverse PET tracers, including C11-PiB and other F18-labeled compounds.
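The two evaluation metrics above, the Dice similarity coefficient and the normalized root mean square error, can be sketched as follows. Note that normalizing RMSE by the ground-truth value range is an assumption for illustration; the abstract does not state which normalization was used:

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return float(2 * np.logical_and(pred, gt).sum() / denom) if denom else 1.0

def nrmse(pred_uptake, gt_uptake):
    """RMSE normalized by the ground-truth value range (one common convention)."""
    rmse = np.sqrt(np.mean((pred_uptake - gt_uptake) ** 2))
    rng = gt_uptake.max() - gt_uptake.min()
    return float(rmse / rng) if rng else float(rmse)
```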

High-Resolution Ultrasound Data for AI-Based Segmentation in Mouse Brain Tumor.

Dorosti S, Landry T, Brewer K, Forbes A, Davis C, Brown J

PubMed · Jul 30 2025
Glioblastoma multiforme (GBM) is the most aggressive type of brain cancer, making effective treatments essential to improve patient survival. To advance the understanding of GBM and develop more effective therapies, preclinical studies commonly use mouse models due to their genetic and physiological similarities to humans. In particular, the GL261 mouse glioma model is employed for its reproducible tumor growth and ability to mimic key aspects of human gliomas. Ultrasound imaging is a valuable modality in preclinical studies, offering real-time, non-invasive tumor monitoring and facilitating treatment response assessment. Furthermore, its potential therapeutic applications, such as in tumor ablation, expand its utility in preclinical studies. However, real-time segmentation of GL261 tumors during surgery introduces significant complexities, such as precise tumor boundary delineation and maintaining processing efficiency. Automated segmentation offers a solution, but its success relies on high-quality datasets with precise labeling. Our study introduces the first publicly available ultrasound dataset specifically developed to improve tumor segmentation in GL261 glioblastomas, providing 1,856 annotated images to support AI model development in preclinical research. This dataset bridges preclinical insights and clinical practice, laying the foundation for developing more accurate and effective tumor resection techniques.

Bridging the Gap in Missing Modalities: Leveraging Knowledge Distillation and Style Matching for Brain Tumor Segmentation

Shenghao Zhu, Yifei Chen, Weihong Chen, Yuanhan Wang, Chang Liu, Shuo Jiang, Feiwei Qin, Changmiao Wang

arXiv preprint · Jul 30 2025
Accurate and reliable brain tumor segmentation, particularly when dealing with missing modalities, remains a critical challenge in medical image analysis. Previous studies have not fully resolved the challenges of tumor boundary segmentation insensitivity and feature transfer in the absence of key imaging modalities. In this study, we introduce MST-KDNet, aimed at addressing these critical issues. Our model features Multi-Scale Transformer Knowledge Distillation to effectively capture attention weights at various resolutions, Dual-Mode Logit Distillation to improve the transfer of knowledge, and a Global Style Matching Module that integrates feature matching with adversarial learning. Comprehensive experiments conducted on the BraTS and FeTS 2024 datasets demonstrate that MST-KDNet surpasses current leading methods in both Dice and HD95 scores, particularly in conditions with substantial modality loss. Our approach shows exceptional robustness and generalization potential, making it a promising candidate for real-world clinical applications. Our source code is available at https://github.com/Quanato607/MST-KDNet.
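The details of Dual-Mode Logit Distillation are in the paper; as a generic illustration of logit distillation, here is the standard temperature-scaled KL divergence between teacher and student logits (a numpy sketch of the common formulation, not MST-KDNet's exact loss):

```python
import numpy as np

def softmax(z, T=1.0):
    """Numerically stable temperature-scaled softmax over the last axis."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) at temperature T, scaled by T^2 as is
    conventional so gradient magnitudes are comparable across temperatures."""
    p = softmax(teacher_logits, T)   # soft teacher targets
    q = softmax(student_logits, T)   # student distribution
    return float(T * T * np.sum(p * (np.log(p) - np.log(q)), axis=-1).mean())
```

A higher temperature softens both distributions, exposing the teacher's relative class (or, for segmentation, per-voxel) preferences rather than only its argmax.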

Optimizing Federated Learning Configurations for MRI Prostate Segmentation and Cancer Detection: A Simulation Study.

Moradi A, Zerka F, Bosma JS, Sunoqrot MRS, Abrahamsen BS, Yakar D, Geerdink J, Huisman H, Bathen TF, Elschot M

PubMed · Jul 30 2025
Accepted for publication in Radiology: Artificial Intelligence ("Just Accepted"; the article has not yet undergone copyediting, layout, and proof review). Purpose To develop and optimize a federated learning (FL) framework across multiple clients for biparametric MRI prostate segmentation and clinically significant prostate cancer (csPCa) detection. Materials and Methods A retrospective study was conducted using Flower FL to train an nnU-Net-based architecture for MRI prostate segmentation and csPCa detection, using data collected from January 2010 to August 2021. Model development included training and optimizing local epochs, federated rounds, and aggregation strategies for FL-based prostate segmentation on T2-weighted MRIs (four clients, 1294 patients) and csPCa detection using biparametric MRIs (three clients, 1440 patients). Performance was evaluated on independent test sets using the Dice score for segmentation and the Prostate Imaging: Cancer Artificial Intelligence (PI-CAI) score, defined as the average of the area under the receiver operating characteristic curve and the average precision, for csPCa detection. P values for performance differences were calculated using permutation testing. Results The FL configurations were independently optimized for both tasks, showing improved performance with 1 local epoch and 300 rounds using FedMedian for prostate segmentation, and 5 local epochs and 200 rounds using FedAdagrad for csPCa detection. Compared with the average performance of the clients, the optimized FL model significantly improved performance in prostate segmentation (Dice score increase from 0.73 ± 0.06 to 0.88 ± 0.03; P ≤ .01) and csPCa detection (PI-CAI score increase from 0.63 ± 0.07 to 0.74 ± 0.06; P ≤ .01) on the independent test set. The optimized FL model showed higher lesion detection performance compared with the FL-baseline model (PI-CAI score increase from 0.72 ± 0.06 to 0.74 ± 0.06; P ≤ .01), but no evidence of a difference was observed for prostate segmentation (Dice scores, 0.87 ± 0.03 vs 0.88 ± 0.03; P > .05). Conclusion FL enhanced the performance and generalizability of MRI prostate segmentation and csPCa detection compared with local models, and optimizing its configuration further improved lesion detection performance. ©RSNA, 2025.
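The permutation testing used for the P values above can be sketched as a paired sign-flip test on per-case scores (an assumption about the exact scheme, which the abstract does not specify):

```python
import random

def permutation_test(scores_a, scores_b, n_perm=10_000, seed=0):
    """Two-sided paired permutation test: under the null hypothesis the two
    models are exchangeable, so each case's pair of scores may be swapped.
    Returns the fraction of sign-flip permutations whose mean difference is
    at least as extreme as the observed one (with +1 smoothing)."""
    rng = random.Random(seed)
    n = len(scores_a)
    obs = sum(a - b for a, b in zip(scores_a, scores_b)) / n
    hits = 0
    for _ in range(n_perm):
        diff = sum((a - b) if rng.random() < 0.5 else (b - a)
                   for a, b in zip(scores_a, scores_b)) / n
        if abs(diff) >= abs(obs):
            hits += 1
    return (hits + 1) / (n_perm + 1)
```

The `+1` smoothing keeps the reported p value strictly positive, a standard convention for Monte Carlo permutation tests.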

Clinician Perspectives of a Magnetic Resonance Imaging-Based 3D Volumetric Analysis Tool for Neurofibromatosis Type 2-Related Schwannomatosis: Qualitative Pilot Study.

Desroches ST, Huang A, Ghankot R, Tommasini SM, Wiznia DH, Buono FD

PubMed · Jul 30 2025
Accurate monitoring of tumor progression is crucial for optimizing outcomes in neurofibromatosis type 2-related schwannomatosis. Standard 2D linear analysis on magnetic resonance imaging is less accurate than 3D volumetric analysis, but since 3D volumetric analysis is time-consuming, it is not widely used. To shorten the time required for 3D volumetric analysis, our lab has been developing an automated artificial intelligence-driven 3D volumetric tool. The objective of the study was to survey and interview clinicians treating neurofibromatosis type 2-related schwannomatosis to understand their views on current 2D analysis and to gather insights for the design of an artificial intelligence-driven 3D volumetric analysis tool. Interviews were examined for the following themes: (1) shortcomings of the currently used linear analysis, (2) utility of 3D visualizations, (3) features of an interactive 3D modeling software, and (4) lack of a gold standard to assess the accuracy of 3D volumetric analysis. A Likert scale questionnaire was used to survey clinicians' levels of agreement with 25 statements related to 2D and 3D tumor analyses. A total of 14 clinicians completed a survey, and 12 clinicians were interviewed. Specialties ranged across neurosurgery, neuroradiology, neurology, oncology, and pediatrics. Overall, clinicians expressed concerns with current linear techniques, agreeing that linear measurements can be variable, with the possibility of two different clinicians calculating two different tumor sizes (mean 4.64, SD 0.49), and that volumetric measurements would be more helpful for determining clearer thresholds of tumor growth (mean 4.50, SD 0.52). For statements discussing the capabilities of a 3D volumetric analysis and visualization software, clinicians expressed strong interest in being able to visualize tumors with respect to critical brain structures (mean 4.36, SD 0.74) and in forecasting tumor growth (mean 4.77, SD 0.44).
Clinicians were overall in favor of the adoption of 3D volumetric analysis techniques for measuring vestibular schwannoma tumors but expressed concerns regarding the novelty and inexperience surrounding these techniques. However, clinicians felt that the ability to visualize tumors with reference to critical structures, to overlay structures, to interact with 3D models, and to visualize areas of slow versus rapid growth in 3D would be valuable contributions to clinical practice. Overall, clinicians provided valuable insights for designing a 3D volumetric analysis tool for vestibular schwannoma tumor growth. These findings may also apply to other central nervous system tumors, offering broader utility in tumor growth assessments.

segcsvdPVS: A convolutional neural network-based tool for quantification of enlarged perivascular spaces (PVS) on T1-weighted images

Gibson, E., Ramirez, J., Woods, L. A., Berberian, S., Ottoy, J., Scott, C., Yhap, V., Gao, F., Coello, R. D., Valdes-Hernandez, M., Lange, A., Tartaglia, C., Kumar, S., Binns, M. A., Bartha, R., Symons, S., Swartz, R. H., Masellis, M., Singh, N., MacIntosh, B. J., Wardlaw, J. M., Black, S. E., Lim, A. S., Goubran, M.

medRxiv preprint · Jul 29 2025
Introduction: Enlarged perivascular spaces (PVS) are imaging markers of cerebral small vessel disease (CSVD) that are associated with age, disease phenotypes, and overall health. Quantification of PVS is challenging but necessary to expand an understanding of their role in cerebrovascular pathology. Accurate and automated segmentation of PVS on T1-weighted images would be valuable given the widespread use of T1-weighted imaging protocols in multisite clinical and research datasets. Methods: We introduce segcsvdPVS, a convolutional neural network (CNN)-based tool for automated PVS segmentation on T1-weighted images. segcsvdPVS was developed using a novel hierarchical approach that builds on existing tools and incorporates robust training strategies to enhance the accuracy and consistency of PVS segmentation. Performance was evaluated using a comprehensive evaluation strategy that included comparison to existing benchmark methods, ablation-based validation, accuracy validation against manual ground truth annotations, correlation with age-related PVS burden as a biological benchmark, and extensive robustness testing. Results: segcsvdPVS achieved strong object-level performance for basal ganglia PVS (DSC = 0.78), exhibiting both high sensitivity (SNS = 0.80) and precision (PRC = 0.78). Although voxel-level precision was lower (PRC = 0.57), manual correction improved this by only ~3%, indicating that the additional voxels reflected primarily boundary- or extent-related differences rather than correctable false-positive error. For non-basal ganglia PVS, segcsvdPVS outperformed benchmark methods, exhibiting higher voxel-level performance across several metrics (DSC = 0.60, SNS = 0.67, PRC = 0.57, NSD = 0.77), despite overall lower performance relative to basal ganglia PVS. Additionally, the associations between age and segmentation-derived measures of PVS burden were consistently stronger and more reliable for segcsvdPVS than for benchmark methods across three cohorts (test6, ADNI, CAHHM), providing further evidence of the accuracy and consistency of its segmentation output. Conclusions: segcsvdPVS demonstrates robust performance across diverse imaging conditions and improved sensitivity to biologically meaningful associations, supporting its utility as a T1-based PVS segmentation tool.
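The age association used as a biological benchmark above can be quantified with a simple Pearson correlation between age and a per-subject PVS burden measure (a generic sketch; the paper's exact association analysis is not specified in the abstract):

```python
import math

def pearson_r(x, y):
    """Pearson correlation, e.g. between subject age and PVS burden."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

A more accurate segmentation should yield burden estimates whose correlation with age is stronger and more stable across cohorts, which is the logic of the benchmark.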
