
Generalist medical foundation model improves prostate cancer segmentation from multimodal MRI images.

Zhang Y, Ma X, Li M, Huang K, Zhu J, Wang M, Wang X, Wu M, Heng PA

PubMed · Jun 18 2025
Prostate cancer (PCa) is one of the most common cancers and a serious threat to adult male health. Accurate, automated PCa segmentation is essential for radiologists to localize the cancer, evaluate its severity, and design appropriate treatments. This paper presents PCaSAM, a fully automated PCa segmentation model that feeds multi-modal MRI images into a generalist medical foundation model to improve performance significantly. We collected multi-center datasets to conduct a comprehensive evaluation. The results showed that PCaSAM outperforms the underlying generalist medical foundation model and other representative segmentation models, with average DSCs of 0.721 and 0.706 on the internal and external datasets, respectively. Furthermore, with the assistance of segmentation, PI-RADS scoring of PCa lesions improved significantly, with the average AUC increasing by 8.3-8.9% on two external datasets. PCaSAM also achieved superior efficiency, making it highly suitable for real-world deployment.
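
The DSC figures quoted here follow the standard overlap definition, DSC = 2|A∩B| / (|A| + |B|), computed between predicted and reference binary masks. A minimal sketch (not the authors' code):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|) for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    if denom == 0:  # both masks empty: treat as perfect agreement
        return 1.0
    return 2.0 * intersection / denom

# Toy example: two overlapping lesion masks on a 2D slice
pred = np.zeros((8, 8), dtype=bool);  pred[2:6, 2:6] = True
truth = np.zeros((8, 8), dtype=bool); truth[3:7, 3:7] = True
print(f"DSC = {dice_coefficient(pred, truth):.3f}")  # 0.562
```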

Pediatric Pancreas Segmentation from MRI Scans with Deep Learning

Elif Keles, Merve Yazol, Gorkem Durak, Ziliang Hong, Halil Ertugrul Aktas, Zheyuan Zhang, Linkai Peng, Onkar Susladkar, Necati Guzelyel, Oznur Leman Boyunaga, Cemal Yazici, Mark Lowe, Aliye Uc, Ulas Bagci

arXiv preprint · Jun 18 2025
Objective: Our study aimed to evaluate and validate PanSegNet, a deep learning (DL) algorithm for pediatric pancreas segmentation on MRI in children with acute pancreatitis (AP), chronic pancreatitis (CP), and healthy controls. Methods: With IRB approval, we retrospectively collected 84 MRI scans (1.5T/3T Siemens Aera/Verio) from children aged 2-19 years at Gazi University (2015-2024). The dataset includes healthy children as well as patients diagnosed with AP or CP based on clinical criteria. Pediatric and general radiologists manually segmented the pancreas, with segmentations then confirmed by a senior pediatric radiologist. PanSegNet-generated segmentations were assessed using the Dice Similarity Coefficient (DSC) and 95th percentile Hausdorff distance (HD95). Cohen's kappa measured observer agreement. Results: T2-weighted pancreas MRI scans were obtained from 42 children with AP/CP (mean age: 11.73 ± 3.9 years) and 42 healthy children (mean age: 11.19 ± 4.88 years). PanSegNet achieved DSC scores of 88% (controls), 81% (AP), and 80% (CP), with HD95 values of 3.98 mm (controls), 9.85 mm (AP), and 15.67 mm (CP). Inter-observer kappa was 0.86 (controls) and 0.82 (pancreatitis), and intra-observer agreement reached 0.88 and 0.81. Strong agreement was observed between automated and manual volumes (R² = 0.85 in controls, 0.77 in diseased), demonstrating clinical reliability. Conclusion: PanSegNet represents the first validated deep learning solution for pancreatic MRI segmentation, achieving expert-level performance across healthy and diseased states. The tool and algorithm, along with our annotated dataset, are freely available on GitHub and OSF, advancing accessible, radiation-free pediatric pancreatic imaging and fostering collaborative research in this underserved domain.
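
HD95, the boundary metric reported alongside DSC, is the 95th percentile of the symmetric surface distances between the two segmentations. A hedged SciPy sketch, assuming binary 3D masks and voxel spacing in mm (empty masks are not handled):

```python
import numpy as np
from scipy.ndimage import binary_erosion
from scipy.spatial import cKDTree

def hd95(mask_a: np.ndarray, mask_b: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """95th-percentile symmetric surface distance (HD95, mm) between two
    binary 3D masks with the given voxel spacing."""
    def surface(mask):
        mask = mask.astype(bool)
        border = mask & ~binary_erosion(mask)      # voxels on the mask boundary
        return np.argwhere(border) * np.asarray(spacing)
    pts_a, pts_b = surface(mask_a), surface(mask_b)
    d_ab, _ = cKDTree(pts_b).query(pts_a)          # each A-surface point -> nearest B
    d_ba, _ = cKDTree(pts_a).query(pts_b)          # each B-surface point -> nearest A
    return float(np.percentile(np.hstack([d_ab, d_ba]), 95))
```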

Effects of patient and imaging factors on small bowel motility scores derived from deep learning-based segmentation of cine MRI.

Heo S, Yun J, Kim DW, Park SY, Choi SH, Kim K, Jung KW, Myung SJ, Park SH

PubMed · Jun 17 2025
Small bowel motility can be quantified using cine MRI, but the influence of patient and imaging factors on motility scores remains unclear. This study evaluated whether patient and imaging factors affect motility scores derived from deep learning-based segmentation of cine MRI. Fifty-four patients (mean age 53.6 ± 16.4 years; 34 women) with chronic constipation or suspected colonic pseudo-obstruction who underwent cine MRI covering the entire small bowel between 2022 and 2023 were included. A deep learning algorithm was developed to segment small bowel regions, and motility was quantified with an optical flow-based algorithm, producing a motility score for each slice. Associations of motility scores with patient factors (age, sex, body mass index, symptoms, and bowel distension) and MRI slice-related factors (anatomical location, bowel area, and anteroposterior position) were analyzed using linear mixed models. Deep learning-based small bowel segmentation achieved a mean volumetric Dice similarity coefficient of 75.4 ± 18.9%, with a manual correction time of 26.5 ± 13.5 s. Median motility scores per patient ranged from 26.4 to 64.4, with an interquartile range of 3.1-26.6. Multivariable analysis revealed that MRI slice-related factors, including anatomical location with mixed ileum and jejunum (β = -4.9, p = 0.01, compared with ileum dominant), bowel area (first order β = -0.2, p < 0.001; second order β = 5.7 × 10⁻⁴, p < 0.001), and anteroposterior position (first order β = -51.5, p < 0.001; second order β = 28.8, p = 0.004), were significantly associated with motility scores. Patient factors showed no association with motility scores. Because small bowel motility scores were significantly associated with MRI slice-related factors, estimates of global motility that do not adjust for these factors may be unreliable.
Question: Global small bowel motility can be quantified from cine MRI; however, the confounding factors affecting motility scores remain unclear.
Findings: Motility scores were significantly influenced by MRI slice-related factors, including anatomical location, bowel area, and anteroposterior position.
Clinical relevance: Adjusting for slice-related factors is essential for accurate interpretation of small bowel motility scores on cine MRI.
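
The abstract does not spell out the optical-flow formulation, so the following is only an illustrative sketch of the general approach: dense Farneback flow between consecutive cine frames, with per-slice motility scored as the mean flow magnitude inside the segmented bowel (the function name and parameters are hypothetical, not the authors'):

```python
import cv2
import numpy as np

def motility_score(frames: np.ndarray, bowel_mask: np.ndarray) -> float:
    """Illustrative per-slice motility score: mean optical-flow magnitude
    inside the segmented small-bowel region, averaged over the cine series.

    frames: (T, H, W) uint8 cine MRI frames; bowel_mask: (H, W) bool.
    """
    magnitudes = []
    for prev, curr in zip(frames[:-1], frames[1:]):
        flow = cv2.calcOpticalFlowFarneback(
            prev, curr, None,
            pyr_scale=0.5, levels=3, winsize=15,
            iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
        mag = np.linalg.norm(flow, axis=-1)       # per-pixel displacement (pixels/frame)
        magnitudes.append(mag[bowel_mask].mean())  # restrict to segmented bowel
    return float(np.mean(magnitudes))
```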

A Digital Twin Framework for Adaptive Treatment Planning in Radiotherapy

Chih-Wei Chang, Sri Akkineni, Mingzhe Hu, Keyur D. Shah, Jun Zhou, Xiaofeng Yang

arXiv preprint · Jun 17 2025
This study aims to develop and evaluate a digital twin (DT) framework to enhance adaptive proton therapy for prostate stereotactic body radiotherapy (SBRT), focusing on improving treatment precision for dominant intraprostatic lesions (DILs) while minimizing organ-at-risk (OAR) toxicity. We propose a DT framework combining deep learning (DL)-based deformable image registration (DIR) with a prior treatment database to generate synthetic CTs (sCTs) for predicting interfractional anatomical changes. Using daily CBCT from five prostate SBRT patients with DILs, the framework precomputes multiple plans with high (DT-H) and low (DT-L) similarity sCTs. Plan optimization is performed in RayStation 2023B, assuming a constant RBE of 1.1 and robustly accounting for positional and range uncertainties. Plan quality is evaluated via a modified ProKnow score across two fractions, with reoptimization limited to 10 minutes. Daily CBCT evaluation showed that clinical plans often violated OAR constraints (e.g., bladder V20.8Gy, rectum V23Gy), with DIL V100 < 90% in 2 patients, indicating SIFB failure. DT-H plans, using high-similarity sCTs, achieved better or comparable DIL/CTV coverage and lower OAR doses, with reoptimization completed within 10 min (e.g., DT-H-REopt-A score: 154.3-165.9). DT-L plans showed variable outcomes; lower similarity correlated with reduced DIL coverage (e.g., Patient 4: 84.7%). DT-H consistently outperformed clinical plans within the time limits, while extended optimization brought DT-L and clinical plans closer to DT-H quality. This DT framework enables rapid, personalized adaptive proton therapy, improving DIL targeting and reducing toxicity. By addressing geometric uncertainties, it supports outcome gains in ultra-hypofractionated prostate RT and lays the groundwork for future multimodal anatomical prediction.

Latent Anomaly Detection: Masked VQ-GAN for Unsupervised Segmentation in Medical CBCT

Pengwei Wang

arXiv preprint · Jun 17 2025
Advances in treatment technology now allow for the use of customizable 3D-printed hydrogel wound dressings for patients with osteoradionecrosis (ORN) of the jaw (ONJ). Meanwhile, deep learning has enabled precise segmentation of 3D medical images using tools like nnUNet. However, the scarcity of labeled data in ONJ imaging makes supervised training impractical. This study aims to develop an unsupervised training approach for automatically identifying anomalies in imaging scans. We propose a novel two-stage training pipeline. In the first stage, a VQ-GAN is trained to accurately reconstruct normal subjects. In the second stage, random cube masking and ONJ-specific masking are applied to train a new encoder capable of recovering the data. The proposed method achieves successful segmentation on both simulated and real patient data. This approach provides a fast initial segmentation solution, reducing the burden of manual labeling. Additionally, it has the potential to be directly used for 3D printing when combined with hand-tuned post-processing.
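
The random cube masking in stage two is a standard masked-reconstruction device: hide random sub-volumes and train the encoder to recover them. A minimal sketch, with cube count and size as assumed hyperparameters rather than the paper's values:

```python
import torch

def random_cube_mask(volume: torch.Tensor, n_cubes: int = 8,
                     cube_size: int = 16) -> torch.Tensor:
    """Zero out randomly placed cubes in a 3D volume (D, H, W) so the
    encoder must learn to recover the hidden regions from context.
    Assumes every dimension of the volume is at least cube_size."""
    masked = volume.clone()
    d, h, w = volume.shape
    for _ in range(n_cubes):
        z = torch.randint(0, d - cube_size + 1, (1,)).item()
        y = torch.randint(0, h - cube_size + 1, (1,)).item()
        x = torch.randint(0, w - cube_size + 1, (1,)).item()
        masked[z:z + cube_size, y:y + cube_size, x:x + cube_size] = 0
    return masked
```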

2nd trimester ultrasound (anomaly).

Carocha A, Vicente M, Bernardeco J, Rijo C, Cohen Á, Cruz J

PubMed · Jun 17 2025
The second-trimester ultrasound is a crucial tool in prenatal care, typically performed between 18 and 24 weeks of gestation to evaluate fetal anatomy and growth and to carry out mid-trimester screening. This article provides a comprehensive overview of best practices and guidelines for performing this examination, with a focus on detecting fetal anomalies. The ultrasound assesses key structures and evaluates fetal growth by measuring biometric parameters, which are essential for estimating fetal weight. Additionally, the article discusses the importance of placental evaluation, measurement of amniotic fluid levels, and assessment of preterm birth risk through cervical length measurement. Factors that can affect the accuracy of the scan, such as operator skill, equipment quality, and maternal conditions such as obesity, are discussed. The article also addresses the limitations of the procedure, including variability in detection rates. Despite these challenges, the second-trimester ultrasound remains a valuable screening and diagnostic tool, providing essential information for managing pregnancies, especially in high-risk cases. Future directions include improving imaging technology, integrating artificial intelligence for anomaly detection, and standardizing ultrasound protocols to enhance diagnostic accuracy and ensure consistent prenatal care.

Deep learning based colorectal cancer detection in medical images: A comprehensive analysis of datasets, methods, and future directions.

Gülmez B

PubMed · Jun 17 2025
This comprehensive review examines the current state and evolution of artificial intelligence applications in colorectal cancer detection through medical imaging from 2019 to 2025. The study presents a quantitative analysis of 110 high-quality publications and 9 publicly accessible medical image datasets used for training and validation. Network architectures for classification, object detection, and segmentation tasks, including ResNet (40 implementations), VGG (18 implementations), and emerging transformer-based models (12 implementations), are systematically categorized and evaluated. The investigation encompasses hyperparameter optimization techniques used to enhance model performance, with particular focus on genetic algorithms and particle swarm optimization. The role of explainable AI methods in interpreting medical diagnoses is analyzed through visualization techniques such as Grad-CAM and SHAP. Technical limitations, including dataset scarcity, computational constraints, and standardization challenges, are identified through trend analysis. Research gaps in current methodologies are highlighted through comparative assessment of performance metrics across different architectural implementations. Potential future research directions, including multimodal learning and federated learning, are proposed based on publication trend analysis. This review serves as a comprehensive reference for researchers in medical image analysis and for clinical practitioners implementing AI-based colorectal cancer detection systems.
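
Of the explainability techniques the review covers, Grad-CAM is compact enough to sketch: the target layer's activations are weighted by the spatially averaged gradients of the class score, then passed through a ReLU. A minimal PyTorch version, with model and target_layer as assumed inputs, not tied to any surveyed implementation:

```python
import torch
import torch.nn.functional as F

def grad_cam(model, image, target_layer, class_idx):
    """Minimal Grad-CAM for a 2D CNN. image: (1, C, H, W) tensor;
    target_layer: a conv module inside model; returns a (1, h, w) heatmap."""
    activations, gradients = {}, {}
    def fwd_hook(_, __, output): activations["a"] = output
    def bwd_hook(_, grad_in, grad_out): gradients["g"] = grad_out[0]
    h1 = target_layer.register_forward_hook(fwd_hook)
    h2 = target_layer.register_full_backward_hook(bwd_hook)
    try:
        score = model(image)[0, class_idx]      # class logit to explain
        model.zero_grad()
        score.backward()
        weights = gradients["g"].mean(dim=(2, 3), keepdim=True)  # GAP over H, W
        cam = F.relu((weights * activations["a"]).sum(dim=1))    # weighted sum of channels
        cam = cam / (cam.max() + 1e-8)                           # normalize to [0, 1]
    finally:
        h1.remove(); h2.remove()
    return cam
```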

Imaging-Based AI for Predicting Lymphovascular Space Invasion in Cervical Cancer: Systematic Review and Meta-Analysis.

She L, Li Y, Wang H, Zhang J, Zhao Y, Cui J, Qiu L

PubMed · Jun 16 2025
The role of artificial intelligence (AI) in enhancing the accuracy of lymphovascular space invasion (LVSI) detection in cervical cancer remains debated. This meta-analysis aimed to evaluate the diagnostic accuracy of imaging-based AI for predicting LVSI in cervical cancer. We conducted a comprehensive literature search across multiple databases, including PubMed, Embase, and Web of Science, identifying studies published up to November 9, 2024. Studies were included if they evaluated the diagnostic performance of imaging-based AI models in detecting LVSI in cervical cancer. We used a bivariate random-effects model to calculate pooled sensitivity and specificity with corresponding 95% confidence intervals. Study heterogeneity was assessed using the I² statistic. Of 403 studies identified, 16 studies (2514 patients) were included. For the internal validation set, the pooled sensitivity, specificity, and area under the curve (AUC) for detecting LVSI were 0.84 (95% CI 0.79-0.87), 0.78 (95% CI 0.75-0.81), and 0.87 (95% CI 0.84-0.90). For the external validation set, the pooled sensitivity, specificity, and AUC were 0.79 (95% CI 0.70-0.86), 0.76 (95% CI 0.67-0.83), and 0.84 (95% CI 0.81-0.87). In subgroup analysis using the likelihood ratio test, deep learning demonstrated significantly higher sensitivity than machine learning (P=.01), and AI models based on positron emission tomography/computed tomography exhibited superior sensitivity relative to those based on magnetic resonance imaging (P=.01). Imaging-based AI, particularly deep learning algorithms, demonstrates promising diagnostic performance in predicting LVSI in cervical cancer. However, the limited external validation datasets and the retrospective nature of the research may introduce potential biases. These findings underscore AI's potential as an auxiliary diagnostic tool, necessitating further large-scale prospective validation.
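
For intuition about the pooling step, the sketch below applies DerSimonian-Laird random-effects pooling to per-study sensitivities on the logit scale. This is a deliberate univariate simplification of the bivariate model the authors actually use (which pools sensitivity and specificity jointly), and the example counts are hypothetical:

```python
import numpy as np

def pooled_logit_random_effects(events, totals):
    """DerSimonian-Laird random-effects pooling of proportions (e.g., per-study
    sensitivities) on the logit scale. Returns pooled estimate and 95% CI."""
    events = np.asarray(events, float); totals = np.asarray(totals, float)
    p = (events + 0.5) / (totals + 1.0)                    # continuity correction
    y = np.log(p / (1 - p))                                # per-study logit
    v = 1 / (events + 0.5) + 1 / (totals - events + 0.5)   # approx. logit variance
    w = 1 / v
    y_fixed = (w * y).sum() / w.sum()
    q = (w * (y - y_fixed) ** 2).sum()                     # Cochran's Q
    tau2 = max(0.0, (q - (len(y) - 1)) / (w.sum() - (w**2).sum() / w.sum()))
    w_re = 1 / (v + tau2)                                  # random-effects weights
    y_re = (w_re * y).sum() / w_re.sum()
    se = np.sqrt(1 / w_re.sum())
    expit = lambda x: 1 / (1 + np.exp(-x))
    return expit(y_re), (expit(y_re - 1.96 * se), expit(y_re + 1.96 * se))

# Hypothetical example: true-positive counts and diseased totals for 3 studies
# sens, (lo, hi) = pooled_logit_random_effects([80, 45, 60], [100, 50, 75])
```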

Interpretable deep fuzzy network-aided detection of central lymph node metastasis status in papillary thyroid carcinoma.

Wang W, Ning Z, Zhang J, Zhang Y, Wang W

PubMed · Jun 16 2025
The non-invasive assessment of central lymph node metastasis (CLNM) in patients with papillary thyroid carcinoma (PTC) plays a crucial role in treatment decisions and prognosis planning. This study uses an interpretable deep fuzzy network guided by expert knowledge to predict the CLNM status of patients with PTC from ultrasound images. A total of 1019 PTC patients were enrolled, comprising 465 CLNM patients and 554 non-CLNM patients. Pathological diagnosis served as the gold standard for metastasis status. Clinical and morphological features of the thyroid were collected as expert knowledge to guide the deep fuzzy network in predicting CLNM status. The network consists of a region of interest (ROI) segmentation module, a knowledge-aware feature extraction module, and a fuzzy prediction module. It was trained on 652 patients, validated on 163 patients, and tested on 204 patients. The model exhibited promising performance, achieving an area under the receiver operating characteristic curve (AUC) of 0.786 (95% CI 0.720-0.846), with accuracy, precision, sensitivity, and specificity of 0.745 (95% CI 0.681-0.799), 0.727 (95% CI 0.636-0.819), 0.696 (95% CI 0.594-0.789), and 0.786 (95% CI 0.712-0.864), respectively. In addition, the fuzzy rules in the model are easy to understand and explain, giving the system good interpretability. The deep fuzzy network guided by expert knowledge predicted the CLNM status of PTC patients with high accuracy and good interpretability, and may be considered an effective tool to guide preoperative clinical decision-making.
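
The interpretability claim rests on the fuzzy rules being readable. As a flavor of how such a rule fires, here is a toy sketch; the features, membership functions, and thresholds are entirely hypothetical, not those learned by the model:

```python
import numpy as np

def triangular(x, a, b, c):
    """Triangular membership function peaking at b over support [a, c]."""
    return float(np.clip(min((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0))

# Hypothetical rule: IF nodule size is "large" AND margin irregularity is "high"
# THEN CLNM risk is high; rule strength = min of the antecedent memberships.
size_mm, irregularity = 14.0, 0.7
mu_large = triangular(size_mm, 8.0, 20.0, 30.0)        # membership in "large"
mu_irregular = triangular(irregularity, 0.4, 0.8, 1.0) # membership in "high"
rule_strength = min(mu_large, mu_irregular)
print(f"risk-rule firing strength = {rule_strength:.2f}")  # 0.50
```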
