Page 3 of 435 results

FF-PNet: A Pyramid Network Based on Feature and Field for Brain Image Registration

Ying Zhang, Shuai Guo, Chenxi Sun, Yuchen Zhu, Jinhai Xiang

arxiv logopreprintMay 8 2025
In recent years, deformable medical image registration techniques have made significant progress. However, existing models still lack efficiency in the parallel extraction of coarse- and fine-grained features. To address this, we construct a new pyramid registration network based on feature and deformation field (FF-PNet). For coarse-grained feature extraction, we design a Residual Feature Fusion Module (RFFM); for fine-grained image deformation, we propose a Residual Deformation Field Fusion Module (RDFFM). Through the parallel operation of these two modules, the model can effectively handle complex image deformations. It is worth emphasizing that the encoding stage of FF-PNet employs only traditional convolutional neural networks, without any attention mechanisms or multilayer perceptrons, yet it still achieves remarkable improvements in registration accuracy, fully demonstrating the superior feature decoding capabilities of RFFM and RDFFM. We conducted extensive experiments on the LPBA and OASIS datasets. The results show our network consistently outperforms popular methods on metrics such as the Dice Similarity Coefficient.
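For readers unfamiliar with pyramid registration, the sketch below illustrates the generic coarse-to-fine mechanism such networks share: a low-resolution deformation field is upsampled, composed with a finer field, and used to warp the moving image. The RFFM/RDFFM internals are not specified in the abstract, so placeholder tensors stand in for their outputs.

```python
# Minimal sketch of coarse-to-fine deformation-field composition, the generic
# mechanism behind pyramid registration networks. Module internals are stubs.
import torch
import torch.nn.functional as F

def warp(moving, flow):
    """Warp a 2-D image (N,C,H,W) by a dense pixel-displacement field (N,2,H,W)."""
    n, _, h, w = moving.shape
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij")
    base = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(n, -1, -1, -1)
    # Convert pixel displacements to normalized grid offsets for grid_sample.
    offset = torch.stack(
        (flow[:, 0] * 2 / max(w - 1, 1), flow[:, 1] * 2 / max(h - 1, 1)), dim=-1)
    return F.grid_sample(moving, base + offset, align_corners=True)

def compose(coarse_flow, fine_flow):
    """u(x) = u_fine(x) + u_coarse(x + u_fine(x))."""
    return fine_flow + warp(coarse_flow, fine_flow)

# A coarse level predicts a low-resolution field, upsampled and refined per level.
flow_lo = torch.randn(1, 2, 40, 40)             # placeholder coarse prediction
flow_up = F.interpolate(flow_lo, scale_factor=4, mode="bilinear",
                        align_corners=True) * 4  # rescale pixel displacements
flow_hi = torch.randn(1, 2, 160, 160) * 0.5      # placeholder fine refinement
moving = torch.rand(1, 1, 160, 160)
warped = warp(moving, compose(flow_up, flow_hi))
```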

Artificial intelligence applied to ultrasound diagnosis of pelvic gynecological tumors: a systematic review and meta-analysis.

Geysels A, Garofalo G, Timmerman S, Barreñada L, De Moor B, Timmerman D, Froyman W, Van Calster B

pubmed logopapersMay 8 2025
To perform a systematic review on artificial intelligence (AI) studies focused on identifying and differentiating pelvic gynecological tumors on ultrasound scans. Studies developing or validating AI models for diagnosing gynecological pelvic tumors on ultrasound scans were eligible for inclusion. We systematically searched PubMed, Embase, Web of Science, and Cochrane Central from their database inception until April 30th, 2024. To assess the quality of the included studies, we adapted the QUADAS-2 risk of bias tool to address the unique challenges of AI in medical imaging. Using multi-level random effects models, we performed a meta-analysis to generate summary estimates of the area under the receiver operating characteristic curve (AUC), sensitivity, and specificity. To provide a reference point for current diagnostic support tools for ultrasound examiners, we descriptively compared the pooled performance to that of the well-recognized ADNEX model on external validation. Subgroup analyses were performed to explore sources of heterogeneity. From 9151 records retrieved, 44 studies were eligible: 40 on ovarian, three on endometrial, and one on myometrial pathology. Overall, 95% were at high risk of bias - primarily due to inappropriate study inclusion criteria, the absence of a patient-level split of training and testing image sets, and no calibration assessment. For ovarian tumors, the summary AUC for AI models distinguishing benign from malignant tumors was 0.89 (95% CI: 0.85-0.92). In lower-risk studies (at least three low-risk domains), the summary AUC dropped to 0.87 (0.83-0.90), with deep learning models outperforming radiomics-based machine learning approaches in this subset. Only five studies included an external validation, and six evaluated calibration performance. In a recent systematic review of external validation studies, the ADNEX model had a pooled AUC of 0.93 (0.91-0.94) in studies at low risk of bias. Studies on endometrial and myometrial pathologies were reported individually. Although AI models show promising discriminative performances for diagnosing gynecological tumors on ultrasound, most studies have methodological shortcomings that result in a high risk of bias. In addition, the ADNEX model appears to outperform most AI approaches for ovarian tumors. Future research should emphasize robust study designs - ideally large, multicenter, and prospective cohorts that mirror real-world populations - along with external validation, proper calibration, and standardized reporting. This study was pre-registered with Open Science Framework (OSF): https://doi.org/10.17605/osf.io/bhkst.
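As context for the pooling described above, here is a minimal sketch of random-effects meta-analysis on the logit-AUC scale using the DerSimonian-Laird estimator; the review itself used multi-level random-effects models, which additionally handle multiple results per study, and all numbers below are invented for illustration.

```python
# Hedged illustration: basic DerSimonian-Laird random-effects pooling of
# logit-transformed AUCs. Study AUCs and standard errors are made up.
import numpy as np

def logit(p):
    return np.log(p / (1 - p))

def pool_auc(aucs, ses_logit):
    """Pool AUCs on the logit scale with a DerSimonian-Laird tau^2 estimate."""
    y, v = logit(np.asarray(aucs)), np.asarray(ses_logit) ** 2
    w = 1 / v
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)                 # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)            # between-study variance
    w_star = 1 / (v + tau2)
    y_pooled = np.sum(w_star * y) / np.sum(w_star)
    se = np.sqrt(1 / np.sum(w_star))
    inv = lambda x: 1 / (1 + np.exp(-x))               # back-transform
    return inv(y_pooled), (inv(y_pooled - 1.96 * se), inv(y_pooled + 1.96 * se))

auc, ci = pool_auc([0.91, 0.86, 0.88, 0.93], [0.15, 0.20, 0.12, 0.18])
print(f"pooled AUC {auc:.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}")
```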

Radiomics-based machine learning in prediction of response to neoadjuvant chemotherapy in osteosarcoma: A systematic review and meta-analysis.

Salimi M, Houshi S, Gholamrezanezhad A, Vadipour P, Seifi S

pubmed logopapersMay 8 2025
Osteosarcoma (OS) is the most common primary bone malignancy, and neoadjuvant chemotherapy (NAC) improves survival rates. However, OS heterogeneity results in variable treatment responses, highlighting the need for reliable, non-invasive tools to predict NAC response. Radiomics-based machine learning (ML) offers potential for identifying imaging biomarkers to predict treatment outcomes. This systematic review and meta-analysis evaluated the accuracy and reliability of radiomics models for predicting NAC response in OS. A systematic search was conducted in PubMed, Embase, Scopus, and Web of Science up to November 2024. Studies using radiomics-based ML for NAC response prediction in OS were included. Pooled sensitivity, specificity, and AUC for training and validation cohorts were calculated using bivariate random-effects modeling, with clinical-combined models analyzed separately. Quality assessment was performed using the QUADAS-2 tool, radiomics quality score (RQS), and METRICS scores. Sixteen studies were included, with 63% using MRI and 37% using CT. Twelve studies, comprising 1639 participants, were included in the meta-analysis. Pooled metrics for training cohorts showed an AUC of 0.93, sensitivity of 0.89, and specificity of 0.85. Validation cohorts achieved an AUC of 0.87, sensitivity of 0.81, and specificity of 0.82. Clinical-combined models outperformed radiomics-only models. The mean RQS was 9.44 ± 3.41, and the mean METRICS score was 60.8% ± 17.4%. Radiomics-based ML shows promise for predicting NAC response in OS, especially when combined with clinical indicators. However, limitations in external validation and methodological consistency must be addressed.
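The reviewed studies share a common pipeline shape, sketched below with synthetic data: tabular radiomic features in, a patient-level train/test split (the leakage issue QUADAS-2 probes), a classifier, and AUC out. Feature extraction itself (e.g., with pyradiomics) is omitted, and the data and labels are purely illustrative.

```python
# Generic radiomics-ML pipeline sketch with synthetic data, not any
# particular study's method.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GroupShuffleSplit
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))        # 200 lesions x 30 radiomic features
y = rng.integers(0, 2, size=200)      # NAC response label (toy)
patients = rng.integers(0, 120, 200)  # several lesions may share a patient

# Split by patient, not by image, to avoid train/test leakage.
train, test = next(GroupShuffleSplit(test_size=0.3, random_state=0)
                   .split(X, y, groups=patients))
clf = RandomForestClassifier(n_estimators=300, random_state=0)
clf.fit(X[train], y[train])
print("AUC:", roc_auc_score(y[test], clf.predict_proba(X[test])[:, 1]))
```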

Automated Emergent Large Vessel Occlusion Detection Using Viz.ai Software and Its Impact on Stroke Workflow Metrics and Patient Outcomes in Stroke Centers: A Systematic Review and Meta-analysis.

Sarhan K, Azzam AY, Moawad MHED, Serag I, Abbas A, Sarhan AE

pubmed logopapersMay 8 2025
The implementation of artificial intelligence (AI), particularly Viz.ai software, in stroke care has emerged as a promising tool to enhance the detection of large vessel occlusion (LVO) and to improve stroke workflow metrics and patient outcomes. The aim of this systematic review and meta-analysis is to evaluate the impact of Viz.ai on stroke workflow efficiency in hospitals and on patient outcomes. Following the PRISMA guidelines, we conducted a comprehensive search of electronic databases, including PubMed, Web of Science, and Scopus, to retrieve relevant studies published up to 25 October 2024. Our primary outcomes were door-to-groin puncture (DTG) time, CT scan-to-start of endovascular treatment (EVT) time, CT scan-to-recanalization time, and door-in-door-out time. Secondary outcomes included symptomatic intracranial hemorrhage (ICH), any ICH, mortality, mRS score < 2 at 90 days, and length of hospital stay. A total of 12 studies involving 15,595 patients were included in our analysis. The pooled analysis demonstrated that the implementation of the Viz.ai algorithm was associated with shorter CT scan-to-EVT time (SMD -0.71, 95% CI [-0.98, -0.44], p < 0.001) and DTG time (SMD -0.50, 95% CI [-0.66, -0.35], p < 0.001), as well as shorter CT scan-to-recanalization time (SMD -0.55, 95% CI [-0.76, -0.33], p < 0.001). Additionally, patients in the post-AI group had significantly shorter door-in-door-out time than the pre-AI group (SMD -0.49, 95% CI [-0.71, -0.28], p < 0.001). Despite the improvement in workflow metrics, our analysis did not reveal statistically significant differences in patient clinical outcomes (p > 0.05). Our results suggest that the integration of the Viz.ai platform in stroke care holds significant potential for reducing EVT delays in patients with LVO and optimizing stroke workflow metrics in comprehensive stroke centers. Further studies are required to validate its efficacy in improving clinical outcomes in patients with LVO.
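For reference, the standardized mean differences (SMDs) reported above are computed as follows; this toy computation uses Cohen's d with Hedges' small-sample correction and invented door-to-groin times, not data from the included studies.

```python
# Worked SMD example with made-up pre-AI vs post-AI door-to-groin times.
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """SMD (group 1 minus group 2) with Hedges' small-sample correction."""
    s_pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / s_pooled
    j = 1 - 3 / (4 * (n1 + n2) - 9)   # correction factor for small samples
    return j * d

# Post-AI DTG 92 +/- 30 min (n=180) vs pre-AI 108 +/- 34 min (n=175):
print(f"SMD = {hedges_g(92, 30, 180, 108, 34, 175):.2f}")  # negative favors post-AI
```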

Systematic review and epistemic meta-analysis to advance binomial AI-radiomics integration for predicting high-grade glioma progression and enhancing patient management.

Chilaca-Rosas MF, Contreras-Aguilar MT, Pallach-Loose F, Altamirano-Bustamante NF, Salazar-Calderon DR, Revilla-Monsalve C, Heredia-Gutiérrez JC, Conde-Castro B, Medrano-Guzmán R, Altamirano-Bustamante MM

pubmed logopapersMay 8 2025
High-grade gliomas, particularly glioblastoma (MeSH: Glioblastoma), are among the most aggressive and lethal central nervous system tumors, necessitating advanced diagnostic and prognostic strategies. This systematic review and epistemic meta-analysis explores the integration of Artificial Intelligence (AI) and Radiomics Inter-field (AIRI) to enhance predictive modeling for tumor progression. A comprehensive literature search identified 19 high-quality studies, which were analyzed to evaluate radiomic features and machine learning models in predicting overall survival (OS) and progression-free survival (PFS). Key findings highlight the predictive strength of specific MRI-derived radiomic features, such as log-filter and Gabor textures, and the superior performance of Support Vector Machines (SVM) and Random Forest (RF) models, which achieved high accuracy and AUC scores (e.g., 98% AUC and 98.7% accuracy for OS). This research characterizes the current state of the AIRI field and shows that current articles report their results with different performance indicators and metrics, making outcomes heterogeneous and hard to integrate into a shared body of knowledge. We also found that some current articles use biased methodologies. This study proposes a structured AIRI development roadmap and guidelines to avoid bias and make results comparable, emphasizing standardized feature extraction and AI model training to improve reproducibility across clinical settings. By advancing precision medicine, AIRI integration has the potential to refine clinical decision-making and enhance patient outcomes.
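To make the model comparison concrete, here is an illustrative scikit-learn setup contrasting the two families the review found strongest (SVM and RF); the features, labels, and resulting scores are synthetic, not results from the 19 studies.

```python
# Illustrative SVM-vs-RF comparison on toy radiomic features.
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(150, 40))      # e.g., Gabor / log-filter texture features
y = rng.integers(0, 2, size=150)    # OS above/below median (toy label)

models = {
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf")),  # scale first
    "RF": RandomForestClassifier(n_estimators=300, random_state=1),
}
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: AUC {auc.mean():.2f} +/- {auc.std():.2f}")
```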

Automated Detection of Black Hole Sign for Intracerebral Hemorrhage Patients Using Self-Supervised Learning.

Wang H, Schwirtlich T, Houskamp EJ, Hutch MR, Murphy JX, do Nascimento JS, Zini A, Brancaleoni L, Giacomozzi S, Luo Y, Naidech AM

pubmed logopapersMay 7 2025
Intracerebral Hemorrhage (ICH) is a devastating form of stroke. Hematoma expansion (HE), growth of the hematoma on interval scans, predicts death and disability. Accurate prediction of HE is crucial for targeted interventions to improve patient outcomes. The black hole sign (BHS) on non-contrast computed tomography (CT) scans is a predictive marker for HE. An automated method to recognize the BHS and predict HE could speed precise patient selection for treatment. In this paper, we present a novel framework leveraging self-supervised learning (SSL) techniques for BHS identification on head CT images. A ResNet-50 encoder model was pre-trained on over 1.7 million unlabeled head CT images. Layers for binary classification were added on top of the pre-trained model. The resulting model was fine-tuned using the training data and evaluated on the held-out test set to collect AUC and F1 scores. The evaluations were performed at the scan and slice levels. We ran different evaluation panels: one using two multi-center datasets for external validation, and one including parts of them in the pre-training. Our model demonstrated strong performance in identifying BHS when compared with the baseline model. Specifically, the model achieved scan-level AUC scores between 0.75 and 0.89 and F1 scores between 0.60 and 0.70. Furthermore, it exhibited robustness and generalizability on one external dataset, achieving a scan-level AUC score of up to 0.85 and an F1 score of up to 0.60, while it performed less well on another dataset with more heterogeneous samples. The negative effects could be mitigated by including parts of the external datasets in the fine-tuning process. This study introduced a novel framework integrating SSL into medical image classification, particularly for BHS identification from head CT scans. The resulting pre-trained head CT encoder model showed potential to minimize manual annotation, which would significantly reduce labor, time, and costs. After fine-tuning, the framework demonstrated promising performance on a specific downstream task, identifying the BHS to predict HE, upon comprehensive evaluation on diverse datasets. This approach holds promise for enhancing medical image analysis, particularly in scenarios with limited data availability. ICH = Intracerebral Hemorrhage; HE = Hematoma Expansion; BHS = Black Hole Sign; CT = Computed Tomography; SSL = Self-supervised Learning; AUC = Area Under the receiver operating characteristic Curve; CNN = Convolutional Neural Network; SimCLR = Simple framework for Contrastive Learning of visual Representation; HU = Hounsfield Unit; CLAIM = Checklist for Artificial Intelligence in Medical Imaging; VNA = Vendor Neutral Archive; DICOM = Digital Imaging and Communications in Medicine; NIfTI = Neuroimaging Informatics Technology Initiative; INR = International Normalized Ratio; GPU = Graphics Processing Unit; NIH = National Institutes of Health.
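A minimal sketch of the fine-tuning stage described above: a ResNet-50 encoder (assumed already SSL-pre-trained, e.g., with SimCLR) gets a fresh binary head for BHS classification. The class-weighted loss and all hyperparameters are assumptions for illustration, not the paper's settings.

```python
# Sketch: new binary head on a (hypothetically SSL-pre-trained) ResNet-50.
import torch
import torch.nn as nn
from torchvision.models import resnet50

encoder = resnet50(weights=None)          # load SSL-pre-trained weights here
encoder.fc = nn.Identity()                # expose the 2048-d embedding
model = nn.Sequential(encoder, nn.Linear(2048, 1))  # slice-level BHS logit

# Fine-tune end to end; pos_weight is an assumed choice for a rare positive class.
criterion = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([5.0]))
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

x = torch.rand(4, 3, 224, 224)            # toy batch (CT replicated to 3 channels)
y = torch.tensor([[1.], [0.], [0.], [0.]])
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
```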

Enhancing efficient deep learning models with multimodal, multi-teacher insights for medical image segmentation.

Hossain KF, Kamran SA, Ong J, Tavakkoli A

pubmed logopapersMay 7 2025
The rapid evolution of deep learning has dramatically enhanced the field of medical image segmentation, leading to the development of models with unprecedented accuracy in analyzing complex medical images. Deep learning-based segmentation holds significant promise for advancing clinical care and enhancing the precision of medical interventions. However, these models' high computational demand and complexity present significant barriers to their application in resource-constrained clinical settings. To address this challenge, we introduce Teach-Former, a novel knowledge distillation (KD) framework that leverages a Transformer backbone to effectively condense the knowledge of multiple teacher models into a single, streamlined student model. Moreover, it excels in the contextual and spatial interpretation of relationships across multimodal images for more accurate and precise segmentation. Teach-Former stands out by harnessing multimodal inputs (CT, PET, MRI) and distilling the final predictions and the intermediate attention maps, ensuring a richer spatial and contextual knowledge transfer. Through this technique, the student model inherits the capacity for fine segmentation while operating with a significantly reduced parameter set and computational footprint. Additionally, introducing a novel training strategy optimizes knowledge transfer, ensuring the student model captures the intricate mapping of features essential for high-fidelity segmentation. The efficacy of Teach-Former has been effectively tested on two extensive multimodal datasets, HECKTOR21 and PI-CAI22, encompassing various image types. The results demonstrate that our KD strategy reduces the model complexity and surpasses existing state-of-the-art methods to achieve superior performance. The findings of this study indicate that the proposed methodology could facilitate efficient segmentation of complex multimodal medical images, supporting clinicians in achieving more precise diagnoses and comprehensive monitoring of pathological conditions ( https://github.com/FarihaHossain/TeachFormer ).
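The core of a multi-teacher distillation objective of this kind can be sketched as below; the averaging scheme, loss weights, and tensor shapes are illustrative assumptions rather than Teach-Former's actual formulation.

```python
# Hedged sketch: student matches averaged teacher probabilities and averaged
# teacher attention maps, alongside the usual supervised segmentation loss.
import torch
import torch.nn.functional as F

def kd_loss(student_logits, student_attn, teacher_logits, teacher_attns,
            target, t=2.0, alpha=0.5, beta=0.1):
    # Supervised segmentation loss against ground truth.
    sup = F.cross_entropy(student_logits, target)
    # Soft-label distillation from the averaged teacher ensemble.
    t_mean = torch.stack(teacher_logits).mean(0)
    soft = F.kl_div(F.log_softmax(student_logits / t, dim=1),
                    F.softmax(t_mean / t, dim=1),
                    reduction="batchmean") * t * t
    # Match intermediate attention maps with an MSE term.
    a_mean = torch.stack(teacher_attns).mean(0)
    attn = F.mse_loss(student_attn, a_mean)
    return sup + alpha * soft + beta * attn

# Toy shapes: 2 classes, 32x32 maps, two teachers, 8 attention channels.
s_log, s_att = torch.randn(1, 2, 32, 32), torch.rand(1, 8, 32, 32)
t_log = [torch.randn(1, 2, 32, 32) for _ in range(2)]
t_att = [torch.rand(1, 8, 32, 32) for _ in range(2)]
target = torch.randint(0, 2, (1, 32, 32))
print(kd_loss(s_log, s_att, t_log, t_att, target))
```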

MRI-based multimodal AI model enables prediction of recurrence risk and adjuvant therapy in breast cancer.

Yu Y, Ren W, Mao L, Ouyang W, Hu Q, Yao Q, Tan Y, He Z, Ban X, Hu H, Lin R, Wang Z, Chen Y, Wu Z, Chen K, Ouyang J, Li T, Zhang Z, Liu G, Chen X, Li Z, Duan X, Wang J, Yao H

pubmed logopapersMay 7 2025
Timely intervention and improved prognosis for breast cancer patients rely on early metastasis risk detection and accurate treatment predictions. This study introduces an advanced multimodal MRI and AI-driven 3D deep learning model, termed the 3D-MMR-model, designed to predict recurrence risk in non-metastatic breast cancer patients. We conducted a multicenter study involving 1199 non-metastatic breast cancer patients from four institutions in China, with comprehensive MRI and clinical data retrospectively collected. Our model employed multimodal data fusion, utilizing contrast-enhanced T1-weighted imaging (T1 + C) and T2-weighted imaging (T2WI) volumes, processed through a modified 3D-UNet for tumor segmentation and a DenseNet121-based architecture for disease-free survival (DFS) prediction. Additionally, we performed RNA-seq analysis to delve further into the relationship between concentrated hotspots within the tumor region and the tumor microenvironment. The 3D-MMR-model demonstrated superior predictive performance, with time-dependent ROC analysis yielding AUC values of 0.90, 0.89, and 0.88 for 2-, 3-, and 4-year DFS predictions, respectively, in the training cohort. External validation cohorts corroborated these findings, highlighting the model's robustness across diverse clinical settings. Integration of clinicopathological features further enhanced the model's accuracy, with a multimodal approach significantly improving risk stratification and decision-making in clinical practice. Visualization techniques provided insights into the decision-making process, correlating predictions with tumor microenvironment characteristics. In summary, the 3D-MMR-model represents a significant advancement in breast cancer prognosis, combining cutting-edge AI technology with multimodal imaging to deliver precise and clinically relevant predictions of recurrence risk. This innovative approach holds promise for enhancing patient outcomes and guiding individualized treatment plans in breast cancer care.
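A toy sketch of the channel-wise fusion of the two MR sequences named above: the paper's modified 3D-UNet and DenseNet121 backbone are replaced by a small stand-in network, and the scalar risk head is an assumption, since the abstract reports only DFS prediction.

```python
# Stand-in network showing channel-wise fusion of T1+C and T2WI volumes.
import torch
import torch.nn as nn

class FusionRiskNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(          # stand-in for a 3-D backbone
            nn.Conv3d(2, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten())
        self.risk = nn.Linear(32, 1)            # scalar recurrence-risk score

    def forward(self, t1c, t2w):
        x = torch.cat([t1c, t2w], dim=1)        # fuse sequences as channels
        return self.risk(self.features(x))

net = FusionRiskNet()
t1c = torch.rand(2, 1, 32, 64, 64)              # (N, 1, D, H, W) volumes
t2w = torch.rand(2, 1, 32, 64, 64)
print(net(t1c, t2w).shape)                      # torch.Size([2, 1])
```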

Rethinking Boundary Detection in Deep Learning-Based Medical Image Segmentation

Yi Lin, Dong Zhang, Xiao Fang, Yufan Chen, Kwang-Ting Cheng, Hao Chen

arxiv logopreprintMay 6 2025
Medical image segmentation is a pivotal task within the realms of medical image analysis and computer vision. While current methods have shown promise in accurately segmenting major regions of interest, the precise segmentation of boundary areas remains challenging. In this study, we propose a novel network architecture named CTO, which combines Convolutional Neural Networks (CNNs), Vision Transformer (ViT) models, and explicit edge detection operators to tackle this challenge. CTO surpasses existing methods in terms of segmentation accuracy and strikes a better balance between accuracy and efficiency, without the need for additional data inputs or label injections. Specifically, CTO adheres to the canonical encoder-decoder network paradigm, with a dual-stream encoder network comprising a mainstream CNN stream for capturing local features and an auxiliary StitchViT stream for integrating long-range dependencies. Furthermore, to enhance the model's ability to learn boundary areas, we introduce a boundary-guided decoder network that employs binary boundary masks generated by dedicated edge detection operators to provide explicit guidance during the decoding process. We validate the performance of CTO through extensive experiments conducted on six challenging medical image segmentation datasets, namely ISIC 2016, PH2, ISIC 2018, CoNIC, LiTS17, and BTCV. Our experimental results unequivocally demonstrate that CTO achieves state-of-the-art accuracy on these datasets while maintaining competitive model complexity. The codes have been released at: https://github.com/xiaofang007/CTO.
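The boundary-guidance idea can be illustrated with a fixed Sobel operator that converts a segmentation mask into a binary boundary map suitable for supervising a boundary-aware decoder; CTO's exact choice of edge operators and thresholds may differ.

```python
# Sobel-based binary boundary-mask generation from a segmentation mask.
import torch
import torch.nn.functional as F

def boundary_mask(mask, thresh=0.1):
    """mask: (N,1,H,W) float in [0,1] -> binary boundary map of same shape."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
    ky = kx.t()
    sobel = torch.stack([kx, ky]).unsqueeze(1)       # (2,1,3,3) fixed kernels
    grad = F.conv2d(mask, sobel, padding=1)          # x- and y-gradients
    magnitude = grad.pow(2).sum(1, keepdim=True).sqrt()
    return (magnitude > thresh).float()

mask = torch.zeros(1, 1, 64, 64)
mask[..., 16:48, 16:48] = 1.0                        # a square "organ"
print(boundary_mask(mask).sum())                     # count of boundary pixels
```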

STG: Spatiotemporal Graph Neural Network with Fusion and Spatiotemporal Decoupling Learning for Prognostic Prediction of Colorectal Cancer Liver Metastasis

Yiran Zhu, Wei Yang, Yan Su, Zesheng Li, Chengchang Pan, Honggang Qi

arxiv logopreprintMay 6 2025
We propose a multimodal spatiotemporal graph neural network (STG) framework to predict colorectal cancer liver metastasis (CRLM) progression. Current clinical models do not effectively integrate the tumor's spatial heterogeneity, dynamic evolution, and complex multimodal data relationships, limiting their predictive accuracy. Our STG framework combines preoperative CT imaging and clinical data into a heterogeneous graph structure, enabling joint modeling of tumor distribution and temporal evolution through spatial topology and cross-modal edges. The framework uses GraphSAGE to aggregate spatiotemporal neighborhood information and leverages supervised and contrastive learning strategies to enhance the model's ability to capture temporal features and improve robustness. A lightweight version of the model reduces parameter count by 78.55%, maintaining near-state-of-the-art performance. The model jointly optimizes recurrence risk regression and survival analysis tasks, with contrastive loss improving feature representational discriminability and cross-modal consistency. Experimental results on the MSKCC CRLM dataset show a time-adjacent accuracy of 85% and a mean absolute error of 1.1005, significantly outperforming existing methods. The innovative heterogeneous graph construction and spatiotemporal decoupling mechanism effectively uncover the associations between dynamic tumor microenvironment changes and prognosis, providing reliable quantitative support for personalized treatment decisions.
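As background for the aggregation step, below is a minimal mean-aggregator GraphSAGE layer in plain PyTorch; STG's heterogeneous edge types, temporal decoupling, and contrastive losses are not reproduced here.

```python
# Minimal mean-aggregator GraphSAGE layer illustrating neighborhood aggregation.
import torch
import torch.nn as nn

class SAGELayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(2 * in_dim, out_dim)  # [self || mean(neighbors)]

    def forward(self, x, adj):
        deg = adj.sum(1, keepdim=True).clamp(min=1)
        neigh = adj @ x / deg                       # mean over neighbors
        return torch.relu(self.lin(torch.cat([x, neigh], dim=1)))

# 5 nodes (e.g., lesions at different time points), 8 features each.
x = torch.rand(5, 8)
adj = (torch.rand(5, 5) > 0.5).float()             # toy adjacency matrix
adj.fill_diagonal_(0)
print(SAGELayer(8, 16)(x, adj).shape)               # torch.Size([5, 16])
```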