
Improving Generalization of Medical Image Registration Foundation Model

Jing Hu, Kaiwei Yu, Hongjiang Xian, Shu Hu, Xin Wang

arXiv preprint · May 10, 2025
Deformable registration is a fundamental task in medical image processing, aiming to achieve precise alignment by establishing nonlinear correspondences between images. Traditional methods offer good adaptability and interpretability but suffer from low computational efficiency. Although deep learning approaches have significantly improved registration speed and accuracy, they often lack flexibility and generalizability across different datasets and tasks. In recent years, foundation models have emerged as a promising direction, leveraging large and diverse datasets to learn universal features and transformation patterns for image registration, thus demonstrating strong cross-task transferability. However, these models still face challenges in generalization and robustness when encountering novel anatomical structures, varying imaging conditions, or unseen modalities. To address these limitations, this paper incorporates Sharpness-Aware Minimization (SAM) into foundation models to enhance their generalization and robustness in medical image registration. By optimizing the flatness of the loss landscape, SAM improves model stability across diverse data distributions and strengthens its ability to handle complex clinical scenarios. Experimental results show that foundation models integrated with SAM achieve significant improvements in cross-dataset registration performance, offering new insights for the advancement of medical image registration technology. Our code is available at https://github.com/Promise13/fm_sam.
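To make the SAM step concrete, below is a minimal sketch of one sharpness-aware update in PyTorch, following the standard two-pass formulation: perturb the weights toward the worst case within a small L2 ball, take the gradient there, then descend from the original weights. The registration loss `loss_fn`, the radius `rho`, and all other names are illustrative assumptions, not details from the paper.

```python
import torch

def sam_step(model, loss_fn, batch, base_optimizer, rho=0.05):
    """One Sharpness-Aware Minimization update (Foret et al., 2021).

    Illustrative sketch: `loss_fn(model, batch)` is assumed to return
    the registration loss (e.g. similarity + smoothness terms).
    """
    # 1) First forward/backward pass: gradient at the current weights.
    loss = loss_fn(model, batch)
    loss.backward()

    # 2) Climb to the worst-case point w + eps within an L2 ball of radius rho.
    grads = [p.grad for p in model.parameters() if p.grad is not None]
    grad_norm = torch.norm(torch.stack([g.norm(p=2) for g in grads]), p=2)
    perturbed = []
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                continue
            eps = rho * p.grad / (grad_norm + 1e-12)
            p.add_(eps)                      # perturb weights in place
            perturbed.append((p, eps))
    model.zero_grad()

    # 3) Second forward/backward pass: gradient at the perturbed weights.
    loss_fn(model, batch).backward()

    # 4) Restore original weights, then step with the sharpness-aware gradient.
    with torch.no_grad():
        for p, eps in perturbed:
            p.sub_(eps)
    base_optimizer.step()
    base_optimizer.zero_grad()
    return loss.item()
```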

Evaluating an information theoretic approach for selecting multimodal data fusion methods.

Zhang T, Ding R, Luong KD, Hsu W

PubMed · May 10, 2025
Interest has grown in combining radiology, pathology, genomic, and clinical data to improve the accuracy of diagnostic and prognostic predictions toward precision health. However, most existing works choose their datasets and modeling approaches empirically and in an ad hoc manner. A prior study proposed four partial information decomposition (PID)-based metrics to provide a theoretical understanding of multimodal data interactions: redundancy, uniqueness of each modality, and synergy. However, these metrics have only been evaluated on a limited collection of biomedical data, and the existing work does not elucidate the effect of parameter selection when calculating the PID metrics. In this work, we evaluate PID metrics on a wider range of biomedical data, including clinical, radiology, pathology, and genomic data, and propose potential improvements to the PID metrics. We apply the PID metrics to seven different modality pairs across four distinct cohorts (datasets). We compare and interpret trends in the resulting PID metrics and downstream model performance in these multimodal cohorts. The downstream tasks being evaluated include predicting the prognosis (either overall survival or recurrence) of patients with non-small cell lung cancer, prostate cancer, and glioblastoma. We found that, while PID metrics are informative, solely relying on these metrics to decide on a fusion approach does not always yield a machine learning model with optimal performance. Of the seven different modality pairs, three had poor (0%), three had moderate (66%-89%), and only one had perfect (100%) consistency between the PID values and model performance. We propose two improvements to the PID metrics (determining the optimal parameters and uncertainty estimation) and identify areas where PID metrics could be further improved. The current PID metrics are not accurate enough for estimating the multimodal data interactions and need to be improved before they can serve as a reliable tool. We propose improvements and provide suggestions for future work. Code: https://github.com/zhtyolivia/pid-multimodal.
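For orientation, partial information decomposition splits the total mutual information that two modalities carry about a prediction target into exactly the four quantities the study measures. In the standard formulation (Williams and Beer, 2010), with modalities X1, X2 and target Y:

```latex
I(X_1, X_2; Y) = \underbrace{R}_{\text{redundancy}}
  + \underbrace{U_1}_{\text{unique to } X_1}
  + \underbrace{U_2}_{\text{unique to } X_2}
  + \underbrace{S}_{\text{synergy}}
```

Redundancy counts information both modalities share, the uniqueness terms count information only one modality provides, and synergy counts information available only when the modalities are combined.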

Preoperative radiomics models using CT and MRI for microsatellite instability in colorectal cancer: a systematic review and meta-analysis.

Capello Ingold G, Martins da Fonseca J, Kolenda Zloić S, Verdan Moreira S, Kago Marole K, Finnegan E, Yoshikawa MH, Daugėlaitė S, Souza E Silva TX, Soato Ratti MA

PubMed · May 10, 2025
Microsatellite instability (MSI) is a novel predictive biomarker for chemotherapy and immunotherapy response, as well as a prognostic indicator in colorectal cancer (CRC). The current standard for MSI identification is polymerase chain reaction (PCR) testing or immunohistochemical analysis of tumor biopsy samples. However, tumor heterogeneity and procedure complications pose challenges to these techniques. CT and MRI-based radiomics models offer a promising non-invasive approach for this purpose. A systematic search of PubMed, Embase, Cochrane Library and Scopus was conducted to identify studies evaluating the diagnostic performance of CT and MRI-based radiomics models for detecting MSI status in CRC. Pooled area under the curve (AUC), sensitivity, and specificity were calculated in RStudio using a random-effects model. Forest plots and a summary ROC curve were generated. Heterogeneity was assessed using I² statistics and explored through sensitivity analyses, threshold effect assessment, subgroup analyses and meta-regression. Seventeen studies with a total of 6,045 subjects were included in the analysis. All studies extracted radiomic features from CT or MRI images of CRC patients with confirmed MSI status to train machine learning models. The pooled AUC was 0.815 (95% CI: 0.784-0.840) for CT-based studies and 0.900 (95% CI: 0.819-0.943) for MRI-based studies. Significant heterogeneity was identified and addressed through extensive analysis. Radiomics models represent a novel and promising tool for predicting MSI status in CRC patients. These findings may serve as a foundation for future studies aimed at developing and validating improved models, ultimately enhancing the diagnosis, treatment, and prognosis of colorectal cancer.
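As a rough illustration of the pooling step, the sketch below implements DerSimonian-Laird random-effects pooling of per-study effect sizes in Python. The abstract states the analysis was done in RStudio; this mirrors the standard estimator rather than the authors' code, and the inputs are placeholders.

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Pool per-study effect sizes (e.g. logit-transformed AUCs) with a
    DerSimonian-Laird random-effects model. Illustrative sketch only."""
    y = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v                               # fixed-effect weights
    y_fe = np.sum(w * y) / np.sum(w)          # fixed-effect pooled mean
    Q = np.sum(w * (y - y_fe) ** 2)           # Cochran's Q
    df = len(y) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - df) / c)             # between-study variance
    w_re = 1.0 / (v + tau2)                   # random-effects weights
    pooled = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    i2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0  # I^2 heterogeneity, %
    return pooled, se, tau2, i2

# Placeholder per-study values, not data from the meta-analysis:
pooled, se, tau2, i2 = dersimonian_laird([1.4, 1.7, 1.2], [0.04, 0.09, 0.06])
print(f"pooled={pooled:.3f} (SE {se:.3f}), tau2={tau2:.3f}, I2={i2:.1f}%")
```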

Application of artificial intelligence-based three-dimensional digital reconstruction technology in precision treatment of complex total hip arthroplasty.

Zheng Q, She H, Zhang Y, Zhao P, Liu X, Xiang B

PubMed · May 10, 2025
To evaluate the predictive ability of AI HIP in determining the size and position of prostheses during complex total hip arthroplasty (THA), and to investigate the factors influencing the accuracy of preoperative planning predictions. From April 2021 to December 2023, patients with complex hip joint diseases were divided into the AI preoperative planning group (n = 29) and the X-ray preoperative planning group (n = 27). Postoperative X-rays were used to measure the acetabular anteversion angle, abduction angle, and tip-to-sternum distance; intraoperative duration, blood loss, planning time, postoperative Harris Hip Scores (at 2 weeks, 3 months, and 6 months), and visual analogue scale (VAS) pain scores (at 2 weeks and at final follow-up) were recorded to analyze clinical outcomes. On the acetabular side, the accuracy of AI preoperative planning was higher than that of X-ray preoperative planning (75.9% vs. 44.4%, P = 0.016). On the femoral side, AI preoperative planning also showed higher accuracy (85.2% vs. 59.3%, P = 0.033). The AI preoperative planning group showed superior outcomes in terms of reducing bilateral leg length discrepancy (LLD), decreasing operative time and intraoperative blood loss, early postoperative recovery, and pain control compared to the X-ray preoperative planning group (P < 0.05). No significant differences were observed between the groups regarding bilateral femoral offset (FO) differences, bilateral combined offset (CO) differences, abduction angle, anteversion angle, or tip-to-sternum distance. Factors such as gender, age, affected side, comorbidities, body mass index (BMI) classification, and bone mineral density did not affect the prediction accuracy of AI HIP preoperative planning. Artificial intelligence-based 3D planning can be effectively utilized for preoperative planning in complex THA. Compared to X-ray templating, AI demonstrates superior accuracy in prosthesis measurement and provides significant clinical benefits, particularly in early postoperative recovery.
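The group-wise accuracy comparison can be illustrated with a standard two-proportion chi-square test. The counts below are back-calculated from the reported percentages and group sizes (22/29 accurate in the AI group, 12/27 in the X-ray group on the acetabular side) and are therefore an assumption, not data taken from the paper; run without continuity correction, it reproduces the reported P = 0.016.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Acetabular-side planning accuracy: counts back-calculated from the
# reported 75.9% (AI, n = 29) and 44.4% (X-ray, n = 27) -- an assumption,
# since the abstract reports percentages only.
table = np.array([[22, 7],     # AI group: accurate / inaccurate
                  [12, 15]])   # X-ray group: accurate / inaccurate
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")  # ~0.016, matching the reported value
```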

Adherence to SVS Abdominal Aortic Aneurysm Guidelines Among Patients Detected by AI-Based Algorithm.

Wilson EM, Yao K, Kostiuk V, Bader J, Loh S, Mojibian H, Fischer U, Ochoa Chaar CI, Aboian E

PubMed · May 9, 2025
This study evaluates adherence to the latest Society for Vascular Surgery (SVS) guidelines on imaging surveillance, physician evaluation, and surgical intervention for abdominal aortic aneurysm (AAA). AI-based natural language processing, applied retrospectively, identified AAA patients from imaging scans at a tertiary care center during January-March of 2019 and of 2021, excluding the pandemic period. Retrospective chart review assessed demographics, comorbidities, imaging, and follow-up adherence. Statistical significance was set at p<0.05. Among 479 identified patients, 279 remained in the final cohort after exclusion of deceased patients. Imaging surveillance adherence was 67.7% (189/279), with males comprising 72.5% (137/189) (Figure 1). The mean age of adherent patients was 73.9 years (SD ±9.5) vs. 75.2 years (SD ±10.8) for non-adherent patients (Table 1). Adherent females were significantly younger than non-adherent females (76.7 vs. 81.1 years; p=0.003), with no significant age difference among adherent males. Adherent patients were more likely to be evaluated by a vascular provider within six months (p<0.001), but aneurysm size did not affect imaging adherence: 3.0-4.0 cm (p=0.24), 4.0-5.0 cm (p=0.88), >5.0 cm (p=0.29). Based on SVS surgical criteria, 18 males (AAA >5.5 cm) and 17 females (AAA >5.0 cm) qualified for intervention, and repair rates increased in 2021. Thirty-four males (20 in 2019 vs. 14 in 2021) and seven females (2021 only) received surgical intervention below the threshold for repair. Despite consistent SVS guidelines, adherence remains moderate. AI-based detection and follow-up algorithms may enhance adherence and long-term AAA patient outcomes; however, further research is needed to assess the specific impacts of AI.
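The study's NLP pipeline is not described in the abstract, so purely as an illustration, a rule-based screener for AAA mentions against the SVS repair thresholds quoted above (>5.5 cm for males, >5.0 cm for females) might look like the sketch below; the regex and function names are hypothetical.

```python
import re

# Toy report screener: the study used an AI/NLP pipeline whose internals
# are not described, so this regex-based flagger is purely illustrative.
AAA_SIZE = re.compile(
    r"(?:abdominal aortic aneurysm|AAA)[^.]*?(\d+(?:\.\d+)?)\s*cm",
    re.IGNORECASE,
)

def meets_svs_repair_threshold(report_text: str, sex: str) -> bool:
    """Flag reports whose AAA diameter exceeds the SVS repair threshold
    cited in the abstract: >5.5 cm for males, >5.0 cm for females."""
    match = AAA_SIZE.search(report_text)
    if not match:
        return False
    diameter_cm = float(match.group(1))
    return diameter_cm > (5.5 if sex == "M" else 5.0)

print(meets_svs_repair_threshold(
    "Infrarenal abdominal aortic aneurysm measuring 5.8 cm.", "M"))  # True
```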

Deep learning for Parkinson's disease classification using multimodal and multi-sequence PET/MR images.

Chang Y, Liu J, Sun S, Chen T, Wang R

PubMed · May 9, 2025
We aimed to use deep learning (DL) techniques to accurately differentiate Parkinson's disease (PD) from multiple system atrophy (MSA), which share similar clinical presentations. In this retrospective analysis, 206 patients who underwent PET/MR imaging at the Chinese PLA General Hospital were included, having been clinically diagnosed with either PD or MSA; an additional 38 healthy volunteers served as normal controls (NC). All subjects were randomly assigned to the training and test sets at a ratio of 7:3. The input to the model consists of 10 two-dimensional (2D) slices in axial, coronal, and sagittal planes from multi-modal images. A modified Residual Block Network with 18 layers (ResNet18) was trained with different modal images to classify PD, MSA, and NC. A four-fold cross-validation method was applied to the training set. Performance evaluations included accuracy, precision, recall, F1 score, receiver operating characteristic (ROC) curve, and area under the ROC curve (AUC). Six single-modal models and seven multi-modal models were trained and tested. The PET models outperformed the MRI models. The ¹¹C-methyl-N-2β-carbomethoxy-3β-(4-fluorophenyl)tropane (¹¹C-CFT)-apparent diffusion coefficient (ADC) model showed the best classification, reaching 0.97 accuracy, 0.93 precision, 0.95 recall, 0.92 F1 score, and 0.96 AUC. In the test set, the accuracy, precision, recall, and F1 score of the CFT-ADC model were 0.70, 0.73, 0.93, and 0.82, respectively. The proposed DL method shows potential as a high-performance assisting tool for the accurate diagnosis of PD and MSA. A multi-modal and multi-sequence model could further enhance the ability to classify PD.
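A minimal sketch of the backbone described here: a ResNet18 whose first convolution accepts stacked 2D slices rather than RGB channels and whose head outputs the three classes (PD, MSA, NC). The paper's exact slice-stacking and layer modifications are not specified in the abstract, so the channel count below (10 slices × 2 modalities, e.g. ¹¹C-CFT PET plus ADC) is an assumption.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

def build_classifier(in_channels: int = 20, num_classes: int = 3) -> nn.Module:
    """ResNet18 adapted for stacked 2D slices and 3-class output.
    in_channels=20 (10 slices x 2 modalities) is an assumption."""
    model = resnet18(weights=None)
    # Replace the RGB stem so it accepts the slice stack.
    model.conv1 = nn.Conv2d(in_channels, 64, kernel_size=7,
                            stride=2, padding=3, bias=False)
    # Replace the 1000-way ImageNet head with PD / MSA / NC logits.
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

model = build_classifier()
logits = model(torch.randn(4, 20, 224, 224))  # batch of 4 slice stacks
print(logits.shape)                           # torch.Size([4, 3])
```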

Computationally enabled polychromatic polarized imaging enables mapping of matrix architectures that promote pancreatic ductal adenocarcinoma dissemination.

Qian G, Zhang H, Liu Y, Shribak M, Eliceiri KW, Provenzano PP

PubMed · May 9, 2025
Pancreatic ductal adenocarcinoma (PDA) is an extremely metastatic and lethal disease. In PDA, extracellular matrix (ECM) architectures known as Tumor-Associated Collagen Signatures (TACS) regulate invasion and metastatic spread in both early dissemination and in late-stage disease. As such, TACS has been suggested as a biomarker to aid in pathologic assessment. However, despite its significance, approaches to quantitatively capture these ECM patterns currently require advanced optical systems with signal processing analysis. Here we present an expansion of polychromatic polarized microscopy (PPM) with inherent angular information coupled to machine learning and computational pixel-wise analysis of TACS. Using this platform, we are able to accurately capture TACS architectures in H&E stained histology sections directly through PPM contrast. Moreover, PPM facilitated identification of transitions to dissemination architectures, i.e., transitions from sequestration through expansion to dissemination from both PanINs and throughout PDA. Lastly, PPM evaluation of architectures in liver metastases, the most common metastatic site for PDA, demonstrates TACS-mediated focal and local invasion as well as identification of unique patterns anchoring aligned fibers into normal-adjacent tumor, suggesting that these patterns may be precursors to metastasis expansion and local spread from micrometastatic lesions. Combined, these findings demonstrate that PPM coupled to computational platforms is a powerful tool for analyzing ECM architecture that can be employed to advance cancer microenvironment studies and provide clinically relevant diagnostic information.

Application of Artificial Intelligence in Cardio-Oncology Imaging for Cancer Therapy-Related Cardiovascular Toxicity: Systematic Review.

Mushcab H, Al Ramis M, AlRujaib A, Eskandarani R, Sunbul T, AlOtaibi A, Obaidan M, Al Harbi R, Aljabri D

PubMed · May 9, 2025
Artificial intelligence (AI) is a revolutionary tool yet to be fully integrated into several health care sectors, including medical imaging. AI can transform how medical imaging is conducted and interpreted, especially in cardio-oncology. This study aims to systematically review the available literature on the use of AI in cardio-oncology imaging to predict cardiotoxicity and describe the possible improvement of different imaging modalities that can be achieved if AI is successfully deployed to routine practice. We conducted a database search in PubMed, Ovid MEDLINE, Cochrane Library, CINAHL, and Google Scholar from inception to 2023 using the AI research assistant tool (Elicit) to search for original studies reporting AI outcomes in adult patients diagnosed with any cancer and undergoing cardiotoxicity assessment. Outcomes included incidence of cardiotoxicity, left ventricular ejection fraction, risk factors associated with cardiotoxicity, heart failure, myocardial dysfunction, signs of cancer therapy-related cardiovascular toxicity, echocardiography, and cardiac magnetic resonance imaging. Descriptive information about each study was recorded, including imaging technique, AI model, outcomes, and limitations. The systematic search resulted in 7 studies conducted between 2018 and 2023, which are included in this review. Most of these studies were conducted in the United States (71%), included patients with breast cancer (86%), and used magnetic resonance imaging as the imaging modality (57%). The quality assessment of the studies had an average of 86% compliance in all of the tool's sections. In conclusion, this systematic review demonstrates the potential of AI to enhance cardio-oncology imaging for predicting cardiotoxicity in patients with cancer. Our findings suggest that AI can enhance the accuracy and efficiency of cardiotoxicity assessments. However, further research through larger, multicenter trials is needed to validate these applications and refine AI technologies for routine use, paving the way for improved patient outcomes in cancer survivors at risk of cardiotoxicity.

DFEN: Dual Feature Equalization Network for Medical Image Segmentation

Jianjian Yin, Yi Chen, Chengyu Li, Zhichao Zheng, Yanhui Gu, Junsheng Zhou

arXiv preprint · May 9, 2025
Current methods for medical image segmentation primarily focus on extracting contextual feature information from the perspective of the whole image. While these methods have shown effective performance, none of them take into account the fact that pixels at the boundary, and regions with a low number of class pixels, capture more contextual feature information from other classes, leading to misclassification of pixels due to unequal contextual feature information. In this paper, we propose a dual feature equalization network based on the hybrid architecture of Swin Transformer and Convolutional Neural Network, aiming to augment the pixel feature representations with image-level equalization feature information and class-level equalization feature information. Firstly, the image-level feature equalization module is designed to equalize the contextual information of pixels within the image. Secondly, we aggregate regions of the same class to equalize the pixel feature representations of the corresponding class by a class-level feature equalization module. Finally, the pixel feature representations are enhanced by learning weights for image-level equalization feature information and class-level equalization feature information. In addition, Swin Transformer is utilized as both the encoder and decoder, thereby bolstering the ability of the model to capture long-range dependencies and spatial correlations. We conducted extensive experiments on Breast Ultrasound Images (BUSI), International Skin Imaging Collaboration (ISIC2017), Automated Cardiac Diagnosis Challenge (ACDC) and PH$^2$ datasets. The experimental results demonstrate that our method achieves state-of-the-art performance. Our code is publicly available at https://github.com/JianJianYin/DFEN.
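As one plausible reading of the class-level equalization module, the sketch below replaces each pixel's context with the soft per-class mean feature of its predicted class, so that pixels of the same class share equalized information. This is an illustrative interpretation of the abstract, not the authors' implementation.

```python
import torch

def class_level_equalization(feats: torch.Tensor,
                             logits: torch.Tensor) -> torch.Tensor:
    """Toy class-level feature equalization.

    feats:  (B, C, H, W) pixel features
    logits: (B, K, H, W) segmentation logits for K classes
    Returns (B, C, H, W) features equalized within each (soft) class.
    """
    b, c, h, w = feats.shape
    probs = torch.softmax(logits, dim=1)     # soft class assignment
    f = feats.flatten(2)                     # (B, C, HW)
    p = probs.flatten(2)                     # (B, K, HW)
    # Per-class mean feature: probability-weighted average over pixels.
    class_mean = torch.einsum("bkn,bcn->bkc", p, f) / (
        p.sum(dim=2, keepdim=True) + 1e-6)   # (B, K, C)
    # Broadcast each class mean back to its member pixels.
    equalized = torch.einsum("bkn,bkc->bcn", p, class_mean)
    return equalized.view(b, c, h, w)

out = class_level_equalization(torch.randn(2, 64, 32, 32),
                               torch.randn(2, 4, 32, 32))
print(out.shape)  # torch.Size([2, 64, 32, 32])
```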

Noise-Consistent Siamese-Diffusion for Medical Image Synthesis and Segmentation

Kunpeng Qiu, Zhiqiang Gao, Zhiying Zhou, Mingjie Sun, Yongxin Guo

arXiv preprint · May 9, 2025
Deep learning has revolutionized medical image segmentation, yet its full potential remains constrained by the paucity of annotated datasets. While diffusion models have emerged as a promising approach for generating synthetic image-mask pairs to augment these datasets, they paradoxically suffer from the same data scarcity challenges they aim to mitigate. Traditional mask-only models frequently yield low-fidelity images due to their inability to adequately capture morphological intricacies, which can critically compromise the robustness and reliability of segmentation models. To alleviate this limitation, we introduce Siamese-Diffusion, a novel dual-component model comprising Mask-Diffusion and Image-Diffusion. During training, a Noise Consistency Loss is introduced between these components to enhance the morphological fidelity of Mask-Diffusion in the parameter space. During sampling, only Mask-Diffusion is used, ensuring diversity and scalability. Comprehensive experiments demonstrate the superiority of our method. Siamese-Diffusion boosts SANet's mDice and mIoU by 3.6% and 4.4% on the Polyps dataset, while UNet improves by 1.52% and 1.64% on ISIC2018. Code is available at GitHub.
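To illustrate the training objective, one plausible form of the Noise Consistency Loss is sketched below: both branches regress the injected noise as in standard DDPM training, and a consistency term pulls the Mask-Diffusion prediction toward the Image-Diffusion prediction. The exact formulation and the weighting `lam` are assumptions based on the abstract, not the paper's definition.

```python
import torch
import torch.nn.functional as F

def siamese_diffusion_loss(mask_eps: torch.Tensor,
                           image_eps: torch.Tensor,
                           true_noise: torch.Tensor,
                           lam: float = 1.0) -> torch.Tensor:
    """Hypothetical combined objective for Siamese-Diffusion.

    mask_eps:   noise predicted by Mask-Diffusion,  (B, C, H, W)
    image_eps:  noise predicted by Image-Diffusion, same shape
    true_noise: Gaussian noise actually added at this timestep
    """
    # Standard DDPM denoising loss for each branch.
    denoise = (F.mse_loss(mask_eps, true_noise)
               + F.mse_loss(image_eps, true_noise))
    # Noise consistency: stop-gradient on the image branch so it
    # guides the mask branch rather than the other way round.
    consistency = F.mse_loss(mask_eps, image_eps.detach())
    return denoise + lam * consistency
```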