Soumen Ghosh, Christine Jestin Hannan, Rajat Vashistha, Parveen Kundu, Sandra Brosda, Lauren G. Aoude, James Lonie, Andrew Nathanson, Jessica Ng, Andrew P. Barbour, Viktor Vegh

arXiv preprint · Aug 26, 2025
Robust generalization is essential for deploying deep learning-based tumor segmentation in clinical PET-CT workflows, where anatomical sites, scanners, and patient populations vary widely. This study presents the first cross-cancer evaluation of nnU-Net on PET-CT, introducing two novel, expert-annotated whole-body datasets: 279 patients with oesophageal cancer (Australian cohort) and 54 with lung cancer (Indian cohort). These cohorts complement the public AutoPET dataset and enable systematic stress-testing of cross-domain performance. We trained and tested 3D nnU-Net models under three paradigms: target-only (oesophageal), public-only (AutoPET), and combined training. On the test sets, the oesophageal-only model achieved the best in-domain accuracy (mean DSC 57.8) but failed on the external Indian lung cohort (mean DSC below 3.4), indicating severe overfitting. The public-only model generalized more broadly (mean DSC 63.5 on AutoPET, 51.6 on the Indian lung cohort) but underperformed on the oesophageal Australian cohort (mean DSC 26.7). The combined approach provided the most balanced results (mean DSC 52.9 on lung, 40.7 on oesophageal, 60.9 on AutoPET), reducing boundary errors and improving robustness across all cohorts. These findings demonstrate that dataset diversity, particularly multi-demographic, multi-center, and multi-cancer integration, outweighs architectural novelty as the key driver of robust generalization. This work presents the first demography-based cross-cancer evaluation of deep learning segmentation and highlights dataset diversity, rather than model complexity, as the foundation for clinically robust segmentation.
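For readers unfamiliar with the metric, the mean DSC values quoted above are averages of the Dice similarity coefficient, which measures volumetric overlap between a predicted and a reference mask. A minimal NumPy sketch, not the study's evaluation code:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-8) -> float:
    """Dice similarity coefficient: DSC = 2 * |pred & target| / (|pred| + |target|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)

# Toy example: two overlapping 3D masks.
a = np.zeros((8, 8, 8), dtype=bool); a[2:6, 2:6, 2:6] = True
b = np.zeros((8, 8, 8), dtype=bool); b[3:7, 3:7, 3:7] = True
print(f"DSC = {dice_coefficient(a, b):.3f}")
```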

Jaivardhan Kapoor, Jakob H. Macke, Christian F. Baumgartner

arXiv preprint · Aug 26, 2025
Simulating aging in 3D brain MRI scans can reveal disease progression patterns in neurological disorders such as Alzheimer's disease. Current deep learning-based generative models typically approach this problem by predicting future scans from a single observed scan. We instead investigate modeling brain aging via linear models in the latent space of convolutional autoencoders. Our approach, MRExtrap, is based on the observation that autoencoders trained on brain MRIs create latent spaces in which aging trajectories appear approximately linear. Future MRIs can then be predicted by age-based linear extrapolation in latent space, using an estimated latent progression rate $\boldsymbol{\beta}$. For single-scan prediction, we propose using population-averaged and subject-specific priors on linear progression rates. We also demonstrate that predictions in the presence of additional scans can be flexibly updated using Bayesian posterior sampling, providing a mechanism for subject-specific refinement. On the ADNI dataset, MRExtrap predicts aging patterns accurately and beats a GAN-based baseline for single-volume prediction of brain aging. We also demonstrate and analyze multi-scan conditioning to incorporate subject-specific progression rates. Finally, we show that the latent progression rates in MRExtrap's linear framework correlate with disease- and age-based aging patterns from previously studied structural atrophy rates. MRExtrap offers a simple and robust method for the age-based generation of 3D brain MRIs, particularly valuable in scenarios with multiple longitudinal observations.
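The core extrapolation step is simple enough to sketch. Below is a minimal illustration of age-based linear extrapolation in latent space, with a subject-specific rate estimated from two scans; the shapes and values are placeholders, and the encoder/decoder are omitted, so this is not the authors' implementation:

```python
import numpy as np

# z_t = z_t0 + beta * (t - t0), with beta a latent progression rate.
latent_dim = 128
z_t0 = np.random.randn(latent_dim)                # latent code of the first observed scan
beta_prior = np.random.randn(latent_dim) * 0.01   # population-averaged rate (hypothetical)

def extrapolate(z_t0: np.ndarray, beta: np.ndarray, age_t0: float, age_t: float) -> np.ndarray:
    """Linearly extrapolate a latent code from age_t0 to age_t."""
    return z_t0 + beta * (age_t - age_t0)

# With a second observed scan, beta can be re-estimated subject-specifically:
z_t1 = np.random.randn(latent_dim)                # latent code of a follow-up scan
age_t0, age_t1 = 70.0, 72.0
beta_subject = (z_t1 - z_t0) / (age_t1 - age_t0)

z_future = extrapolate(z_t0, beta_subject, age_t0, 75.0)  # decode(z_future) -> predicted MRI
```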

Julian Suk, Jolanda J. Wentzel, Patryk Rygiel, Joost Daemen, Daniel Rueckert, Jelmer M. Wolterink

arXiv preprint · Aug 26, 2025
Leveraging big data for patient care is promising in many medical fields such as cardiovascular health. For example, hemodynamic biomarkers like wall shear stress could be assessed from patient-specific medical images via machine learning algorithms, bypassing the need for time-intensive computational fluid simulation. However, it is extremely challenging to amass large enough datasets to effectively train such models. This data scarcity could be addressed by self-supervised pre-training and foundation models, given large datasets of geometric artery models. In the context of coronary arteries, leveraging learned representations to improve hemodynamic biomarker assessment has not yet been well studied. In this work, we address this gap by investigating whether a large dataset (8449 shapes) of geometric models of 3D blood vessels can benefit wall shear stress assessment in coronary artery models from a small-scale clinical trial (49 patients). We create a self-supervised target for the 3D blood vessels by computing the heat kernel signature, a quantity obtained via Laplacian eigenvectors that captures the intrinsic geometry of the shapes. We show how geometric representations learned from this dataset can boost segmentation of coronary arteries into regions of low, mid, and high (time-averaged) wall shear stress, even when trained on limited data.
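The heat kernel signature has a closed form in the Laplacian eigenbasis, HKS(x, t) = Σ_i exp(−λ_i t) φ_i(x)², which the following sketch computes for a toy graph Laplacian (a real mesh Laplacian would be used in practice):

```python
import numpy as np

def heat_kernel_signature(evals: np.ndarray, evecs: np.ndarray, times: np.ndarray) -> np.ndarray:
    """HKS(x, t) = sum_i exp(-lambda_i * t) * phi_i(x)^2 per vertex and time scale.

    evals: (k,) smallest Laplacian eigenvalues
    evecs: (n, k) matching eigenvectors
    times: (T,) diffusion time scales
    returns: (n, T) signature matrix
    """
    decay = np.exp(-np.outer(evals, times))   # (k, T): exp(-lambda_i * t) for each pair
    return (evecs ** 2) @ decay               # (n, T)

# Toy graph Laplacian L = D - A for a random sparse graph.
rng = np.random.default_rng(0)
n = 50
A = (rng.random((n, n)) < 0.1).astype(float)
A = np.triu(A, 1); A = A + A.T                # symmetric adjacency, no self-loops
L = np.diag(A.sum(1)) - A
evals, evecs = np.linalg.eigh(L)              # L is PSD, so eigenvalues >= 0
hks = heat_kernel_signature(evals[:16], evecs[:, :16], np.geomspace(0.1, 10.0, 8))
print(hks.shape)                              # (50, 8)
```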

Mahdieh Behjat Khatooni, Mohsen Soryani

arXiv preprint · Aug 26, 2025
Alzheimer's disease (AD) is one of the most prevalent neurodegenerative disorders worldwide. As it progresses, it leads to the deterioration of cognitive functions. Since AD is irreversible, early diagnosis is crucial for managing its progression. Mild Cognitive Impairment (MCI) represents a transitional stage between Cognitively Normal (CN) individuals and those with AD. Diagnosing MCI is particularly challenging due to the subtle differences between adjacent diagnostic categories. In this study, we propose EffNetViTLoRA, a generalized end-to-end model for AD diagnosis using the whole Alzheimer's Disease Neuroimaging Initiative (ADNI) Magnetic Resonance Imaging (MRI) dataset. Our model integrates a Convolutional Neural Network (CNN) with a Vision Transformer (ViT) to capture both local and global features from MRI images. Unlike previous studies that rely on limited subsets of data, our approach is trained on the full T1-weighted MRI dataset from ADNI, resulting in a more robust and unbiased model. This comprehensive methodology enhances the model's clinical reliability. Furthermore, fine-tuning large pretrained models often yields suboptimal results when the source and target dataset domains differ. To address this, we incorporate Low-Rank Adaptation (LoRA) to effectively adapt the pretrained ViT to our target domain. This method enables efficient knowledge transfer and reduces the risk of overfitting. Our model achieves a classification accuracy of 92.52% and an F1-score of 92.76% across the three diagnostic categories (AD, MCI, and CN) on the full ADNI dataset.
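LoRA's core mechanism, freezing the pretrained weight and learning a low-rank additive update, can be sketched in a few lines of PyTorch; the rank and scaling below are illustrative assumptions, not the paper's configuration:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Linear layer with a frozen pretrained weight plus a trainable low-rank update.

    Effective weight: W + (alpha / r) * B @ A, with A of shape (r, in) and B of (out, r).
    A generic LoRA sketch, not the EffNetViTLoRA code.
    """
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # freeze the pretrained weights
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero-init: no update at start
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(768, 768), r=8)   # e.g., a ViT attention projection
out = layer(torch.randn(4, 197, 768))          # only A and B receive gradients
```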

Haoyang Su, Jin-Yi Xiang, Shaohao Rui, Yifan Gao, Xingyu Chen, Tingxuan Yin, Xiaosong Wang, Lian-Ming Wu

arXiv preprint · Aug 26, 2025
Accurate prediction of major adverse cardiac events (MACE) remains a central challenge in cardiovascular prognosis. We present PRISM (Prompt-guided Representation Integration for Survival Modeling), a self-supervised framework that integrates visual representations from non-contrast cardiac cine magnetic resonance imaging with structured electronic health records (EHRs) for survival analysis. PRISM extracts temporally synchronized imaging features through motion-aware multi-view distillation and modulates them using medically informed textual prompts to enable fine-grained risk prediction. Across four independent clinical cohorts, PRISM consistently surpasses classical survival prediction models and state-of-the-art (SOTA) deep learning baselines under internal and external validation. Further clinical findings demonstrate that the combined imaging and EHR representations derived from PRISM provide valuable insights into cardiac risk across diverse cohorts. Three distinct imaging signatures associated with elevated MACE risk are uncovered: lateral wall dyssynchrony, inferior wall hypersensitivity, and anterior elevated focus during diastole. Prompt-guided attribution further identifies hypertension, diabetes, and smoking as the dominant contributors among clinical and physiological EHR factors.
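The abstract does not specify how the textual prompts modulate the imaging features; one common realization of such prompt-guided modulation is FiLM-style conditioning, sketched below purely as an illustration (all dimensions and names are hypothetical, not the PRISM architecture):

```python
import torch
import torch.nn as nn

class PromptModulation(nn.Module):
    """FiLM-style modulation of imaging features by a text-prompt embedding.

    A hedged illustration of prompt-guided feature conditioning; the actual
    PRISM mechanism is not described at this level of detail in the abstract.
    """
    def __init__(self, feat_dim: int, prompt_dim: int):
        super().__init__()
        self.to_scale = nn.Linear(prompt_dim, feat_dim)
        self.to_shift = nn.Linear(prompt_dim, feat_dim)

    def forward(self, img_feats: torch.Tensor, prompt_emb: torch.Tensor) -> torch.Tensor:
        gamma = self.to_scale(prompt_emb)      # per-channel scale from the prompt
        beta = self.to_shift(prompt_emb)       # per-channel shift from the prompt
        return img_feats * (1 + gamma) + beta

mod = PromptModulation(feat_dim=256, prompt_dim=512)
fused = mod(torch.randn(2, 256), torch.randn(2, 512))
```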

Junyu Yan, Feng Chen, Yuyang Xue, Yuning Du, Konstantinos Vilouras, Sotirios A. Tsaftaris, Steven McDonagh

arXiv preprint · Aug 26, 2025
Recent studies have shown that Machine Learning (ML) models can exhibit bias in real-world scenarios, posing significant challenges in ethically sensitive domains such as healthcare. Such bias can negatively affect model fairness and generalization ability, and further risks amplifying social discrimination, so there is a need to remove biases from trained models. Existing debiasing approaches often require access to the original training data and extensive model retraining; they also typically exhibit trade-offs between model fairness and discriminative performance. To address these challenges, we propose Soft-Mask Weight Fine-Tuning (SWiFT), a debiasing framework that efficiently improves fairness while preserving discriminative performance at much lower debiasing cost. Notably, SWiFT requires only a small external dataset and a few epochs of model fine-tuning. The idea behind SWiFT is to first find the relative, and yet distinct, contributions of model parameters to both bias and predictive performance. A two-step fine-tuning process then updates each parameter with a different gradient flow defined by its contribution. Extensive experiments with three bias-sensitive attributes (gender, skin tone, and age) across four dermatological and two chest X-ray datasets demonstrate that SWiFT consistently reduces model bias while achieving competitive or even superior diagnostic accuracy under common fairness and accuracy metrics, compared to the state of the art. Specifically, we demonstrate improved model generalization ability, as evidenced by superior performance on several out-of-distribution (OOD) datasets.
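The central idea, scaling each parameter's update by its estimated bias-versus-performance contribution, can be illustrated with a soft gradient mask; the contribution scores and update rule below are placeholders, since the paper defines its own:

```python
import torch

def soft_mask_update(params, grads, bias_contrib, perf_contrib, lr: float = 1e-4):
    """One fine-tuning step where each parameter's gradient is scaled by a soft mask.

    Parameters estimated to contribute more to bias than to predictive performance
    receive larger updates. A conceptual sketch only, not the SWiFT algorithm.
    """
    for p, g, b, s in zip(params, grads, bias_contrib, perf_contrib):
        mask = torch.sigmoid(b - s)   # soft mask in (0, 1): high where bias dominates
        p.data -= lr * mask * g

# Toy usage with hypothetical per-element contribution scores.
w = torch.randn(10, 10, requires_grad=True)
loss = (w ** 2).sum()
loss.backward()
soft_mask_update([w], [w.grad], [torch.rand(10, 10)], [torch.rand(10, 10)])
```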

Sarker MMK, Mishra D, Alsharid M, Hernandez-Cruz N, Ahuja R, Patey O, Papageorghiou AT, Noble JA

PubMed paper · Aug 26, 2025
Fetal echocardiography offers non-invasive, real-time imaging of the fetal heart to identify congenital heart conditions. Manual acquisition of standard heart views is time-consuming, whereas automated detection remains challenging due to high spatial similarity across anatomical views with subtle local appearance variations. To address these challenges, we introduce a very lightweight frequency-guided deep learning model named HarmonicEchoNet that can automatically detect standard heart views in a transverse sweep or freehand ultrasound scan of the fetal heart. HarmonicEchoNet uses harmonic convolution blocks (HCBs) and a harmonic spatial and channel squeeze-and-excitation (hscSE) module. The HCBs apply a Discrete Cosine Transform (DCT)-based harmonic decomposition to input features, which are then combined using learned weights. The hscSE module identifies significant regions in the spatial domain to improve feature extraction of fetal heart anatomical structures, capturing both spatial and channel-wise dependencies in an ultrasound image. The combination of these modules improves model performance relative to recent CNN-based, transformer-based, and CNN+transformer-based image classification models. We use four datasets from two private studies, PULSE (Perception Ultrasound by Learning Sonographic Experience) and CAIFE (Clinical Artificial Intelligence in Fetal Echocardiography), to develop and evaluate HarmonicEchoNet models. Experimental results show that HarmonicEchoNet is 10-15 times faster than ConvNeXt, DeiT, and VOLO, with an inference time of just 3.9 ms, and achieves a 2%-7% accuracy improvement in classifying fetal heart standard planes compared to these baselines. Furthermore, with just 19.9 million parameters compared to ConvNeXt's 196.24 million, HarmonicEchoNet is nearly ten times more parameter-efficient.
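The HCB idea, convolving with a fixed DCT filter bank and recombining the responses with learned weights, can be sketched as follows; this is a generic harmonic-convolution block consistent with the description above, not the HarmonicEchoNet code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def dct_filter_bank(k: int) -> torch.Tensor:
    """2D DCT-II basis filters of size k x k, returned with shape (k*k, 1, k, k)."""
    n = torch.arange(k).float()
    # basis_1d[u, x] = cos(pi * (x + 0.5) * u / k): row u is a frequency.
    basis_1d = torch.cos(torch.pi * (n[None, :] + 0.5) * n[:, None] / k)
    filters = torch.einsum('ux,vy->uvxy', basis_1d, basis_1d)  # all 2D products
    return filters.reshape(k * k, 1, k, k)

class HarmonicBlock(nn.Module):
    """Convolution with a fixed DCT filter bank, then a learned 1x1 recombination."""
    def __init__(self, in_ch: int, out_ch: int, k: int = 3):
        super().__init__()
        self.register_buffer('dct', dct_filter_bank(k))              # fixed, not trained
        self.mix = nn.Conv2d(in_ch * k * k, out_ch, kernel_size=1)   # learned weights
        self.k = k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        # Apply every DCT filter to every channel (depthwise-style expansion).
        y = F.conv2d(x.reshape(b * c, 1, h, w), self.dct, padding=self.k // 2)
        y = y.reshape(b, c * self.k * self.k, h, w)
        return self.mix(y)

block = HarmonicBlock(16, 32)
print(block(torch.randn(2, 16, 64, 64)).shape)  # torch.Size([2, 32, 64, 64])
```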

Mahmoud MA, Wu S, Su R, Wen Y, Liu S, Guan Y

PubMed paper · Aug 26, 2025
Lung cancer remains a leading cause of cancer-related mortality worldwide. While deep learning approaches show promise in medical imaging, comprehensive comparisons of classifier combinations with DenseNet architectures for lung cancer classification are limited. This study investigates the performance of different classifier combinations, Support Vector Machine (SVM), Artificial Neural Network (ANN), and Multi-Layer Perceptron (MLP), with DenseNet architectures for lung cancer classification using chest CT scan images. A comparative analysis was conducted on 1,000 chest CT scan images comprising Adenocarcinoma, Large Cell Carcinoma, Squamous Cell Carcinoma, and normal tissue samples. Three DenseNet variants (DenseNet-121, DenseNet-169, DenseNet-201) were combined with three classifiers: SVM, ANN, and MLP. Performance was evaluated using accuracy, Area Under the Curve (AUC), precision, recall, specificity, and F1-score with an 80-20 train-test split. The optimal model achieved 92% training accuracy and 83% test accuracy; across all models, training accuracy ranged from 81% to 92% and test accuracy from 73% to 83%. The most balanced combination demonstrated robust results (training: 85% accuracy, 0.99 AUC; test: 79% accuracy, 0.95 AUC) with minimal overfitting. Deep learning approaches effectively categorize chest CT scans for lung cancer detection, and the MLP-DenseNet-169 combination's 83% test accuracy represents a promising benchmark. Limitations include the retrospective design and a limited sample size from a single source. This evaluation demonstrates the effectiveness of combining DenseNet architectures with different classifiers for lung cancer CT classification: MLP-DenseNet-169 achieved optimal performance, while SVM-DenseNet-169 showed superior stability, providing valuable benchmarks for automated lung cancer detection systems.
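The general recipe, a pretrained DenseNet backbone as feature extractor with a shallow classifier head, looks roughly like this in PyTorch; the head design and hyperparameters are illustrative assumptions, not the paper's exact setup:

```python
import torch
import torch.nn as nn
from torchvision import models

# Sketch of an MLP head on DenseNet-169 features, mirroring the
# MLP-DenseNet-169 combination described above.
backbone = models.densenet169(weights=models.DenseNet169_Weights.DEFAULT)
backbone.classifier = nn.Identity()          # expose the 1664-d pooled features
for p in backbone.parameters():
    p.requires_grad = False                  # use the backbone as a frozen extractor

mlp_head = nn.Sequential(
    nn.Linear(1664, 256), nn.ReLU(), nn.Dropout(0.5),
    nn.Linear(256, 4),                       # adeno / large cell / squamous / normal
)

x = torch.randn(2, 3, 224, 224)              # CT slices converted to 3-channel input
with torch.no_grad():
    feats = backbone(x)
logits = mlp_head(feats)
```

An SVM head (as in SVM-DenseNet-169) would instead fit, say, scikit-learn's SVC on the extracted feature vectors.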

Wei QY, Li Y, Huang XL, Wei YC, Yang CZ, Yu YY, Liao JY

PubMed paper · Aug 26, 2025
To develop a nomogram integrating clinical and multimodal MRI features for non-invasive prediction of microsatellite instability (MSI) in endometrial cancer (EC), and to evaluate its diagnostic performance. This retrospective multicenter study included 216 EC patients (mean age, 54.68 ± 8.72 years) from two institutions (2017-2023). Patients were classified as MSI (n=59) or microsatellite stable (MSS, n=157) based on immunohistochemistry. Institution A data were randomly split into training (n=132) and testing (n=33) sets (8:2 ratio), while Institution B data (n=51) served as external validation. Eight machine learning algorithms were used to construct models, and a nomogram combining the radiomics score and clinical predictors was developed. Performance was evaluated via receiver operating characteristic (ROC) curves, calibration, and decision curve analysis (DCA). Among single sequences, the T2-weighted imaging (T2WI) radiomics model showed the highest area under the ROC curve (AUC) (training set: 0.908; test set: 0.838). The combined-sequence radiomics model achieved superior performance (AUC: training set = 0.983, test set = 0.862), and the support vector machine (SVM) outperformed the other algorithms. The nomogram integrating the rad-score and clinical features demonstrated higher predictive efficacy than the clinical model (test set: AUC = 0.904 vs. 0.654; p < 0.05) and was comparable to the multimodal radiomics model. DCA indicated significant clinical utility for both the nomogram and the radiomics models. The clinical-radiomics nomogram effectively predicts MSI status in EC, offering a non-invasive tool for guiding immunotherapy decisions.
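A nomogram is typically the graphical form of a logistic model whose coefficients map each predictor to a point scale. A minimal sketch combining a radiomics score with a clinical predictor on synthetic data (the variables and values are illustrative, not the study's):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 216
rad_score = rng.normal(size=n)               # per-patient radiomics score
age = rng.normal(55, 9, size=n)              # a clinical predictor (hypothetical)
# Synthetic MSI labels loosely driven by both predictors.
msi = (rad_score + 0.02 * (age - 55) + rng.normal(scale=0.8, size=n) > 0.5).astype(int)

X = np.column_stack([rad_score, age])
model = LogisticRegression().fit(X, msi)
auc = roc_auc_score(msi, model.predict_proba(X)[:, 1])
print(f"apparent AUC = {auc:.3f}")           # each coefficient maps to a nomogram axis
```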

Zhou L, Zheng L, Hong C, Hu Y, Wang Z, Guo X, Du Z, Feng Y, Mei J, Zhu Z, Zhao Z, Xu M, Lu C, Chen M, Ji J

PubMed paper · Aug 26, 2025
To develop and validate a novel model based on multiparametric MRI (mpMRI) and whole slide images (WSIs) for predicting microsatellite instability (MSI) status in endometrial cancer (EC) patients. A total of 136 surgically confirmed EC patients were included in this retrospective study and randomly divided into a training set (96 patients) and a validation set (40 patients) in a 7:3 ratio. Deep learning with ResNet50 was used to extract deep-learning pathomics features from the WSIs, while Pyradiomics was applied to extract radiomics features from MRI sequences including T2-weighted imaging (T2WI), diffusion-weighted imaging (DWI), and the late arterial phase (AP). We then developed a deep learning pathoradiomics model (DLPRM) using a multilayer perceptron (MLP) based on the radiomics and pathomics features. We validated the DLPRM comprehensively and compared it with two single-scale signatures using the area under the receiver operating characteristic (ROC) curve (AUC), accuracy, sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and F1-score. Finally, we employed Shapley additive explanations (SHAP) to elucidate the mechanism of the prediction model. After feature selection, a final set of nine radiomics features and 27 pathomics features was used to construct the radiomics signature (RS) and the deep learning pathomics signature (DLPS). The DLPRM combining the RS and DLPS showed favorable performance for predicting MSI status in the training set (AUC 0.960 [95% CI 0.936-0.984]) and in the validation set (AUC 0.917 [95% CI 0.824-1.000]); the AUCs of the DLPS and RS alone ranged from 0.817 to 0.943 across the training and validation sets. Decision curve analysis indicated that the DLPRM had relatively higher clinical net benefit. The DLPRM can effectively predict MSI status in EC patients from pretreatment pathoradiomics images with high accuracy and robustness, and could provide a novel tool to assist clinicians in the individualized management of EC.
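The fusion step, an MLP over the concatenated radiomics and deep pathomics features, can be sketched directly from the dimensions stated above (the hidden sizes and dropout are assumptions):

```python
import torch
import torch.nn as nn

class FusionMLP(nn.Module):
    """MLP fusing handcrafted radiomics features with deep pathomics features.

    Input dimensions follow the abstract (9 radiomics + 27 pathomics features);
    the hidden layer is an illustrative choice, not the DLPRM architecture.
    """
    def __init__(self, n_rad: int = 9, n_path: int = 27, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_rad + n_path, hidden), nn.ReLU(), nn.Dropout(0.3),
            nn.Linear(hidden, 1),              # logit for MSI vs. MSS
        )

    def forward(self, rad_feats: torch.Tensor, path_feats: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([rad_feats, path_feats], dim=1))

model = FusionMLP()
logit = model(torch.randn(4, 9), torch.randn(4, 27))
prob_msi = torch.sigmoid(logit)               # predicted probability of MSI
```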