ProtoMedX: Towards Explainable Multi-Modal Prototype Learning for Bone Health Classification

Alvaro Lopez Pellicer, Andre Mariucci, Plamen Angelov, Marwan Bukhari, Jemma G. Kerns

arXiv preprint · Sep 18, 2025
Bone health studies are crucial in medical practice for the early detection and treatment of osteopenia and osteoporosis. Clinicians usually make a diagnosis based on densitometry (DEXA scans) and patient history. Applications of AI in this field remain an active area of research. Most successful methods rely on deep learning models that use vision alone (DEXA/X-ray imagery) and focus on prediction accuracy, while explainability is often disregarded and left to post hoc assessments of input contributions. We propose ProtoMedX, a multi-modal model that uses both DEXA scans of the lumbar spine and patient records. ProtoMedX's prototype-based architecture is explainable by design, which is crucial for medical applications, especially in the context of the upcoming EU AI Act, as it allows explicit analysis of model decisions, including incorrect ones. ProtoMedX demonstrates state-of-the-art performance in bone health classification while also providing explanations that clinicians can understand visually. Using a dataset of 4,160 real NHS patients, the proposed ProtoMedX achieves 87.58% accuracy in vision-only tasks and 89.8% in its multi-modal variant, both surpassing existing published methods.
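The prototype mechanism behind this kind of explainability can be illustrated with a minimal sketch: each class is represented by a learnable prototype vector, and a prediction is the negative squared distance from the fused embedding to each prototype, so every decision points at its nearest prototype. The class names, dimensions, and layer below are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class PrototypeClassifier(nn.Module):
    """Minimal prototype-layer sketch: one learnable prototype per class.

    A prediction is the negative squared distance between the fused
    embedding and each class prototype, so every decision can be explained
    by pointing at the closest prototype. Dimensions are illustrative.
    """

    def __init__(self, embed_dim: int = 128, num_classes: int = 3):
        super().__init__()
        # e.g. classes: normal, osteopenia, osteoporosis (assumed labels)
        self.prototypes = nn.Parameter(torch.randn(num_classes, embed_dim))

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # z: (batch, embed_dim) fused image + clinical embedding
        d2 = torch.cdist(z, self.prototypes).pow(2)  # (batch, num_classes)
        return -d2  # logits: closer prototype -> higher score

# Usage: the argmax identifies which prototype drove each decision.
model = PrototypeClassifier()
z = torch.randn(4, 128)        # stand-in for fused embeddings
logits = model(z)
pred = logits.argmax(dim=1)    # predicted class = nearest prototype
```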

An Efficient Neuro-framework for Brain Tumor Classification Using a CNN-based Self-supervised Learning Approach with Genetic Optimizations.

Ravali P, Reddy PCS, Praveen P

PubMed · Sep 18, 2025
Accurate and non-invasive grading of glioma brain tumors from MRI scans is challenging due to limited labeled data and the complexity of clinical evaluation. This study aims to develop a robust and efficient deep learning framework for improved glioma classification using MRI images. A multi-stage framework is proposed, starting with SimCLR-based self-supervised learning for representation learning without labels, followed by Deep Embedded Clustering to extract and group features effectively. EfficientNet-B7 is used for initial classification due to its parameter efficiency. A weighted ensemble of EfficientNet-B7, ResNet-50, and DenseNet-121 is employed for the final classification. Hyperparameters are fine-tuned using a Differential Evolution-optimized Genetic Algorithm to enhance accuracy and training efficiency. EfficientNet-B7 achieved approximately 88-90% classification accuracy. The weighted ensemble improved this to approximately 93%. Genetic optimization further enhanced accuracy by 3-5% and reduced training time by 15%. The framework overcomes data scarcity and limited feature extraction issues in traditional CNNs. The combination of self-supervised learning, clustering, ensemble modeling, and evolutionary optimization provides improved performance and robustness, though it requires significant computational resources and further clinical validation. The proposed framework offers an accurate and scalable solution for glioma classification from MRI images. It supports faster, more reliable clinical decision-making and holds promise for real-world diagnostic applications.
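The weighted-ensemble step can be sketched independently of the backbones: each base model emits softmax class probabilities, and a weight vector blends them. The weights below are hypothetical stand-ins for values the described genetic optimization would tune.

```python
import numpy as np

def weighted_ensemble(probs_per_model: list[np.ndarray],
                      weights: np.ndarray) -> np.ndarray:
    """Blend per-model class probabilities with normalized weights.

    probs_per_model: list of (n_samples, n_classes) softmax outputs,
    e.g. from EfficientNet-B7, ResNet-50, and DenseNet-121.
    """
    w = weights / weights.sum()                # normalize the weights
    stacked = np.stack(probs_per_model)        # (n_models, n, c)
    return np.einsum("m,mnc->nc", w, stacked)  # weighted average

# Illustrative: three models, 2 samples, 4 glioma grades.
rng = np.random.default_rng(0)
probs = [rng.dirichlet(np.ones(4), size=2) for _ in range(3)]
weights = np.array([0.5, 0.3, 0.2])  # hypothetical, e.g. GA-tuned
final = weighted_ensemble(probs, weights)
print(final.argmax(axis=1))          # ensemble class predictions
```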

Machine Learning based Radiomics from Multi-parametric Magnetic Resonance Imaging for Predicting Lymph Node Metastasis in Cervical Cancer.

Liu J, Zhu M, Li L, Zang L, Luo L, Zhu F, Zhang H, Xu Q

PubMed · Sep 18, 2025
This study aimed to construct and compare multiple machine learning models to predict lymph node (LN) metastasis in cervical cancer, utilizing radiomic features extracted from preoperative multi-parametric magnetic resonance imaging (MRI). The study retrospectively enrolled 407 patients with cervical cancer who were randomly divided into a training cohort (n=284) and a validation cohort (n=123). A total of 4065 radiomic features were extracted from the tumor regions of interest on contrast-enhanced T1-weighted imaging, T2-weighted imaging, and diffusion-weighted imaging for each patient. The Mann-Whitney U test, Spearman correlation analysis, and least absolute shrinkage and selection operator (LASSO) Cox regression analysis were employed for radiomic feature selection. The relationship between MRI radiomic features and LN status was analyzed using five machine-learning algorithms. Model performance was evaluated by measuring the area under the receiver-operating characteristic curve (AUC) and accuracy (ACC). Moreover, Kaplan-Meier analysis was used to validate the prognostic value of selected clinical and radiomic characteristics. LN metastasis was pathologically detected in 24.3% (99/407) of patients. Following the three-step feature selection, 18 radiomic features were employed for model construction. The XGBoost model exhibited superior performance compared to other models, achieving an AUC, accuracy, sensitivity, specificity, and F1 score of 0.9268, 0.8969, 0.7419, 0.9891, and 0.8364, respectively, on the validation set. Additionally, Kaplan-Meier curves indicated a significant correlation between radiomic scores and progression-free survival in cervical cancer patients (p < 0.05). Among the machine learning models, XGBoost demonstrated the best predictive ability for LN metastasis and showed prognostic value through its radiomic score, highlighting its clinical potential. Machine learning-based multi-parametric MRI radiomic analysis demonstrated promising performance in the preoperative prediction of LN metastasis and clinical prognosis in cervical cancer.
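The first filtering stage of such a pipeline is straightforward to sketch: keep only features whose distributions differ between LN-positive and LN-negative patients, then fit an XGBoost classifier on the survivors. Synthetic data stands in for the radiomic matrix, and the Spearman and LASSO steps are omitted for brevity.

```python
import numpy as np
from scipy.stats import mannwhitneyu
from xgboost import XGBClassifier
from sklearn.metrics import roc_auc_score

def select_by_mwu(X: np.ndarray, y: np.ndarray, alpha: float = 0.05):
    """First filter step: keep features whose distributions differ
    between LN-positive and LN-negative patients (Mann-Whitney U)."""
    keep = [j for j in range(X.shape[1])
            if mannwhitneyu(X[y == 0, j], X[y == 1, j]).pvalue < alpha]
    return np.array(keep)

# Illustrative synthetic stand-in for the radiomic feature matrix.
rng = np.random.default_rng(1)
X = rng.normal(size=(284, 200))
y = rng.integers(0, 2, size=284)
X[y == 1, :5] += 1.0  # plant a few informative features

cols = select_by_mwu(X, y)
clf = XGBClassifier(n_estimators=200, max_depth=3, eval_metric="logloss")
clf.fit(X[:, cols], y)
print("train AUC:", roc_auc_score(y, clf.predict_proba(X[:, cols])[:, 1]))
```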

Entropy in Clinical Decision-Making: A Narrative Review Through the Lens of Decision Theory.

Rohlfsen C, Shannon K, Parsons AS

PubMed · Sep 18, 2025
Navigating uncertainty is fundamental to sound clinical decision-making. With the advent of artificial intelligence, mathematical approximations of disease states, expressed as entropy, offer a novel approach to quantify and communicate uncertainty. Although entropy is well established in fields like physics and computer science, its technical complexity has delayed its routine adoption in clinical reasoning. In this narrative review, we adhere to Shannon's definition of entropy from information processing theory and examine how it has been used in clinical decision-making over the last 15 years. Grounding our analysis in decision theory, which frames decisions in terms of states, acts, consequences, and preferences, we evaluated 20 studies that employed entropy. Our findings reveal that entropy is predominantly used to quantify uncertainty rather than directly guiding clinical actions. High-stakes fields such as oncology and radiology have led the way, using entropy to improve diagnostic accuracy and support risk assessment, while applications in neurology and hematology remain largely exploratory. Notably, no study has yet translated entropy into an operational, evidence-based decision-support framework. These results point to entropy's value as a quantitative tool in clinical reasoning, while also highlighting the need for prospective validation and the development of integrated clinical tools.
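Shannon entropy over a model's predicted distribution of disease states, the quantity most of the reviewed studies compute, is a one-line formula; a minimal sketch:

```python
import numpy as np

def shannon_entropy(p: np.ndarray, eps: float = 1e-12) -> float:
    """H(p) = -sum_i p_i * log2(p_i), in bits, for a probability vector p."""
    p = np.asarray(p, dtype=float)
    p = p / p.sum()  # defensive normalization
    return float(-(p * np.log2(p + eps)).sum())

# A confident diagnosis carries little uncertainty...
print(shannon_entropy([0.97, 0.02, 0.01]))  # ~0.22 bits
# ...while an uninformative one is maximal for 3 states (log2 3 ~ 1.585).
print(shannon_entropy([1/3, 1/3, 1/3]))     # ~1.585 bits
```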

Fusion of X-Ray Images and Clinical Data for a Multimodal Deep Learning Prediction Model of Osteoporosis: Algorithm Development and Validation Study.

Tang J, Yin X, Lai J, Luo K, Wu D

PubMed · Sep 18, 2025
Osteoporosis is a bone disease characterized by reduced bone mineral density and mass, which increase the risk of fragility fractures. Artificial intelligence can mine imaging features specific to different bone densities, shapes, and structures, and fuse them with other multimodal features for synergistic diagnosis to improve prediction accuracy. This study aims to develop a multimodal model that fuses chest X-rays and clinical parameters for opportunistic screening of osteoporosis and to compare the experimental results with existing methods. We used multimodal data, comprising chest X-ray images and clinical data, from a total of 1780 patients at Chongqing Daping Hospital from January 2019 to August 2024. We adopted a probability fusion strategy to construct a multimodal model. In our model, we used a convolutional neural network as the backbone network for image processing and fine-tuned it using transfer learning to suit the specific task of this study. In addition, we introduced a gradient-based wavelet feature extraction method and combined it with an attention mechanism to assist in feature fusion, which enhanced the model's focus on key regions of the image and further improved its ability to extract image features. The multimodal model proposed in this paper outperforms traditional methods on all 4 evaluation metrics: area under the curve (AUC), accuracy, sensitivity, and specificity. Compared with the X-ray-only image model, the multimodal model significantly improved the AUC from 0.951 to 0.975 (P=.004), the accuracy from 89.32% to 92.36% (P=.045), the sensitivity from 89.82% to 91.23% (P=.03), and the specificity from 88.64% to 93.92% (P=.008). While the multimodal model that fuses chest X-ray images and clinical data demonstrated superior performance compared with unimodal models and traditional methods, this study has several limitations. The dataset size may not be sufficient to capture the full diversity of the population, the retrospective design may introduce selection bias, and the lack of external validation limits the generalizability of the findings. Future studies should address these limitations by incorporating larger, more diverse datasets and conducting rigorous external validation to further establish the model's clinical utility.
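The probability-fusion strategy the authors describe is a form of late fusion: the image branch and the clinical branch each output an osteoporosis probability, which are then combined. A minimal sketch with a hypothetical mixing weight (in practice such a weight would be tuned on a validation split):

```python
import numpy as np

def probability_fusion(p_image: np.ndarray, p_clinical: np.ndarray,
                       w_image: float = 0.6) -> np.ndarray:
    """Late fusion of per-branch osteoporosis probabilities.

    w_image is a hypothetical mixing weight, not the paper's value.
    """
    return w_image * p_image + (1.0 - w_image) * p_clinical

p_img = np.array([0.82, 0.15, 0.55])     # CNN on chest X-rays
p_cli = np.array([0.74, 0.30, 0.40])     # model on clinical parameters
print(probability_fusion(p_img, p_cli))  # fused risk per patient
```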

Deep Learning Integration of Endoscopic Ultrasound Features and Serum Data Reveals <i>LTB4</i> as a Diagnostic and Therapeutic Target in ESCC.

Huo S, Zhang W, Wang Y, Qi J, Wang Y, Bai C

PubMed · Sep 18, 2025
Background: Early diagnosis and accurate prediction of treatment response in esophageal squamous cell carcinoma (ESCC) remain major clinical challenges due to the lack of reliable and noninvasive biomarkers. Recently, artificial intelligence-driven endoscopic ultrasound image analysis has shown great promise in revealing genomic features associated with imaging phenotypes. Methods: A prospective study of 115 patients with ESCC was conducted. Deep features were extracted from endoscopic ultrasound using a ResNet50 convolutional neural network. Important features shared across three machine learning models (NN, GLM, DT) were used to construct an image-derived signature. Plasma levels of leukotriene B4 (LTB4) and other inflammatory markers were measured using enzyme-linked immunosorbent assay. Correlations between the signature and inflammation markers were analyzed, followed by logistic regression and subgroup analyses. Results: The endoscopic ultrasound image-derived signature, generated using deep learning algorithms, effectively distinguished esophageal cancer from normal esophageal tissue. Among all inflammatory markers, LTB4 exhibited the strongest negative correlation with the image signature and showed significantly higher expression in the healthy control group. Multivariate logistic regression analysis identified LTB4 as an independent risk factor for ESCC (odds ratio = 1.74, p = 0.037). Furthermore, LTB4 expression was significantly associated with patient sex, age, and chemotherapy response. Notably, higher LTB4 levels were linked to an increased likelihood of achieving a favorable therapeutic response. Conclusions: This study demonstrates that deep learning-derived endoscopic ultrasound image features can effectively distinguish ESCC from normal esophageal tissue. By integrating image features with serological data, the authors identified LTB4 as a key inflammation-related biomarker with significant diagnostic and therapeutic predictive value.
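Extracting "deep features" from a ResNet50, as the Methods describe, usually means dropping the classification head and keeping the 2048-dimensional pooled activations; a minimal torchvision sketch, with a random tensor standing in for a preprocessed endoscopic-ultrasound frame:

```python
import torch
from torchvision.models import resnet50, ResNet50_Weights

# Pretrained ResNet50 with the classification head removed: the
# 2048-dim pooled activations serve as "deep features" per image.
weights = ResNet50_Weights.DEFAULT
backbone = resnet50(weights=weights)
backbone.fc = torch.nn.Identity()  # keep the 2048-d pooled features
backbone.eval()

preprocess = weights.transforms()  # resize/normalize as ResNet expects

with torch.no_grad():
    # Stand-in for an endoscopic-ultrasound frame in [0, 1].
    frame = torch.rand(3, 224, 224)
    feats = backbone(preprocess(frame).unsqueeze(0))
print(feats.shape)                 # torch.Size([1, 2048])
```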

NeuroRAD-FM: A Foundation Model for Neuro-Oncology with Distributionally Robust Training

Moinak Bhattacharya, Angelica P. Kurtz, Fabio M. Iwamoto, Prateek Prasanna, Gagandeep Singh

arXiv preprint · Sep 18, 2025
Neuro-oncology poses unique challenges for machine learning due to heterogeneous data and tumor complexity, limiting the ability of foundation models (FMs) to generalize across cohorts. Existing FMs also perform poorly in predicting uncommon molecular markers, which are essential for treatment response and risk stratification. To address these gaps, we developed a neuro-oncology specific FM with a distributionally robust loss function, enabling accurate estimation of tumor phenotypes while maintaining cross-institution generalization. We pretrained self-supervised backbones (BYOL, DINO, MAE, MoCo) on multi-institutional brain tumor MRI and applied distributionally robust optimization (DRO) to mitigate site and class imbalance. Downstream tasks included molecular classification of common markers (MGMT, IDH1, 1p/19q, EGFR), uncommon alterations (ATRX, TP53, CDKN2A/2B, TERT), continuous markers (Ki-67, TP53), and overall survival prediction in IDH1 wild-type glioblastoma at UCSF, UPenn, and CUIMC. Our method improved molecular prediction and reduced site-specific embedding differences. At CUIMC, mean balanced accuracy rose from 0.744 to 0.785 and AUC from 0.656 to 0.676, with the largest gains for underrepresented endpoints (CDKN2A/2B accuracy 0.86 to 0.92, AUC 0.73 to 0.92; ATRX AUC 0.69 to 0.82; Ki-67 accuracy 0.60 to 0.69). For survival, c-index improved at all sites: CUIMC 0.592 to 0.597, UPenn 0.647 to 0.672, UCSF 0.600 to 0.627. Grad-CAM highlighted tumor and peri-tumoral regions, confirming interpretability. Overall, coupling FMs with DRO yields more site-invariant representations, improves prediction of common and uncommon markers, and enhances survival discrimination, underscoring the need for prospective validation and integration of longitudinal and interventional signals to advance precision neuro-oncology.
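The DRO component can be sketched with the common group-DRO update: compute the loss per site, exponentially upweight the worst-performing sites, and minimize the reweighted sum. The exact DRO variant used in the paper is not specified here, so treat this as an illustration of the general technique; eta and the group count are arbitrary.

```python
import torch
import torch.nn.functional as F

def group_dro_loss(logits, labels, group_ids, q, eta=0.01):
    """One group-DRO step: exponentially upweight high-loss groups
    (here, acquisition sites) instead of averaging losses uniformly."""
    losses = F.cross_entropy(logits, labels, reduction="none")
    group_losses = torch.zeros(q.numel())
    for g in range(q.numel()):
        mask = group_ids == g
        if mask.any():
            group_losses[g] = losses[mask].mean()
    # Update the group weights without backpropagating through them.
    q = q * torch.exp(eta * group_losses.detach())
    q = q / q.sum()
    return (q * group_losses).sum(), q

# Illustrative: a batch of 8 samples from 3 sites, 4 molecular classes.
logits = torch.randn(8, 4, requires_grad=True)
labels = torch.randint(0, 4, (8,))
sites = torch.randint(0, 3, (8,))
q = torch.ones(3) / 3  # start with uniform site weights
loss, q = group_dro_loss(logits, labels, sites, q)
loss.backward()        # gradients flow to the model parameters
```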

Automating classification of treatment responses to combined targeted therapy and immunotherapy in HCC.

Quan B, Dai M, Zhang P, Chen S, Cai J, Shao Y, Xu P, Li P, Yu L

PubMed · Sep 17, 2025
Tyrosine kinase inhibitors (TKIs) combined with immunotherapy regimens are now widely used for treating advanced hepatocellular carcinoma (HCC), but their clinical efficacy is limited to a subset of patients. Because the vast majority of advanced HCC patients lose the opportunity for liver resection and thus cannot provide tumor tissue samples, we leveraged clinical and imaging data to construct a multimodal convolutional neural network (CNN)-Transformer model for predicting and analyzing tumor response to TKI-immunotherapy. An automatic liver tumor segmentation system, based on a two-stage 3D U-Net framework, delineates lesions by first segmenting the liver parenchyma and then precisely localizing the tumor. This approach effectively addresses the variability in clinical data and significantly reduces bias introduced by manual intervention. We developed a clinical model using only pre-treatment clinical information, a CNN model using only pre-treatment magnetic resonance imaging data, and a multimodal CNN-Transformer model that fused imaging and clinical parameters, using a training cohort (n = 181), and then validated them on an independent cohort (n = 30). In the validation cohort, the area under the curve (95% confidence interval) values were 0.720 (0.710-0.731), 0.695 (0.683-0.707), and 0.785 (0.760-0.810), respectively, indicating that the multimodal model significantly outperformed the single-modality baselines. Finally, single-cell sequencing of the surgical tumor specimens revealed tumor ecosystem diversity associated with treatment response, providing preliminary biological validation for the prediction model. In summary, this multimodal model effectively integrates imaging and clinical features of HCC patients, delivers superior performance in predicting tumor response to TKI-immunotherapy, and provides a reliable tool for optimizing personalized treatment strategies.
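One plausible shape for such a CNN-Transformer fusion, offered only as a hedged sketch since the paper's exact architecture is not reproduced here, projects CNN image features and clinical parameters into a shared token space and lets self-attention mix them before a response-prediction head. All dimensions are illustrative.

```python
import torch
import torch.nn as nn

class FusionTransformer(nn.Module):
    """Sketch of fusing CNN image features with clinical parameters:
    both are projected to a shared token space and mixed by
    self-attention before a response-prediction head."""

    def __init__(self, img_dim=512, clin_dim=16, d_model=64):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, d_model)
        self.clin_proj = nn.Linear(clin_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, 2)  # responder vs non-responder

    def forward(self, img_feat, clin_feat):
        tokens = torch.stack([self.img_proj(img_feat),
                              self.clin_proj(clin_feat)], dim=1)
        fused = self.encoder(tokens).mean(dim=1)  # pool the two tokens
        return self.head(fused)

model = FusionTransformer()
logits = model(torch.randn(4, 512), torch.randn(4, 16))  # (4, 2)
```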

Robust and explainable framework to address data scarcity in diagnostic imaging.

Zhao Z, Alzubaidi L, Zhang J, Duan Y, Naseem U, Gu Y

PubMed · Sep 17, 2025
Deep learning has significantly advanced automatic medical diagnostics, easing pressure on clinical staff, yet the persistent challenge of data scarcity in this area hampers further improvement and application. To address this gap, we introduce a novel ensemble framework called the 'Efficient Transfer and Self-supervised Learning based Ensemble Framework' (ETSEF). ETSEF leverages features from multiple pre-trained deep learning models to efficiently learn powerful representations from a limited number of data samples. To the best of our knowledge, ETSEF is the first strategy that combines two pre-training methodologies (transfer learning and self-supervised learning) with ensemble learning approaches. Various data enhancement techniques, including data augmentation, feature fusion, feature selection, and decision fusion, are also deployed to maximise the efficiency and robustness of the ETSEF model. Five independent medical imaging tasks, covering endoscopy, breast cancer detection, monkeypox detection, brain tumour detection, and glaucoma detection, were used to demonstrate ETSEF's effectiveness and robustness. Despite limited sample numbers and challenging medical tasks, ETSEF improved diagnostic accuracy by up to 13.3% compared with strong ensemble baseline models and by up to 14.4% compared with recent state-of-the-art methods. Moreover, we demonstrate the robustness and trustworthiness of the ETSEF method through various vision-explainable artificial intelligence techniques, including Grad-CAM, SHAP, and t-SNE. Compared with large-scale deep learning models, ETSEF can be flexibly deployed and maintains superior performance on challenging medical imaging tasks, demonstrating potential for application in areas lacking training data. The code is available at GitHub ETSEF.
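Of the fusion levels listed, decision fusion is the simplest to sketch: each base learner (e.g., a transfer-learning or self-supervised branch) votes a class label, and the majority wins. This is a generic majority-vote illustration, not ETSEF's specific rule.

```python
import numpy as np

def decision_fusion(pred_sets: list[np.ndarray]) -> np.ndarray:
    """Majority-vote decision fusion across base learners, one of the
    fusion levels ensemble frameworks like ETSEF employ (feature fusion
    and soft probability fusion being others)."""
    votes = np.stack(pred_sets)  # (n_models, n_samples)
    n_classes = votes.max() + 1
    counts = np.apply_along_axis(
        lambda v: np.bincount(v, minlength=n_classes), 0, votes)
    return counts.argmax(axis=0)

# Illustrative: 3 base learners, 5 samples, 3 classes.
preds = [np.array([0, 1, 1, 2, 0]),
         np.array([0, 1, 2, 2, 0]),
         np.array([1, 1, 2, 2, 0])]
print(decision_fusion(preds))  # [0 1 2 2 0]
```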

Decision Strategies in AI-Based Ensemble Models in Opportunistic Alzheimer's Detection from Structural MRI.

Hammonds SK, Eftestøl T, Kurz KD, Fernandez-Quilez A

PubMed · Sep 17, 2025
Alzheimer's disease (AD) is a neurodegenerative condition and the most common form of dementia. Recent developments in AD treatment call for robust diagnostic tools to facilitate medical decision-making. Despite progress on early diagnostic tests, uncertainty remains about their clinical use. Structural magnetic resonance imaging (MRI), a readily available imaging tool in the current AD diagnostic pathway, combined with artificial intelligence, offers opportunities for added value beyond symptomatic evaluation. However, MRI studies in AD tend to suffer from small datasets and consequently limited generalizability. Although ensemble models combine the strengths of several models to improve performance and generalizability, little is known about how different ensemble strategies compare in performance, or about the relationship between detection performance and model calibration; the latter is especially relevant for clinical translatability. In our study, we applied three ensemble decision strategies with three different deep learning architectures for multi-class AD detection with structural MRI. For two of the three architectures, the weighted average was the best decision strategy in terms of balanced accuracy and calibration error. In contrast to the base models, the results of the ensemble models showed that the best detection performance corresponded to the lowest calibration error, independent of the architecture. For each architecture, the best ensemble model reduced the estimated calibration error relative to the base-model average from (1) 0.174±0.01 to 0.164±0.04, (2) 0.182±0.02 to 0.141±0.04, and (3) 0.269±0.08 to 0.240±0.04, and increased the balanced accuracy from (1) 0.527±0.05 to 0.608±0.06, (2) 0.417±0.03 to 0.456±0.04, and (3) 0.348±0.02 to 0.371±0.03.
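The calibration metric such comparisons typically rely on is the binned expected calibration error (ECE): bin predictions by confidence and average the |accuracy - confidence| gaps, weighted by bin occupancy. The paper's exact estimator is not reproduced here; this is the standard binned form.

```python
import numpy as np

def expected_calibration_error(probs: np.ndarray, labels: np.ndarray,
                               n_bins: int = 10) -> float:
    """Binned ECE: occupancy-weighted mean |accuracy - confidence|
    over equal-width confidence bins."""
    conf = probs.max(axis=1)
    pred = probs.argmax(axis=1)
    correct = (pred == labels).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean()
                                     - conf[mask].mean())
    return ece

rng = np.random.default_rng(2)
probs = rng.dirichlet(np.ones(3), size=500)  # 3-class AD staging stand-in
labels = rng.integers(0, 3, size=500)
print(expected_calibration_error(probs, labels))
```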