Page 64 of 2372364 results

In-hoc Concept Representations to Regularise Deep Learning in Medical Imaging

Valentina Corbetta, Floris Six Dijkstra, Regina Beets-Tan, Hoel Kervadec, Kristoffer Wickstrøm, Wilson Silva

arxiv logopreprintAug 19 2025
Deep learning models in medical imaging often achieve strong in-distribution performance but struggle to generalise under distribution shifts, frequently relying on spurious correlations instead of clinically meaningful features. We introduce LCRReg, a novel regularisation approach that leverages Latent Concept Representations (LCRs) (e.g., Concept Activation Vectors (CAVs)) to guide models toward semantically grounded representations. LCRReg requires no concept labels in the main training set and instead uses a small auxiliary dataset to synthesise high-quality, disentangled concept examples. We extract LCRs for predefined relevant features and incorporate a regularisation term that guides a Convolutional Neural Network (CNN) to activate within latent subspaces associated with those concepts. We evaluate LCRReg across synthetic and real-world medical tasks. On a controlled toy dataset, it significantly improves robustness to injected spurious correlations and remains effective even in multi-concept and multiclass settings. On the diabetic retinopathy binary classification task, LCRReg enhances performance under both synthetic spurious perturbations and out-of-distribution (OOD) generalisation. Compared to baselines, including multitask learning, linear probing, and post-hoc concept-based models, LCRReg offers a lightweight, architecture-agnostic strategy for improving model robustness without requiring dense concept supervision. Code is available at: https://github.com/Trustworthy-AI-UU-NKI/lcr_regularization
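The core idea is compact: estimate a concept direction in latent space from a small auxiliary set of concept/non-concept activations, then penalise activations that fail to align with it. The sketch below is illustrative, not the authors' implementation: the mean-difference direction is a simplified stand-in for classifier-derived CAVs, and both function names are assumptions.

```python
import numpy as np

def concept_activation_vector(pos_acts, neg_acts):
    """Estimate a concept direction in latent space.

    CAVs are normally the normal vector of a linear classifier separating
    concept examples from counterexamples; as a lightweight proxy, use the
    normalised difference of class means (an assumption for illustration).
    """
    direction = pos_acts.mean(axis=0) - neg_acts.mean(axis=0)
    return direction / np.linalg.norm(direction)

def lcr_penalty(acts, cav):
    """Regularisation term: 1 minus the mean cosine alignment of latent
    activations with the concept direction (0 when perfectly aligned)."""
    norms = np.linalg.norm(acts, axis=1, keepdims=True)
    cos = (acts / norms) @ cav
    return 1.0 - cos.mean()
```

In training, a term like this would be added to the task loss so the CNN is nudged to activate within the concept's latent subspace.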

A study on sex prediction by using machine learning algorithms with anthropometric measurements of the seventh cervical vertebra.

Cetin Unlu E, Oner Z, Oner S, Turan MK, Bakici RS

pubmed logopapersAug 19 2025
Sex prediction is an important topic in forensic medicine and forensic anthropology. The pelvis and cranium are the bones most often used for sex prediction; in cases where they are difficult to examine, the vertebrae have been investigated instead. The aim of this study was to predict sex using Computed Tomography (CT) images of the vertebra prominens (C7), and additionally to perform automatic measurements using landmark labelling on C7. This retrospective study included images of 100 female and 100 male individuals (aged 20-50 years). CT images were made orthogonal in all planes on a personal workstation (Horos Project, Version 3.0) and transferred to the Sekazu program in DICOM format. Landmark labels defined on C7 were placed on the images by a radiologist and an anatomist according to their coordinates; automatic measurements and calculations were then performed in the program. Automating the measurements optimised the study by eliminating intra-observer and inter-observer measurement errors. Sixteen length and 3 angle parameters were analysed using machine learning (ML) algorithms. The accuracy rates in sex prediction were as follows: Ada Boost Classification 87-91%, Decision Tree 85-92%, Extra Trees Classifier 87-93%, Gradient Boosting Model 85-91%, Gaussian Naive Bayes 87-91%, Gaussian Process Classifier 81-91%, K-nearest Neighbour Regression 84-93%, Linear Discriminant Analysis 88-94%, Linear Support Vector Classification 88-92%, Non-Linear Support Vector Classification 83-93%, Quadratic Discriminant Analysis 87-90%, Random Forest 83-92%, Support Vector Machines 84-92%. In this study, sex could be predicted with up to 94% accuracy from the parameters of the vertebra prominens, an atypical vertebra, using ML algorithms. We can therefore say that the vertebra prominens also shows sexual dimorphism.
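Linear Discriminant Analysis achieved the top accuracy among the classifiers tested. A from-scratch two-class Fisher LDA is compact enough to sketch; the code below is a generic illustration on synthetic data, not the study's pipeline, and the function names are assumptions.

```python
import numpy as np

def fit_lda(X, y):
    """Two-class Fisher LDA: project onto w = Sw^{-1}(m1 - m0), where Sw is
    the pooled within-class scatter, and threshold at the midpoint of the
    projected class means."""
    X0, X1 = X[y == 0], X[y == 1]
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
    w = np.linalg.solve(Sw, m1 - m0)
    thresh = 0.5 * (X0 @ w).mean() + 0.5 * (X1 @ w).mean()
    return w, thresh

def predict_lda(X, w, thresh):
    """Assign class 1 to points projecting above the threshold."""
    return (X @ w > thresh).astype(int)
```

With 16 length and 3 angle parameters as columns of X and sex as y, the same two calls would reproduce the kind of linear decision rule evaluated here.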

Biomarker extraction-based Alzheimer's disease stage detection using optimized deep learning approach.

Sampath R, Baskar M

pubmed logopapersAug 19 2025
Background: Cognitive decline and memory loss in Alzheimer's disease (AD) progress over time. Early diagnosis is crucial for initiating treatment that can slow progression and preserve daily functioning. However, challenges such as overfitting in prediction models, underutilized biomarker features, and noisy imaging data hinder the accuracy of current detection methods. Objective: This study proposes a novel deep learning-based framework aimed at improving the identification of AD stages while addressing the limitations of existing diagnostic techniques. Methods: Structural MRI scans are employed as the primary diagnostic tool. To enhance image quality, contrast-limited adaptive histogram equalization and wavelet soft thresholding are applied for noise reduction. Biomarker segmentation focuses on ventricular and hippocampal abnormalities, optimized using a firefly algorithm. Dimensionality reduction is performed via Linear Discriminant Analysis to minimize overfitting. Finally, a Deep Belief Network optimized with the Cuckoo Search algorithm is employed for classification and feature learning. Results: The proposed framework demonstrates improved performance over existing methods, achieving a 0.66% increase in accuracy and a 0.0345% decrease in error rate for AD stage detection. Conclusions: This deep learning strategy shows promise as an effective tool for early and accurate AD stage identification. Enhanced segmentation, dimensionality reduction, and classification contribute to its improved performance, offering a meaningful advancement in AD diagnostics.
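Of the preprocessing steps, wavelet soft thresholding is the easiest to illustrate: transform the signal, shrink the detail coefficients toward zero, then invert. Below is a minimal one-level 1-D Haar version as a sketch; the paper's pipeline presumably uses a 2-D multilevel transform, and the function names are assumptions.

```python
import numpy as np

def soft_threshold(coeffs, t):
    """Soft thresholding: shrink coefficients toward zero by t,
    zeroing those with magnitude below t."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)

def haar_denoise(x, t):
    """One-level Haar transform of an even-length signal, soft-threshold
    the detail band (where noise concentrates), then invert."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
    d = soft_threshold(d, t)
    out = np.empty_like(x, dtype=float)
    out[0::2] = (a + d) / np.sqrt(2)
    out[1::2] = (a - d) / np.sqrt(2)
    return out
```

With t = 0 the round trip is the identity; larger t smooths small pairwise fluctuations while preserving coarse structure.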

Improving risk stratification of PI-RADS 3 + 1 lesions of the peripheral zone: expert lexicon of terms, multi-reader performance and contribution of artificial intelligence.

A Glemser P, Netzer N, H Ziener C, Wilhelm M, Hielscher T, Sun Zhang K, Görtz M, Schütz V, Stenzinger A, Hohenfellner M, Schlemmer HP, Bonekamp D

pubmed logopapersAug 19 2025
According to PI-RADS v2.1, peripheral zone PI-RADS 3 lesions are upgraded to PI-RADS 4 if dynamic contrast-enhanced (DCE) MRI is positive (3+1 lesions); however, these lesions are radiologically challenging. We aimed to define criteria by expert consensus, test their applicability by other radiologists for prediction of clinically significant prostate cancer (sPC) in PI-RADS 3+1 lesions, and determine their value in integrated regression models. From consecutive 3 Tesla MR examinations performed between 08/2016 and 12/2018, we identified 85 MRI examinations from 83 patients with a total of 94 PI-RADS 3+1 lesions in the official clinical report. Lesions were retrospectively assessed by expert consensus, with construction of a newly devised feature catalogue that was subsequently used by two additional radiologists specialized in prostate MRI for independent lesion assessment. With reference to extended fused targeted and systematic TRUS/MRI-biopsy histopathological correlation, relevant catalogue features were identified by univariate analysis and set in context with typically available clinical features and automated AI image assessment using lasso-penalized logistic regression models (feature-based, bi- and multiparametric AI-enhanced, and solely bi- and multiparametric AI-driven), with a particular focus on the contribution of DCE imaging. The feature catalogue enabled image-based lesional risk stratification for all readers. Expert consensus provided 3 significant features in univariate analysis (adj. p-value <0.05; most relevant feature, T2w configuration "irregular/microlobulated/spiculated": OR 9.0 (95%CI 2.3-44.3); adj. p-value 0.016). These remained after lasso-penalized feature reduction, while the only selected clinical feature was prostate volume (OR<1), enabling nomogram construction. While DCE-derived consensus features did not enhance model performance (bootstrapped AUC), there was a trend toward increased performance when multiparametric AI, but not biparametric AI, was included in the models, both for combined and AI-only models. PI-RADS 3+1 lesions can be risk-stratified using lexicon terms and a key-feature nomogram. AI potentially benefits more from DCE imaging than experienced prostate radiologists do.
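Lasso-penalized logistic regression, used here to shrink the feature catalogue, can be sketched with proximal gradient descent in plain numpy. This is an illustration of the technique on synthetic data, not the study's code; parameter values are assumptions.

```python
import numpy as np

def lasso_logistic(X, y, lam=0.1, lr=0.1, iters=2000):
    """Lasso-penalised logistic regression via proximal gradient descent:
    a gradient step on the logistic loss, then soft-thresholding, which
    drives uninformative coefficients exactly to zero (feature selection)."""
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(iters):
        grad = X.T @ (1.0 / (1.0 + np.exp(-(X @ w))) - y) / n
        w = w - lr * grad
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)  # prox of L1
    return w
```

On data with one informative column and one pure-noise column, the noise coefficient is shrunk to (essentially) zero while the informative one survives, mirroring how the catalogue features were reduced.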

Multimodal imaging deep learning model for predicting extraprostatic extension in prostate cancer using mpMRI and 18F-PSMA-PET/CT.

Yao F, Lin H, Xue YN, Zhuang YD, Bian SY, Zhang YY, Yang YJ, Pan KH

pubmed logopapersAug 19 2025
This study aimed to construct a multimodal imaging deep learning (DL) model integrating mpMRI and 18F-PSMA-PET/CT for the prediction of extraprostatic extension (EPE) in prostate cancer, and to assess its effectiveness in enhancing the diagnostic accuracy of radiologists. Clinical and imaging data were retrospectively collected from patients with pathologically confirmed prostate cancer (PCa) who underwent radical prostatectomy (RP). Data were collected from a primary institution (Center 1, n = 197) between January 2019 and June 2022 and an external institution (Center 2, n = 36) between July 2021 and November 2022. A multimodal DL model incorporating mpMRI and 18F-PSMA-PET/CT was developed to support radiologists in assessing EPE using the EPE-grade scoring system. The predictive performance of the DL model was compared with that of single-modality models, as well as with radiologist assessments with and without model assistance. Clinical net benefit of the model was also assessed. For patients in Center 1, the area under the curve (AUC) for predicting EPE was 0.76 (0.72-0.80), 0.77 (0.70-0.82), and 0.82 (0.78-0.87) for the mpMRI-based DL model, the PET/CT-based DL model, and the combined mpMRI + PET/CT multimodal DL model, respectively. In the external test set (Center 2), the AUCs for these models were 0.75 (0.60-0.88), 0.77 (0.72-0.88), and 0.81 (0.63-0.97), respectively. The multimodal DL model demonstrated superior predictive accuracy compared to single-modality models in both internal and external validations. The DL-assisted EPE-grade scoring model significantly improved AUC and sensitivity compared to radiologist EPE-grade scoring alone (P < 0.05), with a modest reduction in specificity, and provided greater clinical net benefit than the EPE-grade score used by radiologists alone.
The multimodal imaging deep learning model integrating mpMRI and 18F-PSMA-PET/CT demonstrates promising predictive performance for EPE in prostate cancer and enhances the accuracy of radiologists in EPE assessment. The model holds potential as a supportive tool for more individualized and precise therapeutic decision-making.
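AUC, the headline metric in this and the surrounding studies, has a simple probabilistic reading: the chance that a randomly chosen positive case receives a higher score than a randomly chosen negative one (ties counted as half). A small generic numpy implementation, not tied to this study's data:

```python
import numpy as np

def auc_score(y_true, scores):
    """AUC via the Mann-Whitney U formulation: the fraction of
    positive/negative pairs where the positive outscores the negative,
    with ties counted as 0.5."""
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    diff = pos[:, None] - neg[None, :]          # all pairwise comparisons
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()
```

This pairwise view also explains why AUC is insensitive to the choice of decision threshold, unlike sensitivity and specificity.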

Lung adenocarcinoma subtype classification based on contrastive learning model with multimodal integration.

Wang C, Liu L, Fan C, Zhang Y, Mai Z, Li L, Liu Z, Tian Y, Hu J, Elazab A

pubmed logopapersAug 19 2025
Accurately identifying the stages of lung adenocarcinoma is essential for selecting the most appropriate treatment plan. Nonetheless, this task is complicated by challenges such as integrating diverse data, similarities among subtypes, and the need to capture contextual features, making precise differentiation difficult. We address these challenges and propose a multimodal deep neural network that integrates computed tomography (CT) images, annotated lesion bounding boxes, and electronic health records. Our model first combines bounding boxes carrying precise lesion location data with CT scans, generating a richer semantic representation through feature extraction from regions of interest, and enhances localization accuracy using a vision transformer module. Beyond imaging data, the model also incorporates clinical information encoded with a fully connected encoder. Features extracted from the CT and clinical data are optimized for cosine similarity using a contrastive language-image pre-training module, ensuring they are cohesively integrated. In addition, we introduce an attention-based feature fusion module that further fuses the features from the different modalities into a unified representation. This integrated feature set is then fed into a classifier that effectively distinguishes among the three types of adenocarcinoma. Finally, we employ focal loss to mitigate the effects of unbalanced classes and a contrastive learning loss to enhance feature representation and improve the model's performance. Our experiments on public and proprietary datasets demonstrate the efficiency of our model, achieving a superior validation accuracy of 81.42% and an area under the curve of 0.9120. These results significantly outperform recent multimodal classification approaches. The code is available at https://github.com/fancccc/LungCancerDC.
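The focal loss used to handle class imbalance is a standard modification of cross-entropy: it multiplies the per-example loss by (1 - p_t)^gamma so confident, easy examples contribute almost nothing. A minimal binary form in numpy (the paper's multiclass setting would index per-class probabilities instead; gamma and alpha values are the conventional defaults, assumed here):

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss: cross-entropy down-weighted by (1 - p_t)^gamma,
    where p_t is the probability assigned to the true class, so training
    focuses on hard and minority-class examples."""
    p_t = np.where(y == 1, p, 1.0 - p)
    alpha_t = np.where(y == 1, alpha, 1.0 - alpha)
    return -(alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)).mean()
```

With gamma = 2, an example classified at p = 0.99 contributes roughly four orders of magnitude less loss than one at p = 0.3, which is the intended rebalancing effect.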

Emerging modalities for neuroprognostication in neonatal encephalopathy: harnessing the potential of artificial intelligence.

Chawla V, Cizmeci MN, Sullivan KM, Gritz EC, Q Cardona V, Menkiti O, Natarajan G, Rao R, McAdams RM, Dizon ML

pubmed logopapersAug 19 2025
Neonatal encephalopathy (NE) from presumed hypoxic-ischemic encephalopathy (pHIE) is a leading cause of morbidity and mortality in infants worldwide. Recent advancements in HIE research have introduced promising tools to improve screening of high-risk infants, shorten time to diagnosis, and increase the accuracy of neurologic injury assessment to guide management and predict outcomes, some of which integrate artificial intelligence (AI) and machine learning (ML). This review begins with an overview of AI/ML before examining emerging prognostic approaches for predicting outcomes in pHIE. It explores various modalities, including placental and fetal biomarkers, gene expression, electroencephalography, brain magnetic resonance imaging and other advanced neuroimaging techniques, clinical video assessment tools, and transcranial magnetic stimulation paired with electromyography. Each of these approaches may come to play a crucial role in predicting outcomes in pHIE. We also discuss the application of AI/ML to enhance these emerging prognostic tools. While further validation is needed for widespread clinical adoption, these tools and their multimodal integration hold the potential to better leverage the neuroplasticity windows of affected infants. IMPACT: This article provides an overview of placental pathology, biomarkers, gene expression, electroencephalography, motor assessments, brain imaging, and transcranial magnetic stimulation tools for long-term neurodevelopmental outcome prediction following neonatal encephalopathy that lend themselves to augmentation by artificial intelligence/machine learning (AI/ML). Emerging AI/ML tools may create opportunities for enhanced prognostication through multimodal analyses.

Effect of Data Augmentation on Conformal Prediction for Diabetic Retinopathy

Rizwan Ahamed, Annahita Amireskandari, Joel Palko, Carol Laxson, Binod Bhattarai, Prashnna Gyawali

arxiv logopreprintAug 19 2025
The clinical deployment of deep learning models for high-stakes tasks such as diabetic retinopathy (DR) grading requires demonstrable reliability. While models achieve high accuracy, their clinical utility is limited by a lack of robust uncertainty quantification. Conformal prediction (CP) offers a distribution-free framework to generate prediction sets with statistical guarantees of coverage. However, the interaction between standard training practices like data augmentation and the validity of these guarantees is not well understood. In this study, we systematically investigate how different data augmentation strategies affect the performance of conformal predictors for DR grading. Using the DDR dataset, we evaluate two backbone architectures -- ResNet-50 and a Co-Scale Conv-Attentional Transformer (CoaT) -- trained under five augmentation regimes: no augmentation, standard geometric transforms, CLAHE, Mixup, and CutMix. We analyze the downstream effects on conformal metrics, including empirical coverage, average prediction set size, and correct efficiency. Our results demonstrate that sample-mixing strategies like Mixup and CutMix not only improve predictive accuracy but also yield more reliable and efficient uncertainty estimates. Conversely, methods like CLAHE can negatively impact model certainty. These findings highlight the need to co-design augmentation strategies with downstream uncertainty quantification in mind to build genuinely trustworthy AI systems for medical imaging.
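Split conformal prediction itself is only a few lines: score each calibration example by 1 minus the probability assigned to its true class, take a finite-sample-corrected quantile of those scores, and include in each test prediction set every class scoring below that threshold. A minimal sketch of the standard recipe (not the paper's exact code; the function name is an assumption):

```python
import numpy as np

def conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split conformal prediction with the 1 - p(true class) score:
    calibrate a threshold qhat on held-out data, then include every class
    whose score falls below it, giving ~(1 - alpha) marginal coverage."""
    n = len(cal_labels)
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    q = np.ceil((n + 1) * (1.0 - alpha)) / n      # finite-sample correction
    qhat = np.quantile(scores, min(q, 1.0), method="higher")
    return (1.0 - test_probs) <= qhat             # boolean class-membership mask
```

The study's conformal metrics map directly onto this output: empirical coverage is how often the mask includes the true class, and average set size is the mean row sum of the mask.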

A state-of-the-art new method for diagnosing atrial septal defects with origami technique augmented dataset and a column-based statistical feature extractor.

Yaman I, Kilic I, Yaman O, Poyraz F, Erdem Kaya E, Ozgur Baris V, Ciris S

pubmed logopapersAug 19 2025
Early, highly accurate diagnosis of atrial septal defects (ASDs) from chest X-ray (CXR) images is vital. This study created a dataset from chest X-ray images obtained from different adult subjects. To diagnose atrial septal defects with very high accuracy, the Origami paper-folding technique, applied to our dataset for the first time in the literature, was used for data augmentation, yielding two augmented data sets. The mean, standard deviation, median, variance, and skewness statistical values were computed column-wise on the images in these data sets. These features were classified with a support vector machine (SVM), and the results were compared against k-nearest neighbours (k-NN) and decision tree classifiers. The SVM classification of the Origami-augmented data sets achieved state-of-the-art accuracy (99.69%). Our study thus demonstrates a clear advantage over deep learning-based artificial intelligence methods.
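The column-based extractor reduces each image to five statistics per column, concatenated into one feature vector for the classifier. A plain-numpy sketch of that step (the function name and the exact skewness convention are assumptions):

```python
import numpy as np

def column_features(img):
    """Column-wise statistical feature extractor: mean, standard deviation,
    median, variance, and Fisher skewness per image column, concatenated
    into a single feature vector of length 5 * n_columns."""
    mean = img.mean(axis=0)
    std = img.std(axis=0)
    med = np.median(img, axis=0)
    var = img.var(axis=0)
    centered = img - mean
    skew = (centered ** 3).mean(axis=0) / np.where(std > 0, std, 1.0) ** 3
    return np.concatenate([mean, std, med, var, skew])
```

Each CXR thus becomes a fixed-length vector regardless of image height, which is what makes a classical SVM applicable downstream.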

Longitudinal CE-MRI-based Siamese network with machine learning to predict tumor response in HCC after DEB-TACE.

Wei N, Mathy RM, Chang DH, Mayer P, Liermann J, Springfeld C, Dill MT, Longerich T, Lurje G, Kauczor HU, Wielpütz MO, Öcal O

pubmed logopapersAug 19 2025
Accurate prediction of tumor response after drug-eluting beads transarterial chemoembolization (DEB-TACE) remains challenging in hepatocellular carcinoma (HCC), given tumor heterogeneity and dynamic changes over time. Existing prediction models based on single-timepoint imaging do not capture dynamic treatment-induced changes. This study aims to develop and validate a predictive model that integrates deep learning and machine learning algorithms on longitudinal contrast-enhanced MRI (CE-MRI) to predict treatment response in HCC patients undergoing DEB-TACE. This retrospective study included 202 HCC patients treated with DEB-TACE from 2004 to 2023, divided into a training cohort (n = 141) and a validation cohort (n = 61). Radiomics and deep learning features were extracted from standardized longitudinal CE-MRI to capture dynamic tumor changes. Feature selection involved correlation analysis, minimum redundancy maximum relevance, and least absolute shrinkage and selection operator (LASSO) regression. The patients were categorized into two groups: the objective response group (n = 123, 60.9%; complete response = 35, 28.5%; partial response = 88, 71.5%) and the non-response group (n = 79, 39.1%; stable disease = 62, 78.5%; progressive disease = 17, 21.5%). Predictive models were constructed using radiomics, deep learning, and integrated features, and evaluated by the area under the receiver operating characteristic curve (AUC). We retrospectively evaluated 202 patients (62.67 ± 9.25 years old) with HCC treated with DEB-TACE. A total of 7,182 radiomics features and 4,096 deep learning features were extracted from the longitudinal CE-MRI images.
The integrated model was developed using 13 quantitative radiomics features and 4 deep learning features and demonstrated robust performance, with an AUC of 0.941 (95%CI: 0.893-0.989) in the training cohort and an AUC of 0.925 (95%CI: 0.850-0.998), with accuracy of 86.9%, sensitivity of 83.7%, and specificity of 94.4%, in the validation set. This study presents a predictive model based on longitudinal CE-MRI data to estimate tumor response to DEB-TACE in HCC patients. By capturing tumor dynamics and integrating radiomics features with deep learning features, the model has the potential to guide individualized treatment strategies and inform clinical decision-making regarding patient management. The online version contains supplementary material available at 10.1186/s40644-025-00926-5.
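Of the three feature-selection stages named here, the correlation-analysis step is the simplest to sketch: keep a feature only if it is not too correlated with any feature already kept. The greedy order and threshold below are assumptions for illustration; mRMR and LASSO would follow on the surviving columns.

```python
import numpy as np

def drop_redundant(X, threshold=0.9):
    """Correlation-based redundancy filter: scan features left to right and
    greedily drop any whose absolute Pearson correlation with an
    already-kept feature exceeds the threshold. Returns kept column indices."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    keep = []
    for j in range(X.shape[1]):
        if all(corr[j, k] <= threshold for k in keep):
            keep.append(j)
    return keep
```

On a feature matrix where one column is an exact multiple of another, the duplicate is dropped while independent columns survive, which is the intended pruning of the 7,182 + 4,096 raw features down to a tractable set.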