Page 167 of 2442432 results

Towards Classifying Histopathological Microscope Images as Time Series Data

Sungrae Hong, Hyeongmin Park, Youngsin Ko, Sol Lee, Bryan Wong, Mun Yong Yi

arXiv preprint · Jun 19, 2025
As the frontline data for cancer diagnosis, microscopic pathology images are fundamental to providing patients with rapid and accurate treatment. However, despite their practical value, the deep learning community has largely overlooked their usage. This paper proposes a novel approach that classifies microscopy images as time series data, addressing the unique challenges posed by their manual acquisition and weakly labeled nature. The proposed method fits image sequences of varying lengths to a fixed-length target by leveraging Dynamic Time Warping (DTW), and attention-based pooling is employed to predict the class of the case. We demonstrate the effectiveness of our approach by comparing performance with various baselines and showcasing the benefits of various inference strategies in achieving stable and reliable results. Ablation studies further validate the contribution of each component. Our approach contributes to medical image analysis by not only embracing microscopic images but also lifting them to a trustworthy level of performance.
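The core mechanic above — warping an image sequence of arbitrary length onto a fixed-length target with DTW — can be sketched in plain NumPy. This is a minimal, generic illustration and not the authors' implementation; in particular, using a uniformly resampled copy of the sequence as the fixed-length reference is an assumption for the sketch.

```python
import numpy as np

def dtw_path(x, y):
    """Classic DTW between two sequences of feature vectors; returns the optimal warping path."""
    n, m = len(x), len(y)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(x[i - 1] - y[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    # Backtrack from (n, m) to (0, 0) to recover the alignment.
    path, i, j = [], n, m
    while i > 0 or j > 0:
        path.append((i - 1, j - 1))
        steps = [(i - 1, j - 1), (i - 1, j), (i, j - 1)]
        i, j = min(steps, key=lambda ij: cost[ij[0], ij[1]])
    return path[::-1]

def warp_to_length(seq, target_len):
    """Fit a variable-length sequence to a fixed length by DTW-aligning it
    to a uniformly resampled reference and averaging co-aligned frames."""
    idx = np.linspace(0, len(seq) - 1, target_len).round().astype(int)
    ref = seq[idx]
    out = np.zeros((target_len, seq.shape[1]))
    counts = np.zeros(target_len)
    for i, j in dtw_path(seq, ref):
        out[j] += seq[i]
        counts[j] += 1
    return out / counts[:, None]
```

After this normalization every case yields a fixed-length sequence of frame embeddings, which a pooling layer can aggregate into a single case-level prediction.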

Machine learning-based MRI radiomics predict IL18 expression and overall survival of low-grade glioma patients.

Zhang Z, Xiao Y, Liu J, Xiao F, Zeng J, Zhu H, Tu W, Guo H

PubMed · Jun 19, 2025
Interleukin-18 (IL18) has broad immune regulatory functions. Genomic data and contrast-enhanced Magnetic Resonance Imaging data for low-grade glioma (LGG) patients were downloaded from The Cancer Genome Atlas and The Cancer Imaging Archive (TCIA), and the constructed model was externally validated using hospital contrast-enhanced MRI images and clinicopathological features. Radiomic feature extraction was performed with "PyRadiomics", feature selection was conducted using the Maximum Relevance Minimum Redundancy and Recursive Feature Elimination methods, and a model was built with the Gradient Boosting Machine algorithm to predict IL18 expression status. The radiomics model achieved areas under the receiver operating characteristic curve of 0.861, 0.788, and 0.762 in the TCIA training dataset (n = 98), TCIA validation dataset (n = 41), and external validation dataset (n = 50), respectively. Calibration curves and decision curve analysis demonstrated the model's calibration and high clinical utility. The radiomics model based on enhanced MRI can effectively predict IL18 expression status and the prognosis of LGG patients.
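The two-stage selection pipeline described above (relevance filter, then recursive elimination, then a gradient boosting classifier scored by AUC) can be approximated with scikit-learn. This is a hedged sketch on synthetic data: mutual information stands in for the mRMR criterion, which has no stock scikit-learn implementation, and all sample and feature counts are illustrative.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import RFE, SelectKBest, mutual_info_classif
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a radiomic feature matrix (patients x features).
X, y = make_classification(n_samples=189, n_features=100, n_informative=10,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)

# Stage 1: relevance filter (mutual information as a stand-in for mRMR).
filt = SelectKBest(mutual_info_classif, k=30).fit(X_tr, y_tr)

# Stage 2: recursive feature elimination wrapped around the final classifier.
rfe = RFE(GradientBoostingClassifier(random_state=0), n_features_to_select=10)
rfe.fit(filt.transform(X_tr), y_tr)

# Score the held-out set with the area under the ROC curve, as in the abstract.
auc = roc_auc_score(y_te, rfe.predict_proba(filt.transform(X_te))[:, 1])
```

The design point worth noting is that RFE wraps the same estimator family used for the final model, so features are ranked by the importances the deployed classifier actually uses.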

PMFF-Net: A deep learning-based image classification model for UIP, NSIP, and OP.

Xu MW, Zhang ZH, Wang X, Li CT, Yang HY, Liao ZH, Zhang JQ

PubMed · Jun 19, 2025
High-resolution computed tomography (HRCT) is helpful for diagnosing interstitial lung diseases (ILD), but interpretation largely depends on the physician's experience. This study aims to develop a deep-learning-based classification model that differentiates the three common types of ILD, providing a reference to help physicians make the diagnosis and improve diagnostic accuracy. Patients were selected from four tertiary Grade A hospitals in Kunming according to inclusion and exclusion criteria, and HRCT scans of 130 patients were included. The imaging manifestations were usual interstitial pneumonia (UIP), non-specific interstitial pneumonia (NSIP), and organizing pneumonia (OP). Additionally, 50 chest HRCT cases without imaging abnormalities during the same period were selected. A dataset was constructed, and the Parallel Multi-scale Feature Fusion Network (PMFF-Net) deep learning model was trained, validated, and tested, with Python used to generate performance data and charts. The model's accuracy, precision, recall, and F1-score were assessed, and its diagnostic efficacy was compared against that of physicians across hospital levels, seniority, and departments. The PMFF-Net model is capable of classifying UIP, NSIP, and OP imaging types as well as normal imaging. In a mere 105 s, it diagnosed 18 HRCT images with an accuracy of 92.84%, precision of 91.88%, recall of 91.95%, and an F1-score of 0.9171. The diagnostic accuracy of senior radiologists (83.33%) and pulmonologists (77.77%) from tertiary hospitals was higher than that of internists from secondary hospitals (33.33%). Meanwhile, the diagnostic accuracy of middle-aged radiologists (61.11%) and pulmonologists (66.66%) in tertiary hospitals was higher than that of junior radiologists (38.88%) and pulmonologists (44.44%), whereas junior and middle-aged internists at secondary hospitals were unable to complete the tests. This study found that the PMFF-Net model can effectively classify UIP, NSIP, and OP imaging types as well as normal imaging, helping doctors at different hospital levels and departments make clinical decisions quickly and effectively.
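The four headline metrics reported for PMFF-Net (accuracy, precision, recall, and F1 over the UIP/NSIP/OP/normal classes) are standard scikit-learn computations. A small illustration with made-up predictions, not the study's data:

```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

classes = ["UIP", "NSIP", "OP", "normal"]
# Illustrative predictions for six HRCT cases (indices into `classes`).
y_true = [0, 0, 1, 1, 2, 3]
y_pred = [0, 1, 1, 1, 2, 3]

acc = accuracy_score(y_true, y_pred)
# Macro averaging weighs each class equally, regardless of class size.
prec, rec, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="macro", zero_division=0)
```

With class-imbalanced medical datasets, the choice of `average` matters: macro averaging (used here) treats rare and common ILD patterns alike, while micro averaging would let the majority class dominate.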

A fusion-based deep-learning algorithm predicts PDAC metastasis based on primary tumour CT images: a multinational study.

Xue N, Sabroso-Lasa S, Merino X, Munzo-Beltran M, Schuurmans M, Olano M, Estudillo L, Ledesma-Carbayo MJ, Liu J, Fan R, Hermans JJ, van Eijck C, Malats N

PubMed · Jun 19, 2025
Diagnosing the presence of metastasis in pancreatic cancer is pivotal for patient management and treatment, with contrast-enhanced CT scans (CECT) as the cornerstone of diagnostic evaluation. However, this diagnostic modality requires a multifaceted approach. We aimed to develop a convolutional neural network (CNN)-based model (PMPD, Pancreatic cancer Metastasis Prediction Deep-learning algorithm) to predict the presence of metastases based on CECT images of the primary tumour. CECT images in the portal venous phase of 335 patients with pancreatic ductal adenocarcinoma (PDAC) from the PanGenEU study and The First Affiliated Hospital of Zhengzhou University (ZZU) were randomly divided into training and internal validation sets by applying fivefold cross-validation. Two independent external validation datasets, 143 patients from the Radboud University Medical Center (RUMC) included in the PANCAIM study (RUMC-PANCAIM) and 183 patients from the PREOPANC trial of the Dutch Pancreatic Cancer Group (PREOPANC-DPCG), were used to evaluate the results. The area under the receiver operating characteristic curve (AUROC) for the internally tested model was 0.895 (0.853-0.937) and 0.779 (0.741-0.817) in the PanGenEU and ZZU sets, respectively. In the external validation sets, the mean AUROC was 0.806 (0.787-0.826) for RUMC-PANCAIM and 0.761 (0.717-0.804) for PREOPANC-DPCG. When stratified by metastasis site, the PMPD model achieved average AUROCs of 0.901-0.927 in the PanGenEU, 0.782-0.807 in the ZZU, and 0.761-0.820 in the PREOPANC-DPCG sets. A PMPD-derived Metastasis Risk Score (MRS) (HR: 2.77, 95% CI 1.99 to 3.86, p=1.59e-09) outperformed the Resectability status from the National Comprehensive Cancer Network guideline and the CA19-9 biomarker in predicting overall survival. The MRS could also potentially predict metastasis development (AUROC: 0.716 within 3 months, 0.645 within 6 months).
This study represents a pioneering utilisation of a high-performance deep-learning model to predict extrapancreatic organ metastasis in patients with PDAC.
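The evaluation protocol above — fivefold cross-validation on the development cohort, scored by AUROC — can be sketched generically. Here logistic regression on synthetic features stands in for the PMPD CNN, and all sizes are illustrative:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold

# Synthetic stand-in for per-patient tumour descriptors and metastasis labels.
X, y = make_classification(n_samples=335, n_features=64, n_informative=8,
                           random_state=0)

aucs = []
for tr, va in StratifiedKFold(5, shuffle=True, random_state=0).split(X, y):
    # Refit from scratch on each training fold; score the held-out fold.
    clf = LogisticRegression(max_iter=2000).fit(X[tr], y[tr])
    aucs.append(roc_auc_score(y[va], clf.predict_proba(X[va])[:, 1]))

mean_auc = float(np.mean(aucs))
```

Stratified folds keep the metastasis prevalence roughly constant across splits, which stabilizes AUROC estimates when the positive class is the minority.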

Applying a multi-task and multi-instance framework to predict axillary lymph node metastases in breast cancer.

Li Y, Chen Z, Ding Z, Mei D, Liu Z, Wang J, Tang K, Yi W, Xu Y, Liang Y, Cheng Y

PubMed · Jun 18, 2025
Deep learning (DL) models have shown promise in predicting axillary lymph node (ALN) status. However, most existing DL models are classification-only and do not consider the practical application scenario of multi-view joint prediction. Here, we propose a Multi-Task Learning (MTL) and Multi-Instance Learning (MIL) framework that simulates the real-world clinical diagnostic scenario for ALN status prediction in breast cancer. Ultrasound images of the primary tumor and ALN (if available) regions were collected, each annotated with a segmentation label. The model was trained on a training cohort and tested on both internal and external test cohorts. The proposed two-stage DL framework, using the Transformer model Segformer as its network backbone, was the top performer. It achieved an AUC of 0.832, a sensitivity of 0.815, and a specificity of 0.854 in the internal test cohort. In the external cohort, this model attained an AUC of 0.918, a sensitivity of 0.851, and a specificity of 0.957. The Class Activation Mapping method demonstrated that the DL model correctly identified the characteristic areas of metastasis within the primary tumor and ALN regions. This framework may serve as an effective second reader to assist clinicians in ALN status assessment.
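In a multi-instance setup like this, per-image embeddings are typically aggregated into one bag-level representation via attention pooling. A minimal NumPy sketch of that pooling step — the projection matrices `v` and `w` would normally be learned end-to-end; here they are random placeholders:

```python
import numpy as np

def softmax(z):
    z = z - z.max()          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def attention_pool(instances, v, w):
    """Attention-based MIL pooling: score each instance embedding,
    normalize scores with softmax, and return the weighted bag embedding."""
    scores = np.tanh(instances @ v) @ w   # one scalar score per instance
    alpha = softmax(scores)               # attention weights, sum to 1
    return alpha @ instances, alpha

rng = np.random.default_rng(0)
bag = rng.normal(size=(6, 16))   # e.g. 6 ultrasound views, 16-dim embeddings
v = rng.normal(size=(16, 8))
w = rng.normal(size=8)
bag_embedding, alpha = attention_pool(bag, v, w)
```

Because the attention weights are explicit, they double as an interpretability signal: views the model leans on receive visibly larger `alpha` values, complementing activation-map explanations like those described above.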

NERO: Explainable Out-of-Distribution Detection with Neuron-level Relevance

Anju Chhetri, Jari Korhonen, Prashnna Gyawali, Binod Bhattarai

arXiv preprint · Jun 18, 2025
Ensuring reliability is paramount in deep learning, particularly within the domain of medical imaging, where diagnostic decisions often hinge on model outputs. The capacity to separate out-of-distribution (OOD) samples has proven to be a valuable indicator of a model's reliability in research. In medical imaging, this is especially critical, as identifying OOD inputs can help flag potential anomalies that might otherwise go undetected. While many OOD detection methods rely on feature or logit space representations, recent works suggest these approaches may not fully capture OOD diversity. To address this, we propose a novel OOD scoring mechanism, called NERO, that leverages neuron-level relevance at the feature layer. Specifically, we cluster neuron-level relevance for each in-distribution (ID) class to form representative centroids and introduce a relevance distance metric to quantify a new sample's deviation from these centroids, enhancing OOD separability. Additionally, we refine performance by incorporating scaled relevance in the bias term and combining feature norms. Our framework also enables explainable OOD detection. We validate its effectiveness across multiple deep learning architectures on the gastrointestinal imaging benchmarks Kvasir and GastroVision, achieving improvements over state-of-the-art OOD detection methods.

Multimodal deep learning for predicting unsuccessful recanalization in refractory large vessel occlusion.

González JD, Canals P, Rodrigo-Gisbert M, Mayol J, García-Tornel A, Ribó M

PubMed · Jun 18, 2025
This study explores a multimodal deep learning approach that integrates pre-intervention neuroimaging and clinical data to predict endovascular therapy (EVT) outcomes in acute ischemic stroke patients. Consecutive stroke patients undergoing EVT were included, among them patients with suspected Intracranial Atherosclerosis-related Large Vessel Occlusion (ICAD-LVO) and other refractory occlusions. A retrospective, single-center cohort of patients with anterior circulation LVO who underwent EVT between 2017 and 2023 was analyzed. The refractory LVO (rLVO) class comprised patients who presented any of the following: final angiographic stenosis > 50%, unsuccessful recanalization (eTICI 0-2a), or need for rescue treatments (angioplasty +/- stenting). Neuroimaging data included non-contrast CT and CTA volumes, automated vascular segmentation, and CT perfusion parameters. Clinical data included demographics, comorbidities, and stroke severity. Imaging features were encoded using convolutional neural networks and fused with clinical data using a DAFT module. Data were split 80% for training (with four-fold cross-validation) and 20% for testing. Explainability methods were used to analyze the contribution of clinical variables and regions of interest in the images. The final sample comprised 599 patients: 481 for training the model (77, 16.0% rLVO) and 118 for testing (16, 13.6% rLVO). The best model predicting rLVO from imaging alone achieved an AUC of 0.53 ± 0.02 and an F1 of 0.19 ± 0.05, while the proposed multimodal model achieved an AUC of 0.70 ± 0.02 and an F1 of 0.39 ± 0.02 in testing. Combining vascular segmentation, clinical variables, and imaging data improved prediction performance over single-source models. This approach offers an early alert to procedural complexity, potentially guiding more tailored, timely intervention strategies in the EVT workflow.

Classification of Multi-Parametric Body MRI Series Using Deep Learning

Boah Kim, Tejas Sudharshan Mathai, Kimberly Helm, Peter A. Pinto, Ronald M. Summers

arXiv preprint · Jun 18, 2025
Multi-parametric magnetic resonance imaging (mpMRI) exams contain various series types acquired with different imaging protocols. The DICOM headers of these series often carry incorrect information due to the sheer diversity of protocols and occasional technologist errors. To address this, we present a deep learning-based classification model that classifies 8 different body mpMRI series types so that radiologists can read exams efficiently. Using mpMRI data from various institutions, multiple deep learning-based classifiers (ResNet, EfficientNet, and DenseNet) are trained to classify the 8 MRI series types, and their performance is compared. The best-performing classifier is then identified, and its classification capability under different training data quantities is studied. The model is also evaluated on out-of-training-distribution datasets, and it is trained using mpMRI exams obtained from different scanners under two training strategies. Experimental results show that the DenseNet-121 model achieves the highest F1-score and accuracy, 0.966 and 0.972 respectively, over the other classification models (p < 0.05). The model shows greater than 0.95 accuracy when trained with over 729 studies, and its performance improved as the training data quantity grew. On the external DLDS and CPTAC-UCEC datasets, the model yields accuracies of 0.872 and 0.810, respectively. These results indicate that, on both internal and external datasets, the DenseNet-121 model attains high accuracy for classifying 8 body MRI series types.

Can CTA-based Machine Learning Identify Patients for Whom Successful Endovascular Stroke Therapy is Insufficient?

Jeevarajan JA, Dong Y, Ballekere A, Marioni SS, Niktabe A, Abdelkhaleq R, Sheth SA, Giancardo L

PubMed · Jun 18, 2025
Despite advances in endovascular stroke therapy (EST) devices and techniques, many patients are left with substantial disability, even when final infarct volumes (FIVs) remain small. Here, we evaluate the performance of a machine learning (ML) approach using pre-treatment CT angiography (CTA) to identify this cohort of patients, who may benefit from additional interventions. We identified consecutive large vessel occlusion (LVO) acute ischemic stroke (AIS) subjects who underwent EST with successful reperfusion in a multicenter prospective registry cohort. We included only subjects with FIV < 30 mL and recorded 90-day outcome (modified Rankin scale, mRS). A deep learning model was pre-trained and then fine-tuned to predict 90-day mRS 0-2 using pre-treatment CTA images (DSN-CTA model). The primary outcome was the predictive performance of the DSN-CTA model compared to a logistic regression model with clinical variables, measured by the area under the receiver operating characteristic curve (AUROC). The DSN-CTA model was pre-trained on 1,542 subjects and then fine-tuned and cross-validated with 48 subjects, all of whom underwent EST with TICI 2b-3 reperfusion. Of this cohort, 56.2% of subjects had 90-day mRS 3-6 despite successful EST and FIV < 30 mL. The DSN-CTA model showed significantly better performance than a model with clinical variables alone when predicting good 90-day mRS (AUROC 0.81 vs 0.492, p=0.006). The CTA-based machine learning model was able to more reliably predict unexpected poor functional outcome after successful EST and small FIV in patients with LVO AIS than standard clinical variables. ML models may identify a priori the patients in whom EST-based LVO reperfusion alone is insufficient to improve clinical outcomes.
AIS= acute ischemic stroke; AUROC= area under the receiver operating characteristic curve; DSN-CTA= DeepSymNet-v3 model; EST= endovascular stroke therapy; FIV= final infarct volume; LVO= large vessel occlusion; ML= machine learning.

Multimodal MRI Marker of Cognition Explains the Association Between Cognition and Mental Health in UK Biobank

Buianova, I., Silvestrin, M., Deng, J., Pat, N.

medRxiv preprint · Jun 18, 2025
Background: Cognitive dysfunction often co-occurs with psychopathology. Advances in neuroimaging and machine learning have produced neural indicators that predict individual differences in cognition with reasonable performance. We examined whether these neural indicators explain the relationship between cognition and mental health in the UK Biobank cohort (n > 14,000). Methods: Using machine learning, we quantified the covariation between general cognition and 133 mental health indices and derived neural indicators of cognition from 72 neuroimaging phenotypes across diffusion-weighted MRI (dwMRI), resting-state functional MRI (rsMRI), and structural MRI (sMRI). With commonality analyses, we investigated how much of the cognition-mental health covariation is captured by each neural indicator and by neural indicators combined within and across MRI modalities. Results: The predictive association between mental health and cognition reached an out-of-sample r = 0.3. Neuroimaging phenotypes captured 2.1% to 25.8% of the cognition-mental health covariation. The highest proportion of variance explained by dwMRI was attributed to the number of streamlines connecting cortical regions (19.3%), by rsMRI to functional connectivity between 55 large-scale networks (25.8%), and by sMRI to the volumetric characteristics of subcortical structures (21.8%). Combining neuroimaging phenotypes within modalities improved the explanation to 25.5% for dwMRI, 29.8% for rsMRI, and 31.6% for sMRI; combining them across all MRI modalities enhanced the explanation to 48%. Conclusions: We present an integrated approach to deriving multimodal MRI markers of cognition that can be transdiagnostically linked to psychopathology. This demonstrates that the predictive ability of neural indicators extends beyond the prediction of cognition itself, enabling us to capture the cognition-mental health covariation.
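The commonality analysis in the Methods — partitioning explained variance into unique and shared components across predictors — reduces to differences of R² values from nested regressions. A two-predictor sketch with synthetic data (the actual study spans 72 neuroimaging phenotypes, so the real partition has many more terms):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def r2(X, y):
    """In-sample R-squared of an ordinary least-squares fit."""
    return LinearRegression().fit(X, y).score(X, y)

rng = np.random.default_rng(0)
shared = rng.normal(size=300)                  # signal both markers share
A = (shared + rng.normal(size=300))[:, None]   # neural indicator 1
B = (shared + rng.normal(size=300))[:, None]   # neural indicator 2
y = shared + 0.5 * rng.normal(size=300)        # target covariation proxy

r2_A, r2_B, r2_AB = r2(A, y), r2(B, y), r2(np.hstack([A, B]), y)
unique_A = r2_AB - r2_B        # variance only A explains
unique_B = r2_AB - r2_A        # variance only B explains
common = r2_A + r2_B - r2_AB   # variance A and B explain jointly
```

Because the models are nested and fit in-sample, the unique components are non-negative by construction, while the common component can be positive (redundant predictors, as here) or negative (suppression effects).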