
MUSCLE: A New Perspective to Multi-scale Fusion for Medical Image Classification based on the Theory of Evidence.

Qiu J, Cao J, Huang Y, Zhu Z, Wang F, Lu C, Li Y, Zheng Y

PubMed · Sep 19, 2025
In the field of medical image analysis, medical image classification is one of the most fundamental and critical tasks. Current research often relies on off-the-shelf backbone networks from computer vision in the hope of achieving satisfactory classification performance on medical images. However, given characteristics of medical images such as scattered and size-varying lesions, single-scale features extracted by existing backbones often fail to support accurate classification. To this end, we propose a novel multi-scale learning paradigm, namely MUlti-SCale Learning with trusted Evidences (MUSCLE), which extracts and integrates features from different scales based on the theory of evidence to generate a more comprehensive feature representation for medical image classification. Specifically, MUSCLE first estimates the uncertainty of features extracted at different scales/stages of the classification backbone as evidence, and accordingly forms opinions about feature trustworthiness via a set of evidential deep neural networks. These per-scale opinions are then ensembled into an aggregated opinion that adaptively tunes the weights of the multi-scale features for scattered and size-varying lesions, thereby improving the network's capacity for accurate medical image classification. MUSCLE has been evaluated on five publicly available medical image datasets. The experimental results show that it not only improves the accuracy of the original backbone network but also enhances the reliability and interpretability of model decisions through the trusted evidence (https://github.com/Q4CS/MUSCLE).
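To make the fusion mechanism concrete, the sketch below (a minimal illustration, not the authors' released code) shows how per-scale classifier evidence can be turned into subjective-logic opinions and how the resulting uncertainties can weight the scales during fusion; the toy evidence values and the simple weighting rule are assumptions.

```python
# Minimal sketch (not the authors' implementation) of evidential multi-scale
# fusion in the spirit of MUSCLE: each scale's head outputs non-negative
# "evidence", which is turned into a subjective-logic opinion (belief +
# uncertainty), and the per-scale uncertainties weight the fusion.
import numpy as np

def opinion_from_evidence(evidence):
    """Evidence e_k >= 0 for K classes -> (belief b_k, uncertainty u)."""
    alpha = evidence + 1.0                 # Dirichlet parameters
    S = alpha.sum()                        # Dirichlet strength
    belief = evidence / S
    uncertainty = len(evidence) / S        # u = K / S
    return belief, uncertainty

def fuse_scales(scale_evidences):
    """Weight each scale inversely to its uncertainty and average beliefs."""
    beliefs, uncertainties = zip(*(opinion_from_evidence(e) for e in scale_evidences))
    weights = np.array([1.0 - u for u in uncertainties])
    weights = weights / weights.sum()
    fused = sum(w * b for w, b in zip(weights, beliefs))
    return fused, weights

# Toy example: 3 scales, 4 classes; scale 0 is confident, scale 2 is not.
evidences = [np.array([9.0, 0.5, 0.2, 0.1]),
             np.array([4.0, 2.0, 0.5, 0.5]),
             np.array([0.3, 0.3, 0.3, 0.3])]
fused_belief, scale_weights = fuse_scales(evidences)
print("scale weights:", scale_weights.round(3))
print("fused belief:", fused_belief.round(3))
```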

MFFC-Net: Multi-feature Fusion Deep Networks for Classifying Pulmonary Edema of a Pilot Study by Using Lung Ultrasound Image with Texture Analysis and Transfer Learning Technique.

Bui NT, Luoma CE, Zhang X

PubMed · Sep 19, 2025
Lung ultrasound (LUS) has been widely used by point-of-care systems in both children and adult populations to provide different clinical diagnostics. This research aims to develop an interpretable system that uses a deep fusion network to classify LUS videos/patients from features extracted with texture analysis and transfer learning techniques, assisting physicians. The pulmonary edema dataset includes 56 LUS videos and 4234 LUS frames. The COVID-BLUES dataset includes 294 LUS videos and 15,826 frames. The proposed multi-feature fusion classification network (MFFC-Net) includes the following: (1) deep features extracted from Inception-ResNet-v2 and Inception-v3, along with 9 texture features derived from the gray-level co-occurrence matrix (GLCM) and histogram of the region of interest (ROI); (2) a neural network that classifies LUS images from the fused feature input; and (3) four models (i.e., ANN, SVM, XGBoost, and kNN) used to classify COVID/non-COVID patients. The training process was evaluated with accuracy (0.9969), F1-score (0.9968), sensitivity (0.9967), specificity (0.9990), and precision (0.9970) after the fivefold cross-validation stage. ANOVA analysis of the 9 LUS image features showed a significant difference between pulmonary edema and normal lungs (p < 0.01). At the frame level, the MFFC-Net model achieved an accuracy of 100% and ROC-AUC of 1.000 against the video-level ground truth across 4 groups of LUS videos. At the patient level, on the COVID-BLUES dataset, the highest accuracy of 81.25% was achieved with the kNN model. The proposed MFFC-Net model has 125 times higher information density (ID) than Inception-ResNet-v2 and 53.2 times higher than Inception-v3.
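As a rough sketch of the feature-fusion idea (the exact feature set and classifier settings here are assumptions, not the paper's pipeline), hand-crafted GLCM and histogram statistics can be concatenated with a CNN embedding and passed to a simple classifier such as kNN:

```python
# Illustrative feature fusion: GLCM texture properties plus histogram
# statistics of a grayscale ROI concatenated with a deep feature vector,
# then classified with kNN. Feature choices and sizes are assumptions.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.neighbors import KNeighborsClassifier

def texture_features(roi_uint8):
    """GLCM properties + basic histogram statistics for one grayscale ROI."""
    glcm = graycomatrix(roi_uint8, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    props = [graycoprops(glcm, p)[0, 0]
             for p in ("contrast", "homogeneity", "energy", "correlation")]
    hist_stats = [roi_uint8.mean(), roi_uint8.std()]
    return np.array(props + hist_stats, dtype=np.float32)

def fuse(deep_features, roi_uint8):
    """Concatenate a CNN embedding with texture features (feature-level fusion)."""
    return np.concatenate([deep_features, texture_features(roi_uint8)])

# Toy data: pretend the 'deep' vectors came from Inception-style backbones.
rng = np.random.default_rng(0)
X = np.stack([fuse(rng.normal(size=64).astype(np.float32),
                   rng.integers(0, 256, size=(32, 32), dtype=np.uint8))
              for _ in range(20)])
y = rng.integers(0, 2, size=20)            # toy COVID / non-COVID labels
clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print("toy training accuracy:", clf.score(X, y))
```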

Leveraging transfer learning from Acute Lymphoblastic Leukemia (ALL) pretraining to enhance Acute Myeloid Leukemia (AML) prediction

Duraiswamy, A., Harris-Birtill, D.

medRxiv preprint · Sep 19, 2025
We overcome current limitations in Acute Myeloid Leukemia (AML) diagnosis by leveraging a transfer learning approach from Acute Lymphoblastic Leukemia (ALL) classification models, thus addressing the urgent need for more accurate and accessible AML diagnostic tools. AML has a poorer prognosis than ALL, with a 5-year relative survival rate of only 17-19% compared with ALL survival rates of up to 75%, making early and accurate detection of AML paramount. Current diagnostic methods rely heavily on manual microscopic examination and are often subjective, time-consuming, and prone to inter-observer variability. While machine learning has shown promise in cancer classification, its application to AML detection, particularly leveraging the potential of transfer learning from related cancers such as ALL, remains underexplored. A comprehensive review of state-of-the-art advancements in ALL and AML classification using deep learning algorithms is undertaken and key approaches are evaluated. The insights gained from this review inform the development of two novel machine learning pipelines designed to benchmark the effectiveness of the proposed transfer learning approaches. Five pre-trained models are fine-tuned using ALL training data (a novel approach in this context) to optimize their potential for AML classification. The result is a best-in-class (BIC) model that surpasses current state-of-the-art (SOTA) performance in AML classification, advancing the accuracy of machine learning (ML)-driven cancer diagnostics. Author summary: Acute Myeloid Leukemia (AML) is an aggressive cancer with a poor prognosis. Early and accurate diagnosis is critical, but current methods are often subjective and time-consuming. We wanted to create a more accurate diagnostic tool by applying a technique called transfer learning from a similar cancer, Acute Lymphoblastic Leukemia (ALL). Two machine learning pipelines were developed. The first trained five different models on a large AML dataset to establish a baseline. The second pipeline first trained these models on an ALL dataset to "learn" from it before fine-tuning them on the AML data. Our experiments showed that the models that underwent the transfer learning process consistently performed better than the models trained on AML data alone. The MobileNetV2 model, in particular, was the best-in-class, outperforming all other models and surpassing the best-reported metrics for AML classification in the current literature. Our research demonstrates that transfer learning can enable highly accurate AML diagnostic models. The best-in-class model could potentially be used as an AML diagnostic tool, helping clinicians make faster and more accurate diagnoses and improving patient outcomes.
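A minimal sketch of the two-stage fine-tuning strategy is given below; it uses a torchvision MobileNetV2 with ImageNet weights and random stand-in tensors for the ALL and AML datasets, and the head sizes, learning rates, and epoch counts are illustrative assumptions rather than the authors' settings.

```python
# Two-stage transfer learning sketch: fine-tune on ALL first, then swap the
# head and fine-tune on AML. Downloads ImageNet weights on first run.
import torch
import torch.nn as nn
from torchvision import models

def make_model(num_classes):
    model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)
    model.classifier[1] = nn.Linear(model.last_channel, num_classes)
    return model

def fine_tune(model, images, labels, lr=1e-4, epochs=2):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        opt.step()
    return model

# Stage 1: fine-tune on ALL data (random stand-in tensors, 2 classes assumed).
all_images, all_labels = torch.randn(8, 3, 224, 224), torch.randint(0, 2, (8,))
model = fine_tune(make_model(num_classes=2), all_images, all_labels)

# Stage 2: replace the head and fine-tune on AML data (again 2 classes assumed).
model.classifier[1] = nn.Linear(model.last_channel, 2)
aml_images, aml_labels = torch.randn(8, 3, 224, 224), torch.randint(0, 2, (8,))
model = fine_tune(model, aml_images, aml_labels, lr=1e-5)
```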

Lightweight Transfer Learning Models for Multi-Class Brain Tumor Classification: Glioma, Meningioma, Pituitary Tumors, and No Tumor MRI Screening.

Gorenshtein A, Liba T, Goren A

PubMed · Sep 19, 2025
Glioma, pituitary tumors, and meningiomas constitute the major types of primary brain tumors. The challenge in achieving a definitive diagnosis stems from the brain's complex structure, limited accessibility for precise imaging, and the resemblance between different types of tumors. An alternative and promising solution is the application of artificial intelligence (AI), specifically through deep learning models. We developed multiple lightweight deep learning models, ResNet-18 (both pretrained on ImageNet and trained from scratch), ResNet-34, ResNet-50, and a custom CNN, to classify glioma, meningioma, pituitary tumor, and no-tumor MRI scans. A dataset of 7023 images was employed, split into 5712 for training and 1311 for validation. Each model was evaluated via accuracy, area under the curve (AUC), sensitivity, specificity, and confusion matrices. We compared our models with SOTA methods such as SAlexNet and TumorGANet, highlighting computational efficiency and classification performance. The pretrained ResNet models achieved 98.5-99.2% accuracy and near-perfect validation metrics, with an overall AUC of 1.0 and average sensitivity and specificity both exceeding 97% across the four classes. In comparison, ResNet-18 trained from scratch and the custom CNN achieved 91.99% and 87.03% accuracy, respectively, with AUCs ranging from 0.94 to 1.00. Error analysis revealed moderate misclassification of meningiomas as gliomas in non-pretrained models. Learning rate optimization facilitated stable convergence, and loss metrics indicated effective generalization with minimal overfitting. Our findings confirm that a moderately sized, transfer-learned network (ResNet-18) can deliver high diagnostic accuracy and robust performance for four-class brain tumor classification. This approach aligns with the goal of providing efficient, accurate, and easily deployable AI solutions, particularly for smaller clinical centers with limited computational resources. Future studies should incorporate multi-sequence MRI and extended patient cohorts to further validate these promising results.
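The central comparison, ImageNet-pretrained ResNet-18 versus the same network trained from scratch with a four-class head, can be set up along the following lines (a sketch with assumed details, not the authors' training code):

```python
# Pretrained vs from-scratch ResNet-18 with a 4-class output
# (glioma, meningioma, pituitary, no tumor).
import torch.nn as nn
from torchvision import models

def resnet18_4class(pretrained: bool):
    weights = models.ResNet18_Weights.IMAGENET1K_V1 if pretrained else None
    model = models.resnet18(weights=weights)
    model.fc = nn.Linear(model.fc.in_features, 4)   # 4 tumor classes
    return model

pretrained_model = resnet18_4class(pretrained=True)    # transfer learning
scratch_model = resnet18_4class(pretrained=False)      # random initialisation
```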

AI-Based Algorithm to Detect Heart and Lung Disease From Acute Chest Computed Tomography Scans: Protocol for an Algorithm Development and Validation Study.

Olesen ASO, Miger K, Ørting SN, Petersen J, de Bruijne M, Boesen MP, Andersen MB, Grand J, Thune JJ, Nielsen OW

PubMed · Sep 19, 2025
Dyspnea is a common cause of hospitalization, posing diagnostic challenges among older adult patients with multimorbid conditions. Chest computed tomography (CT) scans are increasingly used in patients with dyspnea and offer superior diagnostic accuracy over chest radiographs but face limited use due to a shortage of radiologists. This study aims to develop and validate artificial intelligence (AI) algorithms to enable automatic analysis of acute CT scans and provide immediate feedback on the likelihood of pneumonia, pulmonary embolism, and cardiac decompensation. This protocol will focus on cardiac decompensation. We designed a retrospective method development and validation study. This study has been approved by the Danish National Committee on Health Research Ethics (1575037). We extracted 4672 acute chest CT scans with corresponding radiological reports from the Copenhagen University Hospital-Bispebjerg and Frederiksberg, Denmark, from 2016 to 2021. The scans will be randomly split into training (2/3) and internal validation (1/3) sets. Development of the AI algorithm involves parameter tuning and feature selection using cross validation. Internal validation uses radiological reports as the ground truth, with algorithm-specific thresholds based on true positive and negative rates of 90% or greater for heart and lung diseases. The AI models will be validated in low-dose chest CT scans from consecutive patients admitted with acute dyspnea and in coronary CT angiography scans from patients with acute coronary syndrome. As of August 2025, CT data extraction has been completed. Algorithm development, including image segmentation and natural language processing, is ongoing. However, for pulmonary congestion, the algorithm development has been completed. Internal and external validation are planned, with overall validation expected to conclude in 2025 and the final results to be available in 2026. The results are expected to enhance clinical decision-making by providing immediate, AI-driven insights from CT scans, which will be beneficial for both clinicians and patients. DERR1-10.2196/77030.
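The protocol's threshold rule can be illustrated with a small sketch: from validation scores, pick cut-offs that reach a true-positive rate of at least 90% and a true-negative rate of at least 90% for a given condition. The toy data and the exact selection logic below are assumptions for illustration.

```python
# Choose algorithm-specific probability cut-offs from an ROC curve so that
# sensitivity (TPR) or specificity (TNR) is at least the target value.
import numpy as np
from sklearn.metrics import roc_curve

def select_thresholds(y_true, y_score, target=0.90):
    fpr, tpr, thresholds = roc_curve(y_true, y_score)
    rule_out = thresholds[tpr >= target][0]          # highest cut-off with TPR >= 90%
    rule_in = thresholds[(1 - fpr) >= target][-1]    # lowest cut-off with TNR >= 90%
    return rule_out, rule_in

# Toy validation scores standing in for one condition's model outputs.
rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, size=500)
y_score = np.clip(y_true * 0.3 + rng.normal(0.4, 0.2, size=500), 0, 1)
print(select_thresholds(y_true, y_score))
```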

Enhancing the reliability of Alzheimer's disease prediction in MRI images.

Islam J, Furqon EN, Farady I, Alex JSR, Shih CT, Kuo CC, Lin CY

PubMed · Sep 19, 2025
Alzheimer's Disease (AD) diagnostic procedures employing Magnetic Resonance Imaging (MRI) analysis encounter considerable obstacles pertaining to reliability and accuracy, especially when deep learning models are utilized within clinical environments. Present deep learning methodologies for MRI-based AD detection frequently exhibit spatial dependencies and lack robust validation mechanisms. Existing validation techniques inadequately integrate anatomical knowledge and struggle with feature interpretability across a range of imaging conditions. To address this fundamental gap, we introduce a reverse validation paradigm that systematically repositions anatomical structures to test whether models recognize features based on anatomical characteristics rather than spatial memorization. Our research rectifies these shortcomings through three methodologies: Feature Position Invariance (FPI) for the validation of anatomical features, biomarker location augmentation aimed at enhancing spatial learning, and High-Confidence Cohort (HCC) selection for the reliable identification of training samples. The FPI methodology leverages a reverse validation approach to substantiate model predictions through the reconstruction of anatomical features, bolstered by our extensive data augmentation strategy and a confidence-based sample selection technique. Applying this framework with YOLO and MobileNet architectures has yielded significant advancements in both binary and three-class AD classification tasks, achieving state-of-the-art accuracy with improvements of 2-4% relative to baseline models. Additionally, our methodology generates interpretable insights through anatomy-aligned validation, establishing direct links between model decisions and neuropathological features. Our experimental findings reveal consistent performance across various anatomical presentations, signifying that the framework effectively enhances both the reliability and interpretability of AD diagnosis through MRI analysis, thereby equipping medical professionals with a more robust diagnostic support system.
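A toy sketch of the reverse-validation idea follows: an anatomical ROI is cut out and pasted elsewhere in the image, and the predictions before and after repositioning are compared. The blanking strategy, box coordinates, and toy model are assumptions, not the FPI implementation.

```python
# Reverse-validation sketch: reposition a region of interest and check whether
# the model's predicted class is unchanged (suggesting it keys on anatomy
# rather than memorised position).
import numpy as np

def reposition_roi(image, roi_box, new_top_left):
    """Move the ROI (y0, x0, y1, x1) to new_top_left, blanking its old spot."""
    y0, x0, y1, x1 = roi_box
    patch = image[y0:y1, x0:x1].copy()
    moved = image.copy()
    moved[y0:y1, x0:x1] = image.mean()                     # fill the vacated area
    ny, nx = new_top_left
    moved[ny:ny + patch.shape[0], nx:nx + patch.shape[1]] = patch
    return moved

def position_invariance_score(predict, image, roi_box, new_top_left):
    """1.0 when the predicted class is unchanged after repositioning, else 0.0."""
    original = np.argmax(predict(image))
    shifted = np.argmax(predict(reposition_roi(image, roi_box, new_top_left)))
    return float(original == shifted)

# Toy model: scores from global intensity statistics, so ROI position is ignored.
toy_predict = lambda img: np.array([img.mean(), img.std()])
image = np.random.default_rng(2).normal(size=(128, 128))
print(position_invariance_score(toy_predict, image, (20, 20, 52, 52), (80, 80)))
```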

A New Method of Modeling the Multi-stage Decision-Making Process of CRT Using Machine Learning with Uncertainty Quantification.

Larsen K, Zhao C, He Z, Keyak J, Sha Q, Paez D, Zhang X, Hung GU, Zou J, Peix A, Zhou W

PubMed · Sep 19, 2025
Current machine learning (ML) models usually attempt to utilize all available patient data to predict patient outcomes while ignoring the associated cost and time of data acquisition. The purpose of this study is to create a multi-stage ML model to predict cardiac resynchronization therapy (CRT) response in heart failure (HF) patients. The model exploits uncertainty quantification to recommend additional collection of single-photon emission computed tomography myocardial perfusion imaging (SPECT MPI) variables if baseline clinical variables and features from the electrocardiogram (ECG) are not sufficient. Two hundred eighteen patients who underwent rest-gated SPECT MPI were enrolled in this study. CRT response was defined as an increase in left ventricular ejection fraction (LVEF) > 5% at a 6 ± 1 month follow-up. The multi-stage ML model was created by combining two ensemble models: Ensemble 1 was trained with clinical variables and ECG features; Ensemble 2 included Ensemble 1 plus SPECT MPI features. Uncertainty quantification from Ensemble 1 allowed for multi-stage decision-making to determine whether the acquisition of SPECT data for a patient is necessary. The performance of the multi-stage model was compared with that of Ensemble models 1 and 2. The CRT response rate was 55.5% (n = 121); 61.0% of patients were male (n = 133), with an average age of 62.0 ± 11.8 years and LVEF of 27.7 ± 11.0. The multi-stage model performed similarly to Ensemble 2 (which utilized the additional SPECT data), with an AUC of 0.75 vs. 0.77, accuracy of 0.71 vs. 0.69, sensitivity of 0.70 vs. 0.72, and specificity of 0.72 vs. 0.65, respectively. However, the multi-stage model required SPECT MPI data for only 52.7% of the patients across all folds. By using rule-based logic stemming from uncertainty quantification, the multi-stage model was able to reduce the need for additional SPECT MPI data acquisition without significantly sacrificing performance.
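The gating logic can be sketched as follows (an assumed, simplified version of the rule-based decision, using entropy as the uncertainty measure and toy stand-ins for both ensembles and the SPECT acquisition step):

```python
# Multi-stage decision sketch: Ensemble 1 predicts from clinical + ECG data;
# if its predictive entropy is too high, SPECT features are "acquired" and
# Ensemble 2 makes the final call.
import numpy as np

def entropy(p):
    p = np.clip(p, 1e-12, 1.0)
    return float(-(p * np.log(p)).sum())

def multistage_predict(clinical_ecg, spect_loader, ensemble1, ensemble2,
                       uncertainty_threshold=0.5):
    p1 = ensemble1(clinical_ecg)
    if entropy(p1) <= uncertainty_threshold:
        return p1, False                       # confident: no SPECT acquisition
    spect = spect_loader()                     # acquire SPECT MPI only when needed
    return ensemble2(np.concatenate([clinical_ecg, spect])), True

# Toy stand-ins for the two ensembles and the SPECT acquisition step.
ensemble1 = lambda x: np.array([0.55, 0.45])            # uncertain prediction
ensemble2 = lambda x: np.array([0.15, 0.85])
probs, used_spect = multistage_predict(np.zeros(10), lambda: np.ones(5),
                                        ensemble1, ensemble2)
print(probs, "SPECT acquired:", used_spect)
```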

PneumoNet: Deep Neural Network for Advanced Pneumonia Detection.

Mahesh TR, Gupta M, Thakur A, Khan SB, Quasim MT, Almusharraf A

PubMed · Sep 19, 2025
Advancements in computational methods in medicine have brought about extensive improvement in the diagnosis of illness, with machine learning models such as convolutional neural networks (CNNs) leading the charge. This work introduces PneumoNet, a novel deep-learning model designed for accurate pneumonia detection from chest X-ray images. Pneumonia detection from chest X-ray images is one of the greatest challenges in diagnostic practice and medical imaging, requiring reliable discrimination between normal chest X-rays and pneumonia-specific findings. Contemporary methods, from classical machine learning models to early deep learning approaches, can deliver good performance but are generally hampered by accuracy, generalizability, and preprocessing issues, as well as clinical usage constraints such as high false-positive rates and poor performance across a broad spectrum of datasets. PneumoNet is proposed as a solution to these problems. It applies a CNN structure specifically designed to improve accuracy and precision in image classification, employing several convolution and pooling layers followed by fully connected dense layers for efficient extraction of intricate features from X-ray images. The innovation of this approach lies in its layer structure and training, which are optimized to substantially enhance feature extraction and classification performance. PneumoNet was trained and cross-validated on a well-curated dataset with a balanced representation of normal and pneumonia cases. Quantitative results demonstrate the model's performance, with an overall accuracy of 98% and precision of 96% for normal and 98% for pneumonia cases. The recall values for normal and pneumonia cases are 96% and 98%, respectively, highlighting the consistency of the model. Together, these measures indicate the promise of the proposed model to improve the diagnostic process, representing a substantial advance over current methods and paving the way for its application in clinical practice.
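For illustration only, a small CNN of the kind the abstract describes, stacked convolution and pooling blocks followed by dense layers and a two-way output, might look like this (layer counts and sizes are assumptions, not the published PneumoNet architecture):

```python
# Illustrative CNN for normal vs pneumonia chest X-ray classification.
import torch
import torch.nn as nn

class SmallPneumoniaCNN(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 28 * 28, 128), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(128, num_classes),
        )

    def forward(self, x):                       # x: (batch, 1, 224, 224)
        return self.classifier(self.features(x))

logits = SmallPneumoniaCNN()(torch.randn(4, 1, 224, 224))
print(logits.shape)                             # torch.Size([4, 2])
```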

Bayesian machine learning enables discovery of risk factors for hepatosplenic multimorbidity related to schistosomiasis

Zhi, Y.-C., Anguajibi, V., Oryema, J. B., Nabatte, B., Opio, C. K., Kabatereine, N. B., Chami, G. F.

medRxiv preprint · Sep 19, 2025
One in 25 deaths worldwide is related to liver disease, often involving multiple hepatosplenic conditions. Yet little is understood about the risk factors for hepatosplenic multimorbidity, especially in the context of chronic infections. We present a novel Bayesian multitask learning framework to jointly model 45 hepatosplenic conditions assessed using point-of-care B-mode ultrasound for 3155 individuals aged 5-91 years within the SchistoTrack cohort across rural Uganda, where chronic intestinal schistosomiasis is endemic. We identified distinct and shared biomedical, socioeconomic, and spatial risk factors for individual conditions and for hepatosplenic multimorbidity, and introduced methods for measuring condition dependencies as risk factors. Notably, for gastro-oesophageal varices, we discovered key risk factors of older age, lower hemoglobin concentration, and severe schistosomal liver fibrosis. Our findings provide a compendium of risk factors to inform surveillance, triage, and follow-up, while our model enables improved prediction of hepatosplenic multimorbidity and, if validated on other systems, of general multimorbidity.

Multimodal AI-driven Biomarker for Early Detection of Cancer Cachexia

Ahmed, S., Parker, N., Park, M., Davis, E. W., Jeong, D., Permuth, J. B., Schabath, M. B., Yilmaz, Y., Rasool, G.

medRxiv preprint · Sep 19, 2025
Cancer cachexia, a multifactorial metabolic syndrome characterized by severe muscle wasting and weight loss, contributes to poor outcomes across various cancer types but lacks a standardized, generalizable biomarker for early detection. We present a multimodal AI-based biomarker trained on real-world clinical, radiologic, laboratory, and unstructured clinical note data, leveraging foundation models and large language models (LLMs) to identify cachexia at the time of cancer diagnosis. Prediction accuracy improved with each added modality: 77% using clinical variables alone, 81% with added laboratory data, and 85% with structured symptom features extracted from clinical notes. Incorporating embeddings from clinical text and CT images further improved accuracy to 92%. The framework also demonstrated prognostic utility, improving survival prediction as data modalities were integrated. Designed for real-world clinical deployment, the framework accommodates missing modalities without requiring imputation or case exclusion, supporting scalability across diverse oncology settings. Unlike prior models trained on curated datasets, our approach utilizes standard-of-care clinical data, facilitating integration into oncology workflows. In contrast to fixed-threshold composite indices such as the cachexia index (CXI), the model generates patient-specific predictions, enabling adaptable, cancer-agnostic performance. To enhance clinical reliability and safety, the framework incorporates uncertainty estimation to flag low-confidence cases for expert review. This work advances a clinically applicable, scalable, and trustworthy AI-driven decision support tool for early cachexia detection and personalized oncology care.
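One way such a framework can accommodate missing modalities without imputation is late fusion over whichever modality embeddings are present; the sketch below illustrates that idea with assumed encoders and feature shapes, not the authors' model.

```python
# Missing-modality-tolerant late fusion: embed each available modality and
# average only the embeddings that exist for a given patient.
import numpy as np

def embed(modality_name, features):
    """Stand-in for modality-specific encoders (CNN, LLM embedder, etc.)."""
    rng = np.random.default_rng(abs(hash(modality_name)) % (2**32))
    projection = rng.normal(size=(len(features), 16))
    return features @ projection

def fuse_available(patient):
    """Average embeddings over whichever modalities this patient has."""
    embeddings = [embed(name, np.asarray(x, dtype=float))
                  for name, x in patient.items() if x is not None]
    return np.mean(embeddings, axis=0)

# Toy patients: patient_a is missing CT, patient_b has only clinical data.
patient_a = {"clinical": [63.0, 1.0, 22.5], "labs": [3.4, 0.9], "ct": None}
patient_b = {"clinical": [71.0, 0.0, 19.8], "labs": None, "ct": None}
print(fuse_available(patient_a).shape, fuse_available(patient_b).shape)
```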