
Enhanced Sarcopenia Detection in Nursing Home Residents Using Ultrasound Radiomics and Machine Learning.

Fu H, Luo S, Zhuo Y, Lian R, Chen X, Jiang W, Wang L, Yang M

PubMed · Aug 26, 2025
Ultrasound alone has only low-to-moderate diagnostic accuracy for sarcopenia. We aimed to investigate whether ultrasound radiomics combined with machine learning improves sarcopenia diagnostic accuracy compared with conventional ultrasound parameters among older adults in long-term care. Diagnostic accuracy study. A total of 628 residents from 15 nursing homes in China. Sarcopenia diagnosis followed AWGS 2019 criteria. Ultrasound of the thigh muscles (rectus femoris [ReF], vastus intermedius [VI], and quadriceps femoris [QF]) was performed. Conventional parameters (muscle thickness [MT], echo intensity [EI]) and radiomic features were extracted. Participants were split into training (70%) and validation (30%) sets. Conventional (MT + EI), radiomics, and integrated (MT, EI, radiomics, and basic clinical data including age, sex, and body mass index) models were built using 5 machine learning algorithms (including logistic regression [LR]). Performance was assessed in the validation set using the area under the receiver operating characteristic curve (AUC), calibration, and decision curve analysis (DCA). Sarcopenia prevalence was 61.9%. The LR algorithm consistently exhibited superior performance. The diagnostic accuracy of the ultrasound radiomics models was superior to that of the models based on conventional ultrasound parameters, regardless of muscle group. In the validation set, the AUCs (95% CIs) for the conventional ultrasound models were 0.70 (0.63-0.78) for ReF, 0.73 (0.65-0.80) for VI, and 0.75 (0.68-0.82) for QF; the corresponding AUCs for the radiomics models were 0.76 (0.69-0.83) for ReF, 0.76 (0.69-0.83) for VI, and 0.78 (0.71-0.85) for QF. The integrated models further improved accuracy, achieving AUCs of 0.85 (0.79-0.91) for ReF, 0.81 (0.75-0.87) for VI, and 0.83 (0.77-0.90) for QF, and demonstrated good calibration and net benefit in DCA. Ultrasound radiomics, especially when integrated with conventional parameters and clinical data using LR, significantly improves sarcopenia diagnostic accuracy in nursing home residents. This accessible, noninvasive approach holds promise for enhancing sarcopenia screening and early detection in long-term care settings.
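
As a rough illustration of the winning recipe here (conventional parameters plus radiomics plus clinical covariates fed to logistic regression, with a 70/30 split and AUC evaluation), the scikit-learn sketch below uses a synthetic feature table as a placeholder; the column layout and data are assumptions, not the authors' pipeline.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Placeholder feature table: columns would be MT, EI, radiomic features,
# and clinical covariates (age, sex, BMI); labels follow AWGS 2019 criteria.
rng = np.random.default_rng(0)
X = rng.normal(size=(628, 30))
y = rng.integers(0, 2, size=628)

X_tr, X_va, y_tr, y_va = train_test_split(
    X, y, test_size=0.30, stratify=y, random_state=42)  # 70/30 split

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_tr, y_tr)
print("validation AUC:", roc_auc_score(y_va, model.predict_proba(X_va)[:, 1]))
```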

Relation knowledge distillation 3D-ResNet-based deep learning for breast cancer molecular subtypes prediction on ultrasound videos: a multicenter study.

Wu Y, Zhou L, Zhao J, Peng Y, Li X, Wang Y, Zhu S, Hou C, Du P, Ling L, Wang Y, Tian J, Sun L

PubMed · Aug 26, 2025
To develop and test a relation knowledge distillation three-dimensional residual network (RKD-R3D) model for predicting breast cancer molecular subtypes from ultrasound (US) videos to aid personalized clinical management. This multicenter study retrospectively included 882 breast cancer patients (2375 US videos and 9499 images) treated between January 2017 and December 2021, divided into training, validation, and internal test cohorts. An additional 86 patients enrolled between May 2023 and November 2023 formed the external test cohort. St. Gallen molecular subtypes (luminal A, luminal B, HER2-positive, and triple-negative) were confirmed via postoperative immunohistochemistry. RKD-R3D was developed and validated to predict the four molecular subtype classes from US videos, and its predictive performance was compared with RKD-R2D, a traditional R3D model, and preoperative core needle biopsy (CNB). The area under the receiver operating characteristic curve (AUC), sensitivity, specificity, accuracy, balanced accuracy, precision, recall, and F1-score were analyzed. RKD-R3D (AUC: 0.88, 0.95) outperformed RKD-R2D (AUC: 0.72, 0.85) and traditional R3D (AUC: 0.65, 0.79) in four-class molecular subtype prediction in the internal and external test cohorts, respectively. RKD-R3D also outperformed CNB (accuracy: 0.87 vs. 0.79) in the external test cohort, achieved good performance in distinguishing triple-negative from non-triple-negative breast cancers (AUC: 0.98), and performed satisfactorily for both T1 and non-T1 lesions (AUC: 0.96 and 0.90). Applied to US videos, RKD-R3D is a potential supplementary tool for noninvasively assessing breast cancer molecular subtypes.
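
The "relation knowledge distillation" in the model's name suggests a relational KD objective in the style of Park et al. (2019), where the student matches the teacher's pairwise embedding distances rather than its raw logits. A minimal PyTorch sketch of that distance-wise loss, assuming generic teacher/student embeddings rather than the authors' exact 3D-ResNet setup:

```python
import torch
import torch.nn.functional as F

def normalized_pdist(e):                    # e: (batch, dim) embeddings
    d = torch.cdist(e, e, p=2)              # pairwise Euclidean distances
    return d / (d[d > 0].mean() + 1e-8)     # scale by mean nonzero distance

def rkd_distance_loss(student_emb, teacher_emb):
    # The student is trained to match the teacher's *relative* geometry,
    # i.e. its normalized pairwise distance matrix, not its outputs.
    with torch.no_grad():
        t = normalized_pdist(teacher_emb)
    return F.smooth_l1_loss(normalized_pdist(student_emb), t)

s = torch.randn(8, 128, requires_grad=True)  # student embeddings (toy)
t = torch.randn(8, 256)                      # teacher embeddings (toy)
print(rkd_distance_loss(s, t))               # weighted and added to task loss
```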

HarmonicEchoNet: Leveraging harmonic convolutions for automated standard plane detection in fetal heart ultrasound videos.

Sarker MMK, Mishra D, Alsharid M, Hernandez-Cruz N, Ahuja R, Patey O, Papageorghiou AT, Noble JA

PubMed · Aug 26, 2025
Fetal echocardiography offers non-invasive, real-time acquisition of fetal heart images to identify congenital heart conditions. Manual acquisition of standard heart views is time-consuming, whereas automated detection remains challenging due to the high spatial similarity across anatomical views with subtle local variations in image appearance. To address these challenges, we introduce a very lightweight frequency-guided deep learning model named HarmonicEchoNet that can automatically detect standard heart views in a transverse sweep or freehand ultrasound scan of the fetal heart. HarmonicEchoNet uses harmonic convolution blocks (HCBs) and a harmonic spatial and channel squeeze-and-excitation (hscSE) module. The HCBs apply a discrete cosine transform (DCT)-based harmonic decomposition to input features, which are then combined using learned weights. The hscSE module identifies significant regions in the spatial domain to improve feature extraction of fetal heart anatomical structures, capturing both spatial and channel-wise dependencies in an ultrasound image. The combination of these modules improves model performance relative to recent CNN-based, transformer-based, and CNN+transformer-based image classification models. We use four datasets from two private studies, PULSE (Perception Ultrasound by Learning Sonographic Experience) and CAIFE (Clinical Artificial Intelligence in Fetal Echocardiography), to develop and evaluate HarmonicEchoNet models. Experimental results show that HarmonicEchoNet is 10-15 times faster than ConvNeXt, DeiT, and VOLO, with an inference time of just 3.9 ms. It also achieves a 2%-7% accuracy improvement in classifying fetal heart standard planes compared to these baselines. Furthermore, with just 19.9 million parameters compared to ConvNeXt's 196.24 million, HarmonicEchoNet is nearly ten times more parameter-efficient.
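
A harmonic convolution of the kind the HCBs describe can be sketched as a fixed DCT filter bank followed by learned 1x1 mixing weights. The PyTorch block below is an illustrative reconstruction under that reading, not the paper's implementation; the kernel size and depthwise layout are assumptions.

```python
import math
import torch
import torch.nn as nn

def dct_basis(k):                           # k*k DCT-II filters of size (k, k)
    n = torch.arange(k).float()
    basis = []
    for u in range(k):
        for v in range(k):
            bu = torch.cos(math.pi * (n + 0.5) * u / k)
            bv = torch.cos(math.pi * (n + 0.5) * v / k)
            basis.append(torch.outer(bu, bv))
    return torch.stack(basis)               # (k*k, k, k)

class HarmonicConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        filt = dct_basis(k).unsqueeze(1)     # (k*k, 1, k, k), fixed weights
        self.register_buffer("dct", filt.repeat(in_ch, 1, 1, 1))
        self.in_ch, self.k = in_ch, k
        self.mix = nn.Conv2d(in_ch * k * k, out_ch, kernel_size=1)  # learned

    def forward(self, x):
        # depthwise filtering with the fixed DCT bank, then learned mixing
        h = nn.functional.conv2d(x, self.dct, padding=self.k // 2,
                                 groups=self.in_ch)
        return self.mix(h)

x = torch.randn(1, 16, 64, 64)
print(HarmonicConv2d(16, 32)(x).shape)       # torch.Size([1, 32, 64, 64])
```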

Improved pulmonary embolism detection in CT pulmonary angiogram scans with hybrid vision transformers and deep learning techniques.

Abdelhamid A, El-Ghamry A, Abdelhay EH, Abo-Zahhad MM, Moustafa HE

PubMed · Aug 26, 2025
Pulmonary embolism (PE) represents a severe, life-threatening cardiovascular condition and is notably the third leading cause of cardiovascular mortality after myocardial infarction and stroke. This pathology occurs when blood clots obstruct the pulmonary arteries, impeding blood flow and oxygen exchange in the lungs. Prompt and accurate detection of PE is critical for appropriate clinical decision-making and patient survival. The complexity involved in interpreting medical images can often result in misdiagnosis. However, recent advances in deep learning (DL) have substantially improved the capabilities of computer-aided diagnosis (CAD) systems. Despite these advancements, existing single-model DL methods are limited when handling complex, diverse, and imbalanced medical imaging datasets. Addressing this gap, our research proposes an ensemble framework for classifying PE, capitalizing on the unique capabilities of ResNet50, DenseNet121, and Swin Transformer models. This ensemble method harnesses the complementary strengths of convolutional neural networks (CNNs) and vision transformers (ViTs), leading to improved prediction accuracy and model robustness. The proposed methodology includes a preprocessing pipeline leveraging autoencoder (AE)-based dimensionality reduction, data augmentation to avoid overfitting, the discrete wavelet transform (DWT) for multiscale feature extraction, and Sobel filtering for effective edge detection and noise reduction. The proposed model was rigorously evaluated on the public Radiological Society of North America (RSNA-STR) PE dataset, demonstrating 97.80% accuracy and an area under the receiver operating characteristic curve (AUROC) of 0.99. Comparative analysis demonstrated superior performance over state-of-the-art pre-trained models and recent ViT-based approaches, highlighting the method's effectiveness in improving early PE detection and providing robust support for clinical decision-making.
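
The core ensembling idea (averaging the predicted class probabilities of ResNet50, DenseNet121, and a Swin transformer) can be sketched with torchvision as below. The two-class heads, pretrained weights, and soft-voting rule are illustrative assumptions, and the AE/DWT/Sobel preprocessing is omitted.

```python
import torch
import torch.nn as nn
from torchvision import models

def two_class(m, head_attr):
    # swap the ImageNet head for a 2-class (PE / no PE) linear layer
    head = getattr(m, head_attr)
    setattr(m, head_attr, nn.Linear(head.in_features, 2))
    return m.eval()

backbones = [
    two_class(models.resnet50(weights="IMAGENET1K_V2"), "fc"),
    two_class(models.densenet121(weights="IMAGENET1K_V1"), "classifier"),
    two_class(models.swin_t(weights="IMAGENET1K_V1"), "head"),
]

@torch.no_grad()
def ensemble_predict(x):
    # soft voting: average the softmax probabilities of all members
    return torch.stack([m(x).softmax(dim=1) for m in backbones]).mean(dim=0)

x = torch.randn(4, 3, 224, 224)       # stand-in for preprocessed CTPA slices
print(ensemble_predict(x).shape)      # torch.Size([4, 2])
```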

A Novel Model for Predicting Microsatellite Instability in Endometrial Cancer: Integrating Deep Learning-Pathomics and MRI-Based Radiomics.

Zhou L, Zheng L, Hong C, Hu Y, Wang Z, Guo X, Du Z, Feng Y, Mei J, Zhu Z, Zhao Z, Xu M, Lu C, Chen M, Ji J

PubMed · Aug 26, 2025
To develop and validate a novel model based on multiparametric MRI (mpMRI) and whole-slide images (WSIs) for predicting microsatellite instability (MSI) status in endometrial cancer (EC) patients. A total of 136 surgically confirmed EC patients were included in this retrospective study and randomly divided into a training set (96 patients) and a validation set (40 patients) in a 7:3 ratio. Deep learning with ResNet50 was used to extract deep-learning pathomics features, while PyRadiomics was applied to extract radiomics features from T2-weighted imaging (T2WI), diffusion-weighted imaging (DWI), and late arterial phase (AP) sequences. We then developed a deep learning pathoradiomics model (DLPRM) using a multilayer perceptron (MLP) built on the radiomics and pathomics features. The DLPRM was validated comprehensively and compared with two single-scale signatures using the area under the receiver operating characteristic (ROC) curve, accuracy, sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and F1-score. Finally, we employed Shapley additive explanations (SHAP) to elucidate the prediction model's mechanism. After feature selection, a final set of nine radiomics features and 27 pathomics features was retained to construct the radiomics signature (RS) and the deep learning pathomics signature (DLPS). The DLPRM combining the RS and DLPS showed favorable performance for predicting MSI status in the training set (AUC 0.960 [95% CI 0.936-0.984]) and the validation set (AUC 0.917 [95% CI 0.824-1.000]). The AUCs of the DLPS and RS ranged from 0.817 to 0.943 across the training and validation sets. Decision curve analysis indicated that the DLPRM had relatively higher clinical net benefit. The DLPRM can accurately and robustly predict MSI status in EC patients from pretreatment pathoradiomics images, providing a novel tool to assist clinicians in the individualized management of EC.
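
The fusion step reduces to concatenating the nine radiomics features with the 27 deep-learning pathomics features and training an MLP. A toy scikit-learn sketch, with random arrays standing in for the real feature tables and an arbitrary hidden-layer size:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
rad = rng.normal(size=(96, 9))      # radiomics signature features (toy)
path = rng.normal(size=(96, 27))    # deep-learning pathomics features (toy)
y = rng.integers(0, 2, size=96)     # MSI status labels (toy)

X = np.hstack([rad, path])          # early fusion by concatenation
clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                                  random_state=0))
clf.fit(X, y)
print("train AUC:", roc_auc_score(y, clf.predict_proba(X)[:, 1]))
```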

Machine Learning-Driven radiomics on 18F-FDG PET for glioma diagnosis: a systematic review and meta-analysis.

Shahriari A, Ghazanafar Ahari S, Mousavi A, Sadeghi M, Abbasi M, Hosseinpour M, Mir A, Zohouri Zanganeh D, Gharedaghi H, Ezati S, Sareminia A, Seyedi D, Shokouhfar M, Darzi A, Ghaedamini A, Zamani S, Khosravi F, Asadi Anar M

PubMed · Aug 26, 2025
Machine learning (ML) applied to radiomics has revolutionized neuro-oncological imaging, yet the diagnostic performance of ML models based specifically on 18F-FDG PET features in glioma remains poorly characterized. To systematically evaluate and quantitatively synthesize the diagnostic accuracy of ML models trained on 18F-FDG PET radiomics for glioma classification. We conducted a PRISMA-compliant systematic review and meta-analysis registered on OSF (https://doi.org/10.17605/OSF.IO/XJG6P). PubMed, Scopus, and Web of Science were searched up to January 2025. Studies were included if they applied ML algorithms to 18F-FDG PET radiomic features for glioma classification and reported at least one performance metric. Data extraction included demographics, imaging protocols, feature types, ML models, and validation design. Meta-analysis was performed using random-effects models with pooled estimates of accuracy, sensitivity, specificity, AUC, F1 score, and precision. Heterogeneity was explored via meta-regression and Galbraith plots. Twelve studies comprising 2,321 patients were included. Pooled diagnostic metrics were: accuracy 92.6% (95% CI: 91.3-93.9%), AUC 0.95 (95% CI: 0.94-0.95), sensitivity 85.4%, specificity 89.7%, F1 score 0.78, and precision 0.90. Heterogeneity was high across all domains (I² > 75%). Meta-regression identified ML model type and validation strategy as partial moderators. Models using CNNs or PET/MRI integration achieved superior performance. ML models based on 18F-FDG PET radiomics demonstrate strong and balanced diagnostic performance for glioma classification. However, methodological heterogeneity underscores the need for standardized pipelines, external validation, and transparent reporting before clinical integration.
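
For readers unfamiliar with the pooling step, a DerSimonian-Laird random-effects estimate on logit-transformed proportions is one standard way to pool a metric such as accuracy. The NumPy sketch below uses toy per-study values, not the twelve included studies.

```python
import numpy as np

def dl_pool(p, n):
    """DerSimonian-Laird random-effects pooling of proportions p with sizes n."""
    y = np.log(p / (1 - p))                  # logit of each study's estimate
    v = 1 / (n * p * (1 - p))                # approximate within-study variance
    w = 1 / v                                # fixed-effect weights
    y_fixed = np.sum(w * y) / w.sum()
    q = np.sum(w * (y - y_fixed) ** 2)       # Cochran's Q
    tau2 = max(0.0, (q - (len(y) - 1)) /
               (w.sum() - np.sum(w ** 2) / w.sum()))   # between-study variance
    w_star = 1 / (v + tau2)                  # random-effects weights
    mu = np.sum(w_star * y) / w_star.sum()
    se = np.sqrt(1 / w_star.sum())
    inv = lambda t: 1 / (1 + np.exp(-t))     # back-transform to a proportion
    return inv(mu), inv(mu - 1.96 * se), inv(mu + 1.96 * se), tau2

acc = np.array([0.95, 0.90, 0.93, 0.88, 0.94])   # toy per-study accuracies
n = np.array([120, 200, 150, 180, 250])          # toy sample sizes
print(dl_pool(acc, n))                           # pooled estimate, 95% CI, tau^2
```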

Optimized deep learning for brain tumor detection: a hybrid approach with attention mechanisms and clinical explainability.

Aiya AJ, Wani N, Ramani M, Kumar A, Pant S, Kotecha K, Kulkarni A, Al-Danakh A

PubMed · Aug 26, 2025
Brain tumor classification (BTC) from magnetic resonance imaging (MRI) is a critical diagnostic task that is highly important for treatment planning. In this study, we propose a hybrid deep learning (DL) model that integrates VGG16, an attention mechanism, and optimized hyperparameters to classify brain tumors into four categories: glioma, meningioma, pituitary tumor, and no tumor. The approach leverages state-of-the-art preprocessing techniques, transfer learning, and Gradient-weighted Class Activation Mapping (Grad-CAM) visualization on a dataset of 7023 MRI images to enhance both performance and interpretability. The proposed model achieves 99% test accuracy with high precision and recall, outperforming traditional approaches such as Support Vector Machines (SVM) with Histogram of Oriented Gradients (HOG), Local Binary Patterns (LBP), and Principal Component Analysis (PCA) by a significant margin. Moreover, the model eliminates the need for manual feature engineering, a common challenge in this domain, by employing end-to-end learning that derives meaningful features directly from the images and thus reduces human input. The attention mechanism further improves feature selection and, in turn, classification accuracy, while Grad-CAM visualizations show which regions of the image had the greatest influence on each classification decision, increasing transparency in clinical settings. Overall, the combination of strong predictive performance, automatic feature extraction, and improved interpretability establishes the model as a valuable neural-network approach to brain tumor classification, with potential to enhance medical imaging (MI) workflows and clinical decision-making.
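
Grad-CAM, the interpretability method used here, weights a convolutional layer's feature maps by the spatially averaged gradients of the class score. A minimal PyTorch sketch on a stock VGG16 backbone; the four-class head and hook placement are illustrative, not the authors' code.

```python
import torch
import torch.nn as nn
from torchvision import models

model = models.vgg16(weights="IMAGENET1K_V1")
model.classifier[6] = nn.Linear(4096, 4)   # glioma/meningioma/pituitary/none
model.eval()

feats, grads = {}, {}
layer = model.features[28]                 # last conv layer of VGG16
layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
layer.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))

x = torch.randn(1, 3, 224, 224)            # stand-in for a preprocessed MRI
score = model(x)[0].max()                  # score of the top class
score.backward()

w = grads["a"].mean(dim=(2, 3), keepdim=True)        # per-channel weights
cam = torch.relu((w * feats["a"]).sum(dim=1)).squeeze()
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
print(cam.shape)                                      # 14x14 heatmap
```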

Development and evaluation of a convolutional neural network model for sex prediction using cephalometric radiographs and cranial photographs.

Handayani VW, Margareth Amiatun Ruth MS, Rulaningtyas R, Caesarardhi MR, Yudhantorro BA, Yudianto A

PubMed · Aug 25, 2025
Accurately determining sex from features such as facial bone profiles and teeth is crucial for identifying unknown victims. Lateral cephalometric radiographs effectively depict the lateral cranial structure, aiding the development of computational identification models. This study develops and evaluates a sex prediction model using cephalometric radiographs with several convolutional neural network (CNN) architectures. The primary goal is to evaluate the model's performance on standardized radiographic data and on real-world cranial photographs that simulate forensic applications. Six CNN architectures (VGG16, VGG19, MobileNetV2, ResNet50V2, InceptionV3, and InceptionResNetV2) were employed to train and validate 340 cephalometric images of Indonesian individuals aged 18 to 40 years. The data were divided into training (70%), validation (15%), and testing (15%) subsets. Data augmentation was implemented to mitigate class imbalance. Additionally, a set of 40 cranial images from anatomical specimens was employed to evaluate the model's generalizability. Model performance metrics included accuracy, precision, recall, and F1-score. The CNN models were trained and evaluated on 340 cephalometric images (255 females and 85 males). VGG19 and ResNet50V2 achieved high F1-scores of 95% (females) and 83% (males), respectively, on cephalometric data, highlighting their strong class-specific performance. Although overall accuracy exceeded 90%, the F1-score better reflected model performance on this imbalanced dataset. In contrast, performance decreased notably on cranial photographs, particularly when classifying female samples: while InceptionResNetV2 achieved the highest F1-score for cranial photographs (62%), misclassification of females remained significant. Confusion matrices and per-class metrics further revealed persistent issues with data imbalance and generalization across imaging modalities. Basic CNN models perform well on standardized cephalometric images but less effectively on photographic cranial images, indicating a domain shift between image types that limits generalizability. Improving real-world forensic performance will require further optimization and more diverse training data.
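
The emphasis on F1 over raw accuracy follows directly from the 255/85 class split. The short scikit-learn sketch below shows balanced class weights and a per-class report on toy predictions, purely to illustrate the metric choice, not the study's models.

```python
import numpy as np
from sklearn.metrics import classification_report
from sklearn.utils.class_weight import compute_class_weight

y_true = np.array([0] * 255 + [1] * 85)          # 0 = female, 1 = male
weights = compute_class_weight("balanced", classes=np.array([0, 1]), y=y_true)
print("class weights for training:", dict(zip(["F", "M"], weights.round(2))))

rng = np.random.default_rng(1)
y_pred = y_true.copy()
flip = rng.choice(len(y_true), size=30, replace=False)   # toy misclassifications
y_pred[flip] = 1 - y_pred[flip]

# per-class precision/recall/F1 exposes minority-class errors that
# overall accuracy hides on an imbalanced dataset
print(classification_report(y_true, y_pred, target_names=["female", "male"]))
```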

Benchmarking Class Activation Map Methods for Explainable Brain Hemorrhage Classification on Hemorica Dataset

Z. Rafati, M. Hoseyni, J. Khoramdel, A. Nikoofard

arXiv preprint · Aug 25, 2025
Explainable Artificial Intelligence (XAI) has become an essential component of medical imaging research, aiming to increase transparency and clinical trust in deep learning models. This study investigates brain hemorrhage diagnosis with a focus on explainability through Class Activation Mapping (CAM) techniques. A pipeline was developed to extract pixel-level segmentation and detection annotations from classification models using nine state-of-the-art CAM algorithms, applied across multiple network stages, and quantitatively evaluated on the Hemorica dataset, which uniquely provides both slice-level labels and high-quality segmentation masks. Metrics including Dice, IoU, and pixel-wise overlap were employed to benchmark CAM variants. Results show that the strongest localization performance occurred at stage 5 of EfficientNetV2S, with HiResCAM yielding the highest bounding-box alignment and AblationCAM achieving the best pixel-level Dice (0.57) and IoU (0.40), representing strong accuracy given that the models were trained solely for classification without segmentation supervision. To the best of current knowledge, this is among the first works to quantitatively compare CAM methods for brain hemorrhage detection, establishing a reproducible benchmark and underscoring the potential of XAI-driven pipelines for clinically meaningful AI-assisted diagnosis.
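
The benchmarking step amounts to binarizing each CAM heatmap and scoring it against the ground-truth mask. A small NumPy sketch of Dice and IoU on placeholder arrays; the threshold and shapes are assumptions, not the paper's settings.

```python
import numpy as np

def dice_iou(cam, mask, thr=0.5):
    # binarize the normalized CAM, then compare against the reference mask
    pred = cam >= thr
    inter = np.logical_and(pred, mask).sum()
    dice = 2 * inter / (pred.sum() + mask.sum() + 1e-8)
    iou = inter / (np.logical_or(pred, mask).sum() + 1e-8)
    return dice, iou

cam = np.random.rand(256, 256)            # stand-in normalized CAM heatmap
mask = np.zeros((256, 256), dtype=bool)   # stand-in hemorrhage segmentation
mask[100:150, 80:160] = True
print(dice_iou(cam, mask))
```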

Multimodal Positron Emission Tomography/Computed Tomography Radiomics Combined with a Clinical Model for Preoperative Prediction of Invasive Pulmonary Adenocarcinoma in Ground-Glass Nodules.

Wang X, Li P, Li Y, Zhang R, Duan F, Wang D

PubMed · Aug 25, 2025
To develop and validate predictive models based on 18F-fluorodeoxyglucose positron emission tomography/computed tomography (18F-FDG PET/CT) radiomics and a clinical model for differentiating invasive adenocarcinoma (IAC) from non-invasive ground-glass nodules (GGNs) in early-stage lung cancer. A total of 164 patients with GGNs histologically confirmed as part of the lung adenocarcinoma spectrum (including both invasive and non-invasive subtypes) underwent preoperative 18F-FDG PET/CT and surgery. Radiomic features were extracted from PET and CT images. Models were constructed using support vector machine (SVM), random forest (RF), and extreme gradient boosting (XGBoost) classifiers. Five predictive models (CT, PET, PET/CT, clinical, and combined) were evaluated using receiver operating characteristic (ROC) curves, decision curve analysis (DCA), and calibration curves. Statistical comparisons were performed using DeLong's test, net reclassification improvement (NRI), and integrated discrimination improvement (IDI). The combined model, integrating PET/CT radiomic features with the clinical model, achieved the highest diagnostic performance (AUC: 0.950 in training, 0.911 in testing). It consistently showed superior IDI and NRI across both cohorts and significantly outperformed the clinical model (DeLong p = 0.027), confirming the enhanced predictive power of multimodal integration. A clinical nomogram was constructed from the final model to support individualized risk stratification. Integrating PET/CT radiomic features with a clinical model significantly enhances preoperative prediction of GGN invasiveness and may assist in preoperative risk stratification and personalized surgical decision-making in early-stage lung adenocarcinoma.
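
A minimal sketch of the classifier comparison (SVM, RF, and XGBoost scored by cross-validated AUC) using scikit-learn and the xgboost package; the synthetic feature table stands in for the fused PET/CT radiomics, and the DeLong/NRI/IDI statistics are not reproduced.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from xgboost import XGBClassifier

# Stand-in for a fused PET+CT radiomics feature table (164 patients,
# IAC vs non-invasive labels), matching only the cohort size.
X, y = make_classification(n_samples=164, n_features=40, random_state=0)

models = {
    "SVM": make_pipeline(StandardScaler(), SVC(probability=True)),
    "RF": RandomForestClassifier(n_estimators=300, random_state=0),
    "XGBoost": XGBClassifier(eval_metric="logloss"),
}
for name, m in models.items():
    auc = cross_val_score(m, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: AUC = {auc.mean():.3f} ± {auc.std():.3f}")
```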