Machine learning model based on the radiomics features of CE-CBBCT shows promising predictive ability for HER2-positive BC.

Chen X, Li M, Liang X, Su D

PubMed · Sep 12, 2025
This study aimed to investigate whether a machine learning (ML) model based on contrast-enhanced cone-beam breast computed tomography (CE-CBBCT) radiomics features could predict human epidermal growth factor receptor 2 (HER2)-positive breast cancer (BC). Eighty-eight patients diagnosed with invasive BC who underwent preoperative CE-CBBCT were retrospectively enrolled. Patients were randomly assigned to training and testing cohorts at a ratio of approximately 7:3. A total of 1046 quantitative radiomics features were extracted from the CE-CBBCT images using PyRadiomics. Z-score normalization was used to standardize the radiomics features, and the Pearson correlation coefficient and one-way analysis of variance were used to identify significant features. Six ML algorithms (support vector machine [SVM], random forest [RF], logistic regression, AdaBoost, linear discriminant analysis [LDA], and decision tree) were used to construct predictive models. Receiver operating characteristic curves were constructed and the area under the curve (AUC) was calculated. The six top-performing radiomics features were selected to develop the predictive models. The AUC values for SVM, LDA, RF, logistic regression, AdaBoost, and decision tree were 0.741, 0.753, 1.000, 0.752, 1.000, and 1.000, respectively, in the training cohort, and 0.700, 0.671, 0.806, 0.665, 0.706, and 0.712, respectively, in the testing cohort. Notably, the RF model exhibited the highest predictive ability, with an AUC of 0.806 in the testing cohort. For the RF model, the DeLong test showed a statistically significant difference in AUC between the training and testing cohorts (Z = 2.105, P = .035). The ML model based on CE-CBBCT radiomics features showed promising predictive ability for HER2-positive BC, with the RF model demonstrating the best diagnostic performance.
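
A minimal sketch of the pipeline this abstract describes, using scikit-learn: z-score normalization, ANOVA-based selection of six features, and a random forest evaluated by AUC on a held-out ~30% cohort. The feature matrix below is random placeholder data standing in for the 1046 PyRadiomics features, and the exact selection criterion (the paper combines Pearson correlation with ANOVA) is simplified here to the ANOVA F-test alone.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(88, 1046))   # placeholder for the real radiomics features
y = rng.integers(0, 2, size=88)   # placeholder for HER2 status labels

# ~7:3 split into training and testing cohorts, as in the study
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

scaler = StandardScaler().fit(X_tr)                 # z-score normalization
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

# One-way ANOVA F-test to keep the six most discriminative features
selector = SelectKBest(f_classif, k=6).fit(X_tr, y_tr)
X_tr, X_te = selector.transform(X_tr), selector.transform(X_te)

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
print("test AUC:", roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1]))
```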

Application of Deep Learning for Predicting Hematoma Expansion in Intracerebral Hemorrhage Using Computed Tomography Scans: A Systematic Review and Meta-Analysis of Diagnostic Accuracy.

Ahmadzadeh AM, Ashoobi MA, Broomand Lomer N, Elyassirad D, Gheiji B, Vatanparast M, Bathla G, Tu L

PubMed · Sep 11, 2025
We aimed to systematically review studies that utilized deep learning (DL)-based networks to predict hematoma expansion (HE) in patients with intracerebral hemorrhage (ICH) using computed tomography (CT) images. We carried out a comprehensive literature search across four major databases to identify relevant studies. To evaluate the quality of the included studies, we used both the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) and the METhodological RadiomICs Score (METRICS) checklists. We then calculated pooled diagnostic estimates and assessed heterogeneity using the I² statistic. To assess sources of heterogeneity, the effects of individual studies, and publication bias, we performed subgroup analysis, sensitivity analysis, and Deeks' funnel plot asymmetry test. Twenty-two studies were included in the qualitative synthesis, of which 11 and 6 were utilized for the exclusive-DL and combined-DL meta-analyses, respectively. We found a pooled sensitivity of 0.81 and 0.84, specificity of 0.79 and 0.91, positive diagnostic likelihood ratio (DLR) of 3.96 and 9.40, negative DLR of 0.23 and 0.18, diagnostic odds ratio of 16.97 and 53.51, and area under the curve of 0.87 and 0.89 for the exclusive DL-based and combined DL-based models, respectively. Subgroup analysis revealed significant inter-group differences according to segmentation technique and study quality. DL-based networks showed strong potential for accurately identifying HE in ICH patients. These models may guide earlier targeted interventions, such as intensive blood pressure control or administration of hemostatic drugs, potentially leading to improved patient outcomes.
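
For readers unfamiliar with the summary measures quoted above, the likelihood ratios and diagnostic odds ratio follow directly from sensitivity and specificity. A back-of-the-envelope check, not the bivariate random-effects model such meta-analyses actually fit:

```python
def diagnostic_summary(sens: float, spec: float) -> dict:
    dlr_pos = sens / (1 - spec)      # positive diagnostic likelihood ratio
    dlr_neg = (1 - sens) / spec      # negative diagnostic likelihood ratio
    dor = dlr_pos / dlr_neg          # diagnostic odds ratio
    return {"DLR+": dlr_pos, "DLR-": dlr_neg, "DOR": dor}

print(diagnostic_summary(0.81, 0.79))   # exclusive DL-based models
print(diagnostic_summary(0.84, 0.91))   # combined DL-based models
```

Plugging in the pooled values approximately reproduces the reported figures (e.g., 0.84 / (1 - 0.91) ≈ 9.3 against the reported 9.40); the small gaps remain because the pooled estimates come from a bivariate model rather than back-calculation.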

Invisible Attributes, Visible Biases: Exploring Demographic Shortcuts in MRI-based Alzheimer's Disease Classification

Akshit Achara, Esther Puyol Anton, Alexander Hammers, Andrew P. King

arXiv preprint · Sep 11, 2025
Magnetic resonance imaging (MRI) is the gold standard for brain imaging. Deep learning (DL) algorithms have been proposed to aid in the diagnosis of diseases such as Alzheimer's disease (AD) from MRI scans. However, DL algorithms can suffer from shortcut learning, in which spurious features not directly related to the output label are used for prediction. When these features are related to protected attributes, they can lead to performance bias against underrepresented protected groups, such as those defined by race and sex. In this work, we explore the potential for shortcut learning and demographic bias in DL-based AD diagnosis from MRI. We first investigate whether DL algorithms can identify race or sex from 3D brain MRI scans, to establish whether race- and sex-based distributional shifts are present. Next, we investigate whether training-set imbalance by race or sex can cause a drop in model performance, indicating shortcut learning and bias. Finally, we conduct a quantitative and qualitative analysis of feature attributions in different brain regions for both the protected-attribute and AD classification tasks. Through these experiments, and using multiple datasets and DL models (ResNet and SwinTransformer), we demonstrate the existence of both race- and sex-based shortcut learning and bias in DL-based AD classification. Our work lays the foundation for fairer DL diagnostic tools in brain MRI. The code is provided at https://github.com/acharaakshit/ShortMR
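
The training-set imbalance experiment can be pictured with a small sampling helper: build training cohorts containing a controlled fraction of one protected group, then track the per-group performance gap on a balanced test set. The DataFrame fields below are hypothetical stand-ins; the authors' actual code is at the linked repository.

```python
import pandas as pd

def imbalanced_subset(df: pd.DataFrame, attr: str, group: str,
                      frac_group: float, n_total: int, seed: int = 0) -> pd.DataFrame:
    """Sample n_total scans where a fraction frac_group belongs to `group` on `attr`."""
    n_group = int(round(frac_group * n_total))
    in_group = df[df[attr] == group].sample(n_group, random_state=seed)
    out_group = df[df[attr] != group].sample(n_total - n_group, random_state=seed)
    return pd.concat([in_group, out_group]).sample(frac=1.0, random_state=seed)

# Tiny synthetic index of scans, just to exercise the helper
demo = pd.DataFrame({"scan_id": range(100),
                     "sex": ["F"] * 60 + ["M"] * 40,
                     "label": [0, 1] * 50})
# e.g. a 75%-female training cohort; repeat at 100%, 50%, ... and compare
# the AD-classification accuracy gap between male and female test subjects
print(imbalanced_subset(demo, "sex", "F", frac_group=0.75, n_total=40)["sex"].value_counts())
```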

MetaLLMix : An XAI Aided LLM-Meta-learning Based Approach for Hyper-parameters Optimization

Mohammed Tiouti, Mohamed Bal-Ghaoui

arXiv preprint · Sep 11, 2025
Effective model and hyperparameter selection remains a major challenge in deep learning, often requiring extensive expertise and computation. While AutoML and large language models (LLMs) promise automation, current LLM-based approaches rely on trial and error and expensive APIs, which provide limited interpretability and generalizability. We propose MetaLLMiX, a zero-shot hyperparameter optimization framework combining meta-learning, explainable AI, and efficient LLM reasoning. By leveraging historical experiment outcomes with SHAP explanations, MetaLLMiX recommends optimal hyperparameters and pretrained models without additional trials. We further employ an LLM-as-judge evaluation to control output format, accuracy, and completeness. Experiments on eight medical imaging datasets using nine open-source lightweight LLMs show that MetaLLMiX achieves competitive or superior performance to traditional HPO methods while drastically reducing computational cost. Our local deployment outperforms prior API-based approaches, achieving optimal results on 5 of 8 tasks, response time reductions of 99.6-99.9%, and the fastest training times on 6 datasets (2.4-15.7x faster), maintaining accuracy within 1-5% of best-performing baselines.
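
A loose sketch of the core idea, assuming a history of past runs annotated with their most influential hyperparameters per SHAP: condense that history into a single prompt and request one JSON-formatted recommendation from a local LLM. All record fields and the prompt wording are invented for illustration; see the paper for MetaLLMiX's actual prompt design and LLM-as-judge setup.

```python
import json

# Hypothetical meta-learning history: outcomes of past experiments plus the
# hyperparameters SHAP ranked as most influential for each
history = [
    {"dataset": "chest_xray", "model": "resnet18", "lr": 1e-3, "batch": 32,
     "auc": 0.91, "shap_top": ["lr", "model"]},
    {"dataset": "brain_mri", "model": "efficientnet_b0", "lr": 5e-4, "batch": 16,
     "auc": 0.88, "shap_top": ["batch", "lr"]},
]

def build_prompt(task_description: str) -> str:
    lines = ["Past runs (with most influential hyperparameters per SHAP):"]
    lines += [json.dumps(r) for r in history]
    lines += [f"New task: {task_description}",
              "Recommend a pretrained model and hyperparameters as JSON only."]
    return "\n".join(lines)

print(build_prompt("binary classification on 4k ultrasound images"))
# The JSON-only constraint is the kind of output-format requirement an
# LLM-as-judge step would then verify for accuracy and completeness.
```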

Breast cancer risk assessment for screening: a hybrid artificial intelligence approach.

Tendero R, Larroza A, Pérez-Benito FJ, Perez-Cortes JC, Román M, Llobet R

PubMed · Sep 11, 2025
This study evaluates whether integrating clinical data with mammographic features using artificial intelligence (AI) improves 2-year breast cancer risk prediction compared to using either data type alone. This retrospective nested case-control study included 2193 women (mean age, 59 ± 5 years) screened at Hospital del Mar, Spain (2013-2020), with 418 cases (mammograms taken 2 years before diagnosis) and 1775 controls (cancer-free for ≥ 2 years). Three models were evaluated: (1) ERTpd+im, based on Extremely Randomized Trees (ERT) and split into sub-models for personal data (ERTpd) and image features (ERTim); (2) an image-only model (CNN); and (3) a hybrid model (ERTpd+im + CNN). Five-fold cross-validation, area under the receiver operating characteristic curve (AUC), bootstrapping for confidence intervals, and DeLong tests for paired data assessed performance. Robustness was evaluated across breast density quartiles and by detection type (screen-detected vs. interval cancers). The hybrid model achieved an AUC of 0.75 (95% CI: 0.71-0.76), significantly outperforming the CNN model (AUC, 0.74; 95% CI: 0.70-0.75; p < 0.05) and slightly surpassing ERTpd+im (AUC, 0.74; 95% CI: 0.70-0.76). The sub-models ERTpd and ERTim had AUCs of 0.59 and 0.73, respectively. The hybrid model performed consistently across breast density quartiles (p > 0.05) and better for screen-detected (AUC, 0.79) than for interval cancers (AUC, 0.59; p < 0.001). This study shows that integrating clinical and mammographic data with AI improves 2-year breast cancer risk prediction, outperforming single-source models. The hybrid model demonstrated higher accuracy and robustness across breast density quartiles, with better performance for screen-detected cancers.
Question: Current breast cancer risk models have limited accuracy. Can integrating clinical and mammographic data using AI improve short-term risk prediction?
Findings: A hybrid model combining clinical and imaging data achieved the highest accuracy in predicting 2-year breast cancer risk, outperforming models using either data type alone.
Clinical relevance: Integrating clinical and mammographic data with AI improves breast cancer risk prediction, enabling personalized screening strategies and supporting early detection. It helps identify high-risk women and optimizes the use of additional assessments within screening programs.
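
The bootstrapped confidence intervals quoted above can be obtained with a simple percentile bootstrap over the test set; a sketch on synthetic labels and scores, since the study's data are not public:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_auc_ci(y, p, n_boot=2000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    aucs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), len(y))   # resample cases with replacement
        if len(np.unique(y[idx])) < 2:          # resample must contain both classes
            continue
        aucs.append(roc_auc_score(y[idx], p[idx]))
    lo, hi = np.quantile(aucs, [alpha / 2, 1 - alpha / 2])
    return roc_auc_score(y, p), (lo, hi)

rng = np.random.default_rng(1)
y = rng.integers(0, 2, 500)
p = np.clip(y * 0.3 + rng.normal(0.4, 0.2, 500), 0, 1)  # synthetic risk scores
print(bootstrap_auc_ci(y, p))  # point AUC with a 95% percentile interval
```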

Ultrasound Assessment of Muscle Atrophy During Short- and Medium-Term Head-Down Bed Rest.

Yang X, Yu L, Tian Y, Yin G, Lv Q, Guo J

PubMed · Sep 11, 2025
This study investigates the feasibility of ultrasound technology for assessing muscle atrophy progression in a head-down bed rest model, providing a reference for monitoring muscle functional status in a microgravity environment. A 40-day head-down bed rest model using rhesus monkeys was established to simulate the microgravity environment of spaceflight. A dual-encoder parallel deep learning model was developed to extract features from B-mode ultrasound images and radiofrequency signals separately. Additionally, an up-sampling module incorporating a Coordinate Attention mechanism and a pixel-attention-guided fusion module was designed to enhance direction and position awareness and to improve the recognition of target boundaries and detailed features. The evaluative efficacy of single and fused ultrasound signals was compared. Assessment accuracy reached approximately 87% under inter-individual cross-validation across the 6 rhesus monkeys. Fusing the ultrasound signals significantly enhanced classification performance compared with single modalities such as B-mode images or radiofrequency signals. This study demonstrates that ultrasound technology combined with deep learning algorithms can effectively assess disuse muscle atrophy. The proposed approach offers a promising reference for diagnosing muscle atrophy under long-term immobilization, with significant application value and potential for widespread adoption.
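
A schematic of the dual-encoder parallel design in PyTorch: one 2D branch for B-mode images, one 1D branch for radiofrequency signals, fused by concatenation before the classifier. Layer sizes are invented and the paper's Coordinate Attention and pixel-attention-guided fusion modules are not reproduced, so this shows the topology only.

```python
import torch
import torch.nn as nn

class DualEncoderClassifier(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.img_encoder = nn.Sequential(            # B-mode image branch (2-D)
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.rf_encoder = nn.Sequential(             # radiofrequency branch (1-D)
            nn.Conv1d(1, 16, 7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten())
        self.head = nn.Linear(16 + 16, n_classes)    # late fusion by concatenation

    def forward(self, img: torch.Tensor, rf: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.img_encoder(img), self.rf_encoder(rf)], dim=1)
        return self.head(fused)

model = DualEncoderClassifier()
logits = model(torch.randn(4, 1, 128, 128), torch.randn(4, 1, 1024))
print(logits.shape)  # torch.Size([4, 2])
```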

The Combined Use of Cervical Ultrasound and Deep Learning Improves the Detection of Patients at Risk for Spontaneous Preterm Delivery.

Sejer EPF, Pegios P, Lin M, Bashir Z, Wulff CB, Christensen AN, Nielsen M, Feragen A, Tolsgaard MG

PubMed · Sep 11, 2025
Preterm birth is the leading cause of neonatal mortality and morbidity. While ultrasound-based cervical length measurement is the current standard for predicting preterm birth, its performance is limited. Artificial intelligence (AI) has shown potential in ultrasound analysis, yet only a few small-scale studies have evaluated its use for predicting preterm birth. We aimed to develop and validate an AI model for spontaneous preterm birth prediction from cervical ultrasound images and to compare its performance with cervical length. In this multicenter study, we developed a deep learning-based AI model using data from women who underwent cervical ultrasound scans as part of antenatal care between 2008 and 2018 in Denmark. Indications for ultrasound were not systematically recorded, and scans were likely performed due to risk factors or symptoms of preterm labor. We compared the performance of the AI model with cervical length measurement for spontaneous preterm birth prediction by assessing the area under the curve (AUC), sensitivity, specificity, and likelihood ratios. Subgroup analyses evaluated model performance across baseline characteristics, and saliency heat maps identified the anatomical features that most influenced the AI model's predictions. The final dataset included 4,224 pregnancies and 7,862 cervical ultrasound images, with 50% resulting in spontaneous preterm birth. The AI model surpassed cervical length for predicting spontaneous preterm birth before 37 weeks, with a sensitivity of 0.51 (95% CI 0.50-0.53) versus 0.41 (0.39-0.42) at a fixed specificity of 0.85 (p < 0.001) and a higher AUC of 0.75 (0.74-0.76) versus 0.67 (0.66-0.68) (p < 0.001). For identifying late preterm births at 34-37 weeks, the AI model had 36.6% higher sensitivity than cervical length (0.47 versus 0.34, p < 0.001). The AI model achieved higher AUCs across all subgroups, especially at earlier gestational ages. Saliency heat maps indicated that in 54% of preterm birth cases, the AI model focused on the posterior inner lining of the lower uterine segment, suggesting it incorporates more information than cervical length alone. To our knowledge, this is the first large-scale, multicenter study demonstrating that AI is more sensitive than cervical length measurement for identifying spontaneous preterm births across multiple patient characteristics, 19 hospital sites, and different ultrasound machines. The AI model performs particularly well at earlier gestational ages, enabling more timely prophylactic interventions.
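
Reading a sensitivity at a fixed specificity (here 0.85, as in the comparison above) off an ROC curve is straightforward; a sketch with synthetic stand-ins for the study's labels and model scores:

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 1000)                  # synthetic outcome labels
scores = y * 0.5 + rng.normal(0, 0.4, 1000)   # synthetic model scores

fpr, tpr, _ = roc_curve(y, scores)
spec = 1 - fpr
# best sensitivity among thresholds whose specificity is at least 0.85
sens_at_spec = tpr[spec >= 0.85].max()
print(f"sensitivity at specificity 0.85: {sens_at_spec:.2f}")
```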

SqueezeViX-Net with SOAE: A Prevailing Deep Learning Framework for Accurate Pneumonia Classification using X-Ray and CT Imaging Modalities.

Kavitha N, Anand B

PubMed · Sep 11, 2025
Pneumonia is a dangerous respiratory illness that, without timely and accurate diagnosis, leads to severe health problems and increased mortality, particularly among at-risk populations. Appropriate treatment requires correct identification of the pneumonia type together with a swift and accurate diagnosis. This paper presents SqueezeViX-Net, a deep learning framework specifically designed for pneumonia classification. The model benefits from a Self-Optimized Adaptive Enhancement (SOAE) method, which adaptively adjusts the dropout rate during training, improving model suitability and stability. SqueezeViX-Net was evaluated on extensive X-ray and CT image collections derived from publicly accessible Kaggle repositories. It outperformed various established deep learning architectures, including DenseNet-121, ResNet-152V2, and EfficientNet-B7, achieving higher accuracy, precision, recall, and F1-score. The model was validated on a range of pneumonia datasets comprising both CT and X-ray images, demonstrating its ability to handle modality variations. By integrating SOAE, SqueezeViX-Net provides a framework suited to the specific identification of pneumonia in clinical use. Its dynamic learning capability and high precision give it strong diagnostic potential, contributing to improved patient treatment outcomes.
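
The abstract does not specify SOAE's update rule, so the following is purely one plausible reading of "adaptively adjusts the dropout rate during training": nudge the dropout probability up when validation loss stops improving (more regularization) and down otherwise, clamped to a sensible range.

```python
import torch.nn as nn

def adapt_dropout(model: nn.Module, val_losses: list,
                  step: float = 0.05, lo: float = 0.1, hi: float = 0.6) -> None:
    """Illustrative adaptive-dropout rule; not the published SOAE algorithm."""
    if len(val_losses) < 2:
        return
    # validation loss worsened -> regularize more; improved -> relax
    direction = step if val_losses[-1] >= val_losses[-2] else -step
    for m in model.modules():
        if isinstance(m, nn.Dropout):
            m.p = min(hi, max(lo, m.p + direction))

model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Dropout(0.3), nn.Linear(32, 2))
adapt_dropout(model, [0.71, 0.74])  # loss worsened, so dropout rises to 0.35
print([m.p for m in model.modules() if isinstance(m, nn.Dropout)])
```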

Explainable Deep Learning Framework for Classifying Mandibular Fractures on Panoramic Radiographs.

Seo H, Lee JI, Park JU, Sung IY

PubMed · Sep 10, 2025
This study aimed to develop a deep learning model for the automatic classification of mandibular fractures on panoramic radiographs. A pretrained convolutional neural network (CNN) was used to classify fractures according to a novel, clinically relevant classification system. The dataset comprised 800 panoramic radiographs obtained from patients with facial trauma. The model demonstrated robust classification performance across 8 fracture categories, achieving consistently high accuracy and F1-scores. Performance was evaluated using standard metrics, including accuracy, precision, recall, and F1-score. To enhance interpretability and clinical applicability, explainable AI techniques, Gradient-weighted Class Activation Mapping (Grad-CAM) and Local Interpretable Model-Agnostic Explanations (LIME), were used to visualize the model's decision-making process. These findings suggest that the proposed deep learning framework is a reliable and efficient tool for classifying mandibular fractures on panoramic radiographs. Its application may help reduce diagnostic time and improve decision-making in maxillofacial trauma care. Further validation using larger, multi-institutional datasets is recommended to ensure generalizability.
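
A bare-bones sketch of the Grad-CAM technique used here for interpretability: capture the last convolutional block's activations and their gradients, weight the channels by the pooled gradients, and upsample the result to input size. The study's model and data are not public, so this uses a stock torchvision ResNet-18 on random input.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()
acts, grads = {}, {}

def hook(module, inputs, output):
    acts["v"] = output
    # capture the gradient of the class score w.r.t. these activations
    output.register_hook(lambda g: grads.update(v=g))

model.layer4.register_forward_hook(hook)   # last convolutional block

x = torch.randn(1, 3, 224, 224)
score = model(x)[0].max()                  # score of the top class
score.backward()

w = grads["v"].mean(dim=(2, 3), keepdim=True)           # pooled gradients per channel
cam = F.relu((w * acts["v"]).sum(dim=1, keepdim=True))  # weighted activation map
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
print(cam.shape)  # torch.Size([1, 1, 224, 224]) heatmap over the input
```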