Accelerating Cerebral Diagnostics with BrainFusion: A Comprehensive MRI Tumor Framework

Walid Houmaidi, Youssef Sabiri, Salmane El Mansour Billah, Amine Abouaomar

arXiv preprint, Sep 29, 2025
The early and accurate classification of brain tumors is crucial for guiding effective treatment strategies and improving patient outcomes. This study presents BrainFusion, a significant advancement in brain tumor analysis using magnetic resonance imaging (MRI) that combines fine-tuned convolutional neural networks (CNNs) for tumor classification (VGG16, ResNet50, and Xception) with YOLOv8 for precise tumor localization via bounding boxes. Leveraging the Brain Tumor MRI Dataset, our experiments reveal that the fine-tuned VGG16 model achieves a test accuracy of 99.86%, substantially exceeding previous benchmarks. Beyond setting a new accuracy standard, the integration of bounding-box localization and explainable AI techniques further enhances both the clinical interpretability and trustworthiness of the system's outputs. Overall, this approach underscores the transformative potential of deep learning in delivering faster, more reliable diagnoses, ultimately contributing to improved patient care and survival rates.
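The abstract includes no code; as a rough illustration of the two-stage fine-tuning it describes, the sketch below adapts an ImageNet-pretrained VGG16 to a four-class brain-MRI task with tf.keras. The class count, input size, layer sizes, and learning rates are assumptions, not values from the paper.

```python
# Minimal sketch of fine-tuning VGG16 for brain-MRI tumor classification,
# in the spirit of the BrainFusion pipeline. Class names, input size, and
# hyperparameters are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 4  # e.g. glioma, meningioma, pituitary, no-tumor (assumed)

base = tf.keras.applications.VGG16(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # first stage: train the new head only

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Second stage: unfreeze the last convolutional block and fine-tune
# with a lower learning rate.
for layer in base.layers:
    layer.trainable = layer.name.startswith("block5")
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```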

EVLF-FM: Explainable Vision Language Foundation Model for Medicine

Yang Bai, Haoran Cheng, Yang Zhou, Jun Zhou, Arun Thirunavukarasu, Yuhe Ke, Jie Yao, Kanae Fukutsu, Chrystie Wan Ning Quek, Ashley Hong, Laura Gutierrez, Zhen Ling Teo, Darren Shu Jeng Ting, Brian T. Soetikno, Christopher S. Nielsen, Tobias Elze, Zengxiang Li, Linh Le Dinh, Hiok Hong Chan, Victor Koh, Marcus Tan, Kelvin Z. Li, Leonard Yip, Ching Yu Cheng, Yih Chung Tham, Gavin Siew Wei Tan, Leopold Schmetterer, Marcus Ang, Rahat Hussain, Jod Mehta, Tin Aung, Lionel Tim-Ee Cheng, Tran Nguyen Tuan Anh, Chee Leong Cheng, Tien Yin Wong, Nan Liu, Iain Beehuat Tan, Soon Thye Lim, Eyal Klang, Tony Kiat Hon Lim, Rick Siow Mong Goh, Yong Liu, Daniel Shu Wei Ting

arXiv preprint, Sep 29, 2025
Despite the promise of foundation models in medical AI, current systems remain limited: they are modality-specific and lack transparent reasoning processes, hindering clinical adoption. To address this gap, we present EVLF-FM, a multimodal vision-language foundation model (VLM) designed to unify broad diagnostic capability with fine-grained explainability. The development and testing of EVLF-FM encompassed over 1.3 million total samples from 23 global datasets across eleven imaging modalities related to six clinical specialties: dermatology, hepatology, ophthalmology, pathology, pulmonology, and radiology. External validation employed 8,884 independent test samples from 10 additional datasets across five imaging modalities. Technically, EVLF-FM is developed to assist with multi-disease diagnosis and visual question answering with pixel-level visual grounding and reasoning capabilities. In internal validation for disease diagnostics, EVLF-FM achieved the highest average accuracy (0.858) and F1-score (0.797), outperforming leading generalist and specialist models. In medical visual grounding, EVLF-FM also achieved strong performance across nine modalities, with an average mIoU of 0.743 and Acc@0.5 of 0.837. External validations further confirmed strong zero-shot and few-shot performance, with competitive F1-scores despite a smaller model size. Through a hybrid training strategy combining supervised and visual reinforcement fine-tuning, EVLF-FM not only achieves state-of-the-art accuracy but also exhibits step-by-step reasoning, aligning outputs with visual evidence. EVLF-FM is an early multi-disease VLM with explainability and reasoning capabilities that could advance adoption of and trust in foundation models for real-world clinical deployment.
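For readers unfamiliar with the grounding metrics quoted above, here is a minimal sketch of how mIoU and Acc@0.5 are typically computed over predicted and reference bounding boxes. The (x1, y1, x2, y2) box format and function names are assumptions.

```python
# Sketch of the two grounding metrics: mean IoU over predicted/reference
# boxes, and Acc@0.5 (fraction of predictions with IoU above 0.5).
import numpy as np

def box_iou(a: np.ndarray, b: np.ndarray) -> float:
    """IoU of two boxes in (x1, y1, x2, y2) format."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def grounding_metrics(preds, refs, thresh=0.5):
    ious = np.array([box_iou(p, r) for p, r in zip(preds, refs)])
    return {"mIoU": ious.mean(), f"Acc@{thresh}": (ious > thresh).mean()}
```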

Integrating Multi-Modal Imaging Features for Early Prediction of Acute Kidney Injury in Pneumonia Sepsis: A Multicenter Retrospective Study.

Gu Y, Li L, Yang K, Zou C, Yin B

PubMed, Sep 29, 2025
Sepsis, a severe complication of infection, often leads to acute kidney injury (AKI), which significantly increases the risk of death. Despite its clinical importance, early prediction of AKI remains challenging. Current tools rely on blood and urine tests, which are costly, variable, and not always available in time for intervention. Pneumonia is the most common cause of sepsis, accounting for over one-third of cases. In such patients, pulmonary inflammation and perilesional tissue alterations may serve as surrogate markers of systemic disease progression. However, these imaging features are rarely used in clinical decision-making. To overcome this limitation, our study aims to extract informative imaging features from pneumonia-associated sepsis cases using deep learning, with the goal of predicting the development of AKI. This dual-center retrospective study included pneumonia-associated sepsis patients (Jan 2020-Jul 2024). Chest CT images, clinical records, and laboratory data at admission were collected. We propose MCANet (Multimodal Cross-Attention Network), a two-stage deep learning framework designed to predict the occurrence of pneumonia-associated sepsis-related acute kidney injury (pSA-AKI). In the first stage, region-specific features were extracted from the lungs, epicardial adipose tissue, and T4-level subcutaneous adipose tissue using ResNet-18, chosen for its lightweight architecture and efficiency in processing multi-regional 2D CT slices at low computational cost. In the second stage, the extracted features were fused via a Multiscale Feature Attention Network (MSFAN) employing cross-attention mechanisms to enhance interactions among anatomical regions, followed by classification using ResNet-101, selected for its deeper architecture and strong ability to model global semantic representations and complex patterns. Model performance was evaluated using AUC, accuracy, precision, recall, and F1-score. Grad-CAM and PyRadiomics were employed for visual interpretation and radiomic analysis, respectively. A total of 399 patients with pneumonia-associated sepsis were included in this study. The modality ablation experiments demonstrated that the model integrating features from the lungs, T4-level subcutaneous adipose tissue, and epicardial adipose tissue achieved the best performance, with an accuracy of 0.981 and an AUC of 0.99 on the external test set from an independent center. For the prediction of AKI onset time, the LightGBM model incorporating imaging and clinical features achieved the highest accuracy of 0.8409 on the external test set. Furthermore, the multimodal model combining deep features, radiomics features, and clinical data further improved predictive performance, reaching an accuracy of 0.9773 and an AUC of 0.961 on the external test set. This study developed MCANet, a multimodal deep learning framework that integrates imaging features from the lungs, epicardial adipose tissue, and T4-level subcutaneous adipose tissue. The framework significantly improved the accuracy of AKI occurrence and onset-time prediction in pneumonia-associated sepsis patients, highlighting the synergistic role of adipose tissue and lung characteristics. Furthermore, explainability analysis revealed potential decision-making mechanisms underlying the temporal progression of pSA-AKI, offering new insights for clinical management.
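As an illustration of the second-stage fusion idea (not the authors' code), the sketch below applies cross-attention over three per-region feature vectors before classification, using PyTorch. The feature dimension, head count, and module names are assumptions.

```python
# Illustrative sketch of cross-attention fusion over per-region embeddings
# (lung, epicardial fat, T4-level subcutaneous fat) before classification.
import torch
import torch.nn as nn

class RegionCrossAttentionFusion(nn.Module):
    def __init__(self, dim: int = 512, heads: int = 8, num_classes: int = 2):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, lung, epi_fat, sc_fat):
        # Stack the three region embeddings as a length-3 token sequence:
        # (batch, 3, dim). Each region attends to the other two.
        tokens = torch.stack([lung, epi_fat, sc_fat], dim=1)
        fused, _ = self.attn(tokens, tokens, tokens)
        fused = self.norm(fused + tokens)      # residual connection
        return self.head(fused.mean(dim=1))    # pool regions, classify

# Usage: features from a ResNet-18 backbone per region (dim 512 assumed).
model = RegionCrossAttentionFusion()
logits = model(torch.randn(4, 512), torch.randn(4, 512), torch.randn(4, 512))
```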

Can Machine Learning Models Based on Radiomic and Clinical Information Improve Radiologists' Diagnostic Performance for Bone Tumors? An MRMC Study.

Pan D, Yuan L, Wang S, Zeng H, Liang T, Ruan C, Ao L, Li X, Chen W

PubMed, Sep 29, 2025
To explore whether machine learning models of bone tumors can improve the diagnostic performance of imaging physicians. Radiographic and clinical data were retrospectively collected from bone tumor patients to construct multiple machine learning models. Area under the curve (AUC) values were used as the primary assessment metric to select auxiliary models for this study. Seven readers were selected for the multireader multicase (MRMC) study based on pre-experiment results. Two reading experiments were conducted using an independent test set to validate the value of interpretable models as clinician aids. We used the Obuchowski-Rockette method to compare differences in physician categorization. The extreme gradient boosting (XGBoost) model based on clinical information and radiomics features performed best for classification, with an AUC value of 0.905 (95% CI: 0.841, 0.949). The interpretable algorithm suggested that gray level co-occurrence matrix (GLCM) features provided the most crucial predictive information for the classification model. The AUC was significantly higher for senior physicians (with 7-11 years of experience) than for junior physicians (with 2-5 years of experience) in reading musculoskeletal radiographs (0.929-0.956 vs. 0.812-0.906). The mean AUC value of the seven physicians' independent reading was 0.904, and model-assisted reading improved the mean AUC by 0.037 (95% CI: -0.074, -0.001), a statistically significant gain (P=0.047). The machine learning model based on the radiomics features and clinical information of knee X-ray images can effectively assist clinicians in completing the preoperative diagnosis of benign and malignant bone tumors.
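The GLCM features the interpretable algorithm highlighted can be computed with scikit-image; the sketch below (assuming scikit-image >= 0.19, where the functions are spelled graycomatrix/graycoprops) extracts a few standard GLCM statistics from a grayscale radiograph ROI. The distance/angle grid and property list are illustrative choices, not the paper's configuration.

```python
# Sketch of GLCM texture-feature extraction from a radiograph ROI.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(img: np.ndarray) -> dict:
    """img: 2-D uint8 grayscale ROI (values < 256)."""
    glcm = graycomatrix(img, distances=[1, 3],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    # Average each property over all distance/angle combinations.
    return {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
```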

Predictive Value of MRI Radiomics for the Efficacy of High-Intensity Focused Ultrasound (HIFU) Ablation in Uterine Fibroids: A Systematic Review and Meta-Analysis.

Salimi M, Abdolizadeh A, Fayedeh F, Vadipour P

PubMed, Sep 29, 2025
High-Intensity Focused Ultrasound (HIFU) ablation has emerged as a non-invasive treatment option for uterine fibroids that preserves fertility and offers faster recovery. Pre-intervention prediction of HIFU efficacy can augment clinical decision-making and patient management. This systematic review and meta-analysis aims to evaluate the performance of MRI-based radiomics machine learning (ML) models in predicting the efficacy of HIFU ablation in uterine fibroids. Studies were retrieved by conducting a thorough literature search across databases including PubMed, Scopus, Embase, and Web of Science, up to June 2025. The quality of the included studies was assessed using the QUADAS-2 and METRICS tools. A meta-analysis of the radiomics models was conducted to pool sensitivity, specificity, and AUC using a bivariate random-effects model. A total of 13 studies were incorporated in the systematic review and meta-analysis. Meta-analysis of 608 patients from 7 internal and 6 external validation cohorts showed a pooled AUC, sensitivity, and specificity of 0.84, 77%, and 78%, respectively. The QUADAS-2 assessment revealed notable methodological biases in the index test and flow-and-timing domains. Across all studies, the mean METRICS score was 76.93% (range 54.9%-90.3%), denoting good overall quality and performance in most domains but with notable gaps in the open science domain. MRI-based radiomics models show promise in predicting the effectiveness of HIFU ablation for uterine fibroids. However, limitations such as limited geographic diversity, inconsistent reporting standards, and poor open science practices hinder broader application. Therefore, future research should focus on standardizing imaging protocols, using multi-center designs with external validation, and integrating diverse data sources.
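The bivariate random-effects model used here is usually fit with dedicated R packages (e.g. mada or metandi). As a simplified, univariate illustration of random-effects pooling only, the sketch below applies a DerSimonian-Laird estimate to logit-transformed sensitivities; the study counts are made-up placeholders, not data from this review.

```python
# Simplified univariate random-effects pooling of proportions
# (DerSimonian-Laird on the logit scale); not the bivariate model.
import numpy as np

def dersimonian_laird(tp, n):
    """Pool proportions tp/n across studies on the logit scale."""
    tp, n = np.asarray(tp, float), np.asarray(n, float)
    p = (tp + 0.5) / (n + 1.0)                 # continuity correction
    y = np.log(p / (1 - p))                    # logit effect sizes
    v = 1 / (tp + 0.5) + 1 / (n - tp + 0.5)    # within-study variances
    w = 1 / v
    q = np.sum(w * (y - np.sum(w * y) / w.sum()) ** 2)
    tau2 = max(0.0, (q - (len(y) - 1)) /
               (w.sum() - np.sum(w ** 2) / w.sum()))
    w_star = 1 / (v + tau2)                    # random-effects weights
    pooled_logit = np.sum(w_star * y) / w_star.sum()
    return 1 / (1 + np.exp(-pooled_logit))     # back to a proportion

print(dersimonian_laird(tp=[40, 55, 30], n=[50, 70, 40]))  # toy studies
```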

Advancement in hepatocellular carcinoma research: Biomarkers, therapeutics approaches and impact of artificial intelligence.

Rajak D, Nema P, Sahu A, Vishwakarma S, Kashaw SK

PubMed, Sep 29, 2025
Cancer is a leading, highly complex, and deadly disease that has become a major concern in modern medicine. Hepatocellular carcinoma is the most common primary liver cancer and a leading cause of global cancer mortality. Its development is predominantly associated with chronic liver diseases such as hepatitis B and C infections, cirrhosis, alcohol consumption, and non-alcoholic fatty liver disease. Molecular mechanisms underlying HCC involve genetic mutations, epigenetic changes, and disrupted signalling pathways, including Wnt/β-catenin and PI3K/AKT/mTOR. Early diagnosis remains challenging, as most cases are detected at advanced stages, limiting curative treatment options. Diagnostic advancements, including biomarkers like alpha-fetoprotein and cutting-edge imaging techniques such as CT, MRI, and ultrasound-based radiomics, have improved early detection. Treatment strategies depend on the disease stage, ranging from curative options like surgical resection and liver transplantation to palliative therapies, including transarterial chemoembolization, systemic therapies, and immunotherapy. Immune checkpoint inhibitors targeting PD-1/PD-L1 and CTLA-4 have shown promise for advanced HCC. In this review, we discuss emerging technologies, including artificial intelligence and multi-omics platforms, that support HCC management by enhancing diagnostic accuracy, identifying novel therapeutic targets, and enabling personalized treatments. Despite these advancements, the prognosis for HCC patients remains poor, underscoring the need for continued research into early detection, innovative therapies, and translational applications to effectively address this global health challenge.

3DSN-net: dual-tandem attention mechanism interaction network for breast tumor classification.

Li L, Wang M, Li D, Yang T

PubMed, Sep 29, 2025
Breast cancer is one of the most prevalent malignancies among women worldwide and remains a major public health concern. Accurate classification of breast tumor subtypes is essential for guiding treatment decisions and improving patient outcomes. However, existing deep learning methods for histopathological image analysis often face limitations in balancing classification accuracy with computational efficiency, while failing to fully exploit the deep semantic features in complex tumor images. We developed 3DSN-net, a dual-attention interaction network for multiclass breast tumor classification. The model combines two complementary strategies: (i) spatial–channel attention mechanisms to strengthen the representation of discriminative features, and (ii) deformable convolutional layers to capture fine-grained structural variations in histopathological images. To further improve efficiency, a lightweight attention component was introduced to support stable gradient propagation and multi-scale feature fusion. The model was trained and evaluated on two histopathological datasets, BreakHis and BCPSD, and benchmarked against several state-of-the-art CNN and Transformer-based approaches under identical experimental conditions. Experimental results show that 3DSN-net consistently outperforms baseline CNN and Transformer models in both accuracy and robustness while maintaining favorable computational efficiency, achieving 92%–100% accuracy for benign tumors and 86%–99% for malignant tumors, with error rates below 8%. On average, it improves classification accuracy by 3%–5% and ROC-AUC by 0.02 to 0.04 compared with state-of-the-art methods. By enhancing the interaction between spatial and channel attention mechanisms, the model effectively distinguishes benign from malignant tumors as well as multiple subtypes, highlighting the advantages of combining spatial–channel attention with deformable feature modeling, with only a slight reduction in classification speed on larger datasets due to increased data complexity. This study presents 3DSN-net as a reliable and effective framework for breast tumor classification from histopathological images. Beyond methodological improvements, the enhanced diagnostic performance has direct clinical implications, offering potential to reduce misclassification, assist pathologists in decision-making, and improve patient outcomes. The approach can also be extended to other medical imaging tasks. Future work will focus on optimizing computational efficiency and validating generalizability across larger, multi-center datasets. The online version contains supplementary material available at 10.1186/s12880-025-01936-2.
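As a concrete, CBAM-style illustration of the spatial–channel attention the abstract describes (not the authors' 3DSN-net code), the PyTorch sketch below gates feature maps first by channel, then by spatial location; all sizes and module names are assumptions.

```python
# Sketch of a spatial-channel attention block (CBAM-style).
import torch
import torch.nn as nn

class SpatialChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Channel attention: squeeze spatial dims, excite channels.
        self.channel_mlp = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())
        # Spatial attention: 7x7 conv over channel-pooled maps.
        self.spatial = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3), nn.Sigmoid())

    def forward(self, x):
        b, c, _, _ = x.shape
        x = x * self.channel_mlp(x).view(b, c, 1, 1)   # channel gating
        pooled = torch.cat([x.mean(dim=1, keepdim=True),
                            x.amax(dim=1, keepdim=True)], dim=1)
        return x * self.spatial(pooled)                # spatial gating

# Usage on a feature map of 64 channels.
attn = SpatialChannelAttention(64)
out = attn(torch.randn(2, 64, 32, 32))
```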

Diagnostic accuracy of a machine learning model using radiomics features from breast synthetic MRI.

Matsuda T, Matsuda M, Haque H, Fuchibe S, Matsumoto M, Shiraishi Y, Nobe Y, Kuwabara K, Toshimori W, Okada K, Kawaguchi N, Kurata M, Kamei Y, Kitazawa R, Kido T

PubMed, Sep 29, 2025
In breast magnetic resonance imaging (MRI), the differentiation between benign and malignant breast masses relies on the Breast Imaging Reporting and Data System Magnetic Resonance Imaging (BI-RADS-MRI) lexicon. While BI-RADS-MRI classification demonstrates high sensitivity, specificities vary. This study aimed to evaluate the feasibility of machine learning models utilizing radiomics features derived from synthetic MRI to distinguish benign from malignant breast masses. Patients who underwent breast MRI, including a multi-dynamic multi-echo (MDME) sequence using 3.0 T MRI, and had histopathologically diagnosed enhancing breast mass lesions were retrospectively included. Clinical features, lesion shape features, texture features, and textural evaluation metrics were extracted. Machine learning models were trained and evaluated, and an ensemble model integrating BI-RADS and the machine learning model was also assessed. A total of 199 lesions (48 benign, 151 malignant) in 199 patients were included in the cross-validation dataset, while 43 lesions (15 benign, 28 malignant) in 40 new patients were included in the test dataset. For the test dataset, the sensitivity, specificity, accuracy, and area under the curve (AUC) of the receiver operating characteristic for BI-RADS were 100%, 33.3%, 76.7%, and 0.667, respectively. The logistic regression model yielded 64.3% sensitivity, 80.0% specificity, 69.8% accuracy, and an AUC of 0.707. The ensemble model achieved 82.1% sensitivity, 86.7% specificity, 83.7% accuracy, and an AUC of 0.883. The AUC of the ensemble model was significantly larger than that of both BI-RADS and the machine learning model. The ensemble model integrating BI-RADS and machine learning improved lesion classification. The online version contains supplementary material available at 10.1186/s12880-025-01930-8.
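The abstract does not state how the BI-RADS and machine-learning outputs were ensembled; one plausible construction, shown below as an assumption-laden sketch, is a logistic regression over the numerically encoded BI-RADS category and the radiomics model's probability.

```python
# Sketch of a simple BI-RADS + radiomics-model ensemble (illustrative
# construction, not the paper's method; all data below are toy values).
import numpy as np
from sklearn.linear_model import LogisticRegression

birads = np.array([3, 4, 5, 4, 3, 5, 4, 5])       # ordinal BI-RADS category
ml_prob = np.array([0.2, 0.6, 0.9, 0.4, 0.1,      # radiomics model output
                    0.95, 0.7, 0.85])
y = np.array([0, 1, 1, 0, 0, 1, 1, 1])            # 1 = malignant (toy labels)

X = np.column_stack([birads, ml_prob])
ensemble = LogisticRegression().fit(X, y)
print(ensemble.predict_proba(X)[:, 1])            # fused malignancy probability
```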

Hepatocellular carcinoma (HCC) and focal nodular hyperplasia (FNH) showing iso- or hyperintensity in the hepatobiliary phase: differentiation using Gd-EOB-DTPA enhanced MRI radiomics and deep learning features.

Mao HY, Hu JC, Zhang T, Fan YF, Wang XM, Hu CH, Yu YX

PubMed, Sep 29, 2025
To develop and validate radiomics and deep learning models based on Gd-EOB-DTPA enhanced MRI for differentiation between hepatocellular carcinoma (HCC) and focal nodular hyperplasia (FNH) showing iso- or hyperintensity in the hepatobiliary phase (HBP). A total of 112 patients from three hospitals were included. The 84 patients from hospitals A and B (54 HCCs and 30 FNHs) were randomly divided into a training cohort (n = 59: 38 HCC; 21 FNH) and an internal validation cohort (n = 25: 16 HCC; 9 FNH); the 28 patients from hospital C (20 HCC; 8 FNH) served as an external test cohort. A total of 1781 radiomics features were extracted from tumor volumes of interest (VOIs) in the pre-contrast phase (Pre), arterial phase (AP), portal venous phase (PP), and HBP images, and 512 deep learning features were extracted from VOIs in the AP, PP, and HBP images. The Pearson correlation coefficient (PCC) and analysis of variance (ANOVA) were used to select the useful features. Conventional, delta radiomics, and deep learning models were established using machine learning algorithms (support vector machine [SVM] and logistic regression [LR]), and their discriminatory efficacy was assessed and compared. The combined deep learning models demonstrated the highest diagnostic performance in both the internal validation and external test cohorts, with area under the curve (AUC) values of 0.965 (95% confidence interval [CI]: 0.906, 1.000) and 0.851 (95% CI: 0.620, 1.000), respectively. The conventional and delta radiomics models achieved AUCs of 0.944 (95% CI: 0.779-0.979) and 0.938 (95% CI: 0.836-1.000), respectively, which were not significantly different from the deep learning models or each other (P = 0.559, 0.256, and 0.137). The combined deep learning models based on Gd-EOB-DTPA enhanced MRI may be useful for discriminating HCC from FNH showing iso- or hyperintensity in the HBP. The online version contains supplementary material available at 10.1186/s12880-025-01927-3.
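A minimal sketch of the feature-selection-plus-classifier pipeline described above: drop one feature from every highly correlated pair (Pearson), keep the top ANOVA F-test features, then fit an SVM with scikit-learn. The correlation threshold, k, and toy data are assumptions.

```python
# Sketch of PCC + ANOVA feature selection followed by an SVM classifier.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def drop_correlated(X: np.ndarray, thresh: float = 0.9) -> np.ndarray:
    """Greedily drop one feature from every pair with |r| > thresh."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    keep = []
    for j in range(X.shape[1]):
        if all(corr[j, k] <= thresh for k in keep):
            keep.append(j)
    return X[:, keep]

X = np.random.rand(84, 1781)       # 1781 radiomics features (toy data)
y = np.random.randint(0, 2, 84)    # HCC vs FNH labels (toy data)

X_reduced = drop_correlated(X)
clf = make_pipeline(StandardScaler(),
                    SelectKBest(f_classif, k=20),
                    SVC(kernel="rbf", probability=True))
clf.fit(X_reduced, y)
```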

Deep learning NTCP model for late dysphagia after radiotherapy for head and neck cancer patients based on 3D dose, CT and segmentations.

de Vette SPM, Neh H, van der Hoek L, MacRae DC, Chu H, Gawryszuk A, Steenbakkers RJHM, van Ooijen PMA, Fuller CD, Hutcheson KA, Langendijk JA, Sijtsema NM, van Dijk LV

PubMed, Sep 29, 2025
Late radiation-associated dysphagia after head and neck cancer (HNC) significantly impacts patients' health and quality of life. Conventional normal tissue complication probability (NTCP) models use discrete dose parameters to predict toxicity risk but fail to fully capture the complexity of this side effect. Deep learning (DL) offers potential improvements by incorporating 3D dose data for all anatomical structures involved in swallowing. This study aims to enhance dysphagia prediction with 3D DL NTCP models compared to conventional NTCP models. A multi-institutional cohort of 1484 HNC patients was used to train and validate a 3D DL model (Residual Network) incorporating 3D dose distributions, organ-at-risk segmentations, and CT scans, with or without patient- or treatment-related data. Predictions of grade ≥ 2 dysphagia (CTCAEv4) at six months post-treatment were evaluated using area under the curve (AUC) and calibration curves. Results were compared to a conventional NTCP model based on pre-treatment dysphagia, tumour location, and mean dose to swallowing organs. Attention maps highlighting regions of interest for individual patients were assessed. DL models outperformed the conventional NTCP model in both AUC and calibration, on the independent test set (AUC = 0.80-0.84 versus 0.76) and the external test set (AUC = 0.73-0.74 versus 0.63). Attention maps showed a focus on the oral cavity and superior pharyngeal constrictor muscle. DL NTCP models performed significantly better than the conventional NTCP model, suggesting the benefit of using 3D input over conventional discrete dose parameters. Attention maps highlighted relevant regions linked to dysphagia, supporting the utility of DL for improved predictions.
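To make the 3D-input idea concrete, the sketch below (an assumption-laden illustration, not the authors' architecture) stacks dose, CT, and organ-at-risk masks as channels of a 3D volume and adapts torchvision's r3d_18 stem to accept them; shapes and normalisation are placeholders.

```python
# Sketch: multi-channel 3-D input (dose + CT + OAR masks) to a 3-D ResNet.
import torch
from torchvision.models.video import r3d_18

dose = torch.rand(1, 96, 96, 96)        # planned dose (normalised, toy)
ct = torch.rand(1, 96, 96, 96)          # CT intensities (normalised, toy)
oar_masks = torch.rand(3, 96, 96, 96)   # e.g. swallowing-structure masks

x = torch.cat([dose, ct, oar_masks], dim=0).unsqueeze(0)  # (1, 5, D, H, W)

model = r3d_18(num_classes=1)
# Replace the RGB stem so the network accepts 5 input channels.
model.stem[0] = torch.nn.Conv3d(5, 64, kernel_size=(3, 7, 7),
                                stride=(1, 2, 2), padding=(1, 3, 3),
                                bias=False)
prob = torch.sigmoid(model(x))  # predicted grade >= 2 dysphagia risk
```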