Page 65 of 2372364 results

TME-guided deep learning predicts chemotherapy and immunotherapy response in gastric cancer with attention-enhanced residual Swin Transformer.

Sang S, Sun Z, Zheng W, Wang W, Islam MT, Chen Y, Yuan Q, Cheng C, Xi S, Han Z, Zhang T, Wu L, Li W, Xie J, Feng W, Chen Y, Xiong W, Yu J, Li G, Li Z, Jiang Y

PubMed · Aug 19, 2025
Adjuvant chemotherapy and immune checkpoint blockade can elicit durable anti-tumor responses, but the lack of effective biomarkers limits their therapeutic benefit. Using multi-cohort data from 3,095 patients with gastric cancer, we propose an attention-enhanced residual Swin Transformer network to predict chemotherapy response (the main task), with two prediction subtasks (ImmunoScore and periostin [POSTN]) serving as intermediate tasks to improve the model's performance. We further assess whether the model can identify which patients would benefit from immunotherapy. The deep learning model achieves high accuracy in predicting chemotherapy response and the tumor microenvironment markers (ImmunoScore and POSTN). We also find that the model can identify which patients may benefit from checkpoint blockade immunotherapy. This approach offers precise chemotherapy and immunotherapy response predictions, opening avenues for personalized treatment options. Prospective studies are warranted to validate its clinical utility.
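The main-task/subtask arrangement described above is a standard multi-task learning setup: losses for the intermediate targets (here, ImmunoScore and POSTN) are added to the main chemotherapy-response loss with smaller weights. A minimal sketch of that combination; the loss values and weights below are illustrative, not taken from the paper:

```python
def multitask_loss(losses, weights):
    """Weighted sum of a main-task loss and auxiliary-task losses,
    as used when intermediate tasks regularize the main prediction."""
    return sum(w * l for w, l in zip(weights, losses))

# losses: [chemotherapy response, ImmunoScore subtask, POSTN subtask]
# weights: main task counted fully, subtasks down-weighted (hypothetical values)
total = multitask_loss([0.40, 0.25, 0.30], [1.0, 0.3, 0.3])
```

Tuning the subtask weights trades off how strongly the intermediate tasks shape the shared representation versus the main objective.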

Multi-View Echocardiographic Embedding for Accessible AI Development

Tohyama, T., Han, A., Yoon, D., Paik, K., Gow, B., Izath, N., Kpodonu, J., Celi, L. A.

medRxiv preprint · Aug 19, 2025
Background and Aims: Echocardiography serves as a cornerstone of cardiovascular diagnostics through multiple standardized imaging views. While recent AI foundation models demonstrate superior capabilities across cardiac imaging tasks, their massive computational requirements and reliance on large-scale datasets create accessibility barriers, limiting AI development to well-resourced institutions. Vector embedding approaches offer promising solutions by leveraging compact representations from original medical images for downstream applications. Furthermore, demographic fairness remains critical, as AI models may incorporate biases that confound clinically relevant features. We developed a multi-view encoder framework to address computational accessibility while investigating demographic fairness challenges.
Methods: We utilized the MIMIC-IV-ECHO dataset (7,169 echocardiographic studies) to develop a transformer-based multi-view encoder that aggregates view-level representations into study-level embeddings. The framework incorporated adversarial learning to suppress demographic information while maintaining clinical performance. We evaluated performance across 21 binary classification tasks encompassing echocardiographic measurements and clinical diagnoses, comparing against foundation model baselines with varying adversarial weights.
Results: The multi-view encoder achieved a mean improvement of 9.0 AUC points (12.0% relative improvement) across clinical tasks compared to foundation model embeddings. Performance remained robust with limited echocardiographic views compared to the conventional approach. However, adversarial learning showed limited effectiveness in reducing demographic shortcuts, with stronger weighting substantially compromising diagnostic performance.
Conclusions: Our framework democratizes advanced cardiac AI capabilities, enabling substantial diagnostic improvements without massive computational infrastructure.
While algorithmic approaches to demographic fairness showed limitations, the multi-view encoder provides a practical pathway for broader AI adoption in cardiovascular medicine with enhanced efficiency in real-world clinical settings.
Structured graphical abstract
Key Question: Can multi-view encoder frameworks achieve superior diagnostic performance compared to foundation model embeddings while reducing computational requirements and maintaining robust performance with fewer echocardiographic views for cardiac AI applications?
Key Finding: The multi-view encoder achieved a 12.0% relative improvement (9.0 AUC points) across 21 cardiac tasks compared to foundation model baselines, with efficient 512-dimensional vector embeddings and robust performance using fewer echocardiographic views.
Take-home Message: Vector embedding approaches with attention-based multi-view integration significantly improve cardiac diagnostic performance while reducing computational requirements, offering a pathway toward more efficient AI implementation in clinical settings.
[Graphical abstract figure not reproduced.]
Translational Perspective: Our proposed multi-view encoder framework overcomes critical barriers to the widespread adoption of artificial intelligence in echocardiography. By dramatically reducing computational requirements, the multi-view encoder approach allows smaller healthcare institutions to develop sophisticated AI models locally. The framework maintains robust performance with fewer echocardiographic examinations, which addresses real-world clinical constraints where comprehensive imaging is not feasible due to patient factors or time limitations.
This technology provides a practical way to democratize advanced cardiac AI capabilities, which could improve access to cardiovascular care across diverse healthcare settings while reducing dependence on proprietary datasets and massive computational resources.
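The core aggregation step described above, collapsing per-view embeddings into one study-level embedding, can be sketched as simple attention pooling. This is a minimal illustration of the idea, not the authors' architecture; dimensions, values, and the scoring vector are toy assumptions (the paper's embeddings are 512-dimensional):

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention_pool(views, w):
    """Aggregate per-view embeddings into one study-level embedding.

    views: list of per-view embedding vectors (each a list of floats)
    w:     attention scoring vector (same dimension as each embedding)
    Returns (study_embedding, attention_weights).
    """
    scores = [sum(vi * wi for vi, wi in zip(v, w)) for v in views]
    alpha = softmax(scores)          # weights over views, sum to 1
    dim = len(views[0])
    emb = [sum(alpha[k] * views[k][j] for k in range(len(views)))
           for j in range(dim)]      # weighted sum of view embeddings
    return emb, alpha

# Three echocardiographic views with toy 4-dimensional embeddings
views = [[0.2, 0.1, 0.0, 0.5],
         [0.9, 0.3, 0.4, 0.1],
         [0.4, 0.8, 0.2, 0.3]]
w = [1.0, 0.5, 0.0, 0.0]
study_emb, alpha = attention_pool(views, w)
```

Because the pooled embedding is a convex combination of the available views, the scheme degrades gracefully when some views are missing, consistent with the robustness to limited views reported above.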

Automated Protocol Suggestions for Cranial MRI Examinations Using Locally Fine-tuned BERT Models.

Boschenriedter C, Rubbert C, Vach M, Caspers J

PubMed · Aug 18, 2025
Selection of appropriate imaging sequence protocols for cranial magnetic resonance imaging (MRI) is crucial to address the medical question and adequately support patient care. Inappropriate protocol selection can compromise diagnostic accuracy, extend scan duration, and increase the risk of misdiagnosis. Typically, radiologists determine scanning protocols based on their expertise, a process that can be time-consuming and subject to variability. Language models offer the potential to streamline this process. This study investigates the capability of bidirectional encoder representations from transformers (BERT)-based models to suggest appropriate MRI protocols based on referral information. A total of 410 anonymized electronic referrals for cranial MRI from a local order-entry system were categorized into nine protocol classes by an experienced neuroradiologist. Locally hosted instances of four different pre-trained BERT-based classifiers (BERT, ModernBERT, GottBERT, and medBERT.de) were trained to classify protocols based on referral entries, including preliminary diagnoses, prior treatment history, and clinical questions. Each model was additionally fine-tuned for local language on a large dataset of electronic referrals. The model based on medBERT.de with local-language fine-tuning performed best, correctly predicting 81% of all protocols and achieving a macro-F1 score of 0.71, with macro-precision and macro-recall of 0.73 and 0.71, respectively. Moreover, local-language fine-tuning led to performance improvements across all models. These results demonstrate the potential of language models to predict MRI protocols, even with limited training data. This approach could accelerate and standardize radiological protocol selection, offering significant benefits for clinical workflows.
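The macro-averaged metrics reported above treat all nine protocol classes equally, regardless of how common each protocol is. A minimal sketch of how macro-F1 is computed; the protocol labels below are hypothetical placeholders, not the study's actual classes:

```python
def macro_f1(y_true, y_pred, classes):
    """Macro-averaged F1: unweighted mean of per-class F1 scores."""
    f1s = []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

# Hypothetical protocol labels for illustration
y_true = ["stroke", "tumor", "stroke", "epilepsy"]
y_pred = ["stroke", "tumor", "tumor", "epilepsy"]
score = macro_f1(y_true, y_pred, ["stroke", "tumor", "epilepsy"])
```

Because rare protocols count as much as frequent ones, macro-F1 is a stricter summary than plain accuracy for an imbalanced nine-class problem, which is why the study's 0.71 macro-F1 sits below its 81% accuracy.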

Machine learning driven diagnostic pathway for clinically significant prostate cancer: the role of micro-ultrasound.

Saitta C, Buffi N, Avolio P, Beatrici E, Paciotti M, Lazzeri M, Fasulo V, Cella L, Garofano G, Piccolini A, Contieri R, Nazzani S, Silvani C, Catanzaro M, Nicolai N, Hurle R, Casale P, Saita A, Lughezzani G

PubMed · Aug 18, 2025
Detecting clinically significant prostate cancer (csPCa) remains a top priority in delivering high-quality care, yet consensus on an optimal diagnostic pathway is constantly evolving. In this study, we present an innovative diagnostic approach, leveraging a machine learning model tailored to the emerging role of prostate micro-ultrasound (micro-US) in csPCa diagnosis. We queried our prospective database for patients who underwent micro-US for a clinical suspicion of prostate cancer. CsPCa was defined as any Gleason grade group > 1. The primary outcome was the development of a diagnostic pathway that integrates clinical and radiological findings using a machine learning algorithm. The dataset was divided into training (70%) and testing subsets. The Boruta algorithm was used for variable selection; based on the importance coefficients, a multivariable logistic regression (MLR) model was then fitted to predict csPCa. A Classification and Regression Tree (CART) model was fitted to create the decision tree. Model accuracy was tested with receiver operating characteristic (ROC) analysis using the estimated area under the curve (AUC). Overall, 1,422 patients were analysed. Multivariable LR revealed PRI-MUS score ≥ 3 (OR 4.37, p < 0.001), PI-RADS score ≥ 3 (OR 2.01, p < 0.001), PSA density ≥ 0.15 (OR 2.44, p < 0.001), positive DRE (OR 1.93, p < 0.001), anterior lesions (OR 1.49, p = 0.004), family history of prostate cancer (OR 1.54, p = 0.005), and increasing age (OR 1.031, p < 0.001) as the best predictors of csPCa, with an AUC of 83% in the validation cohort, 78% sensitivity, 72.1% specificity, and 81% negative predictive value. CART analysis identified an elevated PRI-MUS score as the main node for stratifying the cohort. By integrating clinical features, serum biomarkers, and imaging findings, we have developed a point-of-care model that accurately predicts the presence of csPCa.
Our findings support a paradigm shift towards adopting micro-US as a first-level diagnostic tool for csPCa detection, potentially optimizing clinical decision-making. This approach could improve the identification of patients at higher risk for csPCa and guide the selection of the most appropriate diagnostic exams. External validation is essential to confirm these results.
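A CART model like the one above yields human-readable branching rules. The sketch below mimics that structure using the abstract's thresholds (PRI-MUS ≥ 3 as the root node, PI-RADS ≥ 3, PSA density ≥ 0.15); the split order below the root and the risk labels are hypothetical illustrations, not the study's fitted tree, and nothing here is clinical guidance:

```python
def stratify_risk(primus, pirads, psa_density):
    """Illustrative CART-style stratification inspired by the abstract.

    Root split on PRI-MUS score (the main node reported); the
    remaining splits are hypothetical for demonstration only.
    """
    if primus >= 3:
        return "high risk"
    if pirads >= 3 or psa_density >= 0.15:
        return "intermediate risk"
    return "low risk"

label = stratify_risk(primus=4, pirads=2, psa_density=0.10)
```

The appeal of such a tree at the point of care is that each decision is a single comparison a clinician can verify, unlike an opaque score.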

Reproducible meningioma grading across multi-center MRI protocols via hybrid radiomic and deep learning features.

Saadh MJ, Albadr RJ, Sur D, Yadav A, Roopashree R, Sangwan G, Krithiga T, Aminov Z, Taher WM, Alwan M, Jawad MJ, Al-Nuaimi AMA, Farhood B

PubMed · Aug 18, 2025
This study aimed to create a reliable method for preoperative grading of meningiomas by combining radiomic features and deep learning-based features extracted using a 3D autoencoder. The goal was to utilize the strengths of both handcrafted radiomic features and deep learning features to improve accuracy and reproducibility across different MRI protocols. The study included 3,523 patients with histologically confirmed meningiomas, consisting of 1,900 low-grade (Grade I) and 1,623 high-grade (Grades II and III) cases. Radiomic features were extracted from T1-contrast-enhanced and T2-weighted MRI scans using the Standardized Environment for Radiomics Analysis (SERA). Deep learning features were obtained from the bottleneck layer of a 3D autoencoder integrated with attention mechanisms. Feature selection was performed using Principal Component Analysis (PCA) and Analysis of Variance (ANOVA). Classification was done using machine learning models like XGBoost, CatBoost, and stacking ensembles. Reproducibility was evaluated using the Intraclass Correlation Coefficient (ICC), and batch effects were harmonized with the ComBat method. Performance was assessed based on accuracy, sensitivity, and the area under the receiver operating characteristic curve (AUC). For T1-contrast-enhanced images, combining radiomic and deep learning features provided the highest AUC of 95.85% and accuracy of 95.18%, outperforming models using either feature type alone. T2-weighted images showed slightly lower performance, with the best AUC of 94.12% and accuracy of 93.14%. Deep learning features performed better than radiomic features alone, demonstrating their strength in capturing complex spatial patterns. The end-to-end 3D autoencoder with T1-contrast images achieved an AUC of 92.15%, accuracy of 91.14%, and sensitivity of 92.48%, surpassing T2-weighted imaging models. 
Reproducibility analysis showed high reliability (ICC > 0.75) for 127 out of 215 features, ensuring consistent performance across multi-center datasets. The proposed framework effectively integrates radiomic and deep learning features to provide a robust, non-invasive, and reproducible approach for meningioma grading. Future research should validate this framework in real-world clinical settings and explore adding clinical parameters to enhance its prognostic value.
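The reproducibility criterion above (ICC > 0.75 for 127 of 215 features) rests on the intraclass correlation coefficient. A minimal sketch of a one-way random-effects ICC(1,1) on repeated feature measurements; the toy data are illustrative, and the paper does not specify which ICC variant was used:

```python
def icc_1_1(data):
    """One-way random-effects ICC(1,1).

    data: one row per subject; each row holds k repeated measurements
    of the same feature (e.g., across scanners or protocols).
    """
    n, k = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n * k)
    means = [sum(row) / k for row in data]
    # Between-subject and within-subject mean squares
    msb = k * sum((m - grand) ** 2 for m in means) / (n - 1)
    msw = sum((x - m) ** 2
              for row, m in zip(data, means) for x in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

icc_perfect = icc_1_1([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])  # identical repeats
icc_shifted = icc_1_1([[1.0, 2.0], [2.0, 3.0], [3.0, 4.0]])  # systematic offset
```

Features whose ICC clears the 0.75 threshold are the ones stable enough to carry across multi-center MRI protocols.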

A systematic review of comparisons of AI and radiologists in the diagnosis of HCC in multiphase CT: implications for practice.

Younger J, Morris E, Arnold N, Athulathmudali C, Pinidiyapathirage J, MacAskill W

PubMed · Aug 18, 2025
This systematic review aims to examine the literature on artificial intelligence (AI) algorithms for diagnosing hepatocellular carcinoma (HCC) among focal liver lesions on multiphase CT images, compared with radiologists, focusing on performance metrics that include at minimum sensitivity and specificity. We searched Embase, PubMed, and Web of Science for studies published from January 2018 to May 2024. Eligible studies evaluated AI algorithms for diagnosing HCC using multiphase CT, with radiologist interpretation as a comparator. The performance of AI models and radiologists was recorded using sensitivity and specificity from each study. TRIPOD+AI was used for quality appraisal and PROBAST was used to assess the risk of bias. Seven of the 3,532 studies reviewed were included. All seven analysed the performance of AI models and radiologists. Two studies additionally assessed performance with and without supplementary clinical information to assist the AI model in diagnosis. Three studies additionally evaluated the performance of radiologists assisted by the AI algorithm. The AI algorithms demonstrated a sensitivity ranging from 63.0% to 98.6% and a specificity of 82.0-98.6%. In comparison, junior radiologists (with less than 10 years of experience) exhibited a sensitivity of 41.2-92.0% and a specificity of 72.2-100%, while senior radiologists (with more than 10 years of experience) achieved a sensitivity between 63.9% and 93.7% and a specificity ranging from 71.9% to 99.9%. AI algorithms demonstrate adequate performance in the diagnosis of HCC from focal liver lesions on multiphase CT images. Across geographic settings, AI could help streamline workflows and improve access to timely diagnosis. However, thoughtful implementation strategies are still needed to mitigate bias and overreliance.

Overview of Multimodal Radiomics and Deep Learning in the Prediction of Axillary Lymph Node Status in Breast Cancer.

Zhao X, Wang M, Wei Y, Lu Z, Peng Y, Cheng X, Song J

PubMed · Aug 18, 2025
Breast cancer is the most prevalent malignancy in women, and the status of the axillary lymph nodes is a pivotal factor in treatment decision-making and prognostic evaluation. With the integration of deep learning algorithms, radiomics has become a transformative tool with increasingly extensive applications across modalities, particularly in oncological imaging. Recent studies of radiomics and deep learning have demonstrated considerable potential for noninvasive diagnosis and prediction in breast cancer across multiple modalities (mammography, ultrasonography, MRI, and PET/CT), specifically for predicting axillary lymph node status. Although significant progress has been achieved in radiomics-based prediction of axillary lymph node metastasis in breast cancer, several methodological and technical challenges remain to be addressed. This comprehensive review incorporates a detailed analysis of the radiomics workflow and model construction strategies. The objective of this review is to synthesize and evaluate current research findings, thereby providing valuable references for precision diagnosis and assessment of axillary lymph node metastasis in breast cancer, while promoting development and advancement in this evolving field.

A Dual-Attention Graph Network for fMRI Data Classification

Amirali Arbab, Zeinab Davarani, Mehran Safayani

arXiv preprint · Aug 18, 2025
Understanding complex neural activity dynamics is crucial for the development of the field of neuroscience. Whereas current functional MRI classification approaches tend to be based on static functional connectivity or cannot capture spatio-temporal relationships comprehensively, we present a new framework that leverages dynamic graph creation and spatio-temporal attention mechanisms for Autism Spectrum Disorder (ASD) diagnosis. Our approach dynamically infers functional brain connectivity in each time interval using transformer-based attention mechanisms, enabling the model to selectively focus on crucial brain regions and time segments. By constructing time-varying graphs that are then processed with Graph Convolutional Networks (GCNs) and transformers, our method successfully captures both localized interactions and global temporal dependencies. Evaluated on a subset of the ABIDE dataset, our model achieves 63.2% accuracy and 60.0% AUC, outperforming static graph-based approaches (e.g., GCN: 51.8%). This validates the efficacy of jointly modeling dynamic connectivity and spatio-temporal context for fMRI classification. The core novelty arises from (1) attention-driven dynamic graph creation that learns temporal brain-region interactions and (2) hierarchical spatio-temporal feature fusion through GCN-transformer fusion.
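The GCN stage described above applies the standard propagation rule ReLU(D^(-1/2)(A+I)D^(-1/2) H W) to each time-window graph. A minimal sketch with a toy 3-region graph; the adjacency, features, and weights below are illustrative, not from the paper:

```python
import math

def matmul(A, B):
    """Plain list-of-lists matrix multiply."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def gcn_layer(A, H, W):
    """One GCN propagation step: ReLU(D^{-1/2} (A+I) D^{-1/2} H W)."""
    n = len(A)
    # Add self-loops, then symmetrically normalize by node degree
    A_hat = [[A[i][j] + (1 if i == j else 0) for j in range(n)]
             for i in range(n)]
    deg = [sum(row) for row in A_hat]
    norm = [[A_hat[i][j] / math.sqrt(deg[i] * deg[j]) for j in range(n)]
            for i in range(n)]
    Z = matmul(matmul(norm, H), W)
    return [[max(0.0, z) for z in row] for row in Z]

# Toy example: 3 brain regions, chain-connected in one time window
A = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]   # adjacency for this window
H = [[1, 0], [0, 1], [1, 1]]            # 2-dim region features
W = [[1, 0], [0, 1]]                    # identity weights for clarity
out = gcn_layer(A, H, W)
```

In the framework above this step would run per time window on the attention-inferred adjacency, with a transformer then fusing the per-window outputs across time.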

Applications of Small Language Models in Medical Imaging Classification with a Focus on Prompt Strategies

Yiting Wang, Ziwei Wang, Jiachen Zhong, Di Zhu, Weiyi Li

arXiv preprint · Aug 18, 2025
Large language models (LLMs) have shown remarkable capabilities in natural language processing and multi-modal understanding. However, their high computational cost, limited accessibility, and data privacy concerns hinder their adoption in resource-constrained healthcare environments. This study investigates the performance of small language models (SLMs) in a medical imaging classification task, comparing different models and prompt designs to identify the optimal combination for accuracy and usability. Using the NIH Chest X-ray dataset, we evaluate multiple SLMs on the task of classifying chest X-ray positions (anteroposterior [AP] vs. posteroanterior [PA]) under three prompt strategies: baseline instruction, incremental summary prompts, and correction-based reflective prompts. Our results show that certain SLMs achieve competitive accuracy with well-crafted prompts, suggesting that prompt engineering can substantially enhance SLM performance in healthcare applications without requiring deep AI expertise from end users.
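The three prompt strategies compared above differ only in how the instruction is framed around the same classification task. A sketch of what such templates might look like; the wording is illustrative, not the authors' actual prompts:

```python
def build_prompt(strategy, findings):
    """Return an AP-vs-PA classification prompt for a small language model.

    strategy: 'baseline', 'incremental_summary', or 'reflective'
    findings: free-text description accompanying the chest X-ray
    """
    prompts = {
        # Baseline instruction: direct classification request
        "baseline": (
            "Classify the chest X-ray view as AP or PA.\n"
            f"Findings: {findings}"
        ),
        # Incremental summary: summarize cues first, then decide
        "incremental_summary": (
            "Step 1: Summarize the key positional cues (clavicles, "
            "scapulae, apparent heart size).\n"
            "Step 2: Using your summary, classify the view as AP or PA.\n"
            f"Findings: {findings}"
        ),
        # Correction-based reflection: answer, then re-check and revise
        "reflective": (
            "Classify the view as AP or PA, then re-check your answer "
            "against the positional cues and correct it if inconsistent.\n"
            f"Findings: {findings}"
        ),
    }
    return prompts[strategy]

prompt = build_prompt("baseline", "scapulae rotated clear of the lung fields")
```

Keeping the task fixed while swapping only the template is what lets the study attribute accuracy differences to prompt design rather than to the model or data.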

Multimodal large language models for medical image diagnosis: Challenges and opportunities.

Zhang A, Zhao E, Wang R, Zhang X, Wang J, Chen E

PubMed · Aug 18, 2025
The integration of artificial intelligence (AI) into radiology has significantly improved diagnostic accuracy and workflow efficiency. Multimodal large language models (MLLMs), which combine natural language processing (NLP) and computer vision techniques, hold the potential to further revolutionize medical image analysis. Despite these advances, the widespread clinical adoption of MLLMs remains limited by challenges such as data quality, interpretability, ethical and regulatory compliance (including adherence to frameworks such as the General Data Protection Regulation [GDPR]), computational demands, and generalizability across diverse patient populations. Addressing these interconnected challenges presents opportunities to enhance MLLM performance and reliability. Priorities for future research include improving model transparency, safeguarding data privacy through federated learning, optimizing multimodal fusion strategies, and establishing standardized evaluation frameworks. By overcoming these barriers, MLLMs can become essential tools in radiology, supporting clinical decision-making and improving patient outcomes.
