
Explainable multimodal deep learning for predicting thyroid cancer lateral lymph node metastasis using ultrasound imaging.

Shen P, Yang Z, Sun J, Wang Y, Qiu C, Wang Y, Ren Y, Liu S, Cai W, Lu H, Yao S

PubMed · Aug 1 2025
Preoperative prediction of lateral lymph node metastasis is clinically crucial for guiding surgical strategy and prognosis assessment, yet precise prediction methods are lacking. We therefore develop the Lateral Lymph Node Metastasis Network (LLNM-Net), a bidirectional-attention deep-learning model that fuses multimodal data (preoperative ultrasound images, radiology reports, pathological findings, and demographics) from 29,615 patients and 9836 surgical cases across seven centers. Integrating nodule morphology and position with clinical text, LLNM-Net achieves an Area Under the Curve (AUC) of 0.944 and 84.7% accuracy in multicenter testing, outperforming human experts (64.3% accuracy) and surpassing previous models by 7.4%. Here we show that tumors within 0.25 cm of the thyroid capsule carry >72% metastasis risk, with the middle and upper lobes as high-risk regions. Leveraging location, shape, echogenicity, margins, demographics, and clinician inputs, LLNM-Net further attains an AUC of 0.983 for identifying high-risk patients. The model is thus a promising tool for preoperative screening and risk stratification.
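As a rough illustration of the fusion mechanism the abstract describes, the sketch below implements bidirectional cross-attention between an image stream and a text stream in PyTorch. It is not the published LLNM-Net: the embedding dimension, head count, mean-pooling, and classifier head are all assumptions.

```python
import torch
import torch.nn as nn

class BidirectionalCrossAttention(nn.Module):
    """Fuses image and text embeddings with two cross-attention passes.

    Illustrative only: dimensions, pooling, and the fusion head are
    assumptions, not the published LLNM-Net architecture.
    """
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.img_to_txt = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.txt_to_img = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.classifier = nn.Sequential(nn.LayerNorm(2 * dim),
                                        nn.Linear(2 * dim, 1))

    def forward(self, img_tokens, txt_tokens):
        # Image tokens attend to text tokens, and vice versa.
        img_ctx, _ = self.img_to_txt(img_tokens, txt_tokens, txt_tokens)
        txt_ctx, _ = self.txt_to_img(txt_tokens, img_tokens, img_tokens)
        # Pool each stream and emit a metastasis-risk logit.
        fused = torch.cat([img_ctx.mean(dim=1), txt_ctx.mean(dim=1)], dim=-1)
        return self.classifier(fused)

model = BidirectionalCrossAttention()
img = torch.randn(2, 49, 256)  # e.g. feature-map patches from ultrasound
txt = torch.randn(2, 32, 256)  # e.g. encoded radiology-report tokens
print(model(img, txt).shape)   # torch.Size([2, 1])
```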

Transparent brain tumor detection using DenseNet169 and LIME.

Abraham LA, Palanisamy G, Veerapu G

PubMed · Aug 1 2025
A crucial area of research in medical imaging is brain tumor classification, which greatly aids diagnosis and facilitates treatment planning. This paper proposes DenseNet169-LIME-TumorNet, a deep learning model that combines DenseNet169 with LIME to boost both the performance and the interpretability of brain tumor classification. The model was trained and evaluated on the publicly available Brain Tumor MRI Dataset containing 2,870 images spanning three tumor types. DenseNet169-LIME-TumorNet achieves a classification accuracy of 98.78%, outperforming widely used architectures including Inception V3, ResNet50, MobileNet V2, EfficientNet variants, and other DenseNet configurations. The integration of LIME provides visual explanations that enhance transparency and reliability in clinical decision-making. Furthermore, the model demonstrates minimal computational overhead, enabling faster inference and deployment in resource-constrained clinical environments, thereby highlighting its practical utility for real-time diagnostic support. Future work should target stronger generalization through multi-modal learning, hybrid deep learning architectures, and real-time applications for AI-assisted diagnosis.
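As a sketch of the interpretability step, the lime package can wrap any image classifier. Here a torchvision DenseNet169 with an untrained, hypothetical 3-class head stands in for the paper's trained tumor model; the preprocessing and sampling settings are likewise assumptions.

```python
import numpy as np
import torch
from torchvision.models import densenet169
from lime import lime_image

# Hypothetical 3-class head standing in for the paper's tumor classifier.
model = densenet169(weights=None, num_classes=3).eval()

def predict_fn(images):
    # LIME passes float images shaped (N, H, W, 3); convert to NCHW tensors.
    batch = torch.from_numpy(images).permute(0, 3, 1, 2).float()
    with torch.no_grad():
        return torch.softmax(model(batch), dim=1).numpy()

explainer = lime_image.LimeImageExplainer()
mri_slice = np.random.rand(224, 224, 3)  # placeholder for a real MRI slice
explanation = explainer.explain_instance(
    mri_slice, predict_fn, top_labels=1, num_samples=200
)
# Superpixel mask highlighting regions that drove the top prediction.
img, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5
)
```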

Multimodal data curation via interoperability: use cases with the Medical Imaging and Data Resource Center.

Chen W, Whitney HM, Kahaki S, Meyer C, Li H, Sá RC, Lauderdale D, Napel S, Gersing K, Grossman RL, Giger ML

PubMed · Aug 1 2025
Interoperability (the ability of data or tools from non-cooperating resources to integrate or work together with minimal effort) is particularly important for the curation of multimodal datasets from multiple data sources. The Medical Imaging and Data Resource Center (MIDRC), a multi-institutional collaborative initiative to collect, curate, and share medical imaging datasets, has made interoperability with other data commons one of its top priorities. The purpose of this study was to demonstrate the interoperability between MIDRC and two other data repositories, BioData Catalyst (BDC) and the National Clinical Cohort Collaborative (N3C). Using the interoperability capabilities of the data repositories, we built two cohorts for example use cases, each containing clinical and imaging data on matched patients. The representativeness of the cohorts was characterized by comparison with CDC population statistics using the Jensen-Shannon distance. The process and methods of interoperability demonstrated in this work can be utilized by MIDRC, BDC, and N3C users to create multimodal datasets for the development of artificial intelligence/machine learning models.
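The representativeness measure is straightforward to reproduce with SciPy. A minimal sketch, assuming hypothetical age-bracket proportions for a cohort and a CDC reference distribution:

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

# Hypothetical age-bracket proportions: a curated cohort vs. CDC reference.
cohort_dist = np.array([0.10, 0.22, 0.30, 0.25, 0.13])
cdc_dist    = np.array([0.12, 0.20, 0.28, 0.26, 0.14])

# SciPy returns the JS *distance* (square root of the JS divergence).
jsd = jensenshannon(cohort_dist, cdc_dist, base=2)
print(f"Jensen-Shannon distance: {jsd:.4f}")  # 0 = identical, 1 = disjoint
```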

Keyword-based AI assistance in the generation of radiology reports: A pilot study.

Dong F, Nie S, Chen M, Xu F, Li Q

PubMed · Aug 1 2025
Radiology reporting is a time-intensive process, and artificial intelligence (AI) shows potential for assisting with its textual component. In this study, we proposed a keyword-based AI-assisted radiology reporting paradigm and evaluated its potential for clinical implementation. Using MRI data from 100 patients with intracranial tumors, two radiology residents independently wrote both a routine complete report (routine report) and a keyword report for each patient. Based on the keyword reports and a designed prompt, AI-assisted reports were generated (AI-generated reports). The results demonstrated median reporting time reduction ratios of 27.1% and 28.8% (mean, 28.0%) for the two residents, with no significant difference in quality scores between AI-generated and routine reports (p > 0.50). AI-generated reports showed primary diagnosis accuracies of 68.0% (Resident 1) and 76.0% (Resident 2) (mean, 72.0%). These findings suggest that the keyword-based AI-assisted reporting paradigm has significant potential for clinical translation.
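A minimal sketch of what the keyword-to-report step could look like; the prompt wording, section structure, and generate() call are assumptions rather than the study's actual design:

```python
# The prompt template and example keywords below are illustrative; any
# LLM chat API could be substituted for the hypothetical generate() call.

PROMPT_TEMPLATE = """You are a radiology reporting assistant.
Expand the keyword findings below into a complete, structured MRI report
with 'Findings' and 'Impression' sections. Do not invent findings that
are not implied by the keywords.

Examination: {exam}
Keyword findings: {keywords}
"""

def build_prompt(exam: str, keywords: list[str]) -> str:
    return PROMPT_TEMPLATE.format(exam=exam, keywords="; ".join(keywords))

prompt = build_prompt(
    "Brain MRI with contrast",
    ["left frontal mass 3.2 cm", "ring enhancement", "perilesional edema",
     "midline shift 4 mm"],
)
print(prompt)  # send to the LLM of choice, e.g. report = generate(prompt)
```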

Development and Validation of a Brain Aging Biomarker in Middle-Aged and Older Adults: Deep Learning Approach.

Li Z, Li J, Li J, Wang M, Xu A, Huang Y, Yu Q, Zhang L, Li Y, Li Z, Wu X, Bu J, Li W

PubMed · Aug 1 2025
Precise assessment of brain aging is crucial for early detection of neurodegenerative disorders and aiding clinical practice. Existing magnetic resonance imaging (MRI)-based methods excel in this task, but they still have room for improvement in capturing local morphological variations across brain regions and preserving the inherent neurobiological topological structures. To develop and validate a deep learning framework incorporating both connectivity and complexity for accurate brain aging estimation, facilitating early identification of neurodegenerative diseases. We used 5889 T1-weighted MRI scans from the Alzheimer's Disease Neuroimaging Initiative dataset. We proposed a novel brain vision graph neural network (BVGN), incorporating neurobiologically informed feature extraction modules and global association mechanisms to provide a sensitive deep learning-based imaging biomarker. Model performance was evaluated using mean absolute error (MAE) against benchmark models, while generalization capability was further validated on an external UK Biobank dataset. We calculated the brain age gap across distinct cognitive states and conducted multiple logistic regressions to compare its discriminative capacity against conventional cognitive-related variables in distinguishing cognitively normal (CN) and mild cognitive impairment (MCI) states. Longitudinal tracking, Cox regression, and Kaplan-Meier plots were used to investigate the longitudinal performance of the brain age gap. The BVGN model achieved an MAE of 2.39 years, surpassing current state-of-the-art approaches while providing an interpretable saliency map and graph-theoretic analysis supported by medical evidence. Furthermore, its performance was validated on the UK Biobank cohort (N=34,352) with an MAE of 2.49 years. The brain age gap derived from BVGN exhibited significant differences across cognitive states (CN vs MCI vs Alzheimer disease; P<.001) and demonstrated higher discriminative capacity between CN and MCI than general cognitive assessments, brain volume features, and apolipoprotein E4 carriage (area under the receiver operating characteristic curve [AUC] of 0.885 vs AUCs ranging from 0.646 to 0.815). The brain age gap showed clinical feasibility when combined with the Functional Activities Questionnaire, with improved discriminative capacity in models achieving lower MAEs (AUC of 0.945 vs 0.923 and 0.911; AUC of 0.935 vs 0.900 and 0.881). An increasing brain age gap identified by BVGN may indicate underlying pathological changes in the CN-to-MCI progression, with each unit increase linked to a 55% (hazard ratio=1.55, 95% CI 1.13-2.13; P=.006) higher risk of cognitive decline in CN individuals and a 29% (hazard ratio=1.29, 95% CI 1.09-1.51; P=.002) increase in individuals with MCI. BVGN offers a precise framework for brain aging assessment, demonstrates strong generalization on an external large-scale dataset, and proposes novel interpretability strategies to elucidate multiregional cooperative aging patterns. The brain age gap derived from BVGN is validated as a sensitive biomarker for early identification of MCI and prediction of cognitive decline, offering substantial potential for clinical applications.
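The survival analysis behind the reported hazard ratios can be sketched with the lifelines package. The data below are simulated stand-ins for the ADNI follow-up data, so the fitted hazard ratio is illustrative only:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 300

# Simulated data: predicted brain age minus chronological age ("gap"),
# follow-up time, and whether cognitive decline occurred.
df = pd.DataFrame({
    "brain_age_gap": rng.normal(0, 3, n),
    "followup_years": rng.uniform(1, 8, n),
    "declined": rng.integers(0, 2, n),
})

cph = CoxPHFitter()
cph.fit(df, duration_col="followup_years", event_col="declined")
# exp(coef) for brain_age_gap is the hazard ratio per one-year increase
# in the gap, analogous to the reported HRs of 1.55 (CN) and 1.29 (MCI).
print(cph.summary[["exp(coef)", "p"]])
```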

Contrast-Enhanced Ultrasound-Based Intratumoral and Peritumoral Radiomics for Discriminating Carcinoma In Situ and Invasive Carcinoma of the Breast.

Zheng Y, Song Y, Wu T, Chen J, Du Y, Liu H, Wu R, Kuang Y, Diao X

PubMed · Aug 1 2025
This study aimed to evaluate the efficacy of a diagnostic model integrating intratumoral and peritumoral radiomic features based on contrast-enhanced ultrasound (CEUS) for differentiation between carcinoma in situ (CIS) and invasive breast carcinoma (IBC). Consecutive cases confirmed by postoperative histopathological analysis were retrospectively gathered, comprising 143 cases of CIS from January 2018 to May 2024 and 186 cases of IBC from May 2022 to May 2024, totaling 322 patients with 329 lesions and complete preoperative CEUS imaging. Intratumoral regions of interest (ROIs) were defined on CEUS peak-phase images with reference to gray-scale mode, while peritumoral ROIs were defined by expanding 2 mm, 5 mm, and 8 mm beyond the tumor margin for radiomic feature extraction. Statistical and machine learning techniques were employed for feature selection. A logistic regression classifier was used to construct radiomic models integrating intratumoral, peritumoral, and clinical features. Model performance was assessed using the area under the curve (AUC). The model incorporating 5 mm peritumoral features with intratumoral and clinical data exhibited superior diagnostic performance, achieving AUCs of 0.927 and 0.911 in the training and test sets, respectively. It outperformed models based only on clinical features or other radiomic configurations, with the 5 mm peritumoral region proving most effective for lesion discrimination. This study highlights the significant potential of combined intratumoral and peritumoral CEUS radiomics for classifying CIS and IBC, with the integration of 5 mm peritumoral features notably enhancing diagnostic accuracy.
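The peritumoral ROI expansion can be approximated by morphological dilation of the tumor mask. A minimal 2D sketch, assuming an isotropic pixel spacing; the study's exact expansion procedure is not detailed in the abstract:

```python
import numpy as np
from scipy.ndimage import binary_dilation

def peritumoral_ring(tumor_mask: np.ndarray, margin_mm: float,
                     pixel_spacing_mm: float) -> np.ndarray:
    """Return the ring extending `margin_mm` beyond the tumor boundary."""
    n_iter = max(1, round(margin_mm / pixel_spacing_mm))
    expanded = binary_dilation(tumor_mask, iterations=n_iter)
    return expanded & ~tumor_mask  # keep the peritumoral shell only

mask = np.zeros((128, 128), dtype=bool)
mask[50:78, 40:70] = True                    # toy intratumoral ROI
ring_5mm = peritumoral_ring(mask, margin_mm=5, pixel_spacing_mm=0.5)
print(ring_5mm.sum(), "peritumoral pixels")  # radiomic features come from here
```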

Segmentation of coronary calcifications with a domain knowledge-based lightweight 3D convolutional neural network.

Santos R, Castro R, Baeza R, Nunes F, Filipe VM, Renna F, Paredes H, Fontes-Carvalho R, Pedrosa J

PubMed · Aug 1 2025
Cardiovascular diseases are the leading cause of death worldwide, with coronary artery disease being the most prevalent. Coronary artery calcifications are critical biomarkers for cardiovascular disease, and their quantification via non-contrast computed tomography is a widely accepted and heavily employed technique for risk assessment. Manual segmentation of these calcifications is a time-consuming task, subject to variability. State-of-the-art methods often employ convolutional neural networks for an automated approach. However, few studies perform these segmentations with 3D architectures, which can gather the anatomical context necessary to distinguish the different coronary arteries. This paper proposes a novel automated approach that uses a lightweight three-dimensional convolutional neural network to perform efficient and accurate segmentation and calcium scoring. Results show that this method achieves Dice coefficients of 0.93 ± 0.02, 0.93 ± 0.03, 0.84 ± 0.02, 0.63 ± 0.06 and 0.89 ± 0.03 for the foreground, left anterior descending artery (LAD), left circumflex artery (LCX), left main artery (LM) and right coronary artery (RCA) calcifications, respectively, outperforming other state-of-the-art architectures. External cohort validation further demonstrated the method's generalization and its applicability in different clinical scenarios. In conclusion, the proposed lightweight 3D convolutional neural network demonstrates high efficiency and accuracy, outperforming state-of-the-art methods and showing robust generalization potential.
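The evaluation metric is easy to reproduce. Below is a per-artery Dice computation on toy 3D label maps; the label-to-artery assignments are assumptions for illustration:

```python
import numpy as np

def dice(pred: np.ndarray, target: np.ndarray, label: int) -> float:
    """Dice coefficient for one coronary-artery label in a 3D volume."""
    p, t = pred == label, target == label
    denom = p.sum() + t.sum()
    return 2.0 * np.logical_and(p, t).sum() / denom if denom else 1.0

# Toy 3D label maps: 0 = background, 1..4 = LAD, LCX, LM, RCA calcifications.
rng = np.random.default_rng(0)
pred = rng.integers(0, 5, size=(32, 64, 64))
target = rng.integers(0, 5, size=(32, 64, 64))
for lbl, name in enumerate(["LAD", "LCX", "LM", "RCA"], start=1):
    print(f"{name}: {dice(pred, target, lbl):.3f}")
```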

An RF-based end-to-end Breast Cancer Prediction algorithm.

Win KN

PubMed · Aug 1 2025
Breast cancer has become a leading cause of cancer-related deaths among women. Early detection and accurate prediction of breast cancer play a crucial role in improving quality of life. Many scientists have concentrated on developing algorithms and computer-aided diagnosis applications for this task. However, although much research has been conducted, feature-based prediction, in which measured breast cancer features are fed into the system to predict an outcome, remains underexplored. In this regard, this paper proposes RF-BCP, a Breast Cancer Prediction algorithm based on Random Forest that takes feature inputs to predict cancer. For the evaluation of the proposed algorithm, two datasets were used, namely the Breast Cancer dataset and a curated mammography dataset, and its accuracy was compared with that of SVM, Gaussian NB, and KNN algorithms. Experimental results show that the proposed algorithm predicts well and outperforms the other machine learning algorithms, supporting decision-making.
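A minimal end-to-end sketch of a Random Forest pipeline in scikit-learn. Whether the bundled Wisconsin diagnostic dataset is the "Breast Cancer dataset" the paper used is an assumption, and the hyperparameters are illustrative defaults rather than the paper's tuned values:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Wisconsin diagnostic dataset bundled with scikit-learn (30 features).
X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# Illustrative defaults, not the paper's tuned configuration.
clf = RandomForestClassifier(n_estimators=200, random_state=42)
clf.fit(X_tr, y_tr)
print(f"accuracy: {accuracy_score(y_te, clf.predict(X_te)):.3f}")
```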

Cerebral Amyloid Deposition With ¹⁸F-Florbetapir PET Mediates Retinal Vascular Density and Cognitive Impairment in Alzheimer's Disease.

Chen Z, He HL, Qi Z, Bi S, Yang H, Chen X, Xu T, Jin ZB, Yan S, Lu J

PubMed · Aug 1 2025
Alzheimer's disease (AD) is accompanied by alterations in retinal vascular density (VD), but the mechanisms remain unclear. This study investigated the relationship among cerebral amyloid-β (Aβ) deposition, VD, and cognitive decline. We enrolled 92 participants, including 47 AD patients and 45 healthy control (HC) participants. VD across retinal subregions was quantified using deep learning-based fundus photography, and cerebral Aβ deposition was measured with ¹⁸F-florbetapir (¹⁸F-AV45) PET/MRI. Using the diameter of the minimum bounding circle of the optic disc as the unit (papilla diameter, PD), VD was calculated in total and within the 0.5-1.0 PD, 1.0-1.5 PD, 1.5-2.0 PD, and 2.0-2.5 PD subregions. The standardized uptake value ratio (SUVR) for Aβ deposition was computed for global and regional cortical areas, using the cerebellar cortex as the reference region. Cognitive performance was assessed with the Mini-Mental State Examination (MMSE) and Montreal Cognitive Assessment (MoCA). Pearson correlation, multiple linear regression, and mediation analyses were used to explore Aβ deposition, VD, and cognition. AD patients exhibited significantly lower VD in all subregions compared to HC (p < 0.05). Reduced VD correlated with higher SUVR in the global cortex and a decline in cognitive abilities (p < 0.05). Mediation analysis indicated that VD influenced MMSE and MoCA through SUVR in the global cortex, with the most pronounced effects observed in the 1.0-1.5 PD range. Retinal VD is associated with cognitive decline, a relationship primarily mediated by cerebral Aβ deposition measured via ¹⁸F-AV45 PET. These findings highlight the potential of retinal VD as a biomarker for early detection in AD.
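The mediation logic (VD → SUVR → cognition) can be sketched with the classic product-of-coefficients approach in statsmodels; the simulated data and effect sizes below are placeholders, not the study's measurements or its exact mediation procedure:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 92  # matches the study's sample size; the data are simulated

vd   = rng.normal(0.45, 0.05, n)                     # retinal vascular density
suvr = 1.2 - 1.5 * vd + rng.normal(0, 0.1, n)        # amyloid burden (mediator)
mmse = 35 - 6 * suvr + 5 * vd + rng.normal(0, 1, n)  # cognition (outcome)

# Path a: exposure -> mediator.
a = sm.OLS(suvr, sm.add_constant(vd)).fit().params[1]
# Paths b and c': mediator and exposure -> outcome, jointly.
X = sm.add_constant(np.column_stack([vd, suvr]))
fit = sm.OLS(mmse, X).fit()
direct, b = fit.params[1], fit.params[2]
print(f"indirect effect (a*b) = {a * b:.3f}, direct effect = {direct:.3f}")
```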

Rapid review: Growing usage of Multimodal Large Language Models in healthcare.

Gupta P, Zhang Z, Song M, Michalowski M, Hu X, Stiglic G, Topaz M

PubMed · Aug 1 2025
Recent advancements in large language models (LLMs) have led to multimodal LLMs (MLLMs), which integrate multiple data modalities beyond text. Although MLLMs show promise, the literature lacks empirical demonstrations of their impact in healthcare. This paper summarizes the applications of MLLMs in healthcare, highlighting their potential to transform health practices. A rapid literature review was conducted in August 2024 using World Health Organization (WHO) rapid-review methodology and PRISMA standards, with searches across four databases (Scopus, Medline, PubMed and ACM Digital Library) and top-tier conferences, including NeurIPS, ICML, AAAI, MICCAI, CVPR, ACL and EMNLP. Articles on MLLM healthcare applications were included for analysis based on inclusion and exclusion criteria. The search yielded 115 articles, of which 39 were included in the final analysis. Of these, 77% appeared online (as preprints or publications) in 2024, reflecting the emergence of MLLMs. 80% of studies were from Asia and North America (mainly China and the US), with Europe lagging. Studies split evenly between evaluations of pre-built MLLMs (60% focused on GPT versions) and development of custom MLLMs/frameworks with task-specific customizations. About 81% of studies examined MLLMs for diagnosis and reporting in radiology, pathology, and ophthalmology, with additional applications in education, surgery, and mental health. Prompting strategies, used in 80% of studies, improved performance in nearly half. However, evaluation practices were inconsistent, with only 67% reporting accuracy. Error analysis was mostly anecdotal, with only 18% categorizing failure types. Only 13% validated explainability through clinician feedback. Clinical deployment was demonstrated in just 3% of studies, and workflow integration, governance, and safety were rarely addressed. MLLMs offer substantial potential for healthcare transformation through multimodal data integration. Yet methodological inconsistencies, limited validation, and underdeveloped deployment strategies highlight the need for standardized evaluation metrics, structured error analysis, and human-centered design to support safe, scalable, and trustworthy clinical adoption.