Page 67 of 2372364 results

End-to-end deep learning for the diagnosis of pelvic and sacral tumors using non-enhanced MRI: a multi-center study.

Yin P, Liu K, Chen R, Liu Y, Lu L, Sun C, Liu Y, Zhang T, Zhong J, Chen W, Yu R, Wang D, Liu X, Hong N

pubmed · papers · Aug 15 2025
This study developed an end-to-end deep learning (DL) model using non-enhanced MRI to diagnose benign and malignant pelvic and sacral tumors (PSTs). Retrospective data from 835 patients across four hospitals were used to train, validate, and test the models. Six diagnostic models with varied input sources were compared on performance (AUC and accuracy/ACC) and against the reading times of three radiologists. The proposed Model SEG-CL-NC achieved AUC/ACC of 0.823/0.776 (Internal Test Set 1) and 0.836/0.781 (Internal Test Set 2). In External Dataset Centers 2, 3, and 4, its ACC was 0.714, 0.740, and 0.756, comparable to contrast-enhanced models and radiologists (P > 0.05), while its diagnosis time was significantly shorter than that of radiologists (P < 0.01). These results suggest that Model SEG-CL-NC achieves performance comparable to contrast-enhanced models and radiologists in diagnosing benign and malignant PSTs, offering an accurate, efficient, and cost-effective tool for clinical practice.
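The AUC and accuracy figures reported throughout these abstracts can be computed without any imaging machinery. A minimal sketch (toy labels and scores, not the study's data):

```python
import numpy as np

def auc_score(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) formulation."""
    labels = np.asarray(labels)
    scores = np.asarray(scores)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    # Fraction of (positive, negative) pairs ranked correctly; ties count half.
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

def accuracy(labels, scores, threshold=0.5):
    """Accuracy (ACC) after binarizing scores at a decision threshold."""
    preds = (np.asarray(scores) >= threshold).astype(int)
    return float((preds == np.asarray(labels)).mean())
```

The rank-sum form makes explicit what an AUC of 0.823 means: in about 82% of (malignant, benign) pairs, the malignant case receives the higher score.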

Radiomics in pediatric brain tumors: from images to insights.

Rai P, Ahmed S, Mahajan A

pubmed · papers · Aug 15 2025
Radiomics has emerged as a promising non-invasive imaging approach in pediatric neuro-oncology, offering the ability to extract high-dimensional quantitative features from routine MRI to support diagnosis, risk stratification, molecular characterization, and outcome prediction. Pediatric brain tumors, which differ significantly from adult tumors in biology and imaging appearance, present unique diagnostic and prognostic challenges. Studies combining radiomics with machine learning algorithms - including support vector machines, random forests, and deep learning CNNs - have demonstrated strong performance in classifying tumor types such as medulloblastoma, ependymoma, and gliomas and in predicting molecular subgroups and mutations such as H3K27M and BRAF, with AUCs ranging from 0.75 to 0.98 for tumor classification and 0.77 to 0.88 for molecular subgroup prediction, across cohorts of 50 to over 450 patients with internal cross-validation and, in some cases, external validation. In resource-limited settings or regions with limited radiologist manpower, radiomics-based tools could help augment diagnostic accuracy and consistency, serving as decision support to prioritize patients for further evaluation or biopsy. Emerging applications such as radio-immunomics and radio-pathomics may further enhance understanding of tumor biology but remain investigational. Despite its potential, clinical translation faces notable barriers, including limited pediatric-specific datasets, variable imaging protocols, and the lack of standardized, reproducible workflows. Multi-institutional collaboration, harmonized pipelines, and prospective validation are essential next steps. Radiomics should be viewed as a supplementary tool that complements existing clinical and pathological frameworks, supporting more informed and equitable care in pediatric brain tumor management.
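Radiomics pipelines of the kind this review describes begin by reducing a tumor ROI to quantitative features before any classifier is trained. A minimal numpy sketch of a few first-order features (the 32-bin histogram is a common but arbitrary choice, not taken from the review):

```python
import numpy as np

def first_order_features(roi):
    """A few first-order radiomics features from a tumor ROI intensity array."""
    x = np.asarray(roi, dtype=float).ravel()
    mean = float(x.mean())
    variance = float(x.var())
    # Shannon entropy over a fixed-bin intensity histogram.
    hist, _ = np.histogram(x, bins=32)
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = float(-(p * np.log2(p)).sum())
    return {"mean": mean, "variance": variance, "entropy": entropy}
```

In a full pipeline, hundreds of such features (first-order, shape, texture) would be fed to a classifier such as a random forest or SVM, as in the studies cited above.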

SMAS: Structural MRI-based AD Score using Bayesian supervised VAE.

Nemali A, Bernal J, Yakupov R, D S, Dyrba M, Incesoy EI, Mukherjee S, Peters O, Ersözlü E, Hellmann-Regen J, Preis L, Priller J, Spruth E, Altenstein S, Lohse A, Schneider A, Fliessbach K, Kimmich O, Wiltfang J, Hansen N, Schott B, Rostamzadeh A, Glanz W, Butryn M, Buerger K, Janowitz D, Ewers M, Perneczky R, Rauchmann B, Teipel S, Kilimann I, Goerss D, Laske C, Sodenkamp S, Spottke A, Coenjaerts M, Brosseron F, Lüsebrink F, Dechent P, Scheffler K, Hetzer S, Kleineidam L, Stark M, Jessen F, Duzel E, Ziegler G

pubmed · papers · Aug 15 2025
This study introduces the Structural MRI-based Alzheimer's Disease Score (SMAS), a novel index intended to quantify Alzheimer's Disease (AD)-related morphometric patterns using a deep learning Bayesian-supervised Variational Autoencoder (Bayesian-SVAE). The SMAS index was constructed using baseline structural MRI data from the DELCODE study and evaluated longitudinally in two independent cohorts: DELCODE (n=415) and ADNI (n=190). Our findings indicate that SMAS has strong associations with cognitive performance (DELCODE: r=-0.83; ADNI: r=-0.62), age (DELCODE: r=0.50; ADNI: r=0.28), hippocampal volume (DELCODE: r=-0.44; ADNI: r=-0.66), and total gray matter volume (DELCODE: r=-0.42; ADNI: r=-0.47), suggesting its potential as a biomarker for AD-related brain atrophy. Moreover, our longitudinal studies indicated that SMAS may be useful for the early identification and tracking of AD. The model demonstrated significant predictive accuracy in distinguishing cognitively healthy individuals from those with AD (DELCODE: AUC=0.971 at baseline, 0.833 at 36 months; ADNI: AUC=0.817 at baseline, improving to 0.903 at 24 months). Notably, over 36 months, the SMAS index outperformed existing measures such as SPARE-AD and hippocampal volume. The relevance map analysis revealed significant morphological changes in key AD-related brain regions, including the hippocampus, posterior cingulate cortex, precuneus, and lateral parietal cortex, highlighting that SMAS is a sensitive and interpretable biomarker of brain atrophy, suitable for early AD detection and longitudinal monitoring of disease progression.
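The r values quoted above are Pearson correlations between the SMAS index and each clinical variable. A minimal sketch of the computation (toy inputs, not DELCODE/ADNI data):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two paired samples."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm * ym).sum() / np.sqrt((xm ** 2).sum() * (ym ** 2).sum()))
```

A negative r (e.g. r = -0.83 with cognitive performance) means higher SMAS values accompany lower scores on the paired variable.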

Spatio-temporal deep learning with temporal attention for indeterminate lung nodule classification.

Farina B, Carbajo Benito R, Montalvo-García D, Bermejo-Peláez D, Maceiras LS, Ledesma-Carbayo MJ

pubmed · papers · Aug 15 2025
Lung cancer is the leading cause of cancer-related death worldwide. Deep learning-based computer-aided diagnosis (CAD) systems in screening programs enhance malignancy prediction, assist radiologists in decision-making, and reduce inter-reader variability. However, limited research has explored the analysis of repeated annual exams of indeterminate lung nodules to improve accuracy. We introduced a novel spatio-temporal deep learning framework, the global attention convolutional recurrent neural network (globAttCRNN), to predict indeterminate lung nodule malignancy using serial screening computed tomography (CT) images from the National Lung Screening Trial (NLST) dataset. The model comprises a lightweight 2D convolutional neural network for spatial feature extraction and a recurrent neural network with a global attention module to capture the temporal evolution of lung nodules. Additionally, we proposed new strategies to handle missing data in the temporal dimension to mitigate potential biases arising from missing time steps, including temporal augmentation and temporal dropout. Our model achieved an area under the receiver operating characteristic curve (AUC-ROC) of 0.954 in an independent test set of 175 lung nodules, each detected in multiple CT scans over patient follow-up, outperforming baseline single-time and multiple-time architectures. The temporal global attention module prioritizes informative time points, enabling the model to capture key spatial and temporal features while ignoring irrelevant or redundant information. Our evaluation emphasizes its potential as a valuable tool for the diagnosis and stratification of patients at risk of lung cancer.
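Global temporal attention of the kind globAttCRNN uses pools per-timestep hidden states with learned weights, so informative scans dominate the pooled representation. A simplified numpy sketch (the scoring vector `w` and the shapes are assumptions for illustration, not the paper's architecture):

```python
import numpy as np

def softmax(z):
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def global_temporal_attention(H, w):
    """H: (T, d) per-timestep hidden states; w: (d,) learned scoring vector.
    Returns attention weights over time and the attention-pooled context."""
    scores = H @ w            # one scalar relevance score per time step
    alpha = softmax(scores)   # normalized attention over the T time steps
    context = alpha @ H       # (d,) weighted sum of hidden states
    return alpha, context
```

A missing annual exam could then be handled by dropping its time step entirely (temporal dropout) rather than imputing it, since the softmax renormalizes over whatever steps remain.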

FusionFM: Fusing Eye-specific Foundational Models for Optimized Ophthalmic Diagnosis

Ke Zou, Jocelyn Hui Lin Goh, Yukun Zhou, Tian Lin, Samantha Min Er Yew, Sahana Srinivasan, Meng Wang, Rui Santos, Gabor M. Somfai, Huazhu Fu, Haoyu Chen, Pearse A. Keane, Ching-Yu Cheng, Yih Chung Tham

arxiv · preprint · Aug 15 2025
Foundation models (FMs) have shown great promise in medical image analysis by improving generalization across diverse downstream tasks. In ophthalmology, several FMs have recently emerged, but there is still no clear answer to fundamental questions: Which FM performs the best? Are they equally good across different tasks? What if we combine all FMs together? To our knowledge, this is the first study to systematically evaluate both single and fused ophthalmic FMs. To address these questions, we propose FusionFM, a comprehensive evaluation suite, along with two fusion approaches to integrate different ophthalmic FMs. Our framework covers both ophthalmic disease detection (glaucoma, diabetic retinopathy, and age-related macular degeneration) and systemic disease prediction (diabetes and hypertension) based on retinal imaging. We benchmarked four state-of-the-art FMs (RETFound, VisionFM, RetiZero, and DINORET) using standardized datasets from multiple countries and evaluated their performance using AUC and F1 metrics. Our results show that DINORET and RetiZero achieve superior performance in both ophthalmic and systemic disease tasks, with RetiZero exhibiting stronger generalization on external datasets. Regarding fusion strategies, the Gating-based approach provides modest improvements in predicting glaucoma, AMD, and hypertension. Despite these advances, predicting systemic diseases, especially hypertension in external cohorts, remains challenging. These findings provide an evidence-based evaluation of ophthalmic FMs, highlight the benefits of model fusion, and point to strategies for enhancing their clinical applicability.
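A gating-based fusion like the one FusionFM evaluates amounts to a softmax-weighted combination of per-model representations. A minimal numpy sketch (the fixed gate logits stand in for a small gating network that would normally be conditioned on the input):

```python
import numpy as np

def gated_fusion(embeddings, gate_logits):
    """Fuse per-FM embeddings with softmax gate weights.
    embeddings: (n_models, d) array, one row per foundation model;
    gate_logits: (n_models,) unnormalized gate scores."""
    g = np.exp(gate_logits - gate_logits.max())
    g = g / g.sum()                     # gate weights sum to 1
    return g @ np.asarray(embeddings)   # (d,) fused representation
```

With equal logits, the gate reduces to simple averaging; learning the logits lets the fusion lean on whichever FM is most reliable for a given input or task.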

A novel interpreted deep network for Alzheimer's disease prediction based on inverted self attention and vision transformer.

Ibrar W, Khan MA, Hamza A, Rubab S, Alqahtani O, Alouane MT, Teng S, Nam Y

pubmed · papers · Aug 15 2025
Alzheimer's disease (AD) is the most common cause of dementia worldwide. AD causes memory loss and progressive impairment of mental function in aging people, placing a significant burden on patients as well as on society. So far, there is no treatment that can cure AD; however, early diagnosis can slow the disease's progression. Deep learning has shown substantial success in diagnosing AD, but challenges remain due to limited data, improper model selection, and extraction of irrelevant features. In this work, we propose a fully automated framework based on the fusion of a vision transformer and a novel inverted residual bottleneck with self-attention (IRBwSA) for AD diagnosis. In the first step, data augmentation was performed to balance the selected dataset. After that, the vision model was designed and modified according to the dataset. Similarly, a new inverted bottleneck self-attention model was developed. The designed models were trained on the augmented dataset, and the extracted features were fused using a novel search-based approach. Moreover, the designed models were interpreted using an explainable artificial intelligence technique named LIME. The fused features were finally classified using a shallow wide neural network and other classifiers. The experimental process was conducted on an augmented MRI dataset, and 96.1% accuracy and a 96.05% precision rate were obtained. Comparison with a few recent techniques shows the proposed framework's better performance.

Automating the Referral of Bone Metastases Patients With and Without the Use of Large Language Models.

Sangwon KL, Han X, Becker A, Zhang Y, Ni R, Zhang J, Alber DA, Alyakin A, Nakatsuka M, Fabbri N, Aphinyanaphongs Y, Yang JT, Chachoua A, Kondziolka D, Laufer I, Oermann EK

pubmed · papers · Aug 15 2025
Bone metastases, affecting more than 4.8% of patients with cancer annually, and particularly spinal metastases, require urgent intervention to prevent neurological complications. However, the current process of manually reviewing radiological reports leads to potential delays in specialist referrals. We hypothesized that natural language processing (NLP) review of routine radiology reports could automate the referral process for timely multidisciplinary care of spinal metastases. We assessed 3 NLP models-a rule-based regular expression (RegEx) model, GPT-4, and a specialized Bidirectional Encoder Representations from Transformers (BERT) model (NYUTron)-for automated detection and referral of bone metastases. Study inclusion criteria targeted patients with active cancer diagnoses who underwent advanced imaging (computed tomography, MRI, or positron emission tomography) without previous specialist referral. We defined 2 separate tasks: identifying clinically significant bone metastatic terms (lexical detection), and identifying cases needing a specialist follow-up (clinical referral). Models were developed using 3754 hand-labeled advanced imaging studies in 2 phases: phase 1 focused on spine metastases, and phase 2 generalized to bone metastases. Standard performance metrics were evaluated and compared across all stages and tasks. In lexical detection, a simple RegEx achieved the highest performance (sensitivity 98.4%, specificity 97.6%, F1 = 0.965), followed by NYUTron (sensitivity 96.8%, specificity 89.9%, and F1 = 0.787). For the clinical referral task, RegEx also demonstrated superior performance (sensitivity 92.3%, specificity 87.5%, and F1 = 0.936), followed by a fine-tuned NYUTron model (sensitivity 90.0%, specificity 66.7%, and F1 = 0.750). An NLP-based automated referral system can accurately identify patients with bone metastases requiring specialist evaluation. A simple RegEx model excels in syntax-based identification and expert-informed rule generation for efficient referral patient recommendation in comparison with advanced NLP models. This system could significantly reduce missed follow-ups and enhance timely intervention for patients with bone metastases.
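A rule-based lexical detector of the kind the study's RegEx model implements can be sketched in a few lines. The pattern below is illustrative only; the expert-curated expression from the paper is not public:

```python
import re

# Hypothetical pattern: an anatomy term followed within a short window by a
# metastasis term. A production rule set would be far larger and expert-vetted.
BONE_METS_PATTERN = re.compile(
    r"\b(osseous|bon[ey]|vertebral|spinal)\b.{0,40}\bmetasta(sis|ses|tic)\b",
    re.IGNORECASE,
)

def flags_bone_metastasis(report_text):
    """Lexical detection: does the radiology report mention bone metastases?"""
    return BONE_METS_PATTERN.search(report_text) is not None
```

The appeal of this approach, reflected in the results above, is transparency: every flagged report can be traced to a specific matched phrase, which is harder with GPT-4 or BERT-style models.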

BRIEF: BRain-Inspired network connection search with Extensive temporal feature Fusion enhances disease classification

Xiangxiang Cui, Min Zhao, Dongmei Zhi, Shile Qi, Vince D Calhoun, Jing Sui

arxiv · preprint · Aug 15 2025
Existing deep learning models for functional MRI-based classification have limitations in network architecture determination (relying on experience) and feature space fusion (mostly simple concatenation, lacking mutual learning). Inspired by the human brain's mechanism of updating neural connections through learning and decision-making, we proposed a novel BRain-Inspired feature Fusion (BRIEF) framework, which is able to optimize network architecture automatically by incorporating an improved neural network connection search (NCS) strategy and a Transformer-based multi-feature fusion module. Specifically, we first extracted 4 types of fMRI temporal representations, i.e., time series (TCs), static/dynamic functional connection (FNC/dFNC), and multi-scale dispersion entropy (MsDE), to construct four encoders. Within each encoder, we employed a modified Q-learning to dynamically optimize the NCS to extract high-level feature vectors, where the NCS is formulated as a Markov Decision Process. Then, all feature vectors were fused via a Transformer, leveraging both stable/time-varying connections and multi-scale dependencies across different brain regions to achieve the final classification. Additionally, an attention module was embedded to improve interpretability. The classification performance of our proposed BRIEF was compared with 21 state-of-the-art models by discriminating two mental disorders from healthy controls: schizophrenia (SZ, n=1100) and autism spectrum disorder (ASD, n=1550). BRIEF demonstrated significant improvements of 2.2% to 12.1% compared to 21 algorithms, reaching an AUC of 91.5% ± 0.6% for SZ and 78.4% ± 0.5% for ASD, respectively. This is the first attempt to incorporate a brain-inspired, reinforcement learning strategy to optimize fMRI-based mental disorder classification, showing significant potential for identifying precise neuroimaging biomarkers.
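The modified Q-learning driving the network connection search rests on the standard tabular Q-learning update, where states would encode partial architectures and actions add or drop connections. A minimal sketch of that update rule (the learning rate and discount are illustrative defaults, not the paper's settings):

```python
import numpy as np

def q_update(Q, s, a, r, s_next, alpha=0.5, gamma=0.9):
    """One tabular Q-learning step:
    Q(s, a) <- Q(s, a) + alpha * (r + gamma * max_a' Q(s', a') - Q(s, a)).
    Q: (n_states, n_actions) table, updated in place and returned."""
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
    return Q
```

Formulating the search as a Markov Decision Process means the reward (e.g. validation performance of the resulting architecture) propagates back through this update to earlier connection choices.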

Deep learning radiomics of elastography for diagnosing compensated advanced chronic liver disease: an international multicenter study.

Lu X, Zhang H, Kuroda H, Garcovich M, de Ledinghen V, Grgurević I, Linghu R, Ding H, Chang J, Wu M, Feng C, Ren X, Liu C, Song T, Meng F, Zhang Y, Fang Y, Ma S, Wang J, Qi X, Tian J, Yang X, Ren J, Liang P, Wang K

pubmed · papers · Aug 15 2025
Accurate, noninvasive diagnosis of compensated advanced chronic liver disease (cACLD) is essential for effective clinical management but remains challenging. This study aimed to develop a deep learning-based radiomics model using international multicenter data and to evaluate its performance by comparing it to the two-dimensional shear wave elastography (2D-SWE) cut-off method covering multiple countries or regions, etiologies, and ultrasound device manufacturers. This retrospective study included 1937 adult patients with chronic liver disease due to hepatitis B, hepatitis C, or metabolic dysfunction-associated steatotic liver disease. All patients underwent 2D-SWE imaging and liver biopsy at 17 centers across China, Japan, and Europe using devices from three manufacturers (SuperSonic Imagine, General Electric, and Mindray). The proposed generalized deep learning radiomics of elastography model integrated both elastographic images and liver stiffness measurements and was trained and tested on stratified internal and external datasets. A total of 1937 patients with 9472 2D-SWE images were included in the statistical analysis. Compared to 2D-SWE, the model achieved a higher area under the receiver operating characteristic curve (AUC) (0.89 vs 0.83, P = 0.025). It also achieved a highly consistent diagnosis across all subanalyses (P values: 0.21-0.91), whereas 2D-SWE exhibited different AUCs in the country or region (P < 0.001) and etiology (P = 0.005) subanalyses but not in the manufacturer subanalysis (P = 0.24). The model demonstrated more accurate and robust performance in noninvasive cACLD diagnosis than 2D-SWE across different countries or regions, etiologies, and manufacturers.
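Integrating elastographic images with liver stiffness measurements, as the model above does, can be as simple as appending the standardized scalar to the deep image embedding before the classification head. A minimal sketch (the embedding, and the standardization constants for stiffness in kPa, are illustrative assumptions, not the study's values):

```python
import numpy as np

def combine_image_and_lsm(image_embedding, lsm_kpa, lsm_mean=10.0, lsm_std=5.0):
    """Concatenate a deep 2D-SWE image embedding with the z-scored liver
    stiffness measurement (LSM). Normalization constants are illustrative."""
    lsm_feature = (lsm_kpa - lsm_mean) / lsm_std
    return np.concatenate([np.asarray(image_embedding, dtype=float), [lsm_feature]])
```

Keeping the stiffness value as an explicit input, rather than relying on the image alone, is one plausible reason such a model can stay consistent across etiologies where fixed 2D-SWE cut-offs shift.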

Prospective validation of an artificial intelligence assessment in a cohort of applicants seeking financial compensation for asbestosis (PROSBEST).

Smesseim I, Lipman KBWG, Trebeschi S, Stuiver MM, Tissier R, Burgers JA, de Gooijer CJ

pubmed · papers · Aug 15 2025
Asbestosis, a rare pneumoconiosis marked by diffuse pulmonary fibrosis, arises from prolonged asbestos exposure. Its diagnosis, guided by the Helsinki criteria, relies on exposure history, clinical findings, radiology, and lung function. However, interobserver variability complicates diagnoses and financial compensation. This study prospectively validated the sensitivity of an AI-driven assessment for asbestosis compensation in the Netherlands. Secondary objectives included evaluating specificity, accuracy, predictive values, area under the curve of the receiver operating characteristic (ROC-AUC), area under the precision-recall curve (PR-AUC), and interobserver variability. Between September 2020 and July 2022, 92 adult compensation applicants were assessed using both AI models and pulmonologists' reviews based on Dutch Health Council criteria. The AI model assigned an asbestosis probability score: negative (< 35), uncertain (35-66), or positive (≥ 66). Uncertain cases underwent additional reviews for a final determination. The AI assessment demonstrated sensitivity of 0.86 (95% confidence interval: 0.77-0.95), specificity of 0.85 (0.76-0.97), accuracy of 0.87 (0.79-0.93), ROC-AUC of 0.92 (0.84-0.97), and PR-AUC of 0.95 (0.89-0.99). Despite these strong metrics, the predefined sensitivity target of 98% was unmet, and the probability score underperformed across all metrics compared to internal testing. Pulmonologist reviews showed moderate to substantial interobserver variability. The AI-driven approach thus demonstrated robust accuracy but insufficient sensitivity for validation. Addressing interobserver variability and incorporating objective fibrosis measurements could enhance future reliability in clinical and compensation settings.
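The score-to-category mapping the study describes is a simple threshold rule. A sketch using the published cut-offs (treating a score of exactly 66 as positive, per the "≥ 66" criterion):

```python
def triage(score):
    """Map the AI asbestosis probability score to the study's categories."""
    if score < 35:
        return "negative"
    if score < 66:          # the 35-66 band routes to additional expert review
        return "uncertain"
    return "positive"       # score >= 66
```

The "uncertain" band is what makes the workflow hybrid: only those cases receive the additional pulmonologist reviews for a final determination.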