Deep Learning-Based Multimodal Feature Interaction-Guided Fusion: Enhancing the Evaluation of EGFR in Advanced Lung Adenocarcinoma.

Xu J, Feng B, Chen X, Wu F, Liu Y, Yu Z, Lu S, Duan X, Chen X, Li K, Zhang W, Dai X

PubMed · May 22, 2025
The aim of this study is to develop a deep learning-based multimodal feature interaction-guided fusion (DL-MFIF) framework that integrates macroscopic information from computed tomography (CT) images with microscopic information from whole-slide images (WSIs) to predict epidermal growth factor receptor (EGFR) mutations of primary lung adenocarcinoma in patients with advanced-stage disease. Data from 396 patients with lung adenocarcinoma across two medical institutions were analyzed. The data from 243 cases were divided into a training set (n=145) and an internal validation set (n=98) in a 6:4 ratio, and data from an additional 153 cases from another medical institution were included as an external validation set. All cases included CT scan images and WSIs. To integrate multimodal information, we developed the DL-MFIF framework, which leverages deep learning techniques to capture the interactions between radiomic macrofeatures derived from CT images and microfeatures obtained from WSIs. Compared with other classification models, the DL-MFIF model achieved significantly higher area under the curve (AUC) values. Specifically, the model outperformed the others on both the internal validation set (AUC=0.856, accuracy=0.750) and the external validation set (AUC=0.817, accuracy=0.708). Decision curve analysis (DCA) demonstrated that the model provided superior net benefits (range 0.15-0.87). DeLong's test on the external validation set confirmed the statistical significance of the results (P<0.05). The DL-MFIF model demonstrated excellent performance in evaluating EGFR mutation status in patients with advanced lung adenocarcinoma. This model can effectively aid radiologists in accurately classifying EGFR mutations in patients with primary lung adenocarcinoma, thereby improving treatment outcomes for this population.
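
The abstract does not detail the fusion architecture; below is a minimal PyTorch sketch of one common way to let CT-derived macrofeatures interact with WSI-derived microfeatures via cross-attention. All module names, dimensions, and the classifier head are illustrative assumptions, not the authors' implementation.

```python
# Minimal cross-modal interaction sketch, assuming each modality has already
# been encoded into token sequences of fixed feature size (illustrative only).
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    def __init__(self, ct_dim=256, wsi_dim=256, hidden=128, n_heads=4):
        super().__init__()
        self.ct_proj = nn.Linear(ct_dim, hidden)
        self.wsi_proj = nn.Linear(wsi_dim, hidden)
        # CT tokens attend to WSI tokens (the interaction could also be mirrored)
        self.cross_attn = nn.MultiheadAttention(hidden, n_heads, batch_first=True)
        self.classifier = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, ct_feats, wsi_feats):
        # ct_feats: (B, N_ct, ct_dim); wsi_feats: (B, N_wsi, wsi_dim)
        q = self.ct_proj(ct_feats)
        kv = self.wsi_proj(wsi_feats)
        interacted, _ = self.cross_attn(q, kv, kv)        # CT tokens enriched by WSI context
        pooled = torch.cat([q.mean(dim=1), interacted.mean(dim=1)], dim=-1)
        return self.classifier(pooled)                     # logit for EGFR mutation status

model = CrossModalFusion()
logit = model(torch.randn(2, 16, 256), torch.randn(2, 64, 256))
```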

Predicting Depression in Healthy Young Adults: A Machine Learning Approach Using Longitudinal Neuroimaging Data.

Zhang A, Zhang H

PubMed · May 22, 2025
Accurate prediction of depressive symptoms in healthy individuals can enable early intervention and reduce both individual and societal costs. This study aimed to develop predictive models for depression in young adults using machine learning (ML) techniques and longitudinal data from the Beck Depression Inventory, structural MRI (sMRI), and resting-state functional MRI (rs-fMRI). Feature selection methods, including the least absolute shrinkage and selection operator (LASSO), Boruta, and VSURF, were applied to identify MRI features associated with depression. Support vector machine and random forest algorithms were then used to construct the prediction models. Eight MRI features were identified as predictive of depression, spanning the orbital gyrus, superior frontal gyrus, middle frontal gyrus, parahippocampal gyrus, cingulate gyrus, and inferior parietal lobule. The overlap and differences between the selected features and the brain regions showing significant between-group differences in t-tests suggest that ML provides a distinct perspective on the neural changes associated with depression. Six pairs of prediction models demonstrated varying performance, with accuracies ranging from 0.68 to 0.85 and areas under the curve (AUC) ranging from 0.57 to 0.81. The best-performing model achieved an accuracy of 0.85 and an AUC of 0.80, highlighting the potential of combining sMRI and rs-fMRI features with ML for early depression detection, while also revealing the potential for overfitting in small-sample, high-dimensional settings. Further research is needed to (1) replicate these findings in independent, larger datasets to address potential overfitting and (2) apply other advanced ML techniques and multimodal data fusion to improve model performance.
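
As a point of reference for one of the feature-selection/classifier pairings named above (LASSO followed by a support vector machine), here is a brief scikit-learn sketch; the data arrays are placeholders and the Boruta/VSURF variants are omitted.

```python
# Illustrative LASSO-based feature selection feeding an SVM classifier,
# evaluated with cross-validated AUC (placeholder data, not the study's).
import numpy as np
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LassoCV
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X = np.random.rand(100, 500)          # MRI-derived features (placeholder)
y = np.random.randint(0, 2, 100)      # depressive vs. non-depressive labels (placeholder)

pipeline = make_pipeline(
    StandardScaler(),
    SelectFromModel(LassoCV(cv=5, max_iter=10000)),   # keep features with non-zero LASSO weights
    SVC(kernel="rbf", probability=True),
)
auc = cross_val_score(pipeline, X, y, cv=5, scoring="roc_auc")
print("mean AUC:", auc.mean())
```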

Deep Learning for Automated Prediction of Sphenoid Sinus Pneumatization in Computed Tomography.

Alamer A, Salim O, Alharbi F, Alsaleem F, Almuqbil A, Alhassoon K, Alsunaydih F

PubMed · May 22, 2025
The sphenoid sinus is an important access point for transsphenoidal surgery, but variations in its pneumatization may complicate surgical safety. Deep learning can be used to identify these anatomical variations. We developed a convolutional neural network (CNN) model for the automated prediction of sphenoid sinus pneumatization patterns on computed tomography (CT), tested on mid-sagittal CT images. Two radiologists labeled all CT images into four pneumatization patterns: conchal (type I), presellar (type II), sellar (type III), and postsellar (type IV). The initial dataset included 249 CT images, divided into training (n = 174) and test (n = 75) datasets; the training set was then augmented to 378 images to address its limited size and class imbalance. Following augmentation, the overall diagnostic accuracy of the model improved from 76.71% to 84%, with an area under the curve (AUC) of 0.84, indicating very good diagnostic performance. Subgroup analysis showed excellent results for type IV, with the highest AUC of 0.93, perfect sensitivity (100%), and an F1-score of 0.94. The model also performed robustly for type I, achieving an accuracy of 97.33% and high specificity (99%). These metrics highlight the model's potential for reliable clinical application. The proposed CNN model demonstrates very good diagnostic accuracy in identifying the various sphenoid sinus pneumatization patterns, particularly excelling in type IV, which is crucial for endoscopic sinus surgery because of its higher risk of surgical complications. By assisting radiologists and surgeons, this model enhances the safety of transsphenoidal surgery, highlighting its value, novelty, and applicability in clinical settings.
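
The abstract specifies a four-class CNN trained on an augmented set of mid-sagittal CT images but not the exact architecture or augmentations; the sketch below shows one plausible setup (a fine-tuned ResNet-18 with mild geometric augmentation). The backbone, transforms, and input size are assumptions.

```python
# Hypothetical four-class setup for sphenoid sinus pneumatization typing;
# backbone and augmentations are illustrative choices, not the paper's.
import torch.nn as nn
from torchvision import models, transforms

train_transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomAffine(degrees=10, translate=(0.05, 0.05)),  # mild geometric augmentation
    transforms.ColorJitter(brightness=0.1, contrast=0.1),
    transforms.ToTensor(),
])

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 4)  # conchal, presellar, sellar, postsellar
```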

A Deep Learning Vision-Language Model for Diagnosing Pediatric Dental Diseases

Pham, T.

medRxiv preprint · May 22, 2025
This study proposes a deep learning vision-language model for the automated diagnosis of pediatric dental diseases, with a focus on differentiating between caries and periapical infections. The model integrates visual features extracted from panoramic radiographs using methods of non-linear dynamics and textural encoding with textual descriptions generated by a large language model. These multimodal features are concatenated and used to train a 1D-CNN classifier. Experimental results demonstrate that the proposed model outperforms conventional convolutional neural networks and standalone language-based approaches, achieving high accuracy (90%), sensitivity (92%), precision (92%), and an AUC of 0.96. This work highlights the value of combining structured visual and textual representations in improving diagnostic accuracy and interpretability in dental radiology. The approach offers a promising direction for the development of context-aware, AI-assisted diagnostic tools in pediatric dental care.
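
To make the fusion step concrete, here is a small PyTorch sketch of the concatenate-then-1D-CNN classifier described above. The feature dimensions, layer sizes, and two-class head (caries vs. periapical infection) are assumptions; feature extraction from radiographs and the language model are omitted.

```python
# Sketch of the fusion-and-classification stage: visual and text feature
# vectors are concatenated and classified by a small 1D-CNN (illustrative only).
import torch
import torch.nn as nn

class FusionCNN1D(nn.Module):
    def __init__(self, feat_len=512, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Flatten(),
            nn.Linear(32 * (feat_len // 4), n_classes),   # caries vs. periapical infection
        )

    def forward(self, visual_feats, text_feats):
        fused = torch.cat([visual_feats, text_feats], dim=1)   # (B, feat_len)
        return self.net(fused.unsqueeze(1))                    # add a channel dim for Conv1d

model = FusionCNN1D(feat_len=512)
logits = model(torch.randn(4, 256), torch.randn(4, 256))
```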

Mammography-based artificial intelligence for breast cancer detection, diagnosis, and BI-RADS categorization using multi-view and multi-level convolutional neural networks.

Tan H, Wu Q, Wu Y, Zheng B, Wang B, Chen Y, Du L, Zhou J, Fu F, Guo H, Fu C, Ma L, Dong P, Xue Z, Shen D, Wang M

PubMed · May 21, 2025
We developed an artificial intelligence system (AIS) using multi-view, multi-level convolutional neural networks for breast cancer detection, diagnosis, and BI-RADS categorization support in mammography. Twenty-four thousand eight hundred sixty-six breasts from 12,433 Asian women imaged between August 2012 and December 2018 were enrolled. The study consisted of three parts: (1) evaluation of AIS performance in malignancy diagnosis; (2) stratified analysis of BI-RADS 3-4 subgroups with AIS; and (3) reassessment of BI-RADS 0 breasts with AIS assistance. We further evaluated the AIS in a counterbalance-designed AI-assisted reading study, in which ten radiologists read 1302 cases with and without AIS assistance. The area under the receiver operating characteristic curve (AUC), sensitivity, specificity, accuracy, and F1 score were measured. The AIS yielded AUC values of 0.995, 0.933, and 0.947 for malignancy diagnosis in the validation set, testing set 1, and testing set 2, respectively. Within the BI-RADS 3-4 subgroups with pathological results, the AIS downgraded 83.1% of false positives into benign groups and upgraded 54.1% of false negatives into malignant groups. The AIS also successfully assisted radiologists in identifying 7 out of 43 malignancies initially assigned BI-RADS 0, with a specificity of 96.7%. In the counterbalance-designed AI-assisted study, the average AUC across the ten readers improved significantly with AIS assistance (p = 0.001). The AIS can accurately detect and diagnose breast cancer on mammography and can further serve as a supportive tool for BI-RADS categorization. In summary, an AI risk assessment tool employing deep learning algorithms was developed and validated to enhance breast cancer diagnosis from mammograms, improve risk stratification accuracy, particularly in patients with dense breasts, and serve as a decision support aid for radiologists. The false positive and false negative rates of mammography diagnosis remain high; the AIS yields a high AUC for malignancy diagnosis and supports BI-RADS categorization.
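
The multi-view idea can be illustrated with a highly simplified PyTorch sketch: one backbone per mammographic view (e.g., CC and MLO), with pooled features concatenated before classification. The backbones, feature sizes, and single-logit head are assumptions; the multi-level component of the actual AIS is not reproduced here.

```python
# Simplified two-view fusion network for mammography (illustrative only).
import torch
import torch.nn as nn
from torchvision import models

class MultiViewNet(nn.Module):
    def __init__(self):
        super().__init__()
        def backbone():
            m = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
            m.fc = nn.Identity()           # keep the 512-d pooled features
            return m
        self.cc_net, self.mlo_net = backbone(), backbone()
        self.head = nn.Linear(512 * 2, 1)  # malignant-vs-benign logit

    def forward(self, cc, mlo):
        return self.head(torch.cat([self.cc_net(cc), self.mlo_net(mlo)], dim=1))

model = MultiViewNet()
logit = model(torch.randn(2, 3, 224, 224), torch.randn(2, 3, 224, 224))
```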

An Ultrasound Image-Based Deep Learning Radiomics Nomogram for Differentiating Between Benign and Malignant Indeterminate Cytology (Bethesda III) Thyroid Nodules: A Retrospective Study.

Zhong L, Shi L, Li W, Zhou L, Wang K, Gu L

PubMed · May 21, 2025
Our objective was to develop and validate a deep learning radiomics nomogram (DLRN) based on preoperative ultrasound images and clinical features for predicting the malignancy of thyroid nodules with indeterminate cytology (Bethesda III). Between June 2017 and June 2022, we conducted a retrospective study of 194 patients with surgically confirmed indeterminate cytology (Bethesda III) at our hospital. The training and internal validation cohorts comprised 155 and 39 patients, respectively, in a 7:3 ratio. To facilitate external validation, we additionally selected 80 patients from each of two other medical centers. Utilizing preoperative ultrasound data, we obtained imaging markers encompassing both deep learning and handcrafted radiomic features. After feature selection, we developed a comprehensive diagnostic model and evaluated its predictive value for distinguishing benign from malignant Bethesda III nodules. The model's diagnostic accuracy, calibration, and clinical applicability were systematically assessed. The prediction model, which integrated 512 deep transfer learning (DTL) features extracted from a pre-trained ResNet34 network together with ultrasound radiomics and clinical features, exhibited superior stability in distinguishing between benign and malignant indeterminate thyroid nodules (Bethesda class III). In the validation set, the AUC was 0.92 (95% CI: 0.831-1.000), and the accuracy, sensitivity, specificity, precision, and recall were 0.897, 0.882, 0.909, 0.882, and 0.882, respectively. The comprehensive multidimensional model based on deep transfer learning, ultrasound radiomics features, and clinical characteristics can effectively distinguish benign from malignant indeterminate thyroid nodules (Bethesda class III), providing valuable guidance for treatment selection in these patients.
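
The 512-dimensional DTL feature extraction from a pre-trained ResNet34 can be sketched as follows; the downstream logistic regression combining DTL, radiomic, and clinical features is an assumption standing in for the nomogram, and all data arrays are placeholders.

```python
# Sketch: 512-d deep transfer learning features from a pre-trained ResNet34,
# concatenated with radiomic and clinical features for a simple classifier.
import numpy as np
import torch
import torch.nn as nn
from torchvision import models
from sklearn.linear_model import LogisticRegression

resnet = models.resnet34(weights=models.ResNet34_Weights.DEFAULT)
resnet.fc = nn.Identity()                             # output: 512-d pooled feature vector
resnet.eval()

with torch.no_grad():
    ultrasound_batch = torch.randn(8, 3, 224, 224)    # placeholder preprocessed images
    dtl_features = resnet(ultrasound_batch).numpy()   # shape (8, 512)

radiomic_features = np.random.rand(8, 30)             # placeholder handcrafted features
clinical_features = np.random.rand(8, 5)              # placeholder clinical variables
X = np.hstack([dtl_features, radiomic_features, clinical_features])
y = np.random.randint(0, 2, 8)                        # benign vs. malignant (placeholder)

clf = LogisticRegression(max_iter=1000).fit(X, y)     # stand-in for the nomogram model
```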

VET-DINO: Learning Anatomical Understanding Through Multi-View Distillation in Veterinary Imaging

Andre Dourson, Kylie Taylor, Xiaoli Qiao, Michael Fitzke

arXiv preprint · May 21, 2025
Self-supervised learning has emerged as a powerful paradigm for training deep neural networks, particularly in medical imaging where labeled data is scarce. While current approaches typically rely on synthetic augmentations of single images, we propose VET-DINO, a framework that leverages a unique characteristic of medical imaging: the availability of multiple standardized views from the same study. Using a series of clinical veterinary radiographs from the same patient study, we enable models to learn view-invariant anatomical structures and develop an implied 3D understanding from 2D projections. We demonstrate our approach on a dataset of 5 million veterinary radiographs from 668,000 canine studies. Through extensive experimentation, including view synthesis and downstream task performance, we show that learning from real multi-view pairs leads to superior anatomical understanding compared to purely synthetic augmentations. VET-DINO achieves state-of-the-art performance on various veterinary imaging tasks. Our work establishes a new paradigm for self-supervised learning in medical imaging that leverages domain-specific properties rather than merely adapting natural image techniques.
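
For readers unfamiliar with DINO-style distillation, the core objective can be summarized in a few lines: a student network is trained to match a momentum-averaged teacher across views; in VET-DINO the views are real radiographic projections from the same study rather than synthetic crops of one image. The sketch below is a generic simplification, not the VET-DINO code, and the temperatures and momentum value are typical defaults rather than the paper's settings.

```python
# Conceptual DINO-style loss and teacher update (simplified, illustrative).
import torch
import torch.nn.functional as F

def dino_loss(student_out, teacher_out, student_temp=0.1, teacher_temp=0.04):
    # Teacher targets are sharpened and detached; the student matches them
    # via a cross-entropy between softened distributions.
    t = F.softmax(teacher_out.detach() / teacher_temp, dim=-1)
    s = F.log_softmax(student_out / student_temp, dim=-1)
    return -(t * s).sum(dim=-1).mean()

@torch.no_grad()
def ema_update(teacher, student, momentum=0.996):
    # Teacher weights track an exponential moving average of the student.
    for tp, sp in zip(teacher.parameters(), student.parameters()):
        tp.mul_(momentum).add_(sp, alpha=1 - momentum)
```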

Predictive machine learning and multimodal data to develop highly sensitive, composite biomarkers of disease progression in Friedreich ataxia.

Saha S, Corben LA, Selvadurai LP, Harding IH, Georgiou-Karistianis N

PubMed · May 21, 2025
Friedreich ataxia (FRDA) is a rare, inherited, progressive movement disorder for which there is currently no cure. The field urgently requires more sensitive, objective, and clinically relevant biomarkers to enhance the evaluation of treatment efficacy in clinical trials and to speed up drug discovery. This study pioneers the development of clinically relevant, multidomain, fully objective composite biomarkers of disease severity and progression, using multimodal neuroimaging and background data (i.e., demographics, disease history, and genetics). Data from 31 individuals with FRDA and 31 controls from IMAGE-FRDA, a longitudinal multimodal natural history study, were included. Using an elastic net predictive machine learning (ML) regression model, we derived a weighted combination of background, structural MRI, diffusion MRI, and quantitative susceptibility mapping (QSM) measures that predicted the Friedreich Ataxia Rating Scale (FARS) score with high accuracy (R² = 0.79, root mean square error (RMSE) = 13.19). This composite also exhibited strong sensitivity to disease progression over two years (Cohen's d = 1.12), outperforming the sensitivity of the FARS score alone (d = 0.88). The approach was validated using the Scale for the Assessment and Rating of Ataxia (SARA), demonstrating the potential and robustness of ML-derived composites to surpass individual biomarkers and act as complementary or surrogate markers of disease severity and progression. Further validation, refinement, and the integration of additional data modalities will open up new opportunities for translating these biomarkers into clinical practice and clinical trials for FRDA, as well as for other rare neurodegenerative diseases.
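
The elastic net regression step maps naturally to a short scikit-learn sketch: multimodal features are regressed onto the FARS score and evaluated with R² and RMSE. The feature matrix, cohort size, and hyperparameter grid below are placeholders, not the study's values.

```python
# Illustrative elastic net composite: multimodal features -> FARS score.
import numpy as np
from sklearn.linear_model import ElasticNetCV
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import cross_val_predict

X = np.random.rand(62, 40)        # background + structural/diffusion MRI + QSM features (placeholder)
y = np.random.rand(62) * 100      # FARS scores (placeholder)

model = ElasticNetCV(l1_ratio=[0.1, 0.5, 0.9], cv=5, max_iter=10000)
pred = cross_val_predict(model, X, y, cv=5)
print("R2:", r2_score(y, pred), "RMSE:", np.sqrt(mean_squared_error(y, pred)))
```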

An automated deep learning framework for brain tumor classification using MRI imagery.

Aamir M, Rahman Z, Bhatti UA, Abro WA, Bhutto JA, He Z

PubMed · May 21, 2025
The precise and timely diagnosis of brain tumors is essential for accelerating patient recovery and preserving lives. Brain tumors exhibit a variety of sizes, shapes, and visual characteristics, requiring individualized treatment strategies for each patient. Radiologists require considerable proficiency to manually detect brain malignancies. However, tumor recognition remains inefficient, imprecise, and labor-intensive in manual procedures, underscoring the need for automated methods. This study introduces an effective approach for identifying brain lesions in magnetic resonance imaging (MRI) images, minimizing dependence on manual intervention. The proposed method improves image clarity by combining guided filtering techniques with anisotropic Gaussian side windows (AGSW). A morphological analysis is conducted prior to segmentation to exclude non-tumor regions from the enhanced MRI images. Deep neural networks segment the images, extracting high-quality regions of interest (ROIs) and multiscale features. Identifying salient elements is essential and is accomplished through an attention module that isolates distinctive features while eliminating irrelevant information. An ensemble model is employed to classify brain tumors into different categories. The proposed technique achieves an overall accuracy of 99.94% and 99.67% on the publicly available brain tumor datasets BraTS2020 and Figshare, respectively. Furthermore, it surpasses existing technologies in terms of automation and robustness, thereby enhancing the entire diagnostic process.
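
The abstract names an ensemble classifier as the final stage without specifying its form; one common realization is soft voting over the probability outputs of several trained models, as in this brief sketch (member models and equal weighting are assumptions).

```python
# Minimal soft-voting ensemble over trained classifiers (illustrative only).
import torch
import torch.nn.functional as F

def ensemble_predict(models, roi_batch):
    probs = [F.softmax(m(roi_batch), dim=1) for m in models]  # per-model class probabilities
    return torch.stack(probs).mean(dim=0).argmax(dim=1)       # average, then take the top class
```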

Multi-modal Integration Analysis of Alzheimer's Disease Using Large Language Models and Knowledge Graphs

Kanan Kiguchi, Yunhao Tu, Katsuhiro Ajito, Fady Alnajjar, Kazuyuki Murase

arXiv preprint · May 21, 2025
We propose a novel framework for integrating fragmented multi-modal data in Alzheimer's disease (AD) research using large language models (LLMs) and knowledge graphs. While traditional multimodal analysis requires matched patient IDs across datasets, our approach demonstrates population-level integration of MRI, gene expression, biomarkers, EEG, and clinical indicators from independent cohorts. Statistical analysis identified significant features in each modality, which were connected as nodes in a knowledge graph. LLMs then analyzed the graph to extract potential correlations and generate hypotheses in natural language. This approach revealed several novel relationships, including a potential pathway linking metabolic risk factors to tau protein abnormalities via neuroinflammation (r>0.6, p<0.001) and unexpected correlations between frontal EEG channels and specific gene expression profiles (r=0.42-0.58, p<0.01). Cross-validation with independent datasets confirmed the robustness of the major findings, with consistent effect sizes across cohorts (variance <15%). The reproducibility of these findings was further supported by expert review (Cohen's κ = 0.82) and computational validation. Our framework enables cross-modal integration at a conceptual level without requiring patient ID matching, offering new possibilities for understanding AD pathology through fragmented-data reuse and generating testable hypotheses for future research.
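
The graph-construction step described above (significant per-modality features as nodes, strong cross-modal associations as weighted edges, serialized for an LLM prompt) can be illustrated with networkx; the feature names, correlation values, and threshold below are purely hypothetical.

```python
# Sketch of population-level knowledge-graph construction (illustrative data).
import networkx as nx

G = nx.Graph()
G.add_node("CSF_tau", modality="biomarker")
G.add_node("frontal_EEG_theta", modality="EEG")
G.add_node("hippocampal_volume", modality="MRI")

correlations = [
    ("CSF_tau", "hippocampal_volume", -0.55),
    ("frontal_EEG_theta", "CSF_tau", 0.48),
]
for a, b, r in correlations:
    if abs(r) > 0.4:                 # keep only moderately strong associations
        G.add_edge(a, b, weight=r)

# The serialized nodes, edges, and weights can then be embedded in an LLM
# prompt for correlation extraction and hypothesis generation, as described above.
```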