
Recent Advances in Generative Models for Synthetic Brain MRI Image Generation.

Ding X, Bai L, Abbasi SF, Pournik O, Arvanitis T

PubMed · Jun 26 2025
With the use of artificial intelligence (AI) for image analysis of Magnetic Resonance Imaging (MRI), the lack of training data has become an issue. Realistic synthetic MRI images can serve as a solution, and generative models have been proposed for this purpose. This study investigates the most recent advances in synthetic brain MRI image generation with AI-based generative models. A search was conducted for relevant studies published within the last three years, followed by a narrative review of the identified articles. Popular models from the search results are discussed, including Generative Adversarial Networks (GANs), diffusion models, Variational Autoencoders (VAEs), and transformers.

Enhancing Diagnostic Precision: Utilising a Large Language Model to Extract U Scores from Thyroid Sonography Reports.

Watts E, Pournik O, Allington R, Ding X, Boelaert K, Sharma N, Ghalichi L, Arvanitis TN

PubMed · Jun 26 2025
This study evaluates the performance of ChatGPT-4, a Large Language Model (LLM), in automatically extracting U scores from free-text thyroid ultrasound reports collected from University Hospitals Birmingham (UHB), UK, between 2014 and 2024. The LLM was provided with guidelines on the U classification system and extracted U scores independently from 14,248 de-identified reports, without access to human-assigned scores. The LLM-extracted scores were compared to initial clinician-assigned and refined U scores provided by expert reviewers. The LLM achieved 97.7% agreement with refined human U scores, successfully identifying the highest U score in 98.1% of reports with multiple nodules. Most discrepancies (2.5%) were linked to ambiguous descriptions, multi-nodule reports, and cases with human-documented uncertainty. While the results demonstrate the potential for LLMs to improve reporting consistency and reduce manual workload, ethical and governance challenges such as transparency, privacy, and bias must be addressed before routine clinical deployment. Embedding LLMs into reporting workflows, such as Online Analytical Processing (OLAP) tools, could further enhance reporting quality and consistency.
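As a rough illustration of the extraction step described above, a minimal sketch follows; the prompt wording, model identifier, and agreement helper are illustrative assumptions, not the study's actual protocol:

```python
# Minimal sketch of LLM-based U-score extraction. The prompt wording, model
# identifier, and report format are illustrative assumptions, not the
# study's actual protocol.
import re
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

GUIDELINES = (
    "You are given the U classification system for thyroid nodules (U1-U5). "
    "Read the thyroid ultrasound report and return only the highest U score "
    "present, formatted as 'U<n>'. If no score can be inferred, return 'NONE'."
)

def extract_u_score(report_text: str) -> str | None:
    """Ask the model for the highest U score in a de-identified report."""
    response = client.chat.completions.create(
        model="gpt-4",  # stand-in for the ChatGPT-4 model used in the study
        messages=[
            {"role": "system", "content": GUIDELINES},
            {"role": "user", "content": report_text},
        ],
        temperature=0,
    )
    match = re.search(r"U[1-5]", response.choices[0].message.content or "")
    return match.group(0) if match else None

def agreement(llm_scores: list[str], human_scores: list[str]) -> float:
    """Simple percentage agreement against refined human scores."""
    hits = sum(l == h for l, h in zip(llm_scores, human_scores))
    return hits / len(human_scores)
```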

Artificial Intelligence in Cognitive Decline Diagnosis: Evaluating Cutting-Edge Techniques and Modalities.

Gharehbaghi A, Babic A

PubMed · Jun 26 2025
This paper presents the results of a scoping review examining the potential of Artificial Intelligence (AI) in the early diagnosis of Cognitive Decline (CD), a key issue in elderly health. The review encompasses peer-reviewed publications from 2020 to 2025, including scientific journals and conference proceedings. Over 70% of the studies rely on magnetic resonance imaging (MRI) as input to the AI models, achieving a high diagnostic accuracy of 98%. Integrating relevant clinical data and electroencephalograms (EEG) with deep learning methods enhances diagnostic accuracy in clinical settings. Recent studies have also explored the use of natural language processing models for detecting CD at its early stages, with an accuracy of 75%, showing strong potential for use in appropriate pre-clinical environments.

Deep transfer learning radiomics combined with explainable machine learning for preoperative thymoma risk prediction based on CT.

Wu S, Fan L, Wu Y, Xu J, Guo Y, Zhang H, Xu Z

PubMed · Jun 26 2025
To develop and validate a computed tomography (CT)-based deep transfer learning radiomics model combined with explainable machine learning for preoperative risk prediction of thymoma. This retrospective study included 173 pathologically confirmed thymoma patients from our institution in the training group and 93 patients from two external centers in the external validation group. Tumors were classified according to the World Health Organization simplified criteria as low-risk types (A, AB, and B1) or high-risk types (B2 and B3). Radiomics features and deep transfer learning features were extracted from venous-phase contrast-enhanced CT images using a modified Inception V3 network. Principal component analysis and least absolute shrinkage and selection operator (LASSO) regression identified 20 key predictors. Six classifiers (decision tree, gradient boosting machine, k-nearest neighbors, naïve Bayes, random forest (RF), and support vector machine) were trained on five feature sets: the CT imaging model, radiomics feature model, deep transfer learning feature model, combined feature model, and combined model. Interpretability was assessed with SHapley Additive exPlanations (SHAP), and an interactive web application was developed for real-time individualized risk prediction and visualization. In the external validation group, the RF classifier achieved the highest area under the receiver operating characteristic curve (AUC), 0.956. In the training group, the AUC values for the CT imaging model, radiomics feature model, deep transfer learning feature model, combined feature model, and combined model were 0.684, 0.831, 0.815, 0.893, and 0.910, respectively. The corresponding AUC values in the external validation group were 0.604, 0.865, 0.880, 0.934, and 0.956, respectively. SHAP visualizations revealed the relative contribution of each feature, while the web application provided real-time individual prediction probabilities with interpretative outputs. We developed a CT-based deep transfer learning radiomics model combined with explainable machine learning and an interactive web application; this model achieved high accuracy and transparency for preoperative thymoma risk stratification, facilitating personalized clinical decision-making.
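The selection, classification, and explanation steps lend themselves to a short sketch. The following is a minimal approximation using LASSO-based ranking, a random forest, and SHAP (the study's PCA step is omitted for brevity); the feature matrices and hyperparameters are placeholders, not the authors' values:

```python
# Minimal sketch of the selection + classification + explanation steps:
# LASSO-based feature ranking, a random forest, and SHAP. The PCA step from
# the study is omitted for brevity; matrices and hyperparameters are
# placeholders, not the authors' values.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LassoCV
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import StandardScaler

def select_and_classify(X_train, y_train, X_test, y_test, n_keep=20):
    scaler = StandardScaler().fit(X_train)
    X_tr, X_te = scaler.transform(X_train), scaler.transform(X_test)

    # Rank features by absolute LASSO coefficient and keep the top n_keep
    lasso = LassoCV(cv=5).fit(X_tr, y_train)
    top = np.argsort(np.abs(lasso.coef_))[-n_keep:]
    X_tr, X_te = X_tr[:, top], X_te[:, top]

    rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_train)
    auc = roc_auc_score(y_test, rf.predict_proba(X_te)[:, 1])

    # SHAP values quantify each feature's contribution to each prediction
    shap_values = shap.TreeExplainer(rf).shap_values(X_te)
    return auc, shap_values
```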

Design and Optimization of an automatic deep learning-based cerebral reperfusion scoring (TICI) using thrombus localization.

Folcher A, Piters J, Wallach D, Guillard G, Ognard J, Gentric JC

PubMed · Jun 26 2025
The Thrombolysis in Cerebral Infarction (TICI) scale is widely used to assess angiographic outcomes of mechanical thrombectomy, despite significant variability. Our objective was to create and optimize an artificial intelligence (AI)-based classification model for digital subtraction angiography (DSA) TICI scoring. Using a monocentric DSA dataset of thrombectomies and a platform for medical image analysis, independent readers labeled each series according to TICI score and marked each thrombus. A convolutional neural network (CNN) classification model was created to classify TICI scores into 2 groups (TICI 0, 1, or 2a versus TICI 2b, 2c, or 3) and 3 groups (TICI 0, 1, or 2a versus TICI 2b versus TICI 2c or 3). The algorithm was first tested alone; thrombus positions were then introduced, first by manual placement and then by an automatic thrombus detection module. A total of 422 patients were enrolled in the study, and 2492 thrombi were annotated on the TICI-labeled series. The model was trained on a total of 1609 DSA series. The two-class model had a specificity of 0.97 ±0.01 and a sensitivity of 0.86 ±0.01. The 3-class models showed insufficient performance, even when combined with the true thrombus positions, with F1 scores for TICI 2b classification of 0.50 and 0.55 ±0.07, respectively. The automatic thrombus detection module did not enhance the performance of the 3-class model, with the F1 score for the TICI 2b class measured at 0.50 ±0.07. The AI model provided a reproducible 2-class (TICI 0, 1, or 2a versus 2b, 2c, or 3) classification according to the TICI scale. Its performance in distinguishing three classes (TICI 0, 1, or 2a versus 2b versus 2c or 3) remains insufficient for clinical practice. Automatic thrombus detection did not improve the model's performance.
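A toy version of the two-class setup (TICI 0, 1, or 2a versus 2b, 2c, or 3) can be sketched as follows; the architecture, frame count, and image size are assumptions for illustration, not the paper's network:

```python
# Toy two-class TICI classifier over a DSA series. The architecture, frame
# count, and image size are assumptions for illustration, not the paper's
# network.
import torch
import torch.nn as nn

class TICINet(nn.Module):
    def __init__(self, in_frames: int = 16):
        super().__init__()
        # Treat the temporal DSA frames as input channels
        self.features = nn.Sequential(
            nn.Conv2d(in_frames, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 2)  # logits: [TICI 0-2a, TICI 2b-3]

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

model = TICINet()
logits = model(torch.randn(4, 16, 256, 256))  # a batch of 4 series
print(logits.shape)  # torch.Size([4, 2])
```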

Predicting brain metastases in EGFR-positive lung adenocarcinoma patients using pre-treatment CT lung imaging data.

He X, Guan C, Chen T, Wu H, Su L, Zhao M, Guo L

PubMed · Jun 26 2025
This study aims to establish a dual-feature fusion model integrating radiomic features with deep learning features, utilizing single-modality pre-treatment lung CT image data to achieve early warning of brain metastasis (BM) risk within 2 years in EGFR-positive lung adenocarcinoma. After rigorous screening of 362 EGFR-positive lung adenocarcinoma patients with pre-treatment lung CT images, 173 eligible participants were ultimately enrolled in this study, including 93 patients with BM and 80 without BM. Radiomic features were extracted from manually segmented lung nodule regions, and a selection of features was used to develop radiomics models. For deep learning, ROI-level CT images were processed using several deep learning networks, including the novel vision mamba, which was applied for the first time in this context. A feature-level fusion model was developed by combining radiomic and deep learning features. Model performance was assessed using receiver operating characteristic (ROC) curves and decision curve analysis (DCA), with statistical comparisons of area under the curve (AUC) values using the DeLong test. Among the models evaluated, the fused vision mamba model demonstrated the best classification performance, achieving an AUC of 0.86 (95% CI: 0.82-0.90), with a recall of 0.88, F1-score of 0.70, and accuracy of 0.76. This fusion model outperformed both radiomics-only and deep learning-only models, highlighting its superior predictive accuracy for early BM risk detection in EGFR-positive lung adenocarcinoma patients. The fused vision mamba model, utilizing single CT imaging data, significantly enhances the prediction of brain metastasis within two years in EGFR-positive lung adenocarcinoma patients. This novel approach, combining radiomic and deep learning features, offers promising clinical value for early detection and personalized treatment.
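Feature-level fusion of the kind described here typically amounts to concatenating the two feature vectors before a single classifier. A minimal sketch, where the array shapes and the logistic-regression classifier are assumptions rather than the paper's exact setup:

```python
# Minimal sketch of feature-level fusion: concatenate radiomic and deep
# features, then train one classifier. Shapes and the logistic-regression
# choice are assumptions, not the paper's exact setup.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import StandardScaler

def fuse_and_score(rad_tr, deep_tr, y_tr, rad_te, deep_te, y_te):
    X_tr = np.hstack([rad_tr, deep_tr])  # (n_train, d_rad + d_deep)
    X_te = np.hstack([rad_te, deep_te])
    scaler = StandardScaler().fit(X_tr)
    clf = LogisticRegression(max_iter=1000).fit(scaler.transform(X_tr), y_tr)
    probs = clf.predict_proba(scaler.transform(X_te))[:, 1]
    return roc_auc_score(y_te, probs)
```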

The Current State of Artificial Intelligence on Detecting Pulmonary Embolism via Computerised Tomography Pulmonary Angiogram: A Systematic Review.

Hassan MSTA, Elhotiby MAM, Shah V, Rocha H, Rad AA, Miller G, Malawana J

PubMed · Jun 25 2025
<b>Aims/Background</b> Pulmonary embolism (PE) is a life-threatening condition with significant diagnostic challenges due to high rates of missed or delayed detection. Computed tomography pulmonary angiography (CTPA) is the current standard for diagnosing PE; however, demand for imaging places strain on healthcare systems and increases error rates. This systematic review aims to assess the diagnostic accuracy and clinical applicability of artificial intelligence (AI)-based models for PE detection on CTPA, exploring their potential to enhance diagnostic reliability and efficiency across clinical settings. <b>Methods</b> A systematic review was conducted in accordance with Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Excerpta Medica Database (EMBASE), Medical Literature Analysis and Retrieval System Online (MEDLINE), Cochrane, PubMed, and Google Scholar were searched for original articles from inception to September 2024. Articles were included if they reported successful AI integration, whether partial or full, alongside CTPA scans for PE detection in patients. <b>Results</b> The literature search identified 919 articles, with 745 remaining after duplicate removal. Following rigorous screening and appraisal aligned with inclusion and exclusion criteria, 12 studies were included in the final analysis. Three primary AI modalities emerged: convolutional neural networks (CNNs), segmentation models, and natural language processing (NLP), collectively used in the analysis of 341,112 radiographic images. CNNs were the most frequently applied modality in this review. Models such as AdaBoost and EmbNet have demonstrated high sensitivity, with EmbNet achieving a per-scan sensitivity of 88-90.9% and reducing false positives to 0.45 per scan. <b>Conclusion</b> AI shows significant promise as a diagnostic tool for identifying PE on CTPA scans, particularly when combined with other forms of clinical data. However, challenges remain, including ensuring generalisability, addressing potential bias, and conducting rigorous external validation. Variability in study methodologies and the lack of standardised reporting of key metrics complicate comparisons. Future research must focus on refining models, improving peripheral emboli detection, and validating performance across diverse settings to fully realise AI's potential.

Application Value of Deep Learning-Based AI Model in the Classification of Breast Nodules.

Zhi S, Cai X, Zhou W, Qian P

PubMed · Jun 25 2025
<b>Aims/Background</b> Breast nodules are highly prevalent among women, and ultrasound is a widely used screening tool. However, single ultrasound examinations often result in high false-positive rates, leading to unnecessary biopsies. Artificial intelligence (AI) has demonstrated the potential to improve diagnostic accuracy, reducing misdiagnosis and minimising inter-observer variability. This study developed a deep learning-based AI model to evaluate its clinical utility in assisting sonographers with the Breast Imaging Reporting and Data System (BI-RADS) classification of breast nodules. <b>Methods</b> A retrospective analysis was conducted on 558 patients with breast nodules classified as BI-RADS categories 3 to 5, confirmed through pathological examination at The People's Hospital of Pingyang County between December 2019 and December 2023. The image dataset was divided into a training set, validation set, and test set, and a convolutional neural network (CNN) was used to construct a deep learning-based AI model. Patients underwent ultrasound examination and AI-assisted diagnosis. The receiver operating characteristic (ROC) curve was used to analyse the performance of the AI model, physician adjudication results, and the diagnostic efficacy of physicians before and after AI model assistance. Cohen's weighted Kappa coefficient was used to assess the consistency of BI-RADS classification among five ultrasound physicians before and after AI model assistance. Additionally, statistical analyses were performed to evaluate changes in BI-RADS classification results before and after AI model assistance for each physician. <b>Results</b> According to pathological examination, 765 of the 1026 breast nodules were benign, while 261 were malignant. The sensitivity, specificity, and accuracy of routine ultrasonography in diagnosing benign and malignant nodules were 80.85%, 91.59%, and 88.31%, respectively. In comparison, the AI system achieved a sensitivity of 89.36%, specificity of 92.52%, and accuracy of 91.56%. Furthermore, AI model assistance significantly improved the consistency of physicians' BI-RADS classification (<i>p</i> < 0.001). <b>Conclusion</b> A deep learning-based AI model constructed using ultrasound images can enhance the differentiation between benign and malignant breast nodules and improve classification accuracy, thereby reducing the incidence of missed and misdiagnoses.
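Cohen's weighted kappa, used above to assess classification consistency, can be computed directly with scikit-learn. A minimal sketch; the BI-RADS categories below and the quadratic weighting are illustrative assumptions, since the paper's weighting scheme is not specified here:

```python
# Cohen's weighted kappa via scikit-learn. The BI-RADS categories below and
# the quadratic weighting are illustrative; the paper's weighting scheme is
# not specified here.
from sklearn.metrics import cohen_kappa_score

pre_ai  = [3, 4, 4, 5, 3, 4]   # physician BI-RADS categories before AI
post_ai = [3, 4, 5, 5, 3, 4]   # the same physician with AI assistance
ref     = [3, 4, 5, 5, 4, 4]   # adjudicated reference classification

print(cohen_kappa_score(pre_ai, ref, weights="quadratic"))
print(cohen_kappa_score(post_ai, ref, weights="quadratic"))
```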

BronchoGAN: anatomically consistent and domain-agnostic image-to-image translation for video bronchoscopy.

Soliman A, Keuth R, Himstedt M

PubMed · Jun 25 2025
<b>Purpose</b> The limited availability of bronchoscopy images makes image synthesis particularly interesting for training deep learning models. Robust image translation across different domains (virtual bronchoscopy, phantom, as well as in vivo and ex vivo image data) is pivotal for clinical applications. <b>Methods</b> This paper proposes BronchoGAN, which introduces anatomical constraints for image-to-image translation, integrated into a conditional GAN. In particular, we force bronchial orifices to match across input and output images. We further propose using foundation-model-generated depth images as an intermediate representation, ensuring robustness across a variety of input domains and yielding models with substantially less reliance on individual training datasets. Moreover, this intermediate depth image representation allows paired image data for training to be constructed easily. <b>Results</b> Our experiments showed that input images from different domains (e.g., virtual bronchoscopy, phantoms) can be successfully translated to images mimicking realistic human airway appearance. We demonstrated that anatomical settings (i.e., bronchial orifices) can be robustly preserved with our approach, shown qualitatively and quantitatively by means of improved FID, SSIM, and Dice coefficient scores. Our anatomical constraints enabled an improvement in the Dice coefficient of up to 0.43 for synthetic images. <b>Conclusion</b> Through foundation models for intermediate depth representations and bronchial orifice segmentation integrated as anatomical constraints into conditional GANs, we are able to robustly translate images from different bronchoscopy input domains. BronchoGAN allows public CT scan data (virtual bronchoscopy) to be incorporated in order to generate large-scale bronchoscopy image datasets with realistic appearance, helping to bridge the gap created by the scarcity of public bronchoscopy images.
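The anatomical-constraint idea can be sketched as an extra Dice term in the generator objective that pushes the bronchial-orifice mask of the translated image to match the input's. A minimal sketch; how the masks are produced and the weight lam are assumptions, not BronchoGAN's published formulation:

```python
# Sketch of an anatomically constrained generator objective: a standard
# adversarial term plus a Dice term that forces the orifice mask of the
# translated image to match the input's. Mask source and the weight lam
# are assumptions, not BronchoGAN's published formulation.
import torch

def dice_loss(pred_mask: torch.Tensor, target_mask: torch.Tensor, eps=1e-6):
    inter = (pred_mask * target_mask).sum()
    return 1 - (2 * inter + eps) / (pred_mask.sum() + target_mask.sum() + eps)

def generator_loss(disc_fake_logits, fake_orifices, real_orifices, lam=10.0):
    adv = torch.nn.functional.binary_cross_entropy_with_logits(
        disc_fake_logits, torch.ones_like(disc_fake_logits))
    return adv + lam * dice_loss(fake_orifices, real_orifices)
```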

Association of peripheral immune markers with brain age and dementia risk estimated using deep learning methods.

Huang X, Yuan S, Ling Y, Tan S, Bai Z, Xu Y, Shen S, Lyu J, Wang H

PubMed · Jun 25 2025
The peripheral immune system is essential for maintaining central nervous system homeostasis. This study investigates the effects of peripheral immune markers on accelerated brain aging and dementia using the brain-predicted age difference derived from neuroimaging. Leveraging data from the UK Biobank, Cox regression was used to explore the relationship between peripheral immune markers and dementia, and multivariate linear regression to assess associations between peripheral immune biomarkers and brain structure. Additionally, we established a brain age prediction model using the Simple Fully Convolutional Network (SFCN) deep learning architecture. Analysis of the resulting brain-predicted age difference (PAD) revealed relationships between accelerated brain aging, peripheral immune markers, and dementia. During the median follow-up period of 14.3 years, 4,277 dementia cases were observed among 322,761 participants. Both innate and adaptive immune markers correlated with dementia risk. The neutrophil-to-lymphocyte ratio (NLR) showed the strongest association with dementia risk (HR = 1.14; 95% CI: 1.11-1.18, P < 0.001). Multivariate linear regression revealed significant associations between peripheral immune markers and brain regional structural indices. Using the deep learning-based SFCN model, the estimated brain age of dementia subjects (MAE = 5.63, r2 = -0.46, R = 0.22) was determined. PAD showed significant correlation with dementia risk and certain peripheral immune markers, particularly in individuals with a positive brain-age increment. This study employs brain age as a quantitative marker of accelerated brain aging to investigate its potential associations with peripheral immunity and dementia, highlighting the importance of early intervention targeting peripheral immune markers to delay brain aging and prevent dementia.
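The Cox regression step can be illustrated with the lifelines library; the toy data frame and column names below are assumptions for illustration, not UK Biobank fields:

```python
# Cox proportional-hazards fit relating an immune marker (e.g., NLR) to
# dementia risk. The toy data frame and column names are assumptions, not
# UK Biobank fields.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "follow_up_years": [14.1, 9.8, 14.3, 6.2, 12.5, 8.7],
    "dementia":        [0,    1,   0,    1,   0,    1],    # event indicator
    "nlr":             [1.8,  3.2, 2.1,  2.4, 3.0,  4.0],  # neutrophil-to-lymphocyte ratio
    "age":             [61,   70,  58,   69,  72,   66],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="follow_up_years", event_col="dementia")
cph.print_summary()  # hazard ratios with 95% CIs, akin to the reported HR
```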
