Page 64 of 78779 results

Non-invasive classification of non-neoplastic and neoplastic gallbladder polyps based on clinical imaging and ultrasound radiomics features: An interpretable machine learning model.

Dou M, Liu H, Tang Z, Quan L, Xu M, Wang F, Du Z, Geng Z, Li Q, Zhang D

pubmed logopapers · Jun 1 2025
Gallbladder (GB) adenomas, precancerous lesions for gallbladder carcinoma (GBC), lack reliable non-invasive tools for preoperative differentiation of neoplastic polyps from cholesterol polyps. This study aimed to evaluate an interpretable machine learning (ML) combined model for precise differentiation of the pathological nature of gallbladder polyps (GPs). The study consecutively enrolled 744 patients from Xi'an Jiaotong University First Affiliated Hospital between January 2017 and December 2023 who were pathologically diagnosed postoperatively with cholesterol polyps, adenomas or T1-stage GBC. Radiomics features were extracted and selected, while clinical variables were subjected to univariate and multivariate logistic regression analyses to identify significant predictors of neoplastic polyps. An optimal ML-based radiomics model was developed, and separate clinical, US and combined models were constructed. Finally, SHapley Additive exPlanations (SHAP) was employed to visualize the classification process. The areas under the curves (AUCs) of the CatBoost-based radiomics model were 0.852 (95 % CI: 0.818-0.884) and 0.824 (95 % CI: 0.758-0.881) for the training and test sets, respectively. The combined model demonstrated the best performance, with improved AUCs of 0.910 (95 % CI: 0.885-0.934) and 0.869 (95 % CI: 0.812-0.919); it outperformed the clinical, radiomics and US models (all P < 0.05) and reduced the rate of unnecessary cholecystectomies. SHAP analysis revealed that polyp short diameter is a crucial independent risk factor in predicting the nature of GPs. The ML-based combined model may be an effective non-invasive tool for improving the precision treatment of GPs, and using SHAP to visualize the classification process can enhance its clinical application.
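To illustrate the SHAP idea the abstract leans on, the sketch below computes exact Shapley values by brute force for a toy polyp risk score. The features, weights, and baseline values are entirely hypothetical (not taken from the paper), and real SHAP implementations use fast approximations such as TreeSHAP for CatBoost rather than this exponential-time enumeration:

```python
from itertools import permutations

# Hypothetical baseline (reference) feature values for an "average" polyp.
BASELINE = {"short_diameter_mm": 6.0, "age": 50.0, "single_polyp": 0.0}

def risk_score(x):
    # Toy model: diameter dominates, plus a small diameter-age interaction.
    return (0.3 * x["short_diameter_mm"]
            + 0.02 * x["age"]
            + 0.5 * x["single_polyp"]
            + 0.001 * x["short_diameter_mm"] * x["age"])

def shapley_values(x, baseline, f):
    """Exact Shapley values: average each feature's marginal contribution
    over every feature ordering; absent features keep their baseline value."""
    feats = list(x)
    phi = {k: 0.0 for k in feats}
    orders = list(permutations(feats))
    for order in orders:
        z = dict(baseline)
        prev = f(z)
        for k in order:
            z[k] = x[k]
            cur = f(z)
            phi[k] += cur - prev
            prev = cur
    return {k: v / len(orders) for k, v in phi.items()}

patient = {"short_diameter_mm": 14.0, "age": 62.0, "single_polyp": 1.0}
phi = shapley_values(patient, BASELINE, risk_score)
print(phi)
```

The efficiency property of Shapley values guarantees that the per-feature contributions sum exactly to the difference between the patient's score and the baseline score, which is what makes the waterfall-style SHAP plots additive.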

Diffusion Models in Low-Level Vision: A Survey.

He C, Shen Y, Fang C, Xiao F, Tang L, Zhang Y, Zuo W, Guo Z, Li X

pubmed logopapers · Jun 1 2025
Deep generative models have gained considerable attention in low-level vision tasks due to their powerful generative capabilities. Among these, diffusion model-based approaches, which employ a forward diffusion process to degrade an image and a reverse denoising process for image generation, have become particularly prominent for producing high-quality, diverse samples with intricate texture details. Despite their widespread success in low-level vision, there remains a lack of a comprehensive, insightful survey that synthesizes and organizes the advances in diffusion model-based techniques. To address this gap, this paper presents the first comprehensive review focused on denoising diffusion models applied to low-level vision tasks, covering both theoretical and practical contributions. We outline three general diffusion modeling frameworks and explore their connections with other popular deep generative models, establishing a solid theoretical foundation for subsequent analysis. We then categorize diffusion models used in low-level vision tasks from multiple perspectives, considering both the underlying framework and the target application. Beyond natural image processing, we also summarize diffusion models applied to other low-level vision domains, including medical imaging, remote sensing, and video processing. Additionally, we provide an overview of widely used benchmarks and evaluation metrics in low-level vision tasks. Our review includes an extensive evaluation of diffusion model-based techniques across six representative tasks, with both quantitative and qualitative analysis. Finally, we highlight the limitations of current diffusion models and propose four promising directions for future research. This comprehensive review aims to foster a deeper understanding of the role of denoising diffusion models in low-level vision.
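The forward diffusion process described above has a well-known closed form: with a noise schedule beta_t and alpha_bar_t as the cumulative product of (1 - beta_s), one can jump straight to any noise level via x_t = sqrt(alpha_bar_t) x_0 + sqrt(1 - alpha_bar_t) eps. A minimal sketch, assuming the common linear schedule from the original DDPM paper (1e-4 to 0.02 over 1000 steps):

```python
import math, random

random.seed(0)

# Linear beta schedule; endpoint values follow common DDPM practice.
T = 1000
betas = [1e-4 + (0.02 - 1e-4) * t / (T - 1) for t in range(T)]

# alpha_bar_t = prod_{s<=t} (1 - beta_s), precomputed once.
alpha_bars = []
prod = 1.0
for b in betas:
    prod *= 1.0 - b
    alpha_bars.append(prod)

def q_sample(x0, t):
    """Sample x_t ~ q(x_t | x_0) in closed form, no step-by-step loop."""
    ab = alpha_bars[t]
    return [math.sqrt(ab) * x + math.sqrt(1 - ab) * random.gauss(0, 1)
            for x in x0]

x0 = [1.0] * 5              # a tiny stand-in "image"
x_mid = q_sample(x0, 500)   # partially degraded
x_end = q_sample(x0, T - 1) # nearly pure Gaussian noise
print(alpha_bars[-1])       # close to 0: almost all signal destroyed by step T
```

The reverse denoising process that the survey covers is the learned inverse of this map: a network predicts the noise eps at each step so that x_t can be walked back toward x_0.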

Evaluating artificial intelligence chatbots for patient education in oral and maxillofacial radiology.

Helvacioglu-Yigit D, Demirturk H, Ali K, Tamimi D, Koenig L, Almashraqi A

pubmed logopapers · Jun 1 2025
This study aimed to compare the quality and readability of the responses generated by 3 publicly available artificial intelligence (AI) chatbots in answering frequently asked questions (FAQs) related to Oral and Maxillofacial Radiology (OMR) to assess their suitability for patient education. Fifteen OMR-related questions were selected from professional patient information websites. These questions were posed to ChatGPT-3.5 by OpenAI, Gemini 1.5 Pro by Google, and Copilot by Microsoft to generate responses. Three board-certified OMR specialists evaluated the responses regarding scientific adequacy, ease of understanding, and overall reader satisfaction. Readability was assessed using the Flesch-Kincaid Grade Level (FKGL) and Flesch Reading Ease (FRE) scores. The Wilcoxon signed-rank test was conducted to compare the scores assigned by the evaluators to the responses from the chatbots and professional websites. Interevaluator agreement was examined by calculating the Fleiss kappa coefficient. There were no significant differences between groups in terms of scientific adequacy. In terms of readability, chatbots had overall mean FKGL and FRE scores of 12.97 and 34.11, respectively. The interevaluator agreement level was generally high. Although chatbots are relatively good at responding to FAQs, validating AI-generated information with input from healthcare professionals can enhance patient care and safety. The text content of both the chatbots and the websites requires a high reading level.
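The FRE and FKGL metrics used in the study are simple functions of words per sentence and syllables per word. The sketch below uses the standard published formulas with a crude vowel-group syllable counter; production readability tools use pronunciation dictionaries, so the scores here are approximations:

```python
import re

def count_syllables(word):
    """Crude vowel-group syllable counter -- an approximation only."""
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and n > 1:
        n -= 1  # drop a typical silent final "e"
    return max(n, 1)

def readability(text):
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / sentences   # words per sentence
    spw = syllables / len(words)   # syllables per word
    # Standard Flesch Reading Ease and Flesch-Kincaid Grade Level formulas:
    fre = 206.835 - 1.015 * wps - 84.6 * spw
    fkgl = 0.39 * wps + 11.8 * spw - 15.59
    return fre, fkgl

simple = "The cat sat. The dog ran. We had fun."
dense = ("Radiographic interpretation necessitates comprehensive "
         "understanding of anatomical structures and pathological processes.")
print(readability(simple))
print(readability(dense))
```

Higher FRE means easier text while higher FKGL means a higher school-grade level, which is why the chatbots' mean FKGL of 12.97 (roughly college entry) and FRE of 34.11 both point to text well above typical patient-education reading levels.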

Incorporating radiomic MRI models for presurgical response assessment in patients with early breast cancer undergoing neoadjuvant systemic therapy: Collaborative insights from breast oncologists and radiologists.

Gaudio M, Vatteroni G, De Sanctis R, Gerosa R, Benvenuti C, Canzian J, Jacobs F, Saltalamacchia G, Rizzo G, Pedrazzoli P, Santoro A, Bernardi D, Zambelli A

pubmed logopapers · Jun 1 2025
The assessment of response to neoadjuvant treatment is critical for selecting the most suitable therapeutic options for patients with breast cancer and reducing the need for invasive local therapies. Breast magnetic resonance imaging (MRI) is so far one of the most accurate approaches for assessing pathological complete response, although it is limited by the qualitative and subjective nature of radiologists' assessments, often making it insufficient for deciding whether to forgo additional locoregional therapy. To increase accuracy and predictive power, radiomic MRI analyses aided by machine learning models and deep learning methods, as branches of artificial intelligence, have been used to analyse the different subtypes of breast cancer and the specific changes observed before and after therapy. This review discusses recent advancements in radiomic MRI models for presurgical response assessment in patients with early breast cancer receiving preoperative treatments, with a focus on their implications for clinical practice.

UniBrain: Universal Brain MRI diagnosis with hierarchical knowledge-enhanced pre-training.

Lei J, Dai L, Jiang H, Wu C, Zhang X, Zhang Y, Yao J, Xie W, Zhang Y, Li Y, Zhang Y, Wang Y

pubmed logopapers · Jun 1 2025
Magnetic Resonance Imaging (MRI) has become a pivotal tool in diagnosing brain diseases, with a wide array of computer-aided artificial intelligence methods being proposed to enhance diagnostic accuracy. However, early studies were often limited by small-scale datasets and a narrow range of disease types, which posed challenges in model generalization. This study presents UniBrain, a hierarchical knowledge-enhanced pre-training framework designed for universal brain MRI diagnosis. UniBrain leverages a large-scale dataset comprising 24,770 imaging-report pairs from routine diagnostics for pre-training. Unlike previous approaches that either focused solely on visual representation learning or used brute-force alignment between vision and language, the framework introduces a hierarchical alignment mechanism. This mechanism extracts structured knowledge from free-text clinical reports at multiple granularities, enabling vision-language alignment at both the sequence and case levels, thereby significantly improving feature learning efficiency. A coupled vision-language perception module is further employed for text-guided multi-label classification, which facilitates zero-shot evaluation and fine-tuning of downstream tasks without modifying the model architecture. UniBrain is validated on both in-domain and out-of-domain datasets, consistently surpassing existing state-of-the-art diagnostic models and demonstrating performance on par with radiologists in specific disease categories. It shows strong generalization capabilities across diverse tasks, highlighting its potential for broad clinical application. The code is available at https://github.com/ljy19970415/UniBrain.
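The text-guided multi-label classification described above can be sketched as scoring each candidate label independently against an image embedding, so new labels can be evaluated zero-shot without retraining. The embeddings, labels, temperature, and threshold below are all invented for illustration; they are not UniBrain's actual parameters or API:

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical pre-computed label embeddings; a real system would obtain
# these from the aligned language encoder.
label_embeddings = {
    "tumor":      [0.9, 0.1, 0.0],
    "hemorrhage": [0.1, 0.9, 0.1],
    "infarct":    [0.0, 0.2, 0.9],
}

def zero_shot_multilabel(image_embedding, temperature=0.1, threshold=0.5):
    """Score every label independently (multi-label sigmoid scoring, not a
    softmax over classes), so labels can be added without retraining."""
    scores = {name: sigmoid(cosine(image_embedding, emb) / temperature)
              for name, emb in label_embeddings.items()}
    positives = {name: s for name, s in scores.items() if s >= threshold}
    return positives, scores

image = [0.9, -0.3, -0.2]  # stand-in for an encoded MRI study
positives, scores = zero_shot_multilabel(image)
print(positives)
```

The design point is the independent per-label sigmoid: unlike a fixed softmax head, the label set is just a dictionary of text embeddings, which is what makes zero-shot evaluation and architecture-free fine-tuning possible.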

Exploring the significance of the frontal lobe for diagnosis of schizophrenia using explainable artificial intelligence and group level analysis.

Varaprasad SA, Goel T

pubmed logopapers · Jun 1 2025
Schizophrenia (SZ) is a complex mental disorder characterized by a profound disruption in cognition and emotion, often resulting in a distorted perception of reality. Magnetic resonance imaging (MRI) is an essential tool for diagnosing SZ, helping to understand the organization of the brain. Functional MRI (fMRI) is a specialized imaging technique that measures and maps brain activity by detecting changes in blood flow and oxygenation. The proposed work correlates the results of an explainable deep learning approach with group-level analysis of both structural MRI (sMRI) and fMRI data to identify the brain regions significant in SZ patients. The study found that Grad-CAM heat maps show clear localization in the frontal lobe for the classification of SZ versus controls (CN), with 97.33% accuracy. The group difference analysis reveals that sMRI data show intense voxel activity in the right superior frontal gyrus of the frontal lobe in SZ patients. The group difference between SZ and CN during n-back tasks in the fMRI data likewise indicates significant voxel activation in the frontal cortex. These findings suggest that the frontal lobe plays a crucial role in the diagnosis of SZ, aiding clinicians in planning treatment.
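The Grad-CAM heat maps mentioned above come from simple arithmetic on a CNN's last convolutional layer: channel weights are the global-average-pooled gradients of the class score, and the map is the ReLU of the weighted sum of activation channels. A toy sketch with made-up 2x2 feature maps and gradients (a real pipeline obtains both via backpropagation through the network):

```python
# Two channels (k = 0, 1), each a 2x2 spatial activation map A_k.
activations = [
    [[1.0, 0.0], [0.5, 0.2]],
    [[0.1, 0.9], [0.0, 0.4]],
]
# d(class score)/dA_k for the predicted class, same shapes as above.
gradients = [
    [[0.2, 0.2], [0.2, 0.2]],
    [[-0.1, -0.1], [-0.1, -0.1]],
]

# Channel weights alpha_k: global average pool of the gradients.
weights = [sum(sum(row) for row in g) / 4.0 for g in gradients]

# Heat map: ReLU over the weighted sum of activation channels, so only
# regions that positively support the predicted class light up.
H, W = 2, 2
cam = [[max(0.0, sum(weights[k] * activations[k][i][j]
                     for k in range(len(weights))))
        for j in range(W)] for i in range(H)]
print(weights, cam)
```

Upsampled to the input resolution, such a map is what gets overlaid on the MRI slice to show, for example, frontal-lobe regions driving the SZ classification.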

Beyond traditional orthopaedic data analysis: AI, multimodal models and continuous monitoring.

Oettl FC, Zsidai B, Oeding JF, Hirschmann MT, Feldt R, Tischer T, Samuelsson K

pubmed logopapers · Jun 1 2025
Multimodal artificial intelligence (AI) has the potential to revolutionise healthcare by enabling the simultaneous processing and integration of various data types, including medical imaging, electronic health records, genomic information and real-time data. This review explores the current applications and future potential of multimodal AI across healthcare, with a particular focus on orthopaedic surgery. In presurgical planning, multimodal AI has demonstrated significant improvements in diagnostic accuracy and risk prediction, with studies reporting area under the receiver operating characteristic curve (AUC) values indicating good to excellent performance across various orthopaedic conditions. Intraoperative applications leverage advanced imaging and tracking technologies to enhance surgical precision, while postoperative care has been advanced through continuous patient monitoring and early detection of complications. Despite these advances, significant challenges remain in data integration, standardisation and privacy protection. Technical solutions such as federated learning (which decentralises model training) and edge computing (which keeps data analysis on site or close to it rather than in multipurpose data centres) are being developed to address these concerns while maintaining compliance with regulatory frameworks. As this field continues to evolve, the integration of multimodal AI promises to advance personalised medicine, improve patient outcomes, and transform healthcare delivery through more comprehensive and nuanced analysis of patient data. Level of Evidence: Level V.
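The federated learning mentioned above can be sketched as one FedAvg round: each site trains locally and only model parameters leave the site, never patient data. The client sizes and parameter vectors below are invented for illustration:

```python
# Each client reports its local sample count n and trained parameters;
# raw records stay at the hospital.
clients = [
    {"n": 100, "params": [0.9, -0.2]},   # hospital A's local model
    {"n": 300, "params": [1.1,  0.0]},   # hospital B's local model
    {"n": 100, "params": [0.7, -0.1]},   # hospital C's local model
]

def fed_avg(clients):
    """Aggregate local models into a global one, weighting each client by
    its number of training samples (the FedAvg rule of McMahan et al.)."""
    total = sum(c["n"] for c in clients)
    dim = len(clients[0]["params"])
    return [sum(c["n"] * c["params"][i] for c in clients) / total
            for i in range(dim)]

global_params = fed_avg(clients)
print(global_params)
```

In a full system the server would broadcast `global_params` back to the sites and repeat for many rounds; the point for privacy regulation is that only these aggregated parameters ever cross institutional boundaries.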

Generative adversarial networks in medical image reconstruction: A systematic literature review.

Hussain J, Båth M, Ivarsson J

pubmed logopapers · Jun 1 2025
Recent advancements in generative adversarial networks (GANs) have demonstrated substantial potential in medical image processing. Despite this progress, reconstructing images from incomplete data remains a challenge, impacting image quality. This systematic literature review explores the use of GANs in enhancing and reconstructing medical imaging data. A document survey of computing literature was conducted using the ACM Digital Library to identify relevant articles from journals and conference proceedings using keyword combinations, such as "generative adversarial networks or generative adversarial network," "medical image or medical imaging," and "image reconstruction." Across the reviewed articles, there were 122 datasets used in 175 instances, 89 top metrics employed 335 times, 10 different tasks with a total count of 173, 31 distinct organs featured in 119 instances, and 18 modalities utilized in 121 instances, collectively depicting significant utilization of GANs in medical imaging. The adaptability and efficacy of GANs were showcased across diverse medical tasks, organs, and modalities, utilizing top public as well as private/synthetic datasets for disease diagnosis, including the identification of conditions like cancer in different anatomical regions. The study emphasized GANs' increasing integration and adaptability in diverse radiology modalities, showcasing their transformative impact on diagnostic techniques, including cross-modality tasks. The intricate interplay between network size, batch size, and loss function refinement significantly impacts GAN performance, although challenges in training persist. The study underscores GANs as dynamic tools shaping medical imaging, contributing significantly to image quality, training methodologies, and overall medical advancement, positioning them as substantial drivers of progress in the field.
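The loss-function interplay the review highlights is easiest to see in the standard GAN objectives themselves: the discriminator D maximizes log D(real) + log(1 - D(fake)), while the generator G minimizes the non-saturating loss -log D(fake). A minimal sketch on 1-D toy data, with an affine generator and a logistic discriminator whose parameters are illustrative only:

```python
import math, random

random.seed(1)

# Toy 1-D GAN: G maps noise z to a*z + b, D is a logistic score on x.
g = {"a": 0.5, "b": 0.0}
d = {"w": 1.0, "c": 0.0}

def D(x):
    return 1.0 / (1.0 + math.exp(-(d["w"] * x + d["c"])))

def G(z):
    return g["a"] * z + g["b"]

real = [random.gauss(2.0, 0.5) for _ in range(256)]   # "real" data samples
fake = [G(random.gauss(0, 1)) for _ in range(256)]    # generated samples

# Standard GAN objectives, written as losses to minimize:
eps = 1e-12  # guards log(0)
d_loss = -(sum(math.log(D(x) + eps) for x in real)
           + sum(math.log(1 - D(x) + eps) for x in fake)) / (2 * len(real))
g_loss = -sum(math.log(D(x) + eps) for x in fake) / len(fake)
print(d_loss, g_loss)
```

A training loop alternates gradient steps on these two losses; the instability the review notes comes from exactly this minimax tension, which is why loss refinements (Wasserstein, least-squares, perceptual terms) recur across the surveyed reconstruction papers.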

Advances in MRI optic nerve segmentation.

Xena-Bosch C, Kodali S, Sahi N, Chard D, Llufriu S, Toosy AT, Martinez-Heras E, Prados F

pubmed logopapers · Jun 1 2025
Understanding optic nerve structure and monitoring changes within it can provide insights into neurodegenerative diseases like multiple sclerosis, in which optic nerves are often damaged by inflammatory episodes of optic neuritis. Over the past decades, interest in the optic nerve has increased, particularly with advances in magnetic resonance technology and the advent of deep learning solutions. These advances have significantly improved the visualisation and analysis of optic nerves, making it possible to detect subtle changes that aid the early diagnosis and treatment of optic nerve-related diseases, and for planning radiotherapy interventions. Effective segmentation techniques, therefore, are crucial for enhancing the accuracy of predictive models, planning interventions and treatment strategies. This comprehensive review, which includes 27 peer-reviewed articles published between 2007 and 2024, examines and highlights the evolution of optic nerve magnetic resonance imaging segmentation over the past decade, tracing the development from intensity-based methods to the latest deep learning algorithms, including multi-atlas solutions using single or multiple image modalities.
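Segmentation methods like those surveyed are typically compared with overlap metrics, most commonly the Dice similarity coefficient. A small sketch on toy binary masks (1-D stand-ins for flattened optic-nerve segmentations; the masks are invented):

```python
def dice(pred, truth):
    """Dice similarity coefficient between two binary masks (flattened):
    2 * |pred AND truth| / (|pred| + |truth|)."""
    inter = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * inter / total if total else 1.0

truth = [0, 1, 1, 1, 0, 0, 1, 0]   # reference annotation
good  = [0, 1, 1, 0, 0, 0, 1, 0]   # close automated segmentation
bad   = [1, 0, 0, 0, 1, 1, 0, 0]   # non-overlapping segmentation
print(dice(good, truth), dice(bad, truth))
```

Dice ranges from 0 (no overlap) to 1 (perfect overlap) and is particularly suited to small structures like the optic nerve, where plain voxel accuracy would be dominated by background.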

Explainable deep stacking ensemble model for accurate and transparent brain tumor diagnosis.

Haque R, Khan MA, Rahman H, Khan S, Siddiqui MIH, Limon ZH, Swapno SMMR, Appaji A

pubmed logopapers · Jun 1 2025
Early detection of brain tumors in MRI images is vital for improving treatment results. However, deep learning models face challenges like limited dataset diversity, class imbalance, and insufficient interpretability. Most studies rely on small, single-source datasets and do not combine different feature extraction techniques for better classification. To address these challenges, we propose a robust and explainable stacking ensemble model for multiclass brain tumor classification that combines EfficientNetB0, MobileNetV2, GoogleNet, and a Multi-level CapsuleNet, using CatBoost as the meta-learner for improved feature aggregation and classification accuracy. This ensemble approach captures complex tumor characteristics while enhancing robustness and interpretability. We created two large MRI datasets by merging data from four sources: BraTS, Msoud, Br35H, and SARTAJ. To tackle class imbalance, we applied Borderline-SMOTE and data augmentation. We also utilized feature extraction methods, along with PCA and Gray Wolf Optimization (GWO). Our model was validated through confidence interval analysis and statistical tests, demonstrating superior performance. Error analysis revealed misclassification trends, and we assessed computational efficiency in terms of inference speed and resource usage. The proposed ensemble achieved a 97.81% F1 score and 98.75% PR AUC on M1, and a 98.32% F1 score with 99.34% PR AUC on M2. Moreover, the model consistently surpassed state-of-the-art CNNs, Vision Transformers, and other ensemble methods in classifying brain tumors across all four individual datasets.
Finally, we developed a web-based diagnostic tool that enables clinicians to interact with the proposed model and visualize decision-critical regions in MRI scans using Explainable Artificial Intelligence (XAI). This study connects high-performing AI models with real clinical applications, providing a reliable, scalable, and efficient diagnostic solution for brain tumor classification.
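The stacking idea at the core of this abstract is that base-model class probabilities are concatenated into meta-features, which a meta-learner then maps to a final class. The sketch below uses hard-coded stand-in "models" and a simple averaging meta-learner purely for illustration; the paper's actual bases are the named CNNs/CapsuleNet with CatBoost as the meta-learner:

```python
# Hypothetical base learners emitting 3-class probability vectors.
def base_model_a(x):   # stand-in for a CNN's softmax output
    return [0.7, 0.2, 0.1] if x["contrast"] > 0.5 else [0.2, 0.5, 0.3]

def base_model_b(x):   # a second, differently biased base learner
    return [0.6, 0.3, 0.1] if x["texture"] > 0.5 else [0.1, 0.3, 0.6]

def meta_learner(meta_features):
    """Stand-in for CatBoost: average the stacked per-class probabilities
    and return the argmax class index."""
    k = 3
    avg = [sum(meta_features[i::k]) / (len(meta_features) // k)
           for i in range(k)]
    return max(range(k), key=lambda i: avg[i])

def stacked_predict(x):
    # Concatenated base outputs form the meta-learner's feature vector.
    meta_features = base_model_a(x) + base_model_b(x)
    return meta_learner(meta_features)

case = {"contrast": 0.9, "texture": 0.8}
print(stacked_predict(case))
```

In a real stacking pipeline the meta-learner is trained on out-of-fold base predictions to avoid leaking training labels, which is the detail that separates stacking from naive probability averaging.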