Page 22 of 38374 results

Influence of prior probability information on large language model performance in radiological diagnosis.

Fukushima T, Kurokawa R, Hagiwara A, Sonoda Y, Asari Y, Kurokawa M, Kanzawa J, Gonoi W, Abe O

PubMed · Jun 1, 2025
Large language models (LLMs) show promise in radiological diagnosis, but their performance may be affected by the context of the cases presented. Our purpose is to investigate how providing information about prior probabilities influences the diagnostic performance of an LLM in radiological quiz cases. We analyzed 322 consecutive cases from Radiology's "Diagnosis Please" quiz using Claude 3.5 Sonnet under three conditions: without context (Condition 1), informed as quiz cases (Condition 2), and presented as primary care cases (Condition 3). Diagnostic accuracy was compared using McNemar's test. The overall accuracy rate significantly improved in Condition 2 compared to Condition 1 (70.2% vs. 64.9%, p = 0.029). Conversely, the accuracy rate significantly decreased in Condition 3 compared to Condition 1 (59.9% vs. 64.9%, p = 0.027). Providing information that may influence prior probabilities significantly affects the diagnostic performance of the LLM in radiological cases. This suggests that LLMs may incorporate Bayesian-like principles and adjust the weighting of their diagnostic responses based on prior information, highlighting the potential for optimizing an LLM's performance in clinical settings by providing relevant contextual information.
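
As a rough illustration of the paired comparison the authors describe, McNemar's test depends only on the two discordant-pair counts (cases correct under one condition but not the other). The sketch below uses the continuity-corrected chi-square form with hypothetical counts, not the paper's data:

```python
import math

def mcnemar(b: int, c: int) -> tuple[float, float]:
    """McNemar's chi-square test with continuity correction for paired
    binary outcomes; b and c are the discordant-pair counts (e.g. cases
    correct only in Condition 2 vs. only in Condition 1)."""
    chi2 = (abs(b - c) - 1) ** 2 / (b + c)
    # Survival function of chi-square with 1 df: P(X > x) = erfc(sqrt(x / 2))
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

# Hypothetical discordant counts for illustration only.
chi2, p = mcnemar(30, 13)
print(f"chi2={chi2:.3f}, p={p:.4f}")
```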

DKCN-Net: Deep kronecker convolutional neural network-based lung disease detection with federated learning.

Meda A, Nelson L, Jagdish M

PubMed · Jun 1, 2025
In the healthcare field, lung disease detection techniques based on deep learning (DL) are widely used. However, achieving high stability while maintaining privacy remains a challenge. To address this, this research employs Federated Learning (FL), which enables doctors to train models without sharing patient data with unauthorized parties, preserving privacy in local models. The study introduces the Deep Kronecker Convolutional Neural Network (DKCN-Net) for lung disease detection. Input Computed Tomography (CT) images are sourced from the LIDC-IDRI database and denoised using the Adaptive Gaussian Filter (AGF). Lung lobe and nodule segmentation are then performed using Deep Fuzzy Clustering (DFC) and a 3-Dimensional Fully Convolutional Neural Network (3D-FCN). During feature extraction, statistical, Convolutional Neural Network (CNN), and Gray-Level Co-Occurrence Matrix (GLCM) features are obtained. Lung diseases are then detected using DKCN-Net, which combines the Deep Kronecker Neural Network (DKN) and a Parallel Convolutional Neural Network (PCNN). DKCN-Net achieves an accuracy of 92.18%, a loss of 7.82%, a Mean Squared Error (MSE) of 0.858, a True Positive Rate (TPR) of 92.99%, and a True Negative Rate (TNR) of 92.19%, with a processing time of 50 s per timestamp.
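
The privacy-preserving scheme described above is typically realized with federated averaging (FedAvg): each site trains locally and only model weights, never patient images, reach the server, which averages them weighted by local dataset size. A minimal sketch with made-up hospital weights and sizes:

```python
def fedavg(client_weights, client_sizes):
    """Federated averaging: combine per-client model weights on the server,
    weighted by each client's local dataset size. Raw data never leaves
    the client; only the flattened weight vectors are shared."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(n_params)
    ]

# Three hypothetical hospitals, each holding a two-parameter model.
hospitals = [[0.2, 0.5], [0.4, 0.1], [0.3, 0.3]]
sizes = [100, 300, 100]
global_weights = fedavg(hospitals, sizes)
print(global_weights)
```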

Non-invasive classification of non-neoplastic and neoplastic gallbladder polyps based on clinical imaging and ultrasound radiomics features: An interpretable machine learning model.

Dou M, Liu H, Tang Z, Quan L, Xu M, Wang F, Du Z, Geng Z, Li Q, Zhang D

PubMed · Jun 1, 2025
Gallbladder (GB) adenomas, precancerous lesions for gallbladder carcinoma (GBC), lack reliable non-invasive tools for preoperative differentiation of neoplastic polyps from cholesterol polyps. This study aimed to evaluate an interpretable machine learning (ML) combined model for precise differentiation of the pathological nature of gallbladder polyps (GPs). The study consecutively enrolled 744 patients from Xi'an Jiaotong University First Affiliated Hospital between January 2017 and December 2023 who were pathologically diagnosed postoperatively with cholesterol polyps, adenomas or T1-stage GBC. Radiomics features were extracted and selected, while clinical variables were subjected to univariate and multivariate logistic regression analyses to identify significant predictors of neoplastic polyps. An optimal ML-based radiomics model was developed, and separate clinical, ultrasound (US) and combined models were constructed. Finally, SHapley Additive exPlanations (SHAP) was employed to visualize the classification process. The areas under the curves (AUCs) of the CatBoost-based radiomics model were 0.852 (95% CI: 0.818-0.884) and 0.824 (95% CI: 0.758-0.881) for the training and test sets, respectively. The combined model demonstrated the best performance, with improved AUCs of 0.910 (95% CI: 0.885-0.934) and 0.869 (95% CI: 0.812-0.919); it outperformed the clinical, radiomics, and US models (all P < 0.05) and reduced the rate of unnecessary cholecystectomies. SHAP analysis revealed that polyp short diameter is a crucial independent risk factor in predicting the nature of GPs. The ML-based combined model may be an effective non-invasive tool for improving the precision treatment of GPs, and using SHAP to visualize the classification process can enhance its clinical applicability.
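
The AUCs reported above admit the Mann-Whitney interpretation: the probability that a randomly chosen positive case (neoplastic polyp) scores higher than a randomly chosen negative one (cholesterol polyp). That probability can be computed directly from predicted scores; the scores below are hypothetical:

```python
def auc(scores_pos, scores_neg):
    """AUC as the Mann-Whitney probability that a positive case outscores
    a negative case; ties count one half."""
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in scores_pos for n in scores_neg
    )
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical model outputs for neoplastic vs. cholesterol polyps.
print(auc([0.9, 0.8, 0.6], [0.7, 0.4, 0.2]))
```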

Diffusion Models in Low-Level Vision: A Survey.

He C, Shen Y, Fang C, Xiao F, Tang L, Zhang Y, Zuo W, Guo Z, Li X

PubMed · Jun 1, 2025
Deep generative models have gained considerable attention in low-level vision tasks due to their powerful generative capabilities. Among these, diffusion model-based approaches, which employ a forward diffusion process to degrade an image and a reverse denoising process for image generation, have become particularly prominent for producing high-quality, diverse samples with intricate texture details. Despite their widespread success in low-level vision, there remains a lack of a comprehensive, insightful survey that synthesizes and organizes the advances in diffusion model-based techniques. To address this gap, this paper presents the first comprehensive review focused on denoising diffusion models applied to low-level vision tasks, covering both theoretical and practical contributions. We outline three general diffusion modeling frameworks and explore their connections with other popular deep generative models, establishing a solid theoretical foundation for subsequent analysis. We then categorize diffusion models used in low-level vision tasks from multiple perspectives, considering both the underlying framework and the target application. Beyond natural image processing, we also summarize diffusion models applied to other low-level vision domains, including medical imaging, remote sensing, and video processing. Additionally, we provide an overview of widely used benchmarks and evaluation metrics in low-level vision tasks. Our review includes an extensive evaluation of diffusion model-based techniques across six representative tasks, with both quantitative and qualitative analysis. Finally, we highlight the limitations of current diffusion models and propose four promising directions for future research. This comprehensive review aims to foster a deeper understanding of the role of denoising diffusion models in low-level vision.
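
The forward diffusion process mentioned above has a closed form: x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps, where alpha_bar_t is the cumulative product of (1 - beta_s). A stdlib-only sketch assuming a standard linear beta schedule:

```python
import math
import random

def noising(x0, t, betas, rng):
    """Sample x_t from the closed-form forward diffusion marginal
    q(x_t | x_0): scale the clean signal by sqrt(alpha_bar_t) and add
    Gaussian noise scaled by sqrt(1 - alpha_bar_t)."""
    alpha_bar = 1.0
    for s in range(t + 1):
        alpha_bar *= 1.0 - betas[s]
    return [
        math.sqrt(alpha_bar) * v + math.sqrt(1.0 - alpha_bar) * rng.gauss(0, 1)
        for v in x0
    ]

# Linear schedule from 0.001 to 0.02 over 1000 steps (a common choice).
betas = [0.001 + i * (0.02 - 0.001) / 999 for i in range(1000)]
rng = random.Random(0)
x0 = [1.0, -0.5, 0.25]
xT = noising(x0, 999, betas, rng)  # at t = T the signal is nearly pure noise
```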

Evaluating artificial intelligence chatbots for patient education in oral and maxillofacial radiology.

Helvacioglu-Yigit D, Demirturk H, Ali K, Tamimi D, Koenig L, Almashraqi A

PubMed · Jun 1, 2025
This study aimed to compare the quality and readability of the responses generated by 3 publicly available artificial intelligence (AI) chatbots in answering frequently asked questions (FAQs) related to Oral and Maxillofacial Radiology (OMR) to assess their suitability for patient education. Fifteen OMR-related questions were selected from professional patient information websites. These questions were posed to ChatGPT-3.5 by OpenAI, Gemini 1.5 Pro by Google, and Copilot by Microsoft to generate responses. Three board-certified OMR specialists evaluated the responses regarding scientific adequacy, ease of understanding, and overall reader satisfaction. Readability was assessed using the Flesch-Kincaid Grade Level (FKGL) and Flesch Reading Ease (FRE) scores. The Wilcoxon signed-rank test was conducted to compare the scores assigned by the evaluators to the responses from the chatbots and professional websites. Interevaluator agreement was examined by calculating the Fleiss kappa coefficient. There were no significant differences between groups in terms of scientific adequacy. In terms of readability, chatbots had overall mean FKGL and FRE scores of 12.97 and 34.11, respectively. Interevaluator agreement was generally high. Although chatbots are relatively good at responding to FAQs, validating AI-generated information with input from healthcare professionals can enhance patient care and safety. The text content of both the chatbots and the websites, however, requires high reading levels.
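
The FKGL and FRE metrics used above are fixed formulas over word, sentence, and syllable counts. A small sketch with hypothetical counts chosen to land near the reported range:

```python
def flesch_scores(words: int, sentences: int, syllables: int):
    """Standard Flesch Reading Ease (FRE) and Flesch-Kincaid Grade Level
    (FKGL) formulas; higher FRE means easier text, while FKGL maps to a
    U.S. school grade level."""
    wps = words / sentences   # average sentence length
    spw = syllables / words   # average syllables per word
    fre = 206.835 - 1.015 * wps - 84.6 * spw
    fkgl = 0.39 * wps + 11.8 * spw - 15.59
    return fre, fkgl

# Hypothetical chatbot answer: 180 words, 9 sentences, 320 syllables.
fre, fkgl = flesch_scores(180, 9, 320)
print(f"FRE={fre:.1f}, FKGL={fkgl:.1f}")
```

An FKGL above 12 corresponds to college-level reading, well beyond the 6th-to-8th-grade level usually recommended for patient materials.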

Deep Learning in Digital Breast Tomosynthesis: Current Status, Challenges, and Future Trends.

Wang R, Chen F, Chen H, Lin C, Shuai J, Wu Y, Ma L, Hu X, Wu M, Wang J, Zhao Q, Shuai J, Pan J

PubMed · Jun 1, 2025
The high-resolution three-dimensional (3D) images generated with digital breast tomosynthesis (DBT) in the screening of breast cancer offer new possibilities for early disease diagnosis. Early detection is especially important as the incidence of breast cancer increases. However, DBT also presents challenges in terms of poorer results for dense breasts, increased false positive rates, slightly higher radiation doses, and increased reading times. Deep learning (DL) has been shown to effectively increase the processing efficiency and diagnostic accuracy of DBT images. This article reviews the application and outlook of DL in DBT-based breast cancer screening. First, the fundamentals and challenges of DBT technology are introduced. The applications of DL in DBT are then grouped into three categories: diagnostic classification of breast diseases, lesion segmentation and detection, and medical image generation. Additionally, the current public databases for mammography are summarized in detail. Finally, this paper analyzes the main challenges in the application of DL techniques in DBT, such as the lack of public datasets and model training issues, and proposes possible directions for future research, including large language models, multisource domain transfer, and data augmentation, to encourage innovative applications of DL in medical imaging.

Incorporating radiomic MRI models for presurgical response assessment in patients with early breast cancer undergoing neoadjuvant systemic therapy: Collaborative insights from breast oncologists and radiologists.

Gaudio M, Vatteroni G, De Sanctis R, Gerosa R, Benvenuti C, Canzian J, Jacobs F, Saltalamacchia G, Rizzo G, Pedrazzoli P, Santoro A, Bernardi D, Zambelli A

PubMed · Jun 1, 2025
The assessment of neoadjuvant treatment's response is critical for selecting the most suitable therapeutic options for patients with breast cancer to reduce the need for invasive local therapies. Breast magnetic resonance imaging (MRI) is so far one of the most accurate approaches for assessing pathological complete response, although this is limited by the qualitative and subjective nature of radiologists' assessment, often making it insufficient for deciding whether to forgo additional locoregional therapy measures. To increase accuracy and predictive power, radiomic MRI models aided by machine learning and deep learning methods, as part of artificial intelligence, have been used to analyse the different subtypes of breast cancer and the specific changes observed before and after therapy. This review discusses recent advancements in radiomic MRI models for presurgical response assessment for patients with early breast cancer receiving preoperative treatments, with a focus on their implications for clinical practice.

UniBrain: Universal Brain MRI diagnosis with hierarchical knowledge-enhanced pre-training.

Lei J, Dai L, Jiang H, Wu C, Zhang X, Zhang Y, Yao J, Xie W, Zhang Y, Li Y, Zhang Y, Wang Y

PubMed · Jun 1, 2025
Magnetic Resonance Imaging (MRI) has become a pivotal tool in diagnosing brain diseases, with a wide array of computer-aided artificial intelligence methods being proposed to enhance diagnostic accuracy. However, early studies were often limited by small-scale datasets and a narrow range of disease types, which posed challenges in model generalization. This study presents UniBrain, a hierarchical knowledge-enhanced pre-training framework designed for universal brain MRI diagnosis. UniBrain leverages a large-scale dataset comprising 24,770 imaging-report pairs from routine diagnostics for pre-training. Unlike previous approaches that either focused solely on visual representation learning or used brute-force alignment between vision and language, the framework introduces a hierarchical alignment mechanism. This mechanism extracts structured knowledge from free-text clinical reports at multiple granularities, enabling vision-language alignment at both the sequence and case levels, thereby significantly improving feature learning efficiency. A coupled vision-language perception module is further employed for text-guided multi-label classification, which facilitates zero-shot evaluation and fine-tuning of downstream tasks without modifying the model architecture. UniBrain is validated on both in-domain and out-of-domain datasets, consistently surpassing existing state-of-the-art diagnostic models and demonstrating performance on par with radiologists in specific disease categories. It shows strong generalization capabilities across diverse tasks, highlighting its potential for broad clinical application. The code is available at https://github.com/ljy19970415/UniBrain.
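
The text-guided zero-shot mechanism described above can be sketched as similarity scoring between an image embedding and per-label text embeddings, so new disease labels can be added without retraining the vision backbone. The toy 3-d embeddings below stand in for real encoder outputs and are not UniBrain's actual code:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def zero_shot_labels(image_emb, label_embs):
    """Score each disease label by the similarity of its text embedding
    to the image embedding; each label is judged independently, which is
    what makes the setup multi-label and zero-shot."""
    return {name: cosine(image_emb, emb) for name, emb in label_embs.items()}

# Toy embeddings: the image is closest to the "infarct" text prompt.
image = [0.9, 0.1, 0.2]
labels = {"infarct": [1.0, 0.0, 0.1], "tumor": [0.0, 1.0, 0.0]}
scores = zero_shot_labels(image, labels)
```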

Exploring the significance of the frontal lobe for diagnosis of schizophrenia using explainable artificial intelligence and group level analysis.

Varaprasad SA, Goel T

PubMed · Jun 1, 2025
Schizophrenia (SZ) is a complex mental disorder characterized by a profound disruption in cognition and emotion, often resulting in a distorted perception of reality. Magnetic resonance imaging (MRI) is an essential tool for diagnosing SZ which helps to understand the organization of the brain. Functional MRI (fMRI) is a specialized imaging technique to measure and map brain activity by detecting changes in blood flow and oxygenation. The proposed work correlates results from an explainable deep learning approach with group-level analysis to identify the significant brain regions of SZ patients, for both structural MRI (sMRI) and fMRI data. The study found that the Grad-CAM heat maps show clear localization in the frontal lobe for the classification of SZ versus CN, with 97.33% accuracy. The group difference analysis reveals that sMRI data shows intense voxel activity in the right superior frontal gyrus of the frontal lobe in SZ patients. Also, the group difference between SZ and CN during n-back tasks of fMRI data indicates significant voxel activation in the frontal cortex of the frontal lobe. These findings suggest that the frontal lobe plays a crucial role in the diagnosis of SZ, aiding clinicians in planning the treatment.
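
The Grad-CAM heat maps mentioned above come from a simple combine step: each channel's activation map is weighted by the spatial average of its gradient, the weighted maps are summed, and a ReLU keeps only regions that support the predicted class. A minimal sketch on toy 2x2 maps:

```python
def grad_cam(activations, gradients):
    """Grad-CAM heat map from per-channel activation maps and the
    gradients of the class score with respect to them: global-average-pool
    each gradient map to get a channel weight, take the weighted sum of
    activation maps, then apply ReLU."""
    h, w = len(activations[0]), len(activations[0][0])
    cam = [[0.0] * w for _ in range(h)]
    for a_map, g_map in zip(activations, gradients):
        weight = sum(sum(row) for row in g_map) / (h * w)  # GAP of gradients
        for i in range(h):
            for j in range(w):
                cam[i][j] += weight * a_map[i][j]
    return [[max(0.0, v) for v in row] for row in cam]  # ReLU

# Two toy channels: one with a positive gradient, one with a negative one.
acts = [[[1.0, 0.0], [0.0, 2.0]], [[3.0, 0.0], [0.0, 0.0]]]
grads = [[[0.4, 0.4], [0.4, 0.4]], [[-0.2, -0.2], [-0.2, -0.2]]]
cam = grad_cam(acts, grads)
```

In practice the maps come from the last convolutional layer and the result is upsampled onto the input MRI slice; the toy arrays here only show the arithmetic.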

Beyond traditional orthopaedic data analysis: AI, multimodal models and continuous monitoring.

Oettl FC, Zsidai B, Oeding JF, Hirschmann MT, Feldt R, Tischer T, Samuelsson K

PubMed · Jun 1, 2025
Multimodal artificial intelligence (AI) has the potential to revolutionise healthcare by enabling the simultaneous processing and integration of various data types, including medical imaging, electronic health records, genomic information and real-time data. This review explores the current applications and future potential of multimodal AI across healthcare, with a particular focus on orthopaedic surgery. In presurgical planning, multimodal AI has demonstrated significant improvements in diagnostic accuracy and risk prediction, with studies reporting area under the receiver operating characteristic curve values indicating good to excellent performance across various orthopaedic conditions. Intraoperative applications leverage advanced imaging and tracking technologies to enhance surgical precision, while postoperative care has been advanced through continuous patient monitoring and early detection of complications. Despite these advances, significant challenges remain in data integration, standardisation, and privacy protection. Technical solutions such as federated learning (allowing decentralisation of models) and edge computing (allowing data analysis to happen on site or closer to it instead of in multipurpose datacenters) are being developed to address these concerns while maintaining compliance with regulatory frameworks. As this field continues to evolve, the integration of multimodal AI promises to advance personalised medicine, improve patient outcomes, and transform healthcare delivery through more comprehensive and nuanced analysis of patient data. Level of Evidence: Level V.