Advancements in Herpes Zoster Diagnosis, Treatment, and Management: Systematic Review of Artificial Intelligence Applications.

Wu D, Liu N, Ma R, Wu P

PubMed · Jun 30, 2025
The application of artificial intelligence (AI) in medicine has garnered significant attention in recent years, offering new possibilities for improving patient care across various domains. For herpes zoster, a viral infection caused by the reactivation of the varicella-zoster virus, AI technologies have shown remarkable potential in enhancing disease diagnosis, treatment, and management. This study aims to investigate the current state of research on the use of AI for herpes zoster and to offer a comprehensive synthesis of existing advancements. A systematic literature review was conducted following PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines. Three databases (Web of Science Core Collection, PubMed, and IEEE) were searched on November 17, 2023, to identify relevant studies on AI applications in herpes zoster research. Inclusion criteria were as follows: (1) research articles, (2) published in English, (3) involving actual AI applications, and (4) focusing on herpes zoster. Exclusion criteria comprised nonresearch articles, non-English papers, and studies that only mentioned AI without applying it. Two independent clinicians screened the studies, with a third senior clinician resolving disagreements. In total, 26 articles were included. Data were extracted on AI task types; algorithms; data sources; data types; and clinical applications in diagnosis, treatment, and management. Trend analysis revealed increasing annual interest in AI applications for herpes zoster. Hospital-derived data were the primary source (15/26, 57.7%), followed by public databases (6/26, 23.1%) and internet data (5/26, 19.2%). Medical images (9/26, 34.6%) and electronic medical records (7/26, 26.9%) were the most commonly used data types. Classification tasks (85.2%) dominated AI applications, with neural networks, particularly multilayer perceptrons and convolutional neural networks, being the most frequently used algorithms. AI applications were analyzed across three domains: (1) diagnosis, where mobile deep neural networks, convolutional neural network ensemble models, and mixed-scale attention-based models have improved diagnostic accuracy and efficiency; (2) treatment, where machine learning models, such as deep autoencoders combined with functional magnetic resonance imaging, electroencephalography, and clinical data, have enhanced treatment outcome predictions; and (3) management, where AI has facilitated case identification, epidemiological research, health care burden assessment, and risk factor exploration for postherpetic neuralgia and other complications. Overall, this study provides a comprehensive overview of AI applications in herpes zoster from clinical, data, and algorithmic perspectives, offering valuable insights for future research in this rapidly evolving field. AI has significantly advanced herpes zoster research by enhancing diagnostic accuracy, predicting treatment outcomes, and optimizing disease management. However, several limitations exist, including potential omissions from excluding databases such as Embase and Scopus, language bias due to the inclusion of only English publications, and the risk of subjective bias in study selection. Broader studies and continuous updates are needed to fully capture the scope of AI applications in herpes zoster in the future.

Development of a deep learning algorithm for detecting significant coronary artery stenosis in whole-heart coronary magnetic resonance angiography.

Takafuji M, Ishida M, Shiomi T, Nakayama R, Fujita M, Yamaguchi S, Washiyama Y, Nagata M, Ichikawa Y, Inoue Katsuhiro RT, Nakamura S, Sakuma H

PubMed · Jun 30, 2025
Whole-heart coronary magnetic resonance angiography (CMRA) enables noninvasive and accurate detection of coronary artery stenosis. Nevertheless, the visual interpretation of CMRA is constrained by the observer's experience, necessitating substantial training. The purposes of this study were to develop a deep learning (DL) algorithm using a deep convolutional neural network to accurately detect significant coronary artery stenosis in CMRA and to investigate the effectiveness of this DL algorithm as a tool for assisting in accurate detection of coronary artery stenosis. Nine hundred and fifty-one coronary segments from 75 patients who underwent both CMRA and invasive coronary angiography (ICA) were studied. Significant stenosis was defined as a reduction in luminal diameter of >50% on quantitative ICA. A DL algorithm was proposed to classify CMRA segments into those with and without significant stenosis. A 4-fold cross-validation method was used to train and test the DL algorithm. An observer study was then conducted using 40 segments with stenosis and 40 segments without stenosis. Three radiology experts and 3 radiology trainees independently rated the likelihood of the presence of stenosis in each coronary segment with a continuous scale from 0 to 1, first without the support of the DL algorithm, then using the DL algorithm. Significant stenosis was observed in 84 (8.8%) of the 951 coronary segments. Using the DL algorithm trained by the 4-fold cross-validation method, the area under the receiver operating characteristic curve (AUC) for the detection of segments with significant coronary artery stenosis was 0.890, with 83.3% sensitivity, 83.6% specificity and 83.6% accuracy. In the observer study, the average AUC of trainees was significantly improved using the DL algorithm (0.898) compared to that without the algorithm (0.821, p<0.001). The average AUC of experts tended to be higher with the DL algorithm (0.897), but not significantly different from that without the algorithm (0.879, p=0.082). We developed a DL algorithm offering high diagnostic accuracy for detecting significant coronary artery stenosis on CMRA. Our proposed DL algorithm appears to be an effective tool for assisting inexperienced observers to accurately detect coronary artery stenosis in whole-heart CMRA.
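
As a rough illustration of the evaluation protocol described above (not the authors' code), the sketch below runs a 4-fold cross-validated per-segment classifier and reports AUC, sensitivity, and specificity. Synthetic feature vectors stand in for the CMRA segments and a logistic regression stands in for the deep convolutional network; prevalence is set to roughly match the study.

```python
# Illustrative sketch only: 4-fold cross-validated per-segment evaluation.
# Synthetic features replace CMRA images; logistic regression replaces the CNN.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(0)
n_segments, n_features = 951, 64                   # 951 coronary segments, as in the study
y = (rng.random(n_segments) < 0.088).astype(int)   # ~8.8% prevalence of significant stenosis
X = rng.normal(size=(n_segments, n_features)) + y[:, None] * 0.8

aucs = []
for tr, te in StratifiedKFold(n_splits=4, shuffle=True, random_state=0).split(X, y):
    clf = LogisticRegression(max_iter=1000).fit(X[tr], y[tr])
    prob = clf.predict_proba(X[te])[:, 1]
    aucs.append(roc_auc_score(y[te], prob))
    tn, fp, fn, tp = confusion_matrix(y[te], prob >= 0.5).ravel()
    print(f"fold AUC={aucs[-1]:.3f}  sens={tp/(tp+fn):.3f}  spec={tn/(tn+fp):.3f}")
print(f"mean AUC across folds: {np.mean(aucs):.3f}")
```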

Efficient Chest X-Ray Feature Extraction and Feature Fusion for Pneumonia Detection Using Lightweight Pretrained Deep Learning Models

Chandola, Y., Uniyal, V., Bachheti, Y.

medRxiv preprint · Jun 30, 2025
Pneumonia is a respiratory condition characterized by inflammation of the alveolar sacs in the lungs, which disrupts normal oxygen exchange. This disease disproportionately impacts vulnerable populations, including young children (under five years of age) and elderly individuals (over 65 years), primarily due to their compromised immune systems. The mortality rate associated with pneumonia remains alarmingly high, particularly in low-resource settings where healthcare access is limited. Although effective prevention strategies exist, pneumonia continues to claim the lives of approximately one million children each year, earning its reputation as a "silent killer." Globally, an estimated 500 million cases are documented annually, underscoring its widespread public health burden. This study explores the design and evaluation of CNN-based computer-aided diagnostic (CAD) systems for efficient and accurate classification of chest radiographs into binary classes (Normal, Pneumonia). An augmented Kaggle dataset of 18,200 chest radiographs, split between normal and pneumonia cases, was utilized. A series of experiments evaluated lightweight CNN models (ShuffleNet, NASNet-Mobile, and EfficientNet-b0) with transfer learning, achieving accuracies of 90%, 88%, and 89%, respectively. Deep features were then extracted from each network, fused, and paired with SVM and XGBoost classifiers, achieving accuracies of 97% and 98%, respectively. The proposed research emphasizes the crucial role of CAD systems in advancing radiological diagnostics, delivering effective solutions to aid radiologists in distinguishing between diagnoses by applying feature fusion and feature selection along with various machine learning algorithms and deep learning architectures.
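
The fusion step described above can be illustrated with a small hedged sketch (assumptions, not the authors' code): arrays stand in for penultimate-layer features from pretrained ShuffleNet and EfficientNet-b0, which are concatenated and passed to an SVM classifier.

```python
# Illustrative sketch of deep-feature fusion + SVM classification.
# f_shufflenet and f_efficientnet are random placeholders for backbone features.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 1000
y = rng.integers(0, 2, n)                                     # 0 = Normal, 1 = Pneumonia
f_shufflenet   = rng.normal(size=(n, 1024)) + y[:, None] * 0.1
f_efficientnet = rng.normal(size=(n, 1280)) + y[:, None] * 0.1
fused = np.concatenate([f_shufflenet, f_efficientnet], axis=1)  # feature fusion

X_tr, X_te, y_tr, y_te = train_test_split(fused, y, stratify=y, random_state=0)
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X_tr, y_tr)
print("fused-feature SVM accuracy:", accuracy_score(y_te, svm.predict(X_te)))
```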

ToolCAP: Novel Tools to improve management of paediatric Community-Acquired Pneumonia - a randomized controlled trial- Statistical Analysis Plan

Cicconi, S., Glass, T., Du Toit, J., Bresser, M., Dhalla, F., Faye, P. M., Lal, L., Langet, H., Manji, K., Moser, A., Ndao, M. A., Palmer, M., Tine, J. A. D., Van Hoving, N., Keitel, K.

medRxiv preprint · Jun 30, 2025
The ToolCAP cohort study is a prospective, observational, multi-site platform study designed to collect harmonized, high-quality clinical, imaging, and biological data on children with IMCI-defined pneumonia in low- and middle-income countries (LMICs). The primary objective is to inform the development and validation of diagnostic and prognostic tools, including lung ultrasound (LUS), point-of-care biomarkers, and AI-based models, to improve pneumonia diagnosis, management, and antimicrobial stewardship. This statistical analysis plan (SAP) outlines the analytic strategy for describing the study population, assessing the performance of candidate diagnostic tools, and enabling data sharing in support of secondary research questions and AI model development. Children under 12 years presenting with suspected pneumonia are enrolled within 24 hours of presentation and undergo clinical assessment, digital auscultation, LUS, and optional biological sampling. Follow-up occurs on Day 8 and Day 29 to assess outcomes including recovery, treatment response, and complications. The SAP details variable definitions, data management strategies, and pre-specified analyses, including descriptive summaries, sensitivity and specificity of diagnostic tools against clinical reference standards, and exploratory subgroup analyses.
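
For readers unfamiliar with the planned diagnostic-accuracy analyses, here is a minimal illustration (not part of the SAP, with made-up counts) of sensitivity and specificity of a binary index test against a clinical reference standard, with Wilson 95% confidence intervals.

```python
# Illustrative only: sensitivity/specificity with Wilson 95% CIs. Counts are hypothetical.
from math import sqrt

def wilson_ci(k, n, z=1.96):
    """Wilson score interval for a proportion k/n."""
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

tp, fn, tn, fp = 82, 18, 140, 20          # hypothetical 2x2 table vs. reference standard
sens, spec = tp / (tp + fn), tn / (tn + fp)
lo, hi = wilson_ci(tp, tp + fn)
print(f"sensitivity {sens:.2f} (95% CI {lo:.2f}-{hi:.2f})")
lo, hi = wilson_ci(tn, tn + fp)
print(f"specificity {spec:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```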

Exposing and Mitigating Calibration Biases and Demographic Unfairness in MLLM Few-Shot In-Context Learning for Medical Image Classification

Xing Shen, Justin Szeto, Mingyang Li, Hengguan Huang, Tal Arbel

arXiv preprint · Jun 29, 2025
Multimodal large language models (MLLMs) have enormous potential to perform few-shot in-context learning in the context of medical image analysis. However, safe deployment of these models into real-world clinical practice requires an in-depth analysis of the accuracy of their predictions and their associated calibration errors, particularly across different demographic subgroups. In this work, we present the first investigation into the calibration biases and demographic unfairness of MLLMs' predictions and confidence scores in few-shot in-context learning for medical image classification. We introduce CALIN, an inference-time calibration method designed to mitigate the associated biases. Specifically, CALIN estimates the amount of calibration needed, represented by calibration matrices, using a bi-level procedure: progressing from the population level to the subgroup level prior to inference. It then applies this estimation to calibrate the predicted confidence scores during inference. Experimental results on three medical imaging datasets (PAPILA for fundus image classification, HAM10000 for skin cancer classification, and MIMIC-CXR for chest X-ray classification) demonstrate CALIN's effectiveness at ensuring fair confidence calibration in its predictions, while improving its overall prediction accuracy and exhibiting a minimal fairness-utility trade-off.
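
A heavily simplified sketch of the general idea follows, under the assumption that calibration is applied as a matrix acting on predicted class-confidence vectors; the paper's bi-level estimation procedure is not reproduced here, and all matrices below are illustrative.

```python
# Illustrative assumption, not CALIN itself: blend a subgroup-level calibration
# matrix with a population-level one, apply it to confidence vectors, renormalize.
import numpy as np

def calibrate(probs, M_population, M_subgroup, alpha=0.5):
    """Apply a blended calibration matrix to a batch of probability vectors."""
    M = alpha * M_population + (1 - alpha) * M_subgroup
    adjusted = np.clip(probs @ M.T, 1e-8, None)    # reweight class confidences
    return adjusted / adjusted.sum(axis=1, keepdims=True)

probs = np.array([[0.7, 0.3], [0.4, 0.6]])         # model confidences for 2 samples
M_pop = np.array([[0.9, 0.1], [0.2, 0.8]])         # population-level matrix (made up)
M_sub = np.array([[0.8, 0.2], [0.1, 0.9]])         # subgroup-level matrix (made up)
print(calibrate(probs, M_pop, M_sub))
```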

A Hierarchical Slice Attention Network for Appendicitis Classification in 3D CT Scans

Chia-Wen Huang, Haw Hwai, Chien-Chang Lee, Pei-Yuan Wu

arXiv preprint · Jun 29, 2025
Timely and accurate diagnosis of appendicitis is critical in clinical settings to prevent serious complications. While CT imaging remains the standard diagnostic tool, the growing number of cases can overwhelm radiologists, potentially causing delays. In this paper, we propose a deep learning model that leverages 3D CT scans for appendicitis classification, incorporating Slice Attention mechanisms guided by external 2D datasets to enhance small lesion detection. Additionally, we introduce a hierarchical classification framework using pre-trained 2D models to differentiate between simple and complicated appendicitis. Our approach improves AUC by 3% for appendicitis and 5.9% for complicated appendicitis, offering a more efficient and reliable diagnostic solution compared to previous work.
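
A minimal sketch of slice-level attention pooling follows, assuming per-slice feature vectors from a 2D backbone are aggregated with learned attention weights; this illustrates the general mechanism only, not the authors' architecture.

```python
# Illustrative slice-attention pooling over per-slice CT features (assumed design).
import torch
import torch.nn as nn

class SliceAttentionPool(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Linear(dim, 1)            # one attention logit per slice

    def forward(self, slice_feats):               # (batch, n_slices, dim)
        w = torch.softmax(self.score(slice_feats), dim=1)   # slice attention weights
        return (w * slice_feats).sum(dim=1)       # weighted sum -> (batch, dim)

feats = torch.randn(2, 64, 256)                   # 2 volumes, 64 slices, 256-d features
pooled = SliceAttentionPool(256)(feats)
print(pooled.shape)                               # torch.Size([2, 256])
```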

Federated Breast Cancer Detection Enhanced by Synthetic Ultrasound Image Augmentation

Hongyi Pan, Ziliang Hong, Gorkem Durak, Ziyue Xu, Ulas Bagci

arXiv preprint · Jun 29, 2025
Federated learning (FL) has emerged as a promising paradigm for collaboratively training deep learning models across institutions without exchanging sensitive medical data. However, its effectiveness is often hindered by limited data availability and non-independent, identically distributed data across participating clients, which can degrade model performance and generalization. To address these challenges, we propose a generative AI based data augmentation framework that integrates synthetic image sharing into the federated training process for breast cancer diagnosis via ultrasound images. Specifically, we train two simple class-specific Deep Convolutional Generative Adversarial Networks: one for benign and one for malignant lesions. We then simulate a realistic FL setting using three publicly available breast ultrasound image datasets: BUSI, BUS-BRA, and UDIAT. FedAvg and FedProx are adopted as baseline FL algorithms. Experimental results show that incorporating a suitable number of synthetic images improved the average AUC from 0.9206 to 0.9237 for FedAvg and from 0.9429 to 0.9538 for FedProx. We also note that excessive use of synthetic data reduced performance, underscoring the importance of maintaining a balanced ratio of real and synthetic samples. Our findings highlight the potential of generative AI based data augmentation to enhance FL results in the breast ultrasound image classification task.
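
A minimal sketch of the aggregation side of this setup, assuming standard FedAvg weighted averaging of client model weights after local training on a mix of real and GAN-synthesized images (toy model, no actual training performed):

```python
# Illustrative FedAvg aggregation; local training and DCGAN synthesis are omitted.
import copy
import torch.nn as nn

def fedavg(client_states, client_sizes):
    """Weighted average of client state_dicts (FedAvg aggregation)."""
    total = sum(client_sizes)
    avg = copy.deepcopy(client_states[0])
    for key in avg:
        avg[key] = sum(s[key] * (n / total) for s, n in zip(client_states, client_sizes))
    return avg

model = nn.Linear(16, 2)                           # toy stand-in for the diagnosis model
states, sizes = [], []
for n_real, n_synth in [(100, 30), (80, 30), (60, 30)]:   # 3 clients (e.g. BUSI, BUS-BRA, UDIAT)
    local = copy.deepcopy(model)
    # ... local training on n_real real + n_synth synthetic images would go here ...
    states.append(local.state_dict())
    sizes.append(n_real + n_synth)
model.load_state_dict(fedavg(states, sizes))       # server-side aggregation step
```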

Hierarchical Corpus-View-Category Refinement for Carotid Plaque Risk Grading in Ultrasound

Zhiyuan Zhu, Jian Wang, Yong Jiang, Tong Han, Yuhao Huang, Ang Zhang, Kaiwen Yang, Mingyuan Luo, Zhe Liu, Yaofei Duan, Dong Ni, Tianhong Tang, Xin Yang

arXiv preprint · Jun 29, 2025
Accurate carotid plaque grading (CPG) is vital to assess the risk of cardiovascular and cerebrovascular diseases. Because of the small size and high intra-class variability of plaques, CPG is commonly evaluated using a combination of transverse and longitudinal ultrasound views in clinical practice. However, most existing deep learning-based multi-view classification methods focus on feature fusion across different views, neglecting the importance of representation learning and the difference in class features. To address these issues, we propose a novel Corpus-View-Category Refinement Framework (CVC-RF) that processes information at the Corpus, View, and Category levels, enhancing model performance. Our contribution is four-fold. First, to the best of our knowledge, this is the first deep learning-based method for CPG that follows the latest Carotid Plaque-RADS guidelines. Second, we propose a novel center-memory contrastive loss, which enhances the network's global modeling capability by comparing with representative cluster centers and diverse negative samples at the Corpus level. Third, we design a cascaded down-sampling attention module to fuse multi-scale information and achieve implicit feature interaction at the View level. Finally, a parameter-free mixture-of-experts weighting strategy is introduced to leverage class clustering knowledge to weight different experts, enabling feature decoupling at the Category level. Experimental results indicate that CVC-RF effectively models global features via multi-level refinement, achieving state-of-the-art performance on the challenging CPG task.
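
A rough sketch, under assumptions, of a center-based contrastive loss in the spirit of the Corpus-level component described above: each embedding is pulled toward its class's representative center and pushed away from the others. This is not the paper's implementation.

```python
# Illustrative center-based contrastive loss (assumed form, not the authors' code).
import torch
import torch.nn.functional as F

def center_contrastive_loss(embeddings, labels, centers, temperature=0.1):
    z = F.normalize(embeddings, dim=1)             # (batch, dim)
    c = F.normalize(centers, dim=1)                # (n_classes, dim)
    logits = z @ c.T / temperature                 # similarity to every class center
    return F.cross_entropy(logits, labels)         # positive = own-class center

emb = torch.randn(8, 128)                          # batch of plaque-view embeddings
labels = torch.randint(0, 4, (8,))                 # e.g. 4 illustrative plaque grades
centers = torch.randn(4, 128)                      # memory of representative centers
print(center_contrastive_loss(emb, labels, centers))
```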

Developing ultrasound-based machine learning models for accurate differentiation between sclerosing adenosis and invasive ductal carcinoma.

Liu G, Yang N, Qu Y, Chen G, Wen G, Li G, Deng L, Mai Y

PubMed · Jun 28, 2025
This study aimed to develop a machine learning model using breast ultrasound images to improve the non-invasive differential diagnosis between sclerosing adenosis (SA) and invasive ductal carcinoma (IDC). In total, 2046 ultrasound images from 772 SA and IDC patients were collected; regions of interest (ROIs) were delineated and features were extracted. The dataset was split into training and test cohorts, and feature selection was performed using correlation coefficients and Recursive Feature Elimination. Ten classifiers with grid search and 5-fold cross-validation were applied during model training. The receiver operating characteristic (ROC) curve and the Youden index were used for model evaluation, and SHapley Additive exPlanations (SHAP) was employed for model interpretation. Another 224 ROIs from 84 patients at other hospitals were used for external validation. For the ROI-level model, XGBoost with 18 features achieved an area under the curve (AUC) of 0.9758 (0.9654-0.9847) in the test cohort and 0.9906 (0.9805-0.9973) in the validation cohort. For the patient-level model, logistic regression with 9 features achieved an AUC of 0.9653 (0.9402-0.9859) in the test cohort and 0.9846 (0.9615-0.9978) in the validation cohort. The feature "Original shape Major Axis Length" was identified as the most important, with higher values associated with a greater likelihood of the sample being IDC; feature contributions for specific ROIs were visualized as well. We developed explainable, ultrasound-based machine learning models with high performance for differentiating SA and IDC, offering a potential non-invasive tool for improved differential diagnosis. Question: Accurately distinguishing between sclerosing adenosis (SA) and invasive ductal carcinoma (IDC) in a non-invasive manner has been a diagnostic challenge. Findings: Explainable, ultrasound-based machine learning models with high performance were developed for differentiating SA and IDC and validated well in an external validation cohort. Critical relevance: These models provide non-invasive tools to reduce misdiagnoses of SA and improve early detection of IDC.
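
A hedged sketch of a comparable pipeline (not the authors' code): correlation filtering, Recursive Feature Elimination down to 9 features, and a logistic-regression classifier evaluated by ROC AUC, with simulated radiomics features standing in for the ultrasound ROI data.

```python
# Illustrative feature-selection + classification pipeline with simulated features.
import numpy as np
import pandas as pd
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(772, 50)),
                 columns=[f"feat_{i}" for i in range(50)])   # placeholder radiomics features
y = rng.integers(0, 2, 772)                                  # 0 = SA, 1 = IDC
X["feat_0"] += y * 0.8                                       # inject a weak signal for the demo

# Drop one of each pair of highly correlated features (|r| > 0.9).
corr = X.corr().abs()
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
X = X.drop(columns=[c for c in upper.columns if (upper[c] > 0.9).any()])

# Keep the 9 most informative features with RFE, then fit logistic regression.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
selector = RFE(LogisticRegression(max_iter=1000), n_features_to_select=9).fit(X_tr, y_tr)
clf = LogisticRegression(max_iter=1000).fit(X_tr.loc[:, selector.support_], y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te.loc[:, selector.support_])[:, 1])
print(f"test AUC: {auc:.3f}")
```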