
Impact of polymer source variations on hydrogel structure and product performance in dexamethasone-loaded ophthalmic inserts.

VandenBerg MA, Zaman RU, Plavchak CL, Smith WC, Nejad HB, Beringhs AO, Wang Y, Xu X

PubMed | Jul 9 2025
Localized drug delivery can enhance therapeutic efficacy while minimizing systemic side effects, making sustained-release ophthalmic inserts an attractive alternative to traditional eye drops. Such inserts offer improved patient compliance through prolonged therapeutic effects and a reduced need for frequent administration. This study focuses on dexamethasone-containing ophthalmic inserts that use polyethylene glycol (PEG) as a key excipient, which forms a hydrogel upon contact with tear fluid. Developing generic equivalents of PEG-based inserts is challenging due to difficulties in characterizing inactive ingredients and the absence of standardized physicochemical characterization methods to demonstrate similarity. To address this gap, a suite of analytical approaches was applied both to PEG precursor materials sourced from different vendors and to manufactured inserts. ¹H NMR, FTIR, MALDI, and SEC revealed variations in end-group functionalization, impurity content, and molecular weight distribution of the excipient. These differences led to changes in finished-insert network properties, such as porosity, pore size and structure, gel mechanical strength, and crystallinity, which were corroborated by X-ray microscopy, AI-based image analysis, and thermal, mechanical, and density measurements. In vitro release testing revealed distinct drug release profiles across formulations, with swelling rate correlated to release rate (i.e., faster release with rapid swelling). The use of non-micronized versus micronized dexamethasone also contributed to differences in release profiles. Through comprehensive characterization of these PEG-based dexamethasone inserts, correlations between polymer quality, hydrogel microstructure, and release kinetics were established. The study highlights how excipient differences can alter product performance, emphasizing the importance of thorough analysis in developing generic equivalents of complex drug products.

Securing Healthcare Data Integrity: Deepfake Detection Using Autonomous AI Approaches.

Hsu CC, Tsai MY, Yu CM

PubMed | Jul 9 2025
The rapid evolution of deepfake technology poses critical challenges to healthcare systems, particularly in safeguarding the integrity of medical imaging, electronic health records (EHR), and telemedicine platforms. As autonomous AI becomes increasingly integrated into smart healthcare, the potential misuse of deepfakes to manipulate sensitive healthcare data or impersonate medical professionals highlights the urgent need for robust and adaptive detection mechanisms. In this work, we propose DProm, a dynamic deepfake detection framework leveraging visual prompt tuning (VPT) with a pre-trained Swin Transformer. Unlike traditional static detection models, which struggle to adapt to rapidly evolving deepfake techniques, DProm fine-tunes a small set of visual prompts to efficiently adapt to new data distributions with minimal computational and storage requirements. Comprehensive experiments demonstrate that DProm achieves state-of-the-art performance in both static cross-dataset evaluations and dynamic scenarios, ensuring robust detection across diverse data distributions. By addressing the challenges of scalability, adaptability, and resource efficiency, DProm offers a transformative solution for enhancing the security and trustworthiness of autonomous AI systems in healthcare, paving the way for safer and more reliable smart healthcare applications.
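
For orientation, the sketch below illustrates the general idea of prompt-style adaptation of a frozen Swin Transformer for binary real/fake classification. It is a minimal sketch only: the backbone variant, the number of prompt vectors, and the placement of the prompts (injected in pooled feature space rather than as token-level visual prompts) are simplifying assumptions, not the DProm implementation.

```python
# Minimal sketch: prompt-style adaptation of a frozen Swin backbone for
# binary real/fake classification. Prompt placement and sizes are
# illustrative assumptions, not the DProm implementation.
import torch
import torch.nn as nn
import timm

class PromptTunedSwin(nn.Module):
    def __init__(self, num_prompts: int = 8, num_classes: int = 2):
        super().__init__()
        # Pre-trained Swin Transformer used as a frozen feature extractor.
        self.backbone = timm.create_model(
            "swin_tiny_patch4_window7_224", pretrained=True, num_classes=0
        )
        for p in self.backbone.parameters():
            p.requires_grad = False
        feat_dim = self.backbone.num_features
        # Learnable prompt vectors: here injected as a learned offset in the
        # pooled feature space (a simplification of token-level visual prompts).
        self.prompts = nn.Parameter(torch.zeros(num_prompts, feat_dim))
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, x):
        feats = self.backbone(x)                  # (B, feat_dim), backbone frozen
        feats = feats + self.prompts.mean(dim=0)  # inject learned prompt signal
        return self.head(feats)

model = PromptTunedSwin()
trainable = [n for n, p in model.named_parameters() if p.requires_grad]
print(trainable)  # only the prompts and the classification head are updated
```

Because only the prompts and the head carry gradients, adapting to a new deepfake distribution touches a tiny fraction of the parameters, which is the storage and compute advantage the abstract describes.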

4KAgent: Agentic Any Image to 4K Super-Resolution

Yushen Zuo, Qi Zheng, Mingyang Wu, Xinrui Jiang, Renjie Li, Jian Wang, Yide Zhang, Gengchen Mai, Lihong V. Wang, James Zou, Xiaoyu Wang, Ming-Hsuan Yang, Zhengzhong Tu

arXiv preprint | Jul 9 2025
We present 4KAgent, a unified agentic super-resolution generalist system designed to universally upscale any image to 4K resolution (and even higher, if applied iteratively). Our system can transform images from extremely low resolutions with severe degradations, for example, highly distorted inputs at 256x256, into crystal-clear, photorealistic 4K outputs. 4KAgent comprises three core components: (1) Profiling, a module that customizes the 4KAgent pipeline based on bespoke use cases; (2) A Perception Agent, which leverages vision-language models alongside image quality assessment experts to analyze the input image and make a tailored restoration plan; and (3) A Restoration Agent, which executes the plan, following a recursive execution-reflection paradigm, guided by a quality-driven mixture-of-expert policy to select the optimal output for each step. Additionally, 4KAgent embeds a specialized face restoration pipeline, significantly enhancing facial details in portrait and selfie photos. We rigorously evaluate our 4KAgent across 11 distinct task categories encompassing a total of 26 diverse benchmarks, setting new state-of-the-art on a broad spectrum of imaging domains. Our evaluations cover natural images, portrait photos, AI-generated content, satellite imagery, fluorescence microscopy, and medical imaging like fundoscopy, ultrasound, and X-ray, demonstrating superior performance in terms of both perceptual (e.g., NIQE, MUSIQ) and fidelity (e.g., PSNR) metrics. By establishing a novel agentic paradigm for low-level vision tasks, we aim to catalyze broader interest and innovation within vision-centric autonomous agents across diverse research communities. We will release all the code, models, and results at: https://4kagent.github.io.
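
The sketch below illustrates the shape of a recursive execution-reflection loop with a quality-driven selection policy, in the spirit of the Restoration Agent. The candidate "experts" and the quality score are placeholders (a real system would call restoration models and no-reference IQA metrics such as NIQE or MUSIQ); this is not the actual 4KAgent pipeline.

```python
# Illustrative execution-reflection loop with quality-driven candidate
# selection. Experts and the quality score are placeholders, not 4KAgent tools.
from typing import Callable, List
import numpy as np

def no_reference_quality(img: np.ndarray) -> float:
    # Placeholder quality score; a real system would use NIQE/MUSIQ-style IQA.
    return float(img.std())

def restore(image: np.ndarray,
            experts: List[Callable[[np.ndarray], np.ndarray]],
            steps: int = 3) -> np.ndarray:
    current = image
    for _ in range(steps):
        # Execution: every expert proposes a restored candidate.
        candidates = [expert(current) for expert in experts]
        # Reflection: keep the best-scoring candidate, but only if it actually
        # improves on the current state; otherwise stop early.
        best = max(candidates, key=no_reference_quality)
        if no_reference_quality(best) <= no_reference_quality(current):
            break
        current = best
    return current
```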

Steps Adaptive Decay DPSGD: Enhancing Performance on Imbalanced Datasets with Differential Privacy with HAM10000

Xiaobo Huang, Fang Xie

arXiv preprint | Jul 9 2025
When applying machine learning to medical image classification, data leakage is a critical issue. Previous methods, such as adding noise to gradients for differential privacy, work well on large datasets like MNIST and CIFAR-100, but fail on small, imbalanced medical datasets like HAM10000. This is because the imbalanced distribution causes gradients from minority classes to be clipped and lose crucial information while majority classes dominate, leading the model to fall into suboptimal solutions early. To address this, we propose SAD-DPSGD, which uses a linearly decaying schedule for the noise and clipping thresholds. By allocating more privacy budget and using higher clipping thresholds in the initial training phases, the model avoids suboptimal solutions and enhances performance. Experiments show that SAD-DPSGD outperforms Auto-DPSGD on HAM10000, improving accuracy by 2.15% under $\epsilon = 3.0$, $\delta = 10^{-3}$.
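
A minimal sketch of the scheduling idea is shown below: per-sample gradients are clipped and noised as in standard DP-SGD, while the clipping threshold starts high and decays linearly over training. The concrete endpoint values, and the direction of the noise-multiplier schedule (rising over time, reflecting "more privacy budget in the initial phases"), are assumptions for illustration, not the paper's tuned settings.

```python
# Sketch of a DP-SGD step with a linearly scheduled clipping threshold and
# noise multiplier. Endpoints and schedule direction are illustrative
# assumptions, not the SAD-DPSGD settings.
import numpy as np

def linear_schedule(epoch, total_epochs, start, end):
    frac = epoch / max(total_epochs - 1, 1)
    return start + frac * (end - start)

def dp_step(per_sample_grads, clip_norm, noise_multiplier, rng):
    # Clip each per-sample gradient to clip_norm, sum, then add Gaussian noise.
    clipped = []
    for g in per_sample_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(per_sample_grads)

rng = np.random.default_rng(0)
total_epochs = 30
for epoch in range(total_epochs):
    clip = linear_schedule(epoch, total_epochs, start=2.0, end=0.5)   # decays
    sigma = linear_schedule(epoch, total_epochs, start=0.8, end=1.4)  # assumed rise
    # ... compute per-sample gradients for each minibatch, then:
    # noisy_grad = dp_step(per_sample_grads, clip, sigma, rng)
```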

Applying deep learning techniques to identify tonsilloliths in panoramic radiography.

Katı E, Baybars SC, Danacı Ç, Tuncer SA

PubMed | Jul 9 2025
Tonsilloliths can be seen on panoramic radiographs (PRs) as deposits located on the middle portion of the ramus of the mandible. Although tonsilloliths are clinically harmless, the high risk of misdiagnosis leads to unnecessary advanced examinations and interventions, jeopardizing patient safety and increasing unnecessary resource use in the healthcare system. Therefore, this study aims to meet an important clinical need by providing accurate and rapid diagnostic support. The dataset consisted of a total of 275 PRs: 125 without tonsilloliths and 150 with tonsilloliths. ResNet and EfficientNet CNN models were assessed during model selection, evaluating each model's learning capacity, complexity, and suitability for the problem at hand. Model effectiveness was evaluated using accuracy, recall, precision, and F1 score following the training phase. Both the ResNet18 and EfficientNetB0 models differentiated between tonsillolith-present and tonsillolith-absent conditions with an average accuracy of 89%. ResNet101 underperformed compared with the other models, while EfficientNetB1 achieved satisfactory accuracy in both categories. The EfficientNetB0 model achieved 93% precision, 87% recall, a 90% F1 score, and 89% accuracy. This study indicates that implementing AI-powered deep learning techniques would significantly improve the clinical diagnosis of tonsilloliths.
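
For context, a minimal fine-tuning sketch for binary tonsillolith classification with EfficientNetB0 is shown below. The folder layout, image preprocessing, and hyperparameters are illustrative assumptions; the study's actual training configuration is not described in the abstract.

```python
# Minimal sketch: fine-tuning EfficientNetB0 for binary classification of
# panoramic radiographs. Data layout, transforms, and hyperparameters are
# illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models, transforms, datasets

device = "cuda" if torch.cuda.is_available() else "cpu"

model = models.efficientnet_b0(weights=models.EfficientNet_B0_Weights.DEFAULT)
model.classifier[1] = nn.Linear(model.classifier[1].in_features, 2)  # present / absent
model = model.to(device)

tfm = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # panoramic radiographs are grayscale
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
# Assumed folder layout: data/train/{tonsillolith,no_tonsillolith}/*.png
train_ds = datasets.ImageFolder("data/train", transform=tfm)
loader = torch.utils.data.DataLoader(train_ds, batch_size=16, shuffle=True)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()
model.train()
for images, labels in loader:
    images, labels = images.to(device), labels.to(device)
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```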

MMDental - A multimodal dataset of tooth CBCT images with expert medical records.

Wang C, Zhang Y, Wu C, Liu J, Wu L, Wang Y, Huang X, Feng X, Wang Y

PubMed | Jul 9 2025
In the rapidly evolving field of intelligent dental healthcare, where Artificial Intelligence (AI) plays a pivotal role, the demand for multimodal datasets is critical. Existing public datasets are primarily composed of single-modal data, predominantly dental radiographs or scans, which limits the development of AI-driven applications for intelligent dental treatment. In this paper, we present the MultiModal Dental (MMDental) dataset to address this gap. MMDental comprises data from 660 patients, including 3D Cone-beam Computed Tomography (CBCT) images and corresponding detailed expert medical records with initial diagnoses and follow-up documentation. All CBCT scans were acquired under the guidance of professional physicians, and all patient records were reviewed by senior doctors. To the best of our knowledge, this is the first and largest dataset containing 3D CBCT images of teeth with corresponding medical records. Furthermore, we provide a comprehensive analysis of the dataset by exploring patient demographics, the prevalence of various dental conditions, and the distribution of disease across age groups. We believe this work will be beneficial for further advancements in intelligent dental treatment.

Applicability and performance of convolutional neural networks for the identification of periodontal bone loss in periapical radiographs: a scoping review.

Putra RH, Astuti ER, Nurrachman AS, Savitri Y, Vadya AV, Khairunisa ST, Iikubo M

PubMed | Jul 9 2025
The study aimed to review the applicability and performance of various Convolutional Neural Network (CNN) models for the identification of periodontal bone loss (PBL) in digital periapical radiographs through classification, detection, and segmentation approaches. We searched the PubMed, IEEE Xplore, and SCOPUS databases for articles published up to June 2024. After the selection process, a total of 11 studies were included in this review. The reviewed studies demonstrated that CNNs have significant potential for the automatic identification of PBL on periapical radiographs through classification and segmentation approaches. CNN architectures can be utilized to classify the presence or absence of PBL and its severity or degree, and to segment PBL areas. CNNs showed promising performance for PBL identification on periapical radiographs. Future research should focus on dataset preparation, proper selection of CNN architecture, and robust performance evaluation to improve the models. Utilizing an optimized CNN architecture is expected to assist dentists by providing accurate and efficient identification of PBL.

Noise-inspired diffusion model for generalizable low-dose CT reconstruction.

Gao Q, Chen Z, Zeng D, Zhang J, Ma J, Shan H

PubMed | Jul 8 2025
The generalization of deep learning-based low-dose computed tomography (CT) reconstruction models to doses unseen in the training data is important and remains challenging. Previous efforts rely heavily on paired data to improve generalization performance and robustness, either by collecting diverse CT data for re-training or a few test samples for fine-tuning. Recently, diffusion models have shown promising and generalizable performance in low-dose CT (LDCT) reconstruction; however, they may produce unrealistic structures because CT image noise deviates from a Gaussian distribution and because the guidance from noisy LDCT images provides imprecise prior information. In this paper, we propose a noise-inspired diffusion model for generalizable LDCT reconstruction, termed NEED, which tailors diffusion models to the noise characteristics of each domain. First, we propose a novel shifted Poisson diffusion model to denoise projection data, aligning the diffusion process with the noise model of pre-log LDCT projections. Second, we devise a doubly guided diffusion model to refine reconstructed images, leveraging LDCT images and initial reconstructions to locate prior information more accurately and enhance reconstruction fidelity. By cascading these two diffusion models for dual-domain reconstruction, NEED requires only normal-dose data for training and can be effectively extended to various unseen dose levels during testing via a time step matching strategy. Extensive qualitative, quantitative, and segmentation-based evaluations on two datasets demonstrate that NEED consistently outperforms state-of-the-art methods in reconstruction and generalization performance. Source code is available at https://github.com/qgao21/NEED.
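
For readers unfamiliar with the noise model being referenced, the standard shifted-Poisson approximation for pre-log CT measurements is summarized below; it motivates aligning the projection-domain diffusion process with non-Gaussian noise. This states the commonly used noise model only; NEED's exact forward process may differ.

```latex
% Shifted-Poisson approximation for pre-log CT measurements.
\begin{align}
  y &\sim \mathrm{Poisson}\!\left(\bar{y}\right) + \mathcal{N}\!\left(0, \sigma_e^2\right), \\
  y + \sigma_e^2 &\;\approx\; \mathrm{Poisson}\!\left(\bar{y} + \sigma_e^2\right),
\end{align}
% where $\bar{y}$ is the expected photon count along a detector ray and
% $\sigma_e^2$ is the electronic-noise variance.
```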

A Unified Platform for Radiology Report Generation and Clinician-Centered AI Evaluation

Ma, Z., Yang, X., Atalay, Z., Yang, A., Collins, S., Bai, H., Bernstein, M., Baird, G., Jiao, Z.

medRxiv preprint | Jul 8 2025
Generative AI models have demonstrated strong potential in radiology report generation, but their clinical adoption depends on physician trust. In this study, we conducted a radiology-focused Turing test to evaluate how well attendings and residents distinguish AI-generated reports from those written by radiologists, and how their confidence and decision time reflect trust. We developed an integrated web-based platform comprising two core modules: Report Generation and Report Evaluation. Using this platform, eight participants evaluated 48 anonymized X-ray cases, each paired with two reports from three comparison groups: radiologist vs. AI model 1, radiologist vs. AI model 2, and AI model 1 vs. AI model 2. Participants selected the AI-generated report, rated their confidence, and indicated their report preference. Attendings outperformed residents in identifying AI-generated reports (49.9% vs. 41.1%) and exhibited longer decision times, suggesting more deliberate judgment. Both groups took more time when both reports were AI-generated. Our findings highlight the role of clinical experience in AI acceptance and the need for design strategies that foster trust in clinical applications. The project page of the evaluation platform is available at: https://zachatalay89.github.io/Labsite.

The correlation of liquid biopsy genomic data to radiomics in colon, pancreatic, lung and prostatic cancer patients.

Italiano A, Gautier O, Dupont J, Assi T, Dawi L, Lawrance L, Bone A, Jardali G, Choucair A, Ammari S, Bayle A, Rouleau E, Cournede PH, Borget I, Besse B, Barlesi F, Massard C, Lassau N

PubMed | Jul 8 2025
With the advances in artificial intelligence (AI) and precision medicine, radiomics has emerged as a promising tool in the field of oncology. Radiogenomics integrates radiomics with genomic data, potentially offering a non-invasive method for identifying biomarkers relevant to cancer therapy. Liquid biopsy (LB) has further revolutionized cancer diagnostics by detecting circulating tumor DNA (ctDNA), enabling real-time molecular profiling. This study explores the integration of radiomics and LB to predict genomic alterations in solid tumors, including lung, colon, pancreatic, and prostate cancers. A retrospective study was conducted on 418 patients from the STING trial (NCT04932525), all of whom underwent both LB and CT imaging. Predictive models were developed using an XGBoost logistic classifier, with statistical analysis performed to compare tumor volumes, lesion counts, and affected organs across molecular subtypes. Performance was evaluated using area under the curve (AUC) values and cross-validation techniques. Radiomic models demonstrated moderate-to-good performance in predicting genomic alterations. KRAS mutations were best identified in pancreatic cancer (AUC=0.97), while moderate discrimination was noted in lung (AUC=0.66) and colon cancer (AUC=0.64). EGFR mutations in lung cancer were detected with an AUC of 0.74, while BRAF mutations showed good discriminatory ability in both lung (AUC=0.79) and colon cancer (AUC=0.76). In the radiomics predictive model, AR mutations in prostate cancer showed limited discrimination (AUC=0.63). This study highlights the feasibility of integrating radiomics and LB for non-invasive genomic profiling in solid tumors, demonstrating significant potential in patient stratification and personalized oncology care. While promising, further prospective validation is required to enhance the generalizability of these models.
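
A minimal sketch of the modeling setup described above is shown below: an XGBoost classifier predicting a genomic alteration from radiomic features, evaluated with cross-validated AUC. The feature matrix here is synthetic placeholder data, and the hyperparameters are illustrative assumptions rather than the study's actual configuration.

```python
# Sketch: XGBoost classifier for mutation prediction from radiomic features,
# evaluated with 5-fold cross-validated AUC. Data and settings are placeholders.
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
X = rng.normal(size=(418, 120))   # synthetic radiomic features (shape/texture stand-ins)
y = rng.integers(0, 2, size=418)  # mutation present (1) / absent (0) from liquid biopsy

clf = XGBClassifier(
    n_estimators=300,
    max_depth=3,
    learning_rate=0.05,
    eval_metric="logloss",
)
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"cross-validated AUC: {auc.mean():.2f} +/- {auc.std():.2f}")
```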
