
Emerging Artificial Intelligence Innovations in Rheumatoid Arthritis and Challenges to Clinical Adoption.

Gilvaz VJ, Sudheer A, Reginato AM

PubMed paper · Jun 28, 2025
This review was written to inform practicing clinical rheumatologists about recent advances in artificial intelligence (AI)-based research in rheumatoid arthritis (RA), using accessible and practical language. We highlight developments from 2023 to early 2025 across diagnostic imaging, treatment prediction, drug discovery, and patient-facing tools. Given the increasing clinical interest in AI and its potential to augment care delivery, this article aims to bridge the gap between technical innovation and real-world rheumatology practice. Several AI models have demonstrated high accuracy in early RA detection using imaging modalities such as thermal imaging and nuclear scans. Predictive models for treatment response have leveraged routinely collected electronic health record (EHR) data, moving closer to practical application in clinical workflows. Patient-facing tools like mobile symptom checkers and large language models (LLMs) such as ChatGPT show promise in enhancing education and engagement, although accuracy and safety remain variable. AI has also shown utility in identifying novel biomarkers and accelerating drug discovery. Despite these advances, as of early 2025, no AI-based tools have received FDA approval for use in rheumatology, in contrast to other specialties. Artificial intelligence holds tremendous promise to enhance clinical care in RA, from early diagnosis to personalized therapy. However, clinical adoption remains limited due to regulatory, technical, and implementation challenges. A streamlined regulatory framework and closer collaboration between clinicians, researchers, and industry partners are urgently needed. With thoughtful integration, AI can serve as a valuable adjunct in addressing clinical complexity and workforce shortages in rheumatology.

Automated Evaluation of Female Pelvic Organ Descent on Transperineal Ultrasound: Model Development and Validation.

Wu S, Wu J, Xu Y, Tan J, Wang R, Zhang X

PubMed paper · Jun 28, 2025
Transperineal ultrasound (TPUS) is a widely used tool for evaluating female pelvic organ prolapse (POP), but its accurate interpretation relies on experience, causing diagnostic variability. This study aims to develop and validate a multi-task deep learning model to automate POP assessment using TPUS images. TPUS images from 1340 female patients (January-June 2023) were evaluated by two experienced physicians. The presence and severity of cystocele, uterine prolapse, rectocele, and excessive mobility of the perineal body (EMoPB) were documented. After preprocessing, 1072 images were used for training and 268 for validation. The model used ResNet34 as the feature extractor and four parallel fully connected layers to predict the conditions. Model performance was assessed using confusion matrices and the area under the curve (AUC). Gradient-weighted class activation mapping (Grad-CAM) visualized the model's focus areas. The model demonstrated strong diagnostic performance, with accuracies and AUC values as follows: cystocele, 0.869 (95% CI, 0.824-0.905) and 0.947 (95% CI, 0.930-0.962); uterine prolapse, 0.799 (95% CI, 0.746-0.842) and 0.931 (95% CI, 0.911-0.948); rectocele, 0.978 (95% CI, 0.952-0.990) and 0.892 (95% CI, 0.849-0.927); and EMoPB, 0.869 (95% CI, 0.824-0.905) and 0.942 (95% CI, 0.907-0.967). Grad-CAM heatmaps revealed that the model's focus areas were consistent with those observed by human experts. This study presents a multi-task deep learning model for automated POP assessment using TPUS images, showing promising efficacy and potential to benefit a broader population of women.
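The architecture described above (a shared ResNet34 feature extractor feeding four parallel fully connected heads, one per condition) can be sketched as follows. This is a minimal illustration under assumptions, not the authors' code: the per-head class count and the input size are placeholders.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet34

class MultiTaskTPUSModel(nn.Module):
    """Sketch: shared ResNet34 backbone with four parallel heads, one per
    condition (cystocele, uterine prolapse, rectocele, EMoPB).
    The per-head class count (e.g., absent/mild/severe) is an assumption."""
    def __init__(self, n_classes_per_task=3, n_tasks=4):
        super().__init__()
        backbone = resnet34(weights=None)
        self.features = nn.Sequential(*list(backbone.children())[:-1])  # drop the final FC layer
        feat_dim = backbone.fc.in_features  # 512 for ResNet34
        self.heads = nn.ModuleList(
            [nn.Linear(feat_dim, n_classes_per_task) for _ in range(n_tasks)]
        )

    def forward(self, x):
        f = self.features(x).flatten(1)            # (B, 512) shared features
        return [head(f) for head in self.heads]    # one logit tensor per condition

model = MultiTaskTPUSModel()
logits = model(torch.randn(2, 3, 224, 224))
print([tuple(l.shape) for l in logits])
```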

FSDA-DG: Improving cross-domain generalizability of medical image segmentation with few source domain annotations.

Ye Z, Wang K, Lv W, Feng Q, Lu L

PubMed paper · Jun 27, 2025
Deep learning-based medical image segmentation faces significant challenges arising from limited labeled data and domain shifts. While prior approaches have primarily addressed these issues independently, their simultaneous occurrence is common in medical imaging. A method that generalizes to unseen domains using only minimal annotations offers significant practical value due to reduced data annotation and development costs. In pursuit of this goal, we propose FSDA-DG, a novel solution to improve the cross-domain generalizability of medical image segmentation with few single-source domain annotations. Specifically, our approach introduces semantics-guided semi-supervised data augmentation. This method divides images into broad global regions and semantics-guided local regions, and applies distinct augmentation strategies to enrich the data distribution. Within this framework, both labeled and unlabeled data are transformed into extensive domain knowledge while preserving domain-invariant semantic information. Additionally, FSDA-DG employs a multi-decoder U-Net semi-supervised learning (SSL) pipeline to improve domain-invariant representation learning through a consistency assumption across multiple perturbations. By integrating data-level and model-level designs, FSDA-DG achieves superior performance compared to state-of-the-art methods in two challenging single domain generalization (SDG) tasks with limited annotations. The code is publicly available at https://github.com/yezanting/FSDA-DG.
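As a rough illustration of the multi-decoder consistency idea referenced above (not FSDA-DG itself; the toy encoder, the dropout perturbation, and the mean-prediction target are assumptions), a semi-supervised consistency term across several decoders can look like this:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMultiDecoderNet(nn.Module):
    """Toy stand-in for a multi-decoder segmentation network:
    one shared encoder, several lightweight decoders."""
    def __init__(self, n_decoders=3, n_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
        self.decoders = nn.ModuleList(
            [nn.Conv2d(16, n_classes, 3, padding=1) for _ in range(n_decoders)]
        )

    def forward(self, x):
        feat = self.encoder(x)
        # each decoder sees a differently perturbed feature map (dropout as a simple perturbation)
        return [dec(F.dropout(feat, p=0.2, training=True)) for dec in self.decoders]

def consistency_loss(outputs):
    """Penalize disagreement between decoder predictions on unlabeled images."""
    probs = [o.softmax(dim=1) for o in outputs]
    consensus = torch.stack(probs).mean(dim=0).detach()   # shared pseudo-target
    return sum(F.mse_loss(p, consensus) for p in probs) / len(probs)

net = TinyMultiDecoderNet()
unlabeled = torch.randn(4, 1, 64, 64)
print(consistency_loss(net(unlabeled)).item())
```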

Causality-Adjusted Data Augmentation for Domain Continual Medical Image Segmentation.

Zhu Z, Dong Q, Luo G, Wang W, Dong S, Wang K, Tian Y, Wang G, Li S

PubMed paper · Jun 27, 2025
In domain continual medical image segmentation, distillation-based methods mitigate catastrophic forgetting by continuously reviewing old knowledge. However, these approaches often exhibit biases towards both new and old knowledge simultaneously due to confounding factors, which can undermine segmentation performance. To address these biases, we propose the Causality-Adjusted Data Augmentation (CauAug) framework, introducing a novel causal intervention strategy called the Texture-Domain Adjustment Hybrid-Scheme (TDAHS) alongside two causality-targeted data augmentation approaches: the Cross Kernel Network (CKNet) and the Fourier Transformer Generator (FTGen). (1) TDAHS establishes a domain-continual causal model that accounts for two types of knowledge biases by identifying irrelevant local textures (L) and domain-specific features (D) as confounders. It introduces a hybrid causal intervention that combines traditional confounder elimination with a proposed replacement approach to better adapt to domain shifts, thereby promoting causal segmentation. (2) CKNet eliminates confounder L to reduce biases in new knowledge absorption. It decreases reliance on local textures in input images, forcing the model to focus on relevant anatomical structures and thus improving generalization. (3) FTGen causally intervenes on confounder D by selectively replacing it to alleviate biases that impact old knowledge retention. It restores domain-specific features in images, aiding in the comprehensive distillation of old knowledge. Our experiments show that CauAug significantly mitigates catastrophic forgetting and surpasses existing methods in various medical image segmentation tasks. The implementation code is publicly available at: https://github.com/PerceptionComputingLab/CauAug_DCMIS.
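FTGen is described as replacing or restoring domain-specific image features via a Fourier transform. One common way to perform that kind of replacement (a generic low-frequency amplitude swap, not necessarily the authors' exact formulation) is sketched below:

```python
import numpy as np

def swap_low_freq_amplitude(src, ref, beta=0.1):
    """Replace the low-frequency amplitude spectrum of `src` with that of `ref`
    while keeping src's phase. A generic Fourier-domain feature swap used only
    to illustrate the idea of replacing domain-specific components."""
    fft_src = np.fft.fftshift(np.fft.fft2(src))
    fft_ref = np.fft.fftshift(np.fft.fft2(ref))
    amp_src, phase_src = np.abs(fft_src), np.angle(fft_src)
    amp_ref = np.abs(fft_ref)

    h, w = src.shape
    ch, cw = h // 2, w // 2
    bh, bw = max(1, int(h * beta / 2)), max(1, int(w * beta / 2))  # low-frequency block size
    amp_src[ch - bh:ch + bh, cw - bw:cw + bw] = amp_ref[ch - bh:ch + bh, cw - bw:cw + bw]

    mixed = amp_src * np.exp(1j * phase_src)
    return np.real(np.fft.ifft2(np.fft.ifftshift(mixed)))

out = swap_low_freq_amplitude(np.random.rand(128, 128), np.random.rand(128, 128))
print(out.shape)
```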

Noise-Inspired Diffusion Model for Generalizable Low-Dose CT Reconstruction

Qi Gao, Zhihao Chen, Dong Zeng, Junping Zhang, Jianhua Ma, Hongming Shan

arXiv preprint · Jun 27, 2025
The generalization of deep learning-based low-dose computed tomography (CT) reconstruction models to doses unseen in the training data is important and remains challenging. Previous efforts rely heavily on paired data to improve generalization performance and robustness, collecting either diverse CT data for re-training or a small amount of test data for fine-tuning. Recently, diffusion models have shown promising and generalizable performance in low-dose CT (LDCT) reconstruction; however, they may produce unrealistic structures because the CT image noise deviates from a Gaussian distribution and because the guidance of noisy LDCT images provides imprecise prior information. In this paper, we propose a noise-inspired diffusion model for generalizable LDCT reconstruction, termed NEED, which tailors diffusion models to the noise characteristics of each domain. First, we propose a novel shifted Poisson diffusion model to denoise projection data, which aligns the diffusion process with the noise model in pre-log LDCT projections. Second, we devise a doubly guided diffusion model to refine reconstructed images, which leverages LDCT images and initial reconstructions to more accurately locate prior information and enhance reconstruction fidelity. By cascading these two diffusion models for dual-domain reconstruction, our NEED requires only normal-dose data for training and can be effectively extended to various unseen dose levels during testing via a time step matching strategy. Extensive qualitative, quantitative, and segmentation-based evaluations on two datasets demonstrate that our NEED consistently outperforms state-of-the-art methods in reconstruction and generalization performance. Source code is made available at https://github.com/qgao21/NEED.
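The shifted Poisson model mentioned above is a standard approximation for pre-log CT measurements corrupted by both quantum and electronic noise. A small simulation of that noise model (the incident photon count and electronic noise level are illustrative values, not the paper's settings) is shown below:

```python
import numpy as np

def simulate_prelog_projection(line_integrals, incident_photons=1e4, sigma_e=5.0, rng=None):
    """Simulate pre-log projection data under a shifted Poisson model:
    photon counts are Poisson around incident_photons * exp(-line_integral),
    with additive Gaussian electronic noise. Adding sigma_e**2 yields a variable
    whose mean approximately equals its variance (the 'shifted Poisson')."""
    rng = rng if rng is not None else np.random.default_rng()
    expected = incident_photons * np.exp(-line_integrals)   # mean photon counts
    counts = rng.poisson(expected).astype(float)            # quantum (Poisson) noise
    counts += rng.normal(0.0, sigma_e, counts.shape)        # electronic (Gaussian) noise
    return np.maximum(counts + sigma_e ** 2, 1e-6)          # shifted Poisson variable

proj = simulate_prelog_projection(np.random.rand(64, 64))
print(proj.mean(), proj.var())
```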

Towards Scalable and Robust White Matter Lesion Localization via Multimodal Deep Learning

Julia Machnio, Sebastian Nørgaard Llambias, Mads Nielsen, Mostafa Mehdipour Ghazi

arXiv preprint · Jun 27, 2025
White matter hyperintensities (WMH) are radiological markers of small vessel disease and neurodegeneration, whose accurate segmentation and spatial localization are crucial for diagnosis and monitoring. While multimodal MRI offers complementary contrasts for detecting and contextualizing WM lesions, existing approaches often lack flexibility in handling missing modalities and fail to integrate anatomical localization efficiently. We propose a deep learning framework for WM lesion segmentation and localization that operates directly in native space using single- and multi-modal MRI inputs. Our study evaluates four input configurations: FLAIR-only, T1-only, concatenated FLAIR and T1, and a modality-interchangeable setup. It further introduces a multi-task model for jointly predicting lesion and anatomical region masks to estimate region-wise lesion burden. Experiments conducted on the MICCAI WMH Segmentation Challenge dataset demonstrate that multimodal input significantly improves the segmentation performance, outperforming unimodal models. While the modality-interchangeable setting trades accuracy for robustness, it enables inference in cases with missing modalities. Joint lesion-region segmentation using multi-task learning was less effective than separate models, suggesting representational conflict between tasks. Our findings highlight the utility of multimodal fusion for accurate and robust WMH analysis, and the potential of joint modeling for integrated predictions.
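One simple way to realize a modality-interchangeable setup of the kind evaluated above (a sketch under assumptions, not the authors' implementation) is to train a single-channel network and randomly choose, for each example, which available modality fills that channel:

```python
import random
import torch

def sample_modality(volumes):
    """Pick one available modality at random for a modality-interchangeable model.
    `volumes` maps modality names (e.g., 'FLAIR', 'T1') to tensors; missing
    modalities are simply absent from the dict, so inference can still proceed."""
    name = random.choice(list(volumes.keys()))
    return name, volumes[name]

example = {"FLAIR": torch.randn(1, 64, 64, 64), "T1": torch.randn(1, 64, 64, 64)}
name, x = sample_modality(example)
print(f"this training step uses {name}, shape {tuple(x.shape)}")
```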

Reasoning in machine vision: learning to think fast and slow

Shaheer U. Saeed, Yipei Wang, Veeru Kasivisvanathan, Brian R. Davidson, Matthew J. Clarkson, Yipeng Hu, Daniel C. Alexander

arXiv preprint · Jun 27, 2025
Reasoning is a hallmark of human intelligence, enabling adaptive decision-making in complex and unfamiliar scenarios. In contrast, machine intelligence remains bound to training data, lacking the ability to dynamically refine solutions at inference time. While some recent advances have explored reasoning in machines, these efforts are largely limited to verbal domains such as mathematical problem-solving, where explicit rules govern step-by-step reasoning. Other critical real-world tasks - including visual perception, spatial reasoning, and radiological diagnosis - require non-verbal reasoning, which remains an open challenge. Here we present a novel learning paradigm that enables machine reasoning in vision by allowing performance improvement with increasing thinking time (inference-time compute), even under conditions where labelled data is very limited. Inspired by dual-process theories of human cognition in psychology, our approach integrates a fast-thinking System I module for familiar tasks, with a slow-thinking System II module that iteratively refines solutions using self-play reinforcement learning. This paradigm mimics human reasoning by proposing, competing over, and refining solutions in data-scarce scenarios. We demonstrate superior performance through extended thinking time, compared not only to large-scale supervised learning but also foundation models and even human experts, in real-world vision tasks. These tasks include computer-vision benchmarks and cancer localisation on medical images across five organs, showcasing transformative potential for non-verbal machine reasoning.
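At the level of control flow, the fast/slow split can be caricatured as a cheap System I guess followed by a System II loop that spends a configurable inference-time budget keeping only refinements that score better. The scorer, perturbation, and budget below are purely illustrative assumptions, not the authors' reinforcement learning procedure:

```python
import random

def fast_system(x):
    """System I: a single cheap forward pass (trivial placeholder here)."""
    return x

def slow_system(x, initial, score, budget=100):
    """System II: iteratively propose candidates and keep improvements, so a
    larger thinking-time budget can only improve (never worsen) the solution."""
    best, best_score = initial, score(x, initial)
    for _ in range(budget):
        candidate = best + random.gauss(0.0, 0.1)   # propose a small refinement
        s = score(x, candidate)
        if s > best_score:
            best, best_score = candidate, s
    return best

score = lambda x, y: -abs(y - 3.0)                  # toy objective: get close to 3
print(round(slow_system(1.0, fast_system(1.0), score, budget=500), 3))
```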

Cardiovascular disease classification using radiomics and geometric features from cardiac CT

Ajay Mittal, Raghav Mehta, Omar Todd, Philipp Seeböck, Georg Langs, Ben Glocker

arXiv preprint · Jun 27, 2025
Automatic detection and classification of cardiovascular disease (CVD) from computed tomography (CT) images play an important part in facilitating better-informed clinical decisions. However, most recent deep learning-based methods either work directly on raw CT data or use it paired with anatomical cardiac structure segmentation by training an end-to-end classifier. As such, these approaches become much more difficult to interpret from a clinical perspective. To address this challenge, in this work, we break down the CVD classification pipeline into three components: (i) image segmentation, (ii) image registration, and (iii) downstream CVD classification. Specifically, we utilize the Atlas-ISTN framework and recent segmentation foundation models to generate anatomical structure segmentations and a normative healthy atlas. These are further utilized to extract clinically interpretable radiomic features as well as deformation-field-based geometric features (through atlas registration) for CVD classification. Our experiments on the publicly available ASOCA dataset show that utilizing these features leads to better CVD classification accuracy (87.50%) compared with a classification model trained directly on raw CT images (67.50%). Our code is publicly available: https://github.com/biomedia-mira/grc-net
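The downstream classification stage described above (clinically interpretable radiomic features plus deformation-field geometric features feeding a simple classifier) could look roughly like the following; the feature dimensions, synthetic data, and choice of logistic regression are assumptions for illustration only:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_patients = 40
radiomic = rng.normal(size=(n_patients, 20))    # e.g., shape/intensity/texture features per structure
geometric = rng.normal(size=(n_patients, 10))   # e.g., deformation-field summaries from atlas registration
X = np.hstack([radiomic, geometric])            # combined interpretable feature vector
y = rng.integers(0, 2, size=n_patients)         # CVD vs. no CVD (synthetic labels)

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean().round(3))
```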

AI Model Passport: Data and System Traceability Framework for Transparent AI in Health

Varvara Kalokyri, Nikolaos S. Tachos, Charalampos N. Kalantzopoulos, Stelios Sfakianakis, Haridimos Kondylakis, Dimitrios I. Zaridis, Sara Colantonio, Daniele Regge, Nikolaos Papanikolaou, The ProCAncer-I consortium, Konstantinos Marias, Dimitrios I. Fotiadis, Manolis Tsiknakis

arXiv preprint · Jun 27, 2025
The increasing integration of Artificial Intelligence (AI) into health and biomedical systems necessitates robust frameworks for transparency, accountability, and ethical compliance. Existing frameworks often rely on human-readable, manual documentation, which limits scalability, comparability, and machine interpretability across projects and platforms. They also fail to provide a unique, verifiable identity for AI models to ensure their provenance and authenticity across systems and use cases, limiting reproducibility and stakeholder trust. This paper introduces the concept of the AI Model Passport, a structured and standardized documentation framework that acts as a digital identity and verification tool for AI models. It captures essential metadata to uniquely identify, verify, trace, and monitor AI models across their lifecycle, from data acquisition and preprocessing to model design, development, and deployment. In addition, an implementation of this framework is presented through AIPassport, an MLOps tool developed within the ProCAncer-I EU project for medical imaging applications. AIPassport automates metadata collection, ensures proper versioning, decouples results from source scripts, and integrates with various development environments. Its effectiveness is showcased through a lesion segmentation use case using data from the ProCAncer-I dataset, illustrating how the AI Model Passport enhances transparency, reproducibility, and regulatory readiness while reducing manual effort. This approach aims to set a new standard for fostering trust and accountability in AI-driven healthcare solutions, aspiring to serve as the basis for developing transparent and regulation-compliant AI systems across domains.
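In essence, a model passport is structured, machine-readable metadata tied to a verifiable model identifier. A minimal sketch of such a record (the field names are assumptions, not the AIPassport schema) might look like this:

```python
import hashlib
import json

def build_model_passport(weights_path, metadata):
    """Attach a content hash of the model weights to lifecycle metadata so the
    record can act as a verifiable identity for the model. Field names are
    illustrative, not the ProCAncer-I AIPassport schema."""
    with open(weights_path, "rb") as f:
        fingerprint = hashlib.sha256(f.read()).hexdigest()
    passport = {
        "model_fingerprint_sha256": fingerprint,
        "data_acquisition": metadata.get("data_acquisition"),
        "preprocessing": metadata.get("preprocessing"),
        "architecture": metadata.get("architecture"),
        "training_run": metadata.get("training_run"),
        "deployment": metadata.get("deployment"),
    }
    return json.dumps(passport, indent=2)

# usage sketch (path and values are placeholders):
# print(build_model_passport("model.pt", {"architecture": "nnU-Net", "training_run": {"epochs": 100}}))
```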

Prospective quality control in chest radiography based on the reconstructed 3D human body.

Tan Y, Ye Z, Ye J, Hou Y, Li S, Liang Z, Li H, Tang J, Xia C, Li Z

PubMed paper · Jun 27, 2025
Chest radiography requires effective quality control (QC) to reduce high retake rates. However, existing QC measures are all retrospective and implemented after exposure, often necessitating retakes when image quality fails to meet standards and thereby increasing radiation exposure to patients. To address this issue, we proposed a 3D human body (3D-HB) reconstruction algorithm to realize prospective QC. Our objective was to investigate the feasibility of using the reconstructed 3D-HB for prospective QC in chest radiography and evaluate its impact on retake rates.

Approach: This prospective study included patients indicated for posteroanterior (PA) and lateral (LA) chest radiography in May 2024. A 3D-HB reconstruction algorithm integrating the SMPL-X model and the HybrIK-X algorithm was proposed to convert patients' 2D images into 3D-HBs. QC metrics regarding patient positioning and collimation were assessed using chest radiographs (reference standard) and 3D-HBs, with results compared using ICCs, linear regression, and receiver operating characteristic curves. For retake rate evaluation, a real-time 3D-HB visualization interface was developed and chest radiography was conducted in two four-week phases: the first without prospective QC and the second with prospective QC. Retake rates between the two phases were compared using chi-square tests.

Main results: 324 participants were included (mean age, 42 ± 19 years [SD]; 145 men; 324 PA and 294 LA examinations). The ICCs for the clavicle and midaxillary line angles were 0.80 and 0.78, respectively. Linear regression showed a good relationship for clavicle angles (R² = 0.655) and midaxillary line angles (R² = 0.616). In PA chest radiography, the AUCs of 3D-HBs were 0.89, 0.87, 0.91, and 0.92 for assessing scapula rotation, lateral tilt, centered positioning, and central X-ray alignment, respectively, with 97% accuracy in collimation assessment. In LA chest radiography, the AUCs of 3D-HBs were 0.87, 0.84, 0.87, and 0.88 for assessing arms raised, chest rotation, centered positioning, and central X-ray alignment, respectively, with 94% accuracy in collimation assessment. In the retake rate evaluation, 3995 PA and 3295 LA chest radiographs were recorded. The implementation of prospective QC based on the 3D-HB reduced retake rates from 8.6% to 3.5% (PA) and from 19.6% to 4.9% (LA) (p < .001).

Significance: The reconstructed 3D-HB is a feasible tool for prospective QC in chest radiography, providing real-time feedback on patient positioning and collimation before exposure. Prospective QC based on the reconstructed 3D-HB has the potential to reshape the future of radiography QC by significantly reducing retake rates and improving clinical standardization.
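The retake-rate comparison reported above is a standard chi-square test on a 2x2 contingency table. A sketch of that computation with illustrative counts (the per-phase denominators are assumed, since the abstract reports totals and percentages only) is below:

```python
from scipy.stats import chi2_contingency

# Hypothetical per-phase PA counts [retakes, non-retakes], chosen only so the
# rates match the reported 8.6% vs. 3.5%; the true denominators are not given.
phase_without_qc = [172, 1828]   # 8.6% of an assumed 2000 exposures
phase_with_qc = [70, 1930]       # 3.5% of an assumed 2000 exposures

chi2, p, dof, expected = chi2_contingency([phase_without_qc, phase_with_qc])
print(f"chi2 = {chi2:.1f}, p = {p:.2e}")
```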