
SP-Mamba: Spatial-Perception State Space Model for Unsupervised Medical Anomaly Detection

Rui Pan, Ruiying Lu

arXiv preprint · Jul 25 2025
Radiography imaging protocols target specific anatomical regions, resulting in highly consistent images with recurrent structural patterns across patients. Recent advances in medical anomaly detection have demonstrated the effectiveness of CNN- and transformer-based approaches. However, CNNs exhibit limitations in capturing long-range dependencies, while transformers suffer from quadratic computational complexity. In contrast, Mamba-based models, leveraging superior long-range modeling, structural feature extraction, and linear computational efficiency, have emerged as a promising alternative. To capitalize on the inherent structural regularity of medical images, this study introduces SP-Mamba, a spatial-perception Mamba framework for unsupervised medical anomaly detection. Window-sliding prototype learning and a Circular-Hilbert scanning-based Mamba are introduced to better exploit consistent anatomical patterns and leverage spatial information for medical anomaly detection. Furthermore, we exploit the concentration and contrast characteristics of anomaly maps to improve anomaly detection. Extensive experiments on three diverse medical anomaly detection benchmarks confirm the proposed method's state-of-the-art performance, validating its efficacy and robustness. The code is available at https://github.com/Ray-RuiPan/SP-Mamba.
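The paper's implementation is in the linked repository; as a rough, hypothetical sketch of how window-sliding prototype matching can yield an anomaly map (the function names and the cosine-distance choice are our assumptions, not SP-Mamba's actual design):

```python
import torch
import torch.nn.functional as F

def windowed_prototypes(ref_feats: torch.Tensor, win: int = 4) -> torch.Tensor:
    """Average features of normal reference images over the batch, then pool
    within win x win spatial windows to get one prototype per window.
    ref_feats: (N, C, H, W) features from normal training images."""
    mean_feat = ref_feats.mean(dim=0, keepdim=True)              # (1, C, H, W)
    return F.avg_pool2d(mean_feat, kernel_size=win, stride=win)  # (1, C, H/win, W/win)

def anomaly_map(test_feat: torch.Tensor, protos: torch.Tensor, win: int = 4) -> torch.Tensor:
    """Cosine distance between each test-feature window and its prototype,
    upsampled back to feature resolution; deviations from the recurrent
    anatomy score high."""
    pooled = F.avg_pool2d(test_feat, kernel_size=win, stride=win)
    dist = (1.0 - F.cosine_similarity(pooled, protos, dim=1)).unsqueeze(1)
    return F.interpolate(dist, scale_factor=win, mode="bilinear", align_corners=False)

ref = torch.randn(16, 64, 32, 32)   # stand-in backbone features of normal images
test = torch.randn(2, 64, 32, 32)
print(anomaly_map(test, windowed_prototypes(ref)).shape)  # torch.Size([2, 1, 32, 32])
```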

PerioDet: Large-Scale Panoramic Radiograph Benchmark for Clinical-Oriented Apical Periodontitis Detection

Xiaocheng Fang, Jieyi Cai, Huanyu Liu, Chengju Zhou, Minhua Lu, Bingzhi Chen

arXiv preprint · Jul 25 2025
Apical periodontitis is a prevalent oral pathology that presents significant public health challenges. Despite advances in automated diagnostic systems across various medical fields, the development of Computer-Aided Diagnosis (CAD) applications for apical periodontitis is still constrained by the lack of a large-scale, high-quality annotated dataset. To address this issue, we release a large-scale panoramic radiograph benchmark called "PerioXrays", comprising 3,673 images and 5,662 meticulously annotated instances of apical periodontitis. To the best of our knowledge, this is the first benchmark dataset for automated apical periodontitis diagnosis. This paper further proposes a clinical-oriented apical periodontitis detection (PerioDet) paradigm, which jointly incorporates Background-Denoising Attention (BDA) and IoU-Dynamic Calibration (IDC) mechanisms to address the challenges posed by background noise and small targets in automated detection. Extensive experiments on the PerioXrays dataset demonstrate the superiority of PerioDet in advancing automated apical periodontitis detection. Additionally, a well-designed human-computer collaborative experiment underscores the clinical applicability of our method as an auxiliary diagnostic tool for professional dentists.
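The abstract does not spell out the IDC mechanism; one plausible reading, sketched below with hypothetical constants, is to relax the positive-matching IoU threshold for small ground-truth boxes, since a few pixels of offset disproportionately lowers a small object's IoU:

```python
import numpy as np

def iou(box_a: np.ndarray, box_b: np.ndarray) -> float:
    """IoU of two boxes in (x1, y1, x2, y2) format."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def dynamic_iou_threshold(gt_box: np.ndarray, base_thr: float = 0.5,
                          small_area: float = 32 * 32, floor: float = 0.3) -> float:
    """Scale the matching threshold with ground-truth box area, bottoming
    out at `floor` for very small lesions (all constants are guesses)."""
    area = (gt_box[2] - gt_box[0]) * (gt_box[3] - gt_box[1])
    scale = min(1.0, area / small_area)
    return floor + (base_thr - floor) * scale

gt = np.array([10, 10, 30, 30])    # a 20x20 lesion box
print(dynamic_iou_threshold(gt))   # relaxed below the 0.5 base threshold
```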

Elucidating the Design Space of Arbitrary-Noise-Based Diffusion Models

Xingyu Qiu, Mengying Yang, Xinghua Ma, Dong Liang, Yuzhen Li, Fanding Li, Gongning Luo, Wei Wang, Kuanquan Wang, Shuo Li

arXiv preprint · Jul 24 2025
EDM elucidates the unified design space of diffusion models, yet its noise pattern, restricted to pure Gaussian noise, limits advances in image restoration. Our study indicates that forcibly injecting Gaussian noise corrupts the degraded images, overextends the image transformation distance, and increases restoration complexity. To address this problem, our proposed EDA Elucidates the Design space of Arbitrary-noise-based diffusion models. Theoretically, EDA expands the freedom of the noise pattern while preserving the original module flexibility of EDM, with rigorous proof that increased noise complexity incurs no additional computational overhead during restoration. EDA is validated on three typical tasks: MRI bias field correction (global smooth noise), CT metal artifact reduction (global sharp noise), and natural image shadow removal (local boundary-aware noise). With only 5 sampling steps, EDA outperforms most task-specific methods and achieves state-of-the-art performance in bias field correction and shadow removal.
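As a toy illustration of the general idea (not EDA's actual parameterization): the forward process can interpolate toward an arbitrary degradation endpoint, such as a bias-field-corrupted image, rather than injecting Gaussian noise.

```python
import torch

def forward_corrupt(x0: torch.Tensor, endpoint: torch.Tensor, t: float) -> torch.Tensor:
    """Toy forward process: interpolate from the clean image x0 toward an
    arbitrary degradation endpoint instead of injecting Gaussian noise.
    t in [0, 1] plays the role of the diffusion time."""
    return (1.0 - t) * x0 + t * endpoint

# a smooth multiplicative bias field as the endpoint, standing in for the
# "global smooth noise" of MRI bias-field correction
x0 = torch.rand(1, 1, 64, 64)  # clean image
yy, xx = torch.meshgrid(torch.linspace(-1, 1, 64), torch.linspace(-1, 1, 64), indexing="ij")
degraded = x0 * (1.0 + 0.4 * torch.exp(-(xx**2 + yy**2)))  # bias-field-corrupted image
x_mid = forward_corrupt(x0, degraded, t=0.5)  # halfway along the trajectory
```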

TextSAM-EUS: Text Prompt Learning for SAM to Accurately Segment Pancreatic Tumor in Endoscopic Ultrasound

Pascal Spiegler, Taha Koleilat, Arash Harirpoush, Corey S. Miller, Hassan Rivaz, Marta Kersten-Oertel, Yiming Xiao

arXiv preprint · Jul 24 2025
Pancreatic cancer carries a poor prognosis and relies on endoscopic ultrasound (EUS) for targeted biopsy and radiotherapy. However, the speckle noise, low contrast, and unintuitive appearance of EUS make segmentation of pancreatic tumors with fully supervised deep learning (DL) models both error-prone and dependent on large, expert-curated annotation datasets. To address these challenges, we present TextSAM-EUS, a novel, lightweight, text-driven adaptation of the Segment Anything Model (SAM) that requires no manual geometric prompts at inference. Our approach leverages text prompt learning (context optimization) through the BiomedCLIP text encoder in conjunction with a LoRA-based adaptation of SAM's architecture to enable automatic pancreatic tumor segmentation in EUS, tuning only 0.86% of the total parameters. On the public Endoscopic Ultrasound Database of the Pancreas, TextSAM-EUS with automatic prompts attains 82.69% Dice and 85.28% normalized surface distance (NSD), and with manual geometric prompts reaches 83.10% Dice and 85.70% NSD, outperforming both existing state-of-the-art (SOTA) supervised DL models and foundation models (e.g., SAM and its variants). As the first attempt to incorporate prompt learning in SAM-based medical image segmentation, TextSAM-EUS offers a practical option for efficient and robust automatic EUS segmentation.
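The LoRA recipe behind the 0.86% figure is standard; below is a minimal sketch of a low-rank adapter around a frozen linear layer (generic LoRA, not TextSAM-EUS's exact placement or rank choices):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wrap a frozen linear layer with a trainable low-rank update:
    y = W x + (alpha / r) * B A x. Only A and B are trained, which is how
    SAM-style adapters keep the tuned-parameter fraction tiny."""
    def __init__(self, base: nn.Linear, r: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                                 # freeze pretrained weights
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))    # zero init: no-op at start
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.A.t() @ self.B.t())

layer = LoRALinear(nn.Linear(256, 256))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable fraction: {trainable / total:.2%}")  # a few percent at rank 4
```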

Artificial intelligence for multi-time-point arterial phase contrast-enhanced MRI profiling to predict prognosis after transarterial chemoembolization in hepatocellular carcinoma.

Yao L, Adwan H, Bernatz S, Li H, Vogl TJ

PubMed paper · Jul 24 2025
Contrast-enhanced magnetic resonance imaging (CE-MRI) monitoring across multiple time points is critical for optimizing hepatocellular carcinoma (HCC) prognosis during transarterial chemoembolization (TACE) treatment. The aim of this retrospective study is to develop and validate an artificial intelligence (AI)-powered model utilizing multi-time-point arterial phase CE-MRI data for HCC prognosis stratification in TACE patients. A total of 543 individual arterial phase CE-MRI scans from 181 HCC patients were retrospectively collected in this study. All patients underwent TACE and longitudinal arterial phase CE-MRI assessments at three time points: prior to treatment, and following the first and second TACE sessions. Among them, 110 patients received TACE monotherapy, while the remaining 71 patients underwent TACE in combination with microwave ablation (MWA). All images were subjected to standardized preprocessing procedures. We developed an end-to-end deep learning model, ProgSwin-UNETR, based on the Swin Transformer architecture, to perform four-class prognosis stratification directly from input imaging data. The model was trained using multi-time-point arterial phase CE-MRI data and evaluated via fourfold cross-validation. Classification performance was assessed using the area under the receiver operating characteristic curve (AUC). For comparative analysis, we benchmarked performance against traditional radiomics-based classifiers and the mRECIST criteria. Prognostic utility was further assessed using Kaplan-Meier (KM) survival curves. Additionally, multivariate Cox proportional hazards regression was performed as a post hoc analysis to evaluate the independent and complementary prognostic value of the model outputs and clinical variables. Grad-CAM++ was applied to visualize the imaging regions contributing most to model prediction. The ProgSwin-UNETR model achieved an accuracy of 0.86 and an AUC of 0.92 (95% CI: 0.90-0.95) for the four-class prognosis stratification task, outperforming radiomic models across all risk groups. Furthermore, KM survival analyses were performed using three approaches (the AI model, radiomics-based classifiers, and the mRECIST criteria) to stratify patients by risk. Of the three approaches, only the AI-based ProgSwin-UNETR model achieved statistically significant risk stratification across the entire cohort and in both TACE-alone and TACE + MWA subgroups (p < 0.005). In contrast, the mRECIST and radiomics models did not yield significant survival differences across subgroups (p > 0.05). Multivariate Cox regression analysis further demonstrated that the model was a robust independent prognostic factor (p = 0.01), effectively stratifying patients into four distinct risk groups (Class 0 to Class 3) with Log(HR) values of 0.97, 0.51, -0.53, and -0.92, respectively. Additionally, Grad-CAM++ visualizations highlighted critical regional features contributing to prognosis prediction, providing interpretability of the model. ProgSwin-UNETR accurately predicts the risk group of HCC patients undergoing TACE therapy and can be further applied for personalized prediction.
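The KM and Cox analyses described here follow a standard pattern; a minimal sketch with the lifelines library on synthetic data (durations, events, and class assignments are invented for illustration):

```python
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import multivariate_logrank_test

# toy survival table: one row per patient, risk class taken from the model output
df = pd.DataFrame({
    "months":     [6, 10, 14, 20, 22, 28, 30, 36],
    "event":      [1, 0, 1, 0, 0, 1, 0, 0],   # 1 = outcome event observed
    "risk_class": [0, 0, 1, 1, 2, 2, 3, 3],   # Class 0..3 from the classifier
})

# log-rank test across the four risk strata, as in the KM analysis
res = multivariate_logrank_test(df["months"], df["risk_class"], df["event"])
print(res.p_value)

# Cox proportional hazards regression to test the stratification as an
# independent prognostic factor (clinical covariates would be added here)
cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="event")
cph.print_summary()
```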

Enhanced HER-2 prediction in breast cancer through synergistic integration of deep learning, ultrasound radiomics, and clinical data.

Hu M, Zhang L, Wang X, Xiao X

PubMed paper · Jul 24 2025
This study integrates ultrasound Radiomics with clinical data to enhance the diagnostic accuracy of HER-2 expression status in breast cancer, aiming to provide more reliable treatment strategies for this aggressive disease. We included ultrasound images and clinicopathologic data from 210 female breast cancer patients, employing a Generative Adversarial Network (GAN) to enhance image clarity and segment the region of interest (ROI) for Radiomics feature extraction. Features were optimized through Z-score normalization and various statistical methods. We constructed and compared multiple machine learning models, including Linear Regression, Random Forest, and XGBoost, alongside deep learning models such as CNNs (ResNet101, VGG19) and Transformer-based models. The Grad-CAM technique was used to visualize the decision-making process of the deep learning models. The Deep Learning Radiomics (DLR) model integrated Radiomics features with deep learning features, and a combined model further integrated clinical features to predict HER-2 status. The LightGBM and ResNet101 models showed high performance, but the combined model achieved the highest AUC values in both training and testing, demonstrating the effectiveness of integrating diverse data sources. The study successfully demonstrates that the fusion of deep learning with Radiomics analysis significantly improves the prediction accuracy of HER-2 status, offering a new strategy for personalized breast cancer treatment and prognostic assessments.
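As a minimal sketch of the fusion step on synthetic stand-in features (the study's actual pipeline uses GAN-enhanced ROIs and compares LightGBM, XGBoost, and deep models; here plain logistic regression illustrates Z-score normalization plus concatenation):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 210  # cohort size from the abstract; features below are synthetic
radiomics = rng.normal(size=(n, 50))   # stand-ins for extracted radiomics features
deep = rng.normal(size=(n, 128))       # stand-ins for CNN embedding features
clinical = rng.normal(size=(n, 5))     # stand-ins for clinical variables
y = rng.integers(0, 2, size=n)         # HER-2 status label

# concatenate the three sources, Z-score each feature, fit a linear classifier
X = np.hstack([radiomics, deep, clinical])
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
print(cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean())
```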

Artificial intelligence in radiology: 173 commercially available products and their scientific evidence.

Antonissen N, Tryfonos O, Houben IB, Jacobs C, de Rooij M, van Leeuwen KG

PubMed paper · Jul 24 2025
To assess changes in peer-reviewed evidence on commercially available radiological artificial intelligence (AI) products from 2020 to 2023, as a follow-up to a 2020 review of 100 products. A literature review was conducted, covering January 2015 to March 2023, focusing on CE-certified radiological AI products listed on www.healthairegister.com. Papers were categorised using the hierarchical model of efficacy: technical/diagnostic accuracy (levels 1-2), clinical decision-making and patient outcomes (levels 3-5), or socio-economic impact (level 6). Study features such as design, vendor independence, and multicentre/multinational data usage were also examined. By 2023, 173 CE-certified AI products from 90 vendors were identified, compared to 100 products in 2020. Products with peer-reviewed evidence increased from 36% to 66%, supported by 639 papers (up from 237). Diagnostic accuracy studies (level 2) remained predominant, though their share decreased from 65% to 57%. Studies addressing higher-efficacy levels (3-6) remained roughly constant (22% in 2020 vs. 24% in 2023), with the number of products supported by such evidence increasing from 18% to 31%. Multicentre studies rose from 30% to 41% (p < 0.01). However, vendor-independent studies decreased (49% to 45%), as did multinational studies (15% to 11%) and prospective designs (19% to 16%), all with p > 0.05. The increase in peer-reviewed evidence and higher levels of evidence per product indicate maturation in the radiological AI market. However, the continued focus on lower-efficacy studies and reductions in vendor independence, multinational data, and prospective designs highlight persistent challenges in establishing unbiased, real-world evidence. Question: Evaluating advancements in peer-reviewed evidence for CE-certified radiological AI products is crucial to understand their clinical adoption and impact. Findings: CE-certified AI products with peer-reviewed evidence increased from 36% in 2020 to 66% in 2023, but the proportion of higher-level evidence papers (~24%) remained unchanged. Clinical relevance: The study highlights increased validation of radiological AI products but underscores a continued lack of evidence on their clinical and socio-economic impact, which may limit these tools' safe and effective implementation into clinical workflows.

Disease probability-enhanced follow-up chest X-ray radiology report summary generation.

Wang Z, Deng Q, So TY, Chiu WH, Lee K, Hui ES

PubMed paper · Jul 24 2025
A chest X-ray radiology report describes abnormal findings not only from the X-ray obtained at a given examination, but also findings on disease progression or changes in device placement relative to the X-ray from the previous examination. The majority of efforts on automatic radiology report generation pertain to reporting the former, but not the latter, type of findings. To the best of the authors' knowledge, there is only one prior work dedicated to generating a summary of the latter findings, i.e., a follow-up radiology report summary. In this study, we propose a transformer-based framework to tackle this task. Motivated by our observations on the significance of the medical lexicon to the fidelity of report summary generation, we introduce two mechanisms to bestow clinical insight on our model, namely disease probability soft guidance and a masked entity modeling loss. The former mechanism employs a pretrained abnormality classifier to guide the presence level of specific abnormalities, while the latter directs the model's attention toward the medical lexicon. Extensive experiments demonstrate that the performance of our model exceeds the state of the art.
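A masked entity modeling loss of the kind described can be sketched as token-level cross-entropy restricted to lexicon positions (a hedged reconstruction; the paper's exact masking and weighting scheme may differ):

```python
import torch
import torch.nn.functional as F

def masked_entity_loss(logits: torch.Tensor, targets: torch.Tensor,
                       entity_mask: torch.Tensor) -> torch.Tensor:
    """Cross-entropy computed only at positions flagged as medical-lexicon
    tokens, steering the decoder's capacity toward clinical entities.
    logits: (B, T, V), targets: (B, T), entity_mask: (B, T) bool."""
    per_token = F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        targets.reshape(-1),
        reduction="none",
    ).reshape(targets.shape)
    masked = per_token * entity_mask.float()
    return masked.sum() / entity_mask.float().sum().clamp(min=1.0)

# usage: typically added to the standard LM loss with a weighting factor
B, T, V = 2, 16, 1000
logits = torch.randn(B, T, V, requires_grad=True)
targets = torch.randint(0, V, (B, T))
entity_mask = torch.rand(B, T) > 0.8   # positions of lexicon terms
loss = masked_entity_loss(logits, targets, entity_mask)
loss.backward()
```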

UniSegDiff: Boosting Unified Lesion Segmentation via a Staged Diffusion Model

Yilong Hu, Shijie Chang, Lihe Zhang, Feng Tian, Weibing Sun, Huchuan Lu

arXiv preprint · Jul 24 2025
The Diffusion Probabilistic Model (DPM) has demonstrated remarkable performance across a variety of generative tasks. The inherent randomness in diffusion models helps address issues such as blurring at the edges of medical images and labels, positioning DPMs as a promising approach for lesion segmentation. However, we find that the current training and inference strategies of diffusion models result in an uneven distribution of attention across different timesteps, leading to longer training times and suboptimal solutions. To this end, we propose UniSegDiff, a novel diffusion model framework designed to address lesion segmentation in a unified manner across multiple modalities and organs. This framework introduces a staged training and inference approach, dynamically adjusting the prediction targets at different stages, forcing the model to maintain high attention across all timesteps, and achieves unified lesion segmentation through pre-training the feature extraction network for segmentation. We evaluate performance on six different organs across various imaging modalities. Comprehensive experimental results demonstrate that UniSegDiff significantly outperforms previous state-of-the-art (SOTA) approaches. The code is available at https://github.com/HUYILONG-Z/UniSegDiff.
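The staging specifics are in the linked repository; as a rough sketch of stage-dependent prediction targets (the stage boundary and the mask-vs-noise split below are our guesses, not UniSegDiff's actual schedule):

```python
import torch

def staged_target(x0: torch.Tensor, noise: torch.Tensor, t: torch.Tensor,
                  num_steps: int = 1000, boundary: float = 0.5) -> torch.Tensor:
    """Pick the regression target per sample by training stage: predict the
    clean mask x0 at high-noise timesteps (where the input carries little
    signal) and the noise at low-noise timesteps."""
    is_late = (t.float() / num_steps) >= boundary        # (B,) bool stage flag
    sel = is_late.view(-1, *([1] * (x0.dim() - 1)))      # broadcast over C, H, W
    return torch.where(sel, x0, noise)

x0 = torch.rand(4, 1, 32, 32)     # ground-truth lesion masks
noise = torch.randn_like(x0)
t = torch.randint(0, 1000, (4,))  # sampled timesteps
target = staged_target(x0, noise, t)
```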

Anatomically Based Multitask Deep Learning Radiomics Nomogram Predicts the Implant Failure Risk in Sinus Floor Elevation.

Zhu Y, Liu Y, Zhao Y, Lu Q, Wang W, Chen Y, Ji P, Chen T

PubMed paper · Jul 23 2025
To develop and assess the performance of an anatomically based multitask deep learning radiomics nomogram (AMDRN) system to predict implant failure risk before maxillary sinus floor elevation (MSFE) while incorporating automated segmentation of key anatomical structures. We retrospectively collected patients' preoperative cone beam computed tomography (CBCT) images and electronic medical records (EMRs). First, the nnU-Net v2 model was optimized to segment the maxillary sinus (MS), Schneiderian membrane (SM), and residual alveolar bone (RAB). Based on the segmentation masks, a deep learning model (3D-Attention-ResNet) and a radiomics model were developed to extract 3D features from CBCT scans, generating the DL Score and Rad Score. Significant clinical features were also extracted from EMRs to build a clinical model. These components were then integrated using logistic regression (LR) to create the AMDRN model, which includes a visualization module to support clinical decision-making. Segmentation results for MS, RAB, and SM achieved high Dice coefficients on the test set, with values of 99.50% ± 0.84%, 92.53% ± 3.78%, and 91.58% ± 7.16%, respectively. On an independent test set, the Clinical model, Radiomics model, 3D-DL model, and AMDRN model achieved prediction accuracies of 60%, 76%, 82%, and 90%, respectively, with AMDRN achieving the highest AUC of 93%. The AMDRN system enables efficient preoperative prediction of implant failure risk in MSFE and accurate segmentation of critical anatomical structures, supporting personalized treatment planning and clinical risk management.
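The final fusion step described, combining the Clinical, Rad, and DL scores with logistic regression, can be sketched as simple score stacking (synthetic data; variable names hypothetical):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 120  # synthetic cohort
clin_score = rng.normal(size=n)   # stand-ins for the three sub-model outputs
rad_score = rng.normal(size=n)
dl_score = rng.normal(size=n)
y = rng.integers(0, 2, size=n)    # implant failure label

# stack the sub-model scores and let logistic regression learn the weighting
X = np.column_stack([clin_score, rad_score, dl_score])
fusion = LogisticRegression().fit(X, y)
risk = fusion.predict_proba(X)[:, 1]   # nomogram-style failure probability
```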