Page 76 of 3993982 results

Information Entropy-Based Framework for Quantifying Tortuosity in Meibomian Gland Uneven Atrophy

Kesheng Wang, Xiaoyu Chen, Chunlei He, Fenfen Li, Xinxin Yu, Dexing Kong, Shoujun Huang, Qi Dai

arXiv preprint, Jul 24 2025
In the medical image analysis field, precise quantification of curve tortuosity plays a critical role in the auxiliary diagnosis and pathological assessment of various diseases. In this study, we propose a novel framework for tortuosity quantification and demonstrate its effectiveness through the evaluation of meibomian gland atrophy uniformity, serving as a representative application scenario. We introduce an information entropy-based tortuosity quantification framework that integrates probability modeling with entropy theory and incorporates domain transformation of curve data. Unlike traditional methods such as curvature or arc-chord ratio, this approach evaluates the tortuosity of a target curve by comparing it to a designated reference curve. Consequently, it is more suitable for tortuosity assessment tasks in medical data where biologically plausible reference curves are available, providing a more robust and objective evaluation metric without relying on idealized straight-line comparisons. First, we conducted numerical simulation experiments to preliminarily assess the stability and validity of the method. Subsequently, the framework was applied to quantify the spatial uniformity of meibomian gland atrophy and to analyze the difference in this uniformity between Demodex-negative and Demodex-positive patient groups. The results demonstrated a significant difference in tortuosity-based uniformity between the two groups, with an area under the curve of 0.8768, sensitivity of 0.75, and specificity of 0.93. These findings highlight the clinical utility of the proposed framework in curve tortuosity analysis and its potential as a generalizable tool for quantitative morphological evaluation in medical diagnostics.
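The abstract does not reproduce the paper's exact probability model, but the core idea, summarizing a curve's shape as the Shannon entropy of a discretized turning-angle distribution, can be sketched as follows. The bin count and the turning-angle parameterization are illustrative assumptions, not the authors' formulation:

```python
import math

def turning_angles(points):
    """Signed turning angle at each interior vertex of a polyline."""
    angles = []
    for (x0, y0), (x1, y1), (x2, y2) in zip(points, points[1:], points[2:]):
        a1 = math.atan2(y1 - y0, x1 - x0)
        a2 = math.atan2(y2 - y1, x2 - x1)
        d = (a2 - a1 + math.pi) % (2 * math.pi) - math.pi  # wrap to (-pi, pi]
        angles.append(d)
    return angles

def angle_entropy(points, bins=12):
    """Shannon entropy (bits) of the turning-angle histogram."""
    angles = turning_angles(points)
    counts = [0] * bins
    for a in angles:
        idx = min(int((a + math.pi) / (2 * math.pi) * bins), bins - 1)
        counts[idx] += 1
    n = len(angles)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

# A straight segment concentrates all mass in one bin (entropy 0);
# a zigzag spreads mass across bins (higher entropy).
straight = [(float(x), 0.0) for x in range(10)]
zigzag = [(float(x), 0.5 * (-1) ** x) for x in range(10)]
```

Comparing such an entropy for a target curve against the value for a biologically plausible reference curve, rather than against a straight line, is the kind of relative assessment the framework describes.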

Direct Dual-Energy CT Material Decomposition using Model-based Denoising Diffusion Model

Hang Xu, Alexandre Bousse, Alessandro Perelli

arXiv preprint, Jul 24 2025
Dual-energy X-ray Computed Tomography (DECT) is an advanced technology that enables automatic decomposition of materials in clinical images without manual segmentation, exploiting the energy dependence of the X-ray linear attenuation. However, most methods perform material decomposition in the image domain as a post-processing step after reconstruction; this procedure does not account for the beam-hardening effect and yields sub-optimal results. In this work, we propose a deep learning procedure called Dual-Energy Decomposition Model-based Diffusion (DEcomp-MoD) for quantitative material decomposition, which directly converts the DECT projection data into material images. The algorithm incorporates knowledge of the spectral DECT model into the deep learning training loss and combines it with a score-based denoising diffusion prior learned in the material image domain. Importantly, the inference optimization takes the sinogram directly as input and converts it to material images through a model-based conditional diffusion model, which guarantees consistency of the results. We evaluate the proposed DEcomp-MoD method quantitatively and qualitatively on synthetic DECT sinograms from the low-dose AAPM dataset. Finally, we show that DEcomp-MoD outperforms state-of-the-art unsupervised score-based models and supervised deep learning networks, with the potential to be deployed for clinical diagnosis.
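Setting the diffusion prior aside, the spectral model underlying any DECT decomposition is linear: at each pixel, the measured attenuation at two energies is a mix of two basis materials' attenuations, so the decomposition reduces to a 2x2 solve. A minimal sketch, with hypothetical attenuation values standing in for real spectral calibration data:

```python
def decompose_two_materials(mu_low, mu_high, basis):
    """
    Solve [mu_low, mu_high]^T = M [a1, a2]^T for material fractions,
    where M holds each basis material's attenuation at the two energies.
    basis = ((mu1_low, mu2_low), (mu1_high, mu2_high))
    """
    (m11, m12), (m21, m22) = basis
    det = m11 * m22 - m12 * m21
    if abs(det) < 1e-12:
        raise ValueError("basis materials are indistinguishable at these energies")
    a1 = (mu_low * m22 - m12 * mu_high) / det
    a2 = (m11 * mu_high - mu_low * m21) / det
    return a1, a2

# Hypothetical attenuation coefficients (cm^-1) for water and iodine
# at a low and a high tube energy; not calibrated physical values.
basis = ((0.20, 5.0),   # low energy:  water, iodine
         (0.15, 2.0))   # high energy: water, iodine
a_water, a_iodine = decompose_two_materials(0.25, 0.17, basis)
```

Working in the projection domain, as DEcomp-MoD does, avoids applying this pixelwise solve to already beam-hardened reconstructions.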

Exploring the interplay of label bias with subgroup size and separability: A case study in mammographic density classification

Emma A. M. Stanley, Raghav Mehta, Mélanie Roschewitz, Nils D. Forkert, Ben Glocker

arXiv preprint, Jul 24 2025
Systematic mislabelling affecting specific subgroups (i.e., label bias) in medical imaging datasets represents an understudied issue concerning the fairness of medical AI systems. In this work, we investigated how the size and separability of subgroups affected by label bias influence the learned features and performance of a deep learning model. To this end, we trained deep learning models for binary tissue density classification using the EMory BrEast imaging Dataset (EMBED), where label bias affected separable subgroups (based on imaging manufacturer) or non-separable "pseudo-subgroups". We found that simulated subgroup label bias led to prominent shifts in the learned feature representations of the models. Importantly, these shifts within the feature space were dependent on both the relative size and the separability of the subgroup affected by label bias. We also observed notable differences in subgroup performance depending on whether a validation set with clean labels was used to define the classification threshold for the model. For instance, with label bias affecting the majority separable subgroup, the true positive rate for that subgroup fell from 0.898 when the validation set had clean labels to 0.518 when it had biased labels. Our work represents a key contribution toward understanding the consequences of label bias on subgroup fairness in medical imaging AI.
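The simulated label bias the experiments rely on can be reproduced, in spirit, with a simple label-flipping routine. The subgroup definition and flip rate below are illustrative, not EMBED's actual protocol:

```python
import random

def inject_label_bias(labels, subgroup_mask, flip_rate, seed=0):
    """Flip binary labels for subgroup members with probability flip_rate."""
    rng = random.Random(seed)  # seeded for reproducibility
    biased = []
    for y, in_group in zip(labels, subgroup_mask):
        if in_group and rng.random() < flip_rate:
            biased.append(1 - y)
        else:
            biased.append(y)
    return biased

labels = [0, 1] * 50
mask = [i < 40 for i in range(100)]  # first 40 samples form the subgroup
biased = inject_label_bias(labels, mask, flip_rate=0.3)
```

Varying the subgroup size (the mask) and its separability (how distinguishable its inputs are) is then what lets one probe the interactions the study reports.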

DGEAHorNet: high-order spatial interaction network with dual cross global efficient attention for medical image segmentation.

Peng H, An X, Chen X, Chen Z

PubMed paper, Jul 24 2025
Medical image segmentation is a complex and challenging task, which aims to accurately segment various structures or abnormal regions in medical images. However, obtaining accurate segmentation results is difficult because of the great uncertainty in the shape, location, and scale of the target region. To address these challenges, we propose a higher-order spatial interaction framework with dual cross global efficient attention (DGEAHorNet), which employs a neural network architecture based on recursive gate convolution to adequately extract multi-scale contextual information from images. Specifically, a Dual Cross-Attention (DCA) module is added to the skip connections, which effectively blends multi-stage encoder features and narrows the semantic gap. In the bottleneck stage, a global channel spatial attention module (GCSAM) is used to extract global image information. To obtain better feature representations, we feed the output of the GCSAM into a multi-branch dense layer (SENetV2) for excitation. Furthermore, we adopt the Depthwise Over-parameterized Convolutional layer (DO-Conv) to replace the common convolutional layers in the input and output parts of our network, and add Efficient Attention (EA) to reduce computational complexity and enhance the model's performance. To evaluate the effectiveness of DGEAHorNet, we conduct comprehensive experiments on four publicly available datasets, achieving Dice similarity coefficients of 0.9320, 0.9337, 0.9312, and 0.7799 on ISIC2018, ISIC2017, CVC-ClinicDB, and HRF, respectively. Our results show that DGEAHorNet performs better than advanced methods. The code is publicly available at https://github.com/penghaixin/mymodel.
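The Dice similarity coefficient used to score these segmentations is straightforward to compute from binary masks; a minimal version over flat 0/1 lists:

```python
def dice_coefficient(pred, target):
    """Dice similarity coefficient between two binary masks (flat 0/1 lists)."""
    intersection = sum(p and t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    # Convention: two empty masks agree perfectly.
    return 1.0 if total == 0 else 2.0 * intersection / total

pred   = [1, 1, 0, 0, 1, 0]
target = [1, 0, 0, 0, 1, 1]
score = dice_coefficient(pred, target)  # 2*2 / (3+3) = 0.666...
```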

Artificial intelligence for multi-time-point arterial phase contrast-enhanced MRI profiling to predict prognosis after transarterial chemoembolization in hepatocellular carcinoma.

Yao L, Adwan H, Bernatz S, Li H, Vogl TJ

PubMed paper, Jul 24 2025
Contrast-enhanced magnetic resonance imaging (CE-MRI) monitoring across multiple time points is critical for optimizing hepatocellular carcinoma (HCC) prognosis during transarterial chemoembolization (TACE) treatment. The aim of this retrospective study is to develop and validate an artificial intelligence (AI)-powered model utilizing multi-time-point arterial phase CE-MRI data for HCC prognosis stratification in TACE patients. A total of 543 individual arterial phase CE-MRI scans from 181 HCC patients were retrospectively collected in this study. All patients underwent TACE and longitudinal arterial phase CE-MRI assessments at three time points: prior to treatment, and following the first and second TACE sessions. Among them, 110 patients received TACE monotherapy, while the remaining 71 patients underwent TACE in combination with microwave ablation (MWA). All images were subjected to standardized preprocessing procedures. We developed an end-to-end deep learning model, ProgSwin-UNETR, based on the Swin Transformer architecture, to perform four-class prognosis stratification directly from input imaging data. The model was trained using multi-time-point arterial phase CE-MRI data and evaluated via fourfold cross-validation. Classification performance was assessed using the area under the receiver operating characteristic curve (AUC). For comparative analysis, we benchmarked performance against traditional radiomics-based classifiers and the mRECIST criteria. Prognostic utility was further assessed using Kaplan-Meier (KM) survival curves. Additionally, multivariate Cox proportional hazards regression was performed as a post hoc analysis to evaluate the independent and complementary prognostic value of the model outputs and clinical variables. Grad-CAM++ was applied to visualize the imaging regions contributing most to model prediction.
The ProgSwin-UNETR model achieved an accuracy of 0.86 and an AUC of 0.92 (95% CI: 0.90-0.95) for the four-class prognosis stratification task, outperforming radiomic models across all risk groups. Furthermore, KM survival analyses were performed using three different approaches (the AI model, radiomics-based classifiers, and the mRECIST criteria) to stratify patients by risk. Of the three, only the AI-based ProgSwin-UNETR model achieved statistically significant risk stratification across the entire cohort and in both the TACE-alone and TACE + MWA subgroups (p < 0.005). In contrast, the mRECIST and radiomics models did not yield significant survival differences across subgroups (p > 0.05). Multivariate Cox regression analysis further demonstrated that the model was a robust independent prognostic factor (p = 0.01), effectively stratifying patients into four distinct risk groups (Class 0 to Class 3) with Log(HR) values of 0.97, 0.51, -0.53, and -0.92, respectively. Additionally, Grad-CAM++ visualizations highlighted critical regional features contributing to prognosis prediction, providing interpretability of the model. ProgSwin-UNETR accurately predicted the risk groups of HCC patients undergoing TACE therapy and can further be applied for personalized prognosis prediction.
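The reported Log(HR) values are easiest to read after exponentiation, which turns them into multiplicative hazard ratios against the baseline hazard:

```python
import math

# Log hazard ratios reported for the four risk classes.
log_hrs = {"Class 0": 0.97, "Class 1": 0.51, "Class 2": -0.53, "Class 3": -0.92}

# Exponentiating a Cox coefficient gives the multiplicative hazard ratio:
# Class 0 carries roughly e^0.97 = 2.6x the baseline hazard,
# while Class 3 carries roughly e^-0.92 = 0.4x (a protective class).
hazard_ratios = {k: math.exp(v) for k, v in log_hrs.items()}
```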

Deep learning reconstruction of zero echo time magnetic resonance imaging: diagnostic performance in axial spondyloarthritis.

Yi J, Hahn S, Lee HJ, Lee S, Park S, Lee J, de Arcos J, Fung M

PubMed paper, Jul 24 2025
To compare the diagnostic performance of deep learning reconstruction (DLR) of zero echo time (ZTE) MRI for structural lesions in patients with axial spondyloarthritis, against T1WI and ZTE MRI without DLR, using CT as the reference standard. From February 2021 to December 2022, 26 patients (52 sacroiliac joints [SIJs], 104 quadrants) underwent SIJ MRI. Three readers assessed overall image quality and structural conspicuity, scoring SIJs for structural lesions on T1WI, ZTE, and ZTE DLR 50%, 75%, and 100%, respectively. Diagnostic performance was evaluated using CT as the reference standard, and inter-reader agreement was assessed using weighted kappa. ZTE DLR 100% showed the highest image quality scores for readers 1 and 2, and the best structural conspicuity scores for all three readers. For readers 2 and 3, ZTE DLR 75% showed the best diagnostic performance for bone sclerosis, outperforming T1WI and ZTE (all p < 0.05). For all readers, ZTE DLR 100% showed superior diagnostic performance for bone erosion compared to T1WI and ZTE (all p < 0.01). For bone sclerosis, ZTE DLR 50% showed the highest kappa coefficients between readers 1 and 2 and between readers 1 and 3. For bone erosion, ZTE DLR 100% showed the highest kappa coefficients between readers. ZTE MRI with DLR outperformed T1WI and ZTE MRI without DLR in diagnosing bone sclerosis and erosion of the SIJ, while offering similar subjective image quality and structural conspicuity.
Question: With zero echo time (ZTE) alone, small structural lesions, such as bone sclerosis and erosion, are challenging to confirm in axial spondyloarthritis.
Findings: ZTE deep learning reconstruction (DLR) showed higher diagnostic performance for detecting bone sclerosis and erosion, compared with T1WI and ZTE.
Clinical relevance: Applying DLR to ZTE enhances diagnostic capability for detecting bone sclerosis and erosion in the sacroiliac joint, aiding in the early diagnosis of axial spondyloarthritis.
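The inter-reader agreement statistic here, weighted kappa, discounts chance agreement and penalizes disagreements by their ordinal distance. A quadratic-weighted version can be sketched as follows; the readers' actual scoring scale and weighting scheme are not specified in the abstract, so the three-category example is illustrative:

```python
def quadratic_weighted_kappa(r1, r2, n_cats):
    """Cohen's kappa with quadratic weights for two raters' ordinal scores."""
    n = len(r1)
    obs = [[0.0] * n_cats for _ in range(n_cats)]  # joint score distribution
    for a, b in zip(r1, r2):
        obs[a][b] += 1 / n
    p1 = [sum(row) for row in obs]                                  # rater-1 marginals
    p2 = [sum(obs[i][j] for i in range(n_cats)) for j in range(n_cats)]  # rater-2
    w = lambda i, j: (i - j) ** 2 / (n_cats - 1) ** 2               # quadratic penalty
    d_obs = sum(w(i, j) * obs[i][j] for i in range(n_cats) for j in range(n_cats))
    d_exp = sum(w(i, j) * p1[i] * p2[j] for i in range(n_cats) for j in range(n_cats))
    return 1.0 - d_obs / d_exp

# One near-miss out of four ratings on a 3-point scale:
k = quadratic_weighted_kappa([0, 1, 2, 1], [0, 1, 2, 0], 3)  # → 0.8
```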

Disease probability-enhanced follow-up chest X-ray radiology report summary generation.

Wang Z, Deng Q, So TY, Chiu WH, Lee K, Hui ES

PubMed paper, Jul 24 2025
A chest X-ray radiology report describes not only abnormal findings from the X-ray obtained at a given examination, but also findings on disease progression or changes in device placement with reference to the X-ray from the previous examination. The majority of efforts on automatic generation of radiology reports pertain to the former, but not the latter, type of findings. To the best of the authors' knowledge, only one prior work is dedicated to generating a summary of the latter findings, i.e., a follow-up radiology report summary. In this study, we propose a transformer-based framework to tackle this task. Motivated by our observations on the significance of the medical lexicon for the fidelity of report summary generation, we introduce two mechanisms to bestow clinical insight on our model: disease probability soft guidance and a masked entity modeling loss. The former employs a pretrained abnormality classifier to guide the presence level of specific abnormalities, while the latter directs the model's attention toward the medical lexicon. Extensive experiments demonstrate that our model exceeds the state of the art.
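The masked entity modeling loss is described only at a high level; one plausible reading, up-weighting the token-level negative log-likelihood on lexicon entities so errors on clinical terms cost more than errors on filler words, can be sketched as follows. The weighting scheme, the weight value, and the toy tokens are assumptions, not the paper's exact loss:

```python
import math

def entity_weighted_nll(log_probs, targets, entity_mask, entity_weight=2.0):
    """
    Weighted negative log-likelihood over generated tokens: tokens flagged
    as medical-lexicon entities get a larger weight in the average.
    log_probs: per-position dicts mapping token -> log-probability.
    """
    total, weight_sum = 0.0, 0.0
    for lp, tok, is_entity in zip(log_probs, targets, entity_mask):
        w = entity_weight if is_entity else 1.0
        total += -w * lp[tok]
        weight_sum += w
    return total / weight_sum

# Toy example: the model is less confident on the clinical term,
# and the weighting makes that error dominate the loss.
log_probs = [{"effusion": math.log(0.5)}, {"unchanged": math.log(0.9)}]
loss = entity_weighted_nll(log_probs, ["effusion", "unchanged"], [True, False])
```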

Artificial intelligence in radiology: 173 commercially available products and their scientific evidence.

Antonissen N, Tryfonos O, Houben IB, Jacobs C, de Rooij M, van Leeuwen KG

PubMed paper, Jul 24 2025
To assess changes in peer-reviewed evidence on commercially available radiological artificial intelligence (AI) products from 2020 to 2023, as a follow-up to a 2020 review of 100 products. A literature review was conducted, covering January 2015 to March 2023, focusing on CE-certified radiological AI products listed on www.healthairegister.com. Papers were categorised using the hierarchical model of efficacy: technical/diagnostic accuracy (levels 1-2), clinical decision-making and patient outcomes (levels 3-5), or socio-economic impact (level 6). Study features such as design, vendor independence, and multicentre/multinational data usage were also examined. By 2023, 173 CE-certified AI products from 90 vendors were identified, compared to 100 products in 2020. Products with peer-reviewed evidence increased from 36% to 66%, supported by 639 papers (up from 237). Diagnostic accuracy studies (level 2) remained predominant, though their share decreased from 65% to 57%. The share of studies addressing higher-efficacy levels (3-6) remained essentially constant (22% to 24%), while the proportion of products supported by such evidence increased from 18% to 31%. Multicentre studies rose from 30% to 41% (p < 0.01). However, vendor-independent studies decreased (49% to 45%), as did multinational studies (15% to 11%) and prospective designs (19% to 16%), all with p > 0.05. The increase in peer-reviewed evidence and in the level of evidence per product indicates maturation of the radiological AI market. However, the continued focus on lower-efficacy studies and the reductions in vendor independence, multinational data, and prospective designs highlight persistent challenges in establishing unbiased, real-world evidence.
Question: Evaluating advancements in peer-reviewed evidence for CE-certified radiological AI products is crucial to understanding their clinical adoption and impact.
Findings: CE-certified AI products with peer-reviewed evidence increased from 36% in 2020 to 66% in 2023, but the proportion of higher-level evidence papers (~24%) remained unchanged.
Clinical relevance: The study highlights increased validation of radiological AI products but underscores a continued lack of evidence on their clinical and socio-economic impact, which may limit these tools' safe and effective implementation into clinical workflows.

Enhanced HER-2 prediction in breast cancer through synergistic integration of deep learning, ultrasound radiomics, and clinical data.

Hu M, Zhang L, Wang X, Xiao X

PubMed paper, Jul 24 2025
This study integrates ultrasound Radiomics with clinical data to enhance the diagnostic accuracy of HER-2 expression status in breast cancer, aiming to provide more reliable treatment strategies for this aggressive disease. We included ultrasound images and clinicopathologic data from 210 female breast cancer patients, employing a Generative Adversarial Network (GAN) to enhance image clarity and segment the region of interest (ROI) for Radiomics feature extraction. Features were optimized through Z-score normalization and various statistical methods. We constructed and compared multiple machine learning models, including Linear Regression, Random Forest, and XGBoost, with deep learning models such as CNNs (ResNet101, VGG19) and Transformer-based architectures. The Grad-CAM technique was used to visualize the decision-making process of the deep learning models. The Deep Learning Radiomics (DLR) model integrated Radiomics features with deep learning features, and a combined model further integrated clinical features to predict HER-2 status. The LightGBM and ResNet101 models showed high performance, but the combined model achieved the highest AUC values in both training and testing, demonstrating the effectiveness of integrating diverse data sources. The study demonstrates that the fusion of deep learning with Radiomics analysis significantly improves the prediction accuracy of HER-2 status, offering a new strategy for personalized breast cancer treatment and prognostic assessments.
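Z-score normalization, the feature-scaling step named above, is simply standardization to zero mean and unit variance; a minimal per-feature version:

```python
import math

def z_score(values):
    """Standardize a feature to zero mean and unit (population) variance."""
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    if var == 0:
        return [0.0] * len(values)  # constant feature carries no information
    sd = math.sqrt(var)
    return [(v - mean) / sd for v in values]

feature = [2.0, 4.0, 6.0, 8.0]
normalized = z_score(feature)
```

Scaling each radiomics feature this way keeps features with large raw ranges from dominating distance- or gradient-based learners.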

A Multi-Modal Pelvic MRI Dataset for Deep Learning-Based Pelvic Organ Segmentation in Endometriosis.

Liang X, Alpuing Radilla LA, Khalaj K, Dawoodally H, Mokashi C, Guan X, Roberts KE, Sheth SA, Tammisetti VS, Giancardo L

PubMed paper, Jul 24 2025
Endometriosis affects approximately 190 million females of reproductive age worldwide. Magnetic Resonance Imaging (MRI) has been recommended as the primary non-invasive diagnostic method for endometriosis. This study presents new multicenter female pelvic MRI datasets for endometriosis and reports the baseline segmentation performance of two auto-segmentation pipelines: the self-configuring nnU-Net and RAovSeg, a custom network. Multi-sequence endometriosis MRI scans were collected from two clinical institutions. A multicenter dataset of 51 subjects, with manual labels for multiple pelvic structures from three raters, was used to assess interrater agreement. A second single-center dataset of 81 subjects, with labels for multiple pelvic structures from one rater, was used to develop the ovary auto-segmentation pipelines. Uterus and ovary segmentations are available for all subjects; endometrioma segmentation is available for all subjects in whom it is detectable in the image. This study highlights the challenges of manual ovary segmentation in endometriosis MRI and emphasizes the need for an auto-segmentation method. The dataset is publicly available for further research in pelvic MRI auto-segmentation to support endometriosis research.
