
The performance of artificial intelligence in image-based prediction of hematoma enlargement: a systematic review and meta-analysis.

Fan W, Wu Z, Zhao W, Jia L, Li S, Wei W, Chen X

PubMed · Dec 1 2025
Accurately predicting hematoma enlargement (HE) is crucial for improving the prognosis of patients with cerebral haemorrhage. Artificial intelligence (AI) is a potentially reliable assistant for medical image recognition. This study systematically reviews medical imaging articles on the predictive performance of AI in HE. We retrieved relevant studies published before October 2024 from the Embase, Institute of Electrical and Electronics Engineers (IEEE), PubMed, Web of Science, and Cochrane Library databases. Eligible studies were diagnostic-accuracy studies of AI models trained on CT images to predict hematoma enlargement that reported 2 × 2 contingency tables or provided sensitivity (SE) and specificity (SP) for calculation. Two reviewers independently screened the retrieved citations and extracted data. Methodological quality was assessed using QUADAS-AI, and the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) guideline was followed to ensure standardised reporting. Subgroup analyses were performed by sample size, risk of bias, year of publication, ratio of training set to test set, and number of centres involved. Thirty-six articles were included in this systematic review for qualitative analysis, of which 23 provided sufficient information for quantitative analysis. Among these, 7 used deep learning (DL) and 16 used machine learning (ML). The pooled SE and SP of ML were 78% (95% CI: 69-85%) and 85% (78-90%), respectively, with an AUC of 0.89 (0.86-0.91). The pooled SE and SP of DL were 87% (95% CI: 80-92%) and 75% (67-81%), respectively, with an AUC of 0.88 (0.85-0.91). Subgroup analysis found that when the ratio of training set to test set was 7:3, sensitivity was 0.77 (0.62-0.91), p = 0.03. Specificity was higher in studies with sample sizes above 200, at 0.83 (0.75-0.92), p = 0.02; in the at-risk group of the study-design risk-of-bias assessment, at 0.83 (0.76-0.89), p = 0.02; in articles published before 2021, at 0.84 (0.77-0.90); and in single-centre data, at 0.85 (0.80-0.91), p < 0.001. Imaging-based artificial intelligence algorithms have shown good performance in predicting HE.
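
For orientation, pooled sensitivities like those reported above are typically obtained by transforming each study's estimate to the logit scale, weighting by inverse variance, and back-transforming. Below is a minimal fixed-effect sketch with made-up study counts; the review itself would more likely use a bivariate random-effects model.

```python
import numpy as np

# Hypothetical per-study 2x2 counts: true positives and false negatives
# (illustrative values, not taken from the reviewed articles).
tp = np.array([45, 30, 60, 25])
fn = np.array([12, 10, 15, 9])

# Per-study sensitivity with a 0.5 continuity correction to avoid log(0).
sens = (tp + 0.5) / (tp + fn + 1.0)

# Logit transform, inverse-variance (fixed-effect) pooling, back-transform.
logit = np.log(sens / (1 - sens))
var = 1.0 / (tp + 0.5) + 1.0 / (fn + 0.5)   # variance of the logit
w = 1.0 / var
pooled_logit = np.sum(w * logit) / np.sum(w)
se_pooled = np.sqrt(1.0 / np.sum(w))

pooled = 1 / (1 + np.exp(-pooled_logit))
lo = 1 / (1 + np.exp(-(pooled_logit - 1.96 * se_pooled)))
hi = 1 / (1 + np.exp(-(pooled_logit + 1.96 * se_pooled)))
print(f"pooled sensitivity = {pooled:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```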

Convolutional autoencoder-based deep learning for intracerebral hemorrhage classification using brain CT images.

Nageswara Rao B, Acharya UR, Tan RS, Dash P, Mohapatra M, Sabut S

PubMed · Dec 1 2025
Intracerebral haemorrhage (ICH) is a common form of stroke that affects millions of people worldwide. Its incidence is associated with a high rate of mortality and morbidity. Accurate diagnosis using brain non-contrast computed tomography (NCCT) is crucial for decision-making on potentially life-saving surgery. Limited access to expert readers and inter-observer variability impose barriers to timely and accurate ICH diagnosis. We proposed a hybrid deep learning model for automated ICH diagnosis using NCCT images, which comprises a convolutional autoencoder (CAE) to extract features with reduced data dimensionality and a dense neural network (DNN) for classification. To ensure that the model generalizes to new data, we trained it using tenfold cross-validation and holdout methods. Principal component analysis (PCA)-based dimensionality reduction and classification was systematically implemented for comparison. The study dataset comprises 1645 images of the "ICH" class and 1648 images of the "Normal" class (from patients with non-hemorrhagic stroke), obtained from 108 patients who had undergone CT examination on a 64-slice computed tomography scanner at Kalinga Institute of Medical Sciences between 2020 and 2023. Our CAE-DNN hybrid model attained 99.84% accuracy, 99.69% sensitivity, 100% specificity, 100% precision, and a 99.84% F1-score, outperforming the comparator PCA-DNN model as well as published results in the literature. In addition, using saliency maps, our CAE-DNN model can highlight areas on the images that are closely correlated with regions of ICH manually contoured by expert readers. The CAE-DNN model demonstrates a proof-of-concept for accurate ICH detection and localization, which can potentially be implemented to prioritize treatment using NCCT images in clinical settings.
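
As a rough illustration of the CAE-DNN pipeline described above, the sketch below pairs a convolutional encoder/decoder with a small dense classifier operating on the latent code. All layer sizes, the latent dimension, and the 256 × 256 input are assumptions for illustration, not the authors' configuration.

```python
import torch
import torch.nn as nn

# Convolutional autoencoder: the encoder compresses an NCCT slice to a latent
# vector, the decoder reconstructs it (used for feature-learning), and a dense
# head classifies the latent code as ICH vs. normal.
class ConvAutoencoder(nn.Module):
    def __init__(self, latent_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 256 -> 128
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 128 -> 64
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # 64 -> 32
            nn.Flatten(),
            nn.Linear(64 * 32 * 32, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64 * 32 * 32), nn.ReLU(),
            nn.Unflatten(1, (64, 32, 32)),
            nn.ConvTranspose2d(64, 32, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

# Dense classifier operating on the compressed latent features.
classifier = nn.Sequential(
    nn.Linear(128, 64), nn.ReLU(), nn.Dropout(0.3),
    nn.Linear(64, 2),   # ICH vs. normal
)

cae = ConvAutoencoder()
x = torch.randn(4, 1, 256, 256)      # a batch of normalized NCCT slices
recon, latent = cae(x)
logits = classifier(latent)
print(recon.shape, logits.shape)     # torch.Size([4, 1, 256, 256]) torch.Size([4, 2])
```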

TFKT V2: task-focused knowledge transfer from natural images for computed tomography perceptual image quality assessment.

Rifa KR, Ahamed MA, Zhang J, Imran A

PubMed · Sep 1 2025
The accurate assessment of computed tomography (CT) image quality is crucial for ensuring diagnostic reliability while minimizing radiation dose. Radiologists' evaluations are time-consuming and labor-intensive. Existing automated approaches often require large CT datasets with predefined image quality assessment (IQA) scores, which frequently do not align well with clinical evaluations. We aim to develop a reference-free, automated method for CT IQA that closely reflects radiologists' evaluations, reducing the dependency on large annotated datasets. We propose Task-Focused Knowledge Transfer (TFKT), a deep learning-based IQA method leveraging knowledge transfer from task-similar natural image datasets. TFKT incorporates a hybrid convolutional neural network-transformer model, enabling accurate quality predictions by learning from natural image distortions with human-annotated mean opinion scores. The model is pre-trained on natural image datasets and fine-tuned on low-dose computed tomography perceptual image quality assessment data to ensure task-specific adaptability. Extensive evaluations demonstrate that the proposed TFKT method effectively predicts IQA scores aligned with radiologists' assessments on in-domain datasets and generalizes well to out-of-domain clinical pediatric CT exams. The model achieves robust performance without requiring high-dose reference images and can assess the quality of approximately 30 CT image slices per second. The proposed TFKT approach provides a scalable, accurate, and reference-free solution for CT IQA. The model bridges the gap between traditional and deep learning-based IQA, offering clinically relevant and computationally efficient assessments applicable to real-world clinical settings.
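
The two-phase transfer described above (pre-train a quality-score regressor on natural-image distortions with mean opinion scores, then fine-tune on radiologist-scored CT) can be sketched as follows. The real TFKT model is a hybrid CNN-transformer; the resnet18 backbone, learning rates, and dummy tensors here are placeholders, not the authors' setup.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18, ResNet18_Weights

# Natural-image backbone with a single-output regression head for quality scores.
backbone = resnet18(weights=ResNet18_Weights.IMAGENET1K_V1)
backbone.fc = nn.Linear(backbone.fc.in_features, 1)   # mean-opinion-score regression head

def train_step(model, images, scores, optimizer, loss_fn=nn.L1Loss()):
    """One regression step; used for both the natural-image pre-training phase
    (human MOS labels) and the CT fine-tuning phase (radiologist scores)."""
    optimizer.zero_grad()
    pred = model(images).squeeze(1)
    loss = loss_fn(pred, scores)
    loss.backward()
    optimizer.step()
    return loss.item()

# Phase 1 (assumed): natural-image distortions with MOS labels.
opt = torch.optim.AdamW(backbone.parameters(), lr=1e-4)
loss = train_step(backbone, torch.randn(8, 3, 224, 224), torch.rand(8) * 5, opt)

# Phase 2 (assumed): fine-tune on low-dose CT slices scored by radiologists,
# typically with a smaller learning rate to preserve the transferred features.
opt_ft = torch.optim.AdamW(backbone.parameters(), lr=1e-5)
loss_ft = train_step(backbone, torch.randn(8, 3, 224, 224), torch.rand(8) * 5, opt_ft)
print(loss, loss_ft)
```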

Cross-view Generalized Diffusion Model for Sparse-view CT Reconstruction

Jixiang Chen, Yiqun Lin, Yi Qin, Hualiang Wang, Xiaomeng Li

arXiv preprint · Aug 14 2025
Sparse-view computed tomography (CT) reduces radiation exposure by subsampling projection views, but conventional reconstruction methods produce severe streak artifacts with undersampled data. While deep-learning-based methods enable single-step artifact suppression, they often produce over-smoothed results under significant sparsity. Though diffusion models improve reconstruction via iterative refinement and generative priors, they require hundreds of sampling steps and struggle with stability in highly sparse regimes. To tackle these concerns, we present the Cross-view Generalized Diffusion Model (CvG-Diff), which reformulates sparse-view CT reconstruction as a generalized diffusion process. Unlike existing diffusion approaches that rely on stochastic Gaussian degradation, CvG-Diff explicitly models image-domain artifacts caused by angular subsampling as a deterministic degradation operator, leveraging correlations across sparse-view CT at different sampling rates. To address the inherent artifact propagation and the inefficiency of sequential sampling in the generalized diffusion model, we introduce two innovations: Error-Propagating Composite Training (EPCT), which facilitates identifying error-prone regions and suppresses propagated artifacts, and Semantic-Prioritized Dual-Phase Sampling (SPDPS), an adaptive strategy that prioritizes semantic correctness before detail refinement. Together, these innovations enable CvG-Diff to achieve high-quality reconstructions with minimal iterations, reaching 38.34 dB PSNR and 0.9518 SSIM for 18-view CT using only 10 steps on the AAPM-LDCT dataset. Extensive experiments demonstrate the superiority of CvG-Diff over state-of-the-art sparse-view CT reconstruction methods. The code is available at https://github.com/xmed-lab/CvG-Diff.
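
The central idea — that sparse-view streak artifacts can be written as a deterministic image-domain degradation (forward-project at a reduced set of angles, reconstruct with FBP) — can be reproduced in a toy form with scikit-image. This is only the degradation operator the paper builds its diffusion process on top of; the 18-view setting and the Shepp-Logan phantom are illustrative choices.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, resize

def sparse_view_degradation(image, n_views):
    """Deterministic sparse-view degradation: project at n_views angles, then FBP."""
    angles = np.linspace(0.0, 180.0, n_views, endpoint=False)
    sinogram = radon(image, theta=angles)                        # angular subsampling
    return iradon(sinogram, theta=angles, filter_name="ramp")    # FBP with streak artifacts

phantom = resize(shepp_logan_phantom(), (256, 256))
degraded_18 = sparse_view_degradation(phantom, n_views=18)   # heavy streaks
degraded_90 = sparse_view_degradation(phantom, n_views=90)   # milder artifacts
print(degraded_18.shape, degraded_90.shape)                  # both (256, 256)
```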

DINOMotion: advanced robust tissue motion tracking with DINOv2 in 2D-Cine MRI-guided radiotherapy.

Salari S, Spino C, Pharand LA, Lathuiliere F, Rivaz H, Beriault S, Xiao Y

PubMed · Aug 14 2025
Accurate tissue motion tracking is critical to ensure treatment outcome and safety in 2D-Cine MRI-guided radiotherapy. This is typically achieved by registration of sequential images, but existing methods often face challenges with large misalignments and lack of interpretability. In this paper, we introduce DINOMotion, a novel deep learning framework based on DINOv2 with Low-Rank Adaptation (LoRA) layers for robust, efficient, and interpretable motion tracking. DINOMotion automatically detects corresponding landmarks to derive optimal image registration, enhancing interpretability by providing explicit visual correspondences between sequential images. The integration of LoRA layers reduces trainable parameters, improving training efficiency, while DINOv2's powerful feature representations offer robustness against large misalignments. Unlike iterative optimization-based methods, DINOMotion directly computes image registration at test time. Our experiments on volunteer and patient datasets demonstrate its effectiveness in estimating both linear and nonlinear transformations, achieving Dice scores of 92.07% for the kidney, 90.90% for the liver, and 95.23% for the lung, with corresponding Hausdorff distances of 5.47 mm, 8.31 mm, and 6.72 mm, respectively. DINOMotion processes each scan in approximately 30ms and consistently outperforms state-of-the-art methods, particularly in handling large misalignments. These results highlight its potential as a robust and interpretable solution for real-time motion tracking in 2D-Cine MRI-guided radiotherapy.
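
The parameter-efficient setup described above (a DINOv2 backbone with low-rank adapters) can be sketched with the Hugging Face transformers and peft libraries. The checkpoint name, rank, and alpha below are assumptions, and DINOMotion's own landmark-matching and registration heads are not reproduced here.

```python
import torch
from transformers import Dinov2Model
from peft import LoraConfig, get_peft_model

# Frozen DINOv2 backbone with LoRA adapters injected into the attention projections.
backbone = Dinov2Model.from_pretrained("facebook/dinov2-base")

lora_cfg = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["query", "value"],   # attention projections in each block
)
model = get_peft_model(backbone, lora_cfg)
model.print_trainable_parameters()       # only the LoRA adapters are trainable

# Patch-level features for a pair of cine frames; correspondences between the two
# feature maps would then drive the landmark matching / registration step.
frames = torch.randn(2, 3, 224, 224)
with torch.no_grad():
    feats = model(pixel_values=frames).last_hidden_state
print(feats.shape)   # (2, 1 + num_patches, hidden_dim)
```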

DINOMotion: advanced robust tissue motion tracking with DINOv2 in 2D-Cine MRI-guided radiotherapy

Soorena Salari, Catherine Spino, Laurie-Anne Pharand, Fabienne Lathuiliere, Hassan Rivaz, Silvain Beriault, Yiming Xiao

arXiv preprint · Aug 14 2025
Accurate tissue motion tracking is critical to ensure treatment outcome and safety in 2D-Cine MRI-guided radiotherapy. This is typically achieved by registration of sequential images, but existing methods often face challenges with large misalignments and lack of interpretability. In this paper, we introduce DINOMotion, a novel deep learning framework based on DINOv2 with Low-Rank Adaptation (LoRA) layers for robust, efficient, and interpretable motion tracking. DINOMotion automatically detects corresponding landmarks to derive optimal image registration, enhancing interpretability by providing explicit visual correspondences between sequential images. The integration of LoRA layers reduces trainable parameters, improving training efficiency, while DINOv2's powerful feature representations offer robustness against large misalignments. Unlike iterative optimization-based methods, DINOMotion directly computes image registration at test time. Our experiments on volunteer and patient datasets demonstrate its effectiveness in estimating both linear and nonlinear transformations, achieving Dice scores of 92.07% for the kidney, 90.90% for the liver, and 95.23% for the lung, with corresponding Hausdorff distances of 5.47 mm, 8.31 mm, and 6.72 mm, respectively. DINOMotion processes each scan in approximately 30ms and consistently outperforms state-of-the-art methods, particularly in handling large misalignments. These results highlight its potential as a robust and interpretable solution for real-time motion tracking in 2D-Cine MRI-guided radiotherapy.

Performance Evaluation of Deep Learning for the Detection and Segmentation of Thyroid Nodules: Systematic Review and Meta-Analysis.

Ni J, You Y, Wu X, Chen X, Wang J, Li Y

PubMed · Aug 14 2025
Thyroid cancer is one of the most common endocrine malignancies. Its incidence has steadily increased in recent years. Distinguishing between benign and malignant thyroid nodules (TNs) is challenging due to their overlapping imaging features. The rapid advancement of artificial intelligence (AI) in medical image analysis, particularly deep learning (DL) algorithms, has provided novel solutions for automated TN detection. However, existing studies exhibit substantial heterogeneity in diagnostic performance, and no systematic evidence-based research has comprehensively assessed the diagnostic performance of DL models in this field. This study aimed to conduct a systematic review and meta-analysis to appraise the performance of DL algorithms in diagnosing TN malignancy, identify key factors influencing their diagnostic efficacy, and compare their accuracy with that of clinicians in image-based diagnosis. We systematically searched multiple databases, including PubMed, Cochrane, Embase, Web of Science, and IEEE, and identified 41 eligible studies for systematic review and meta-analysis. Based on task type, studies were categorized into segmentation (n=14) and detection (n=27) tasks. The pooled sensitivity, specificity, and area under the receiver operating characteristic curve (AUC) were calculated for each group. Subgroup analyses were performed to examine the impact of transfer learning and to compare model performance against clinicians. For segmentation tasks, the pooled sensitivity, specificity, and AUC were 82% (95% CI 79%-84%), 95% (95% CI 92%-96%), and 0.91 (95% CI 0.89-0.94), respectively. For detection tasks, the pooled sensitivity, specificity, and AUC were 91% (95% CI 89%-93%), 89% (95% CI 86%-91%), and 0.96 (95% CI 0.93-0.97), respectively. Some studies demonstrated that DL models could achieve diagnostic performance comparable with, or even exceeding, that of clinicians in certain scenarios. The application of transfer learning contributed to improved model performance. DL algorithms exhibit promising diagnostic accuracy in TN imaging, highlighting their potential as auxiliary diagnostic tools. However, current studies are limited by suboptimal methodological design, inconsistent image quality across datasets, and insufficient external validation, which may introduce bias. Future research should enhance methodological standardization, improve model interpretability, and promote transparent reporting to facilitate the sustainable clinical translation of DL-based solutions.

Deep learning-based non-invasive prediction of PD-L1 status and immunotherapy survival stratification in esophageal cancer using [¹⁸F]FDG PET/CT.

Xie F, Zhang M, Zheng C, Zhao Z, Wang J, Li Y, Wang K, Wang W, Lin J, Wu T, Wang Y, Chen X, Li Y, Zhu Z, Wu H, Li Y, Liu Q

PubMed · Aug 14 2025
This study aimed to develop and validate deep learning models using [¹⁸F]FDG PET/CT to predict PD-L1 status in esophageal cancer (EC) patients. Additionally, we assessed the potential of the derived deep learning model scores (DLS) for survival stratification in immunotherapy. In this retrospective study, we included 331 EC patients from two centers, dividing them into training, internal validation, and external validation cohorts. Fifty patients who received immunotherapy were followed up. We developed four 3D ResNet10-based models, namely PET + CT + clinical factors (CPC), PET + CT (PC), PET (P), and CT (C), using pre-treatment [¹⁸F]FDG PET/CT scans. For comparison, we also constructed a logistic model incorporating clinical factors (clinical model). The DLS were evaluated as radiological markers for survival stratification, and nomograms for predicting survival were constructed. The models demonstrated accurate prediction of PD-L1 status. The areas under the curve (AUCs) for predicting PD-L1 status were as follows: CPC (0.927), PC (0.904), P (0.886), C (0.934), and the clinical model (0.603) in the training cohort; CPC (0.882), PC (0.848), P (0.770), C (0.745), and the clinical model (0.524) in the internal validation cohort; and CPC (0.843), PC (0.806), P (0.759), C (0.667), and the clinical model (0.671) in the external validation cohort. The CPC and PC models exhibited superior predictive performance. Survival analysis revealed that the DLS from most models effectively stratified overall survival and progression-free survival at appropriate cut-off points (P < 0.05), outperforming stratification based on PD-L1 status (combined positive score ≥ 10). Furthermore, incorporating model scores with clinical factors in nomograms enhanced the predictive probability of survival after immunotherapy. Deep learning models based on [¹⁸F]FDG PET/CT can accurately predict PD-L1 status in esophageal cancer patients. The derived DLS can effectively stratify survival outcomes following immunotherapy, particularly when combined with clinical factors.
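
The survival-stratification step described above — dichotomizing patients by a deep learning model score (DLS) cut-off and comparing the two arms — can be illustrated with the lifelines package. The scores, follow-up times, and the median cut-off below are synthetic; the study would choose its own optimal cut-off.

```python
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Synthetic post-immunotherapy cohort with one model-derived score per patient.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "dls": rng.uniform(0, 1, 50),             # deep learning model score
    "time_months": rng.exponential(12, 50),   # follow-up time
    "event": rng.integers(0, 2, 50),          # 1 = death/progression observed
})

cutoff = df["dls"].median()                   # illustrative cut-off
high, low = df[df["dls"] >= cutoff], df[df["dls"] < cutoff]

km = KaplanMeierFitter()
km.fit(high["time_months"], high["event"], label="high DLS")
print(km.median_survival_time_)

result = logrank_test(high["time_months"], low["time_months"],
                      event_observed_A=high["event"], event_observed_B=low["event"])
print(f"log-rank p = {result.p_value:.3f}")
```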

Pathology-Guided AI System for Accurate Segmentation and Diagnosis of Cervical Spondylosis.

Zhang Q, Chen X, He Z, Wu L, Wang K, Sun J, Shen H

PubMed · Aug 13 2025
Cervical spondylosis, a complex and prevalent condition, demands precise and efficient diagnostic techniques for accurate assessment. While MRI offers detailed visualization of cervical spine anatomy, manual interpretation remains labor-intensive and prone to error. To address this, we developed an AI-assisted Expert-based Diagnosis System that automates both segmentation and diagnosis of cervical spondylosis using MRI. Leveraging multi-center datasets of cervical MRI images from patients with cervical spondylosis, our system features a pathology-guided segmentation model capable of accurately segmenting key cervical anatomical structures. The segmentation is followed by an expert-based diagnostic framework that automates the calculation of critical clinical indicators. Our segmentation model achieved an average Dice coefficient exceeding 0.90 across four cervical spinal anatomical structures and demonstrated enhanced accuracy in herniation areas. Diagnostic evaluation further showcased the system's precision, with the lowest mean absolute errors (MAE) for the C2-C7 Cobb angle and the Maximum Spinal Cord Compression (MSCC) coefficient. In addition, our method delivered high accuracy, precision, recall, and F1 scores in herniation localization, K-line status assessment, T2 hyperintensity detection, and Kang grading. Comparative analysis and external validation demonstrate that our system outperforms existing methods, establishing a new benchmark for segmentation and diagnostic tasks in cervical spondylosis.
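
One of the automated indicators mentioned above, the C2-C7 Cobb angle, reduces to the angle between the inferior endplate of C2 and the inferior endplate of C7 on a sagittal slice. A toy version of that calculation is shown below; the endplate landmarks would in practice come from the segmentation masks, and the coordinates here are made up.

```python
import numpy as np

def cobb_angle(endplate_a, endplate_b):
    """Angle in degrees between two endplate lines, each given as two (x, y) points."""
    va = np.asarray(endplate_a[1]) - np.asarray(endplate_a[0])
    vb = np.asarray(endplate_b[1]) - np.asarray(endplate_b[0])
    cos = np.dot(va, vb) / (np.linalg.norm(va) * np.linalg.norm(vb))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Hypothetical anterior/posterior endplate points in image coordinates.
c2_inferior = [(102, 48), (138, 54)]
c7_inferior = [(96, 182), (140, 170)]
print(f"C2-C7 Cobb angle: {cobb_angle(c2_inferior, c7_inferior):.1f} degrees")
```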

PPEA: Personalized positioning and exposure assistant based on multi-task shared pose estimation transformer.

Zhao J, Liu J, Yang C, Tang H, Chen Y, Zhang Y

PubMed · Aug 13 2025
Hand and foot digital radiography (DR) is an indispensable tool in medical imaging, with varying diagnostic requirements necessitating different hand and foot positionings. Accurate positioning is crucial for obtaining diagnostically valuable images, and adjusting exposure parameters such as the exposure area based on patient conditions helps minimize the likelihood of image retakes. We propose a personalized positioning and exposure assistant capable of automatically recognizing hand and foot positionings and recommending appropriate exposure parameters to achieve these objectives. The assistant comprises three modules: (1) the Progressive Iterative Hand-Foot Tracker (PIHFT), which iteratively locates hands or feet in RGB images, providing the foundation for accurate pose estimation; (2) the Multi-Task Shared Pose Estimation Transformer (MTSPET), a Transformer-based model comprising hand and foot estimation branches with similar network architectures that share a common backbone; MTSPET outperformed MediaPipe in the hand pose estimation task and successfully transferred this capability to the foot pose estimation task; and (3) the Domain Expertise-embedded Positioning and Exposure Assistant (DEPEA), which combines the key-point coordinates of hands and feet with specific positioning and exposure parameter requirements, and is capable of checking patient positioning and inferring exposure areas and Regions of Interest (ROIs) for Digital Automatic Exposure Control (DAEC). Additionally, two datasets were collected and used to train MTSPET. A preliminary clinical trial showed strong agreement between PPEA's outputs and manual annotations, indicating the system's effectiveness in typical clinical scenarios. The contributions of this study lay the foundation for personalized, patient-specific imaging strategies, ultimately enhancing diagnostic outcomes and minimizing the risk of errors in clinical settings.
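
A simplified version of the DEPEA step is sketched below: once hand key-points are available from the pose estimator, an exposure field can be inferred as a padded bounding box around them, with a DAEC region of interest centred on a subset of joints. The key-point values, padding, and ROI rule are hypothetical, not the system's actual domain rules.

```python
import numpy as np

def exposure_area(keypoints_px, pad_px=30):
    """Axis-aligned exposure field from (x, y) key-points, padded by pad_px pixels."""
    pts = np.asarray(keypoints_px)
    x_min, y_min = pts.min(axis=0) - pad_px
    x_max, y_max = pts.max(axis=0) + pad_px
    return (int(x_min), int(y_min), int(x_max), int(y_max))

# 21 hand key-points as returned by a pose estimator (values made up).
hand_kpts = np.random.default_rng(1).uniform(200, 800, size=(21, 2))
field = exposure_area(hand_kpts)
roi_centre = hand_kpts[:5].mean(axis=0)   # e.g. centre the DAEC ROI on the carpal joints
print("exposure field (x0, y0, x1, y1):", field)
print("DAEC ROI centre:", roi_centre.round(1))
```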