
Convolutional autoencoder-based deep learning for intracerebral hemorrhage classification using brain CT images.

Nageswara Rao B, Acharya UR, Tan RS, Dash P, Mohapatra M, Sabut S

PubMed · Dec 1 2025
Intracerebral haemorrhage (ICH) is a common form of stroke that affects millions of people worldwide. Its incidence is associated with high rates of mortality and morbidity. Accurate diagnosis using brain non-contrast computed tomography (NCCT) is crucial for decision-making on potentially life-saving surgery. Limited access to expert readers and inter-observer variability impose barriers to timely and accurate ICH diagnosis. We propose a hybrid deep learning model for automated ICH diagnosis using NCCT images, comprising a convolutional autoencoder (CAE) that extracts features with reduced data dimensionality and a dense neural network (DNN) for classification. To ensure that the model generalizes to new data, we trained it using tenfold cross-validation and holdout methods. Principal component analysis (PCA)-based dimensionality reduction and classification were systematically implemented for comparison. The study dataset comprises 1645 "ICH"-class and 1648 "Normal"-class labelled images (the latter from patients with non-haemorrhagic stroke), obtained from 108 patients who underwent examination on a 64-slice computed tomography scanner at Kalinga Institute of Medical Sciences between 2020 and 2023. Our CAE-DNN hybrid model attained 99.84% accuracy, 99.69% sensitivity, 100% specificity, 100% precision, and a 99.84% F1-score, outperforming the comparator PCA-DNN model as well as published results in the literature. In addition, using saliency maps, the CAE-DNN model can highlight image regions closely correlated with areas of ICH that were manually contoured by expert readers. The CAE-DNN model demonstrates proof-of-concept for accurate ICH detection and localization, and can potentially be implemented to prioritize treatment using NCCT images in clinical settings.
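The comparator pipeline in this abstract, PCA-based dimensionality reduction feeding a dense classifier, can be sketched in a few lines of numpy. The data, image size, and component count below are illustrative stand-ins, not the paper's configuration:

```python
import numpy as np

# Hypothetical stand-in data: 100 flattened "CT slices" of 64x64 pixels.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 64 * 64))

def pca_reduce(X, n_components):
    """Project samples onto the top principal components via SVD."""
    Xc = X - X.mean(axis=0)            # centre each pixel/feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T    # low-dimensional codes

codes = pca_reduce(X, n_components=32)
print(codes.shape)  # (100, 32)
```

The resulting codes would then be fed to a classifier in place of the CAE's learned features.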

Cerebral ischemia detection using deep learning techniques.

Pastor-Vargas R, Antón-Munárriz C, Haut JM, Robles-Gómez A, Paoletti ME, Benítez-Andrades JA

PubMed · Dec 1 2025
Cerebrovascular accident (CVA), commonly known as stroke, stands as a significant contributor to contemporary mortality and morbidity rates, often leading to lasting disabilities. Early identification is crucial in mitigating its impact and reducing mortality. Non-contrast computed tomography (NCCT) remains the primary diagnostic tool in stroke emergencies due to its speed, accessibility, and cost-effectiveness. NCCT enables the exclusion of hemorrhage and directs attention to ischemic causes resulting from arterial flow obstruction. Quantification of NCCT findings employs the Alberta Stroke Program Early Computed Tomography Score (ASPECTS), which evaluates affected brain structures. This study seeks to identify early alterations in NCCT density in patients with stroke symptoms using a binary classifier distinguishing NCCT scans with and without stroke. To achieve this, various well-known deep learning architectures, namely VGG3D, ResNet3D, and DenseNet3D, validated in the ImageNet challenges, are implemented with 3D images covering the entire brain volume. The training results of these networks are presented, wherein diverse parameters are examined for optimal performance. The DenseNet3D network emerges as the most effective model, attaining a training set accuracy of 98% and a test set accuracy of 95%. The aim is to alert medical professionals to potential stroke cases in their early stages based on NCCT findings displaying altered density patterns.
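The dense connectivity that distinguishes DenseNet-style models, where each layer receives the concatenation of all earlier feature maps, can be illustrated with a toy numpy sketch. The shapes, growth rate, and random 1x1 "convolutions" below are illustrative, not the paper's 3D architecture:

```python
import numpy as np

def dense_block(x, n_layers, growth, seed=0):
    """Toy dense connectivity: each new layer sees the channel-wise
    concatenation of the input and every previous layer's output."""
    rng = np.random.default_rng(seed)
    features = [x]                                 # x: (channels, voxels)
    for _ in range(n_layers):
        inp = np.concatenate(features, axis=0)     # concat all prior maps
        W = rng.normal(scale=0.1, size=(growth, inp.shape[0]))
        features.append(np.maximum(W @ inp, 0))    # 1x1 "conv" + ReLU
    return np.concatenate(features, axis=0)

x = np.ones((4, 10))                               # 4 channels, 10 flat voxels
out = dense_block(x, n_layers=3, growth=2)
print(out.shape)  # (10, 10): 4 input + 3 layers x 2 growth channels
```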

The performance of artificial intelligence in image-based prediction of hematoma enlargement: a systematic review and meta-analysis.

Fan W, Wu Z, Zhao W, Jia L, Li S, Wei W, Chen X

PubMed · Dec 1 2025
Accurately predicting hematoma enlargement (HE) is crucial for improving the prognosis of patients with cerebral haemorrhage. Artificial intelligence (AI) is a potentially reliable assistant for medical image recognition. This study systematically reviews medical imaging articles on the predictive performance of AI for HE. We retrieved relevant studies published before October 2024 from the Embase, Institute of Electrical and Electronics Engineers (IEEE), PubMed, Web of Science, and Cochrane Library databases. Eligible studies were diagnostic tests of AI models trained on CT images to predict hematoma enlargement that reported 2 × 2 contingency tables or provided sensitivity (SE) and specificity (SP) from which such tables could be calculated. Two reviewers independently screened the retrieved citations and extracted data. Methodological quality was assessed using QUADAS-AI, and the Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) guidelines were followed to ensure standardised reporting. Subgroup analyses were performed based on sample size, risk of bias, year of publication, ratio of training set to test set, and number of centres involved. Thirty-six articles were included in the qualitative synthesis, of which 23 provided sufficient information for quantitative analysis; 7 of these used deep learning (DL) and 16 used machine learning (ML). The pooled SE and SP of ML were 78% (95% CI: 69-85%) and 85% (78-90%), respectively, with an AUC of 0.89 (0.86-0.91). The pooled SE and SP of DL were 87% (95% CI: 80-92%) and 75% (67-81%), respectively, with an AUC of 0.88 (0.85-0.91).
The subgroup analysis found that when the ratio of training set to test set was 7:3, sensitivity was 0.77 (0.62-0.91), p = 0.03. In terms of specificity, the group with a sample size above 200 had higher specificity, at 0.83 (0.75-0.92), p = 0.02, and among the risk-of-bias groups, the at-risk group had higher specificity, at 0.83 (0.76-0.89), p = 0.02. Articles published before 2021 had higher specificity, 0.84 (0.77-0.90), and data from a single research centre showed the highest specificity, 0.85 (0.80-0.91), p < 0.001. Imaging-based artificial intelligence algorithms have shown good performance in predicting HE.
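The per-study sensitivities and specificities pooled above derive from 2 × 2 contingency tables. A minimal sketch of that calculation, using a hypothetical table rather than data from the review:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity and specificity from a 2 x 2 contingency table."""
    se = tp / (tp + fn)   # true positives / all diseased
    sp = tn / (tn + fp)   # true negatives / all non-diseased
    return se, sp

# Hypothetical study: 40 TP, 10 FN, 15 FP, 85 TN.
se, sp = diagnostic_metrics(tp=40, fp=15, fn=10, tn=85)
print(round(se, 2), round(sp, 2))  # 0.8 0.85
```

Meta-analytic pooling then combines such per-study pairs (typically via a bivariate random-effects model) into the summary estimates and AUC reported above.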

TFKT V2: task-focused knowledge transfer from natural images for computed tomography perceptual image quality assessment.

Rifa KR, Ahamed MA, Zhang J, Imran A

PubMed · Sep 1 2025
The accurate assessment of computed tomography (CT) image quality is crucial for ensuring diagnostic reliability while minimizing radiation dose. Radiologists' evaluations are time-consuming and labor-intensive. Existing automated approaches often require large CT datasets with predefined image quality assessment (IQA) scores, which often do not align well with clinical evaluations. We aim to develop a reference-free, automated method for CT IQA that closely reflects radiologists' evaluations, reducing the dependency on large annotated datasets. We propose Task-Focused Knowledge Transfer (TFKT), a deep learning-based IQA method leveraging knowledge transfer from task-similar natural image datasets. TFKT incorporates a hybrid convolutional neural network-transformer model, enabling accurate quality predictions by learning from natural image distortions with human-annotated mean opinion scores. The model is pre-trained on natural image datasets and fine-tuned on low-dose computed tomography perceptual image quality assessment data to ensure task-specific adaptability. Extensive evaluations demonstrate that the proposed TFKT method effectively predicts IQA scores aligned with radiologists' assessments on in-domain datasets and generalizes well to out-of-domain clinical pediatric CT exams. The model achieves robust performance without requiring high-dose reference images, and it can assess the quality of ∼30 CT image slices per second. The proposed TFKT approach provides a scalable, accurate, and reference-free solution for CT IQA. The model bridges the gap between traditional and deep learning-based IQA, offering clinically relevant and computationally efficient assessments applicable to real-world clinical settings.
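The pretrain-then-fine-tune recipe described here, reusing a feature extractor learned elsewhere and fitting only a small quality-score head on scarce CT data, can be mimicked in a toy numpy sketch. The random "backbone" and synthetic scores below are placeholders, not TFKT's CNN-transformer:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pre-trained" feature extractor; weights assumed learned on natural images.
W_backbone = rng.normal(size=(16, 64))

def features(x):
    """Frozen ReLU features from the toy backbone."""
    return np.maximum(W_backbone @ x, 0)

# Fine-tune only a linear quality-score head on a small "CT" set.
X_ct = rng.normal(size=(64, 40))          # 40 toy CT patches
y_ct = rng.uniform(1, 5, size=40)         # toy MOS-style quality scores
F = features(X_ct)                        # (16, 40), backbone stays frozen
w_head, *_ = np.linalg.lstsq(F.T, y_ct, rcond=None)
preds = w_head @ features(X_ct)
print(preds.shape)  # (40,)
```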

Aortic atherosclerosis evaluation using deep learning based on non-contrast CT: A retrospective multi-center study.

Yang M, Lyu J, Xiong Y, Mei A, Hu J, Zhang Y, Wang X, Bian X, Huang J, Li R, Xing X, Su S, Gao J, Lou X

PubMed · Aug 15 2025
Non-contrast CT (NCCT) is widely used in clinical practice and holds potential for large-scale atherosclerosis screening, yet its application in detecting and grading aortic atherosclerosis remains limited. To address this, we propose Aortic-AAE, an automated segmentation system based on a cascaded attention mechanism within the nnU-Net framework. The cascaded attention module enhances feature learning across complex anatomical structures, outperforming existing attention modules. Integrated preprocessing and post-processing ensure anatomical consistency and robustness across multi-center data. Trained on 435 labeled NCCT scans from three centers and validated on 388 independent cases, Aortic-AAE achieved 81.12% accuracy in aortic stenosis classification and 92.37% in Agatston scoring of calcified plaques, surpassing five state-of-the-art models. This study demonstrates the feasibility of using deep learning for accurate detection and grading of aortic atherosclerosis from NCCT, supporting improved diagnostic decisions and enhanced clinical workflows.
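Agatston scoring, on which the paper reports 92.37% accuracy, follows a standard recipe: threshold at 130 HU, label calcified lesions, and sum each lesion's area times a weight set by its peak density. A minimal per-slice sketch of that standard definition, with toy pixel values; this is not the Aortic-AAE implementation:

```python
import numpy as np
from scipy import ndimage

def agatston_weight(max_hu):
    """Standard Agatston density weight from a lesion's peak attenuation."""
    if max_hu >= 400: return 4
    if max_hu >= 300: return 3
    if max_hu >= 200: return 2
    if max_hu >= 130: return 1
    return 0

def agatston_slice_score(hu_slice, pixel_area_mm2):
    """Score one axial slice: threshold at 130 HU, label lesions,
    and sum area x density weight over the lesions."""
    labels, n = ndimage.label(hu_slice >= 130)
    score = 0.0
    for i in range(1, n + 1):
        lesion = labels == i
        area = lesion.sum() * pixel_area_mm2
        score += area * agatston_weight(hu_slice[lesion].max())
    return score

hu = np.zeros((4, 4))
hu[0, 0:3] = 450.0   # 3-pixel lesion, peak >= 400 -> weight 4
hu[3, 3] = 150.0     # 1-pixel lesion, weight 1
print(agatston_slice_score(hu, pixel_area_mm2=1.0))  # 13.0
```

The whole-scan score is the sum over slices.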

SimAQ: Mitigating Experimental Artifacts in Soft X-Ray Tomography using Simulated Acquisitions

Jacob Egebjerg, Daniel Wüstner

arXiv preprint · Aug 14 2025
Soft X-ray tomography (SXT) provides detailed structural insight into whole cells but is hindered by experimental artifacts such as the missing wedge and by limited availability of annotated datasets. We present SimAQ, a simulation pipeline that generates realistic cellular phantoms and applies synthetic artifacts to produce paired noisy volumes, sinograms, and reconstructions. We validate our approach by training a neural network primarily on synthetic data and demonstrate effective few-shot and zero-shot transfer learning on real SXT tomograms. Our model delivers accurate segmentations, enabling quantitative analysis of noisy tomograms without relying on large labeled datasets or complex reconstruction methods.
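The missing-wedge artifact that such a pipeline simulates amounts to discarding projections outside the achievable tilt range. A toy sketch on a synthetic sinogram; the angles, shapes, and ±60° limit are illustrative assumptions:

```python
import numpy as np

def apply_missing_wedge(sinogram, angles_deg, max_tilt=60.0):
    """Toy missing-wedge artifact: zero out projections whose tilt
    angle lies outside the achievable +/-max_tilt range."""
    out = sinogram.copy()
    out[np.abs(angles_deg) > max_tilt] = 0.0
    return out

angles = np.arange(-90, 90)        # one projection per degree
sino = np.ones((180, 32))          # 180 projections x 32 detector bins
wedged = apply_missing_wedge(sino, angles)
print(int(wedged.sum(axis=1).astype(bool).sum()))  # 121 angles survive
```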

Development and validation of deep learning model for detection of obstructive coronary artery disease in patients with acute chest pain: a multi-center study.

Kim JY, Park J, Lee KH, Lee JW, Park J, Kim PK, Han K, Baek SE, Im DJ, Choi BW, Hur J

PubMed · Aug 14 2025
This study aimed to develop and validate a deep learning (DL) model to detect obstructive coronary artery disease (CAD, ≥ 50% stenosis) in coronary CT angiography (CCTA) among patients presenting to the emergency department (ED) with acute chest pain. The training dataset included 378 patients with acute chest pain who underwent CCTA (10,060 curved multiplanar reconstruction [MPR] images) at a single-center ED between January 2015 and December 2022. The external validation dataset included 298 patients from 3 ED centers between January 2021 and December 2022. A DL model based on You Only Look Once (YOLO) v4, which requires manual preprocessing for curved MPR extraction, was developed using 15 manually preprocessed MPR images per major coronary artery. Model performance was evaluated per artery and per patient. The training dataset included 378 patients (mean age 61.3 ± 12.2 years, 58.2% men); the external dataset included 298 patients (mean age 58.3 ± 13.8 years, 54.6% men). Obstructive CAD prevalence in the external dataset was 27.5% (82/298). The DL model achieved per-artery sensitivity, specificity, positive predictive value, negative predictive value (NPV), and area under the curve (AUC) of 92.7%, 89.9%, 62.6%, 98.5%, and 0.919, respectively, and per-patient values of 93.3%, 80.7%, 67.7%, 96.6%, and 0.871, respectively. The DL model demonstrated high sensitivity and NPV for identifying obstructive CAD in patients with acute chest pain undergoing CCTA, indicating its potential utility in aiding ED physicians in CAD detection.
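The per-patient figures follow from aggregating per-artery predictions (a patient counts as positive if any major artery is flagged) and then scoring against ground truth. A hedged sketch with a made-up four-patient cohort; the aggregation rule is a common convention, not confirmed as this paper's exact method:

```python
def patient_positive(artery_preds):
    """A patient is flagged obstructive if any major artery is positive."""
    return any(artery_preds)

def npv(preds, labels):
    """Negative predictive value: TN / (TN + FN)."""
    tn = sum(1 for p, y in zip(preds, labels) if not p and not y)
    fn = sum(1 for p, y in zip(preds, labels) if not p and y)
    return tn / (tn + fn)

# Hypothetical cohort: (per-artery predictions [LAD, LCx, RCA], true label).
patients = [([False, False, False], False),
            ([True, False, False], True),
            ([False, False, False], True),   # missed case
            ([False, True, False], False)]   # false positive
preds = [patient_positive(a) for a, _ in patients]
labels = [y for _, y in patients]
print(round(npv(preds, labels), 2))  # 0.5
```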

Lung-DDPM: Semantic Layout-guided Diffusion Models for Thoracic CT Image Synthesis.

Jiang Y, Lemarechal Y, Bafaro J, Abi-Rjeile J, Joubert P, Despres P, Manem V

PubMed · Aug 14 2025
With the rapid development of artificial intelligence (AI), AI-assisted medical imaging analysis demonstrates remarkable performance in early lung cancer screening. However, the costly annotation process and privacy concerns limit the construction of large-scale medical datasets, hampering the further application of AI in healthcare. To address the data scarcity in lung cancer screening, we propose Lung-DDPM, a thoracic CT image synthesis approach that effectively generates high-fidelity 3D synthetic CT images, which prove helpful in downstream lung nodule segmentation tasks. Our method is based on semantic layout-guided denoising diffusion probabilistic models (DDPM), enabling anatomically reasonable, seamless, and consistent sample generation even from incomplete semantic layouts. Our results suggest that the proposed method outperforms other state-of-the-art (SOTA) generative models in image quality evaluation and downstream lung nodule segmentation tasks. Specifically, Lung-DDPM achieved superior performance on our large validation cohort, with a Fréchet inception distance (FID) of 0.0047, maximum mean discrepancy (MMD) of 0.0070, and mean squared error (MSE) of 0.0024. These results were 7.4×, 3.1×, and 29.5× better than the second-best competitors, respectively. Furthermore, the lung nodule segmentation model, trained on a dataset combining real and Lung-DDPM-generated synthetic samples, attained a Dice Coefficient (Dice) of 0.3914 and sensitivity of 0.4393. This represents 8.8% and 18.6% improvements in Dice and sensitivity compared to the model trained solely on real samples. The experimental results highlight Lung-DDPM's potential for a broader range of medical imaging applications, such as general tumor segmentation, cancer survival estimation, and risk prediction. The code and pretrained models are available at https://github.com/Manem-Lab/Lung-DDPM/.
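The DDPM at the core of such a method rests on a fixed forward-noising process, q(x_t | x_0) = N(√ᾱ_t · x_0, (1 − ᾱ_t)·I). A minimal numpy sketch of that forward step with a standard linear β schedule; the toy volume and schedule are assumptions, not Lung-DDPM's layout-conditioned model:

```python
import numpy as np

def q_sample(x0, t, alphas_bar, rng):
    """Forward diffusion: x_t = sqrt(a_bar_t) * x0 + sqrt(1 - a_bar_t) * eps."""
    eps = rng.normal(size=x0.shape)
    xt = np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * eps
    return xt, eps

T = 1000
betas = np.linspace(1e-4, 0.02, T)       # linear noise schedule
alphas_bar = np.cumprod(1.0 - betas)     # cumulative signal retention

rng = np.random.default_rng(0)
x0 = rng.normal(size=(8, 8, 8))          # toy 3D "CT volume"
xt, eps = q_sample(x0, t=T - 1, alphas_bar=alphas_bar, rng=rng)
print(xt.shape)  # (8, 8, 8); at t = T-1 the sample is nearly pure noise
```

Training then regresses a network to predict eps from xt and t; sampling inverts the chain.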

Deep Learning-Based Instance-Level Segmentation of Kidney and Liver Cysts in CT Images of Patients Affected by Polycystic Kidney Disease.

Gregory AV, Khalifa M, Im J, Ramanathan S, Elbarougy DE, Cruz C, Yang H, Denic A, Rule AD, Chebib FT, Dahl NK, Hogan MC, Harris PC, Torres VE, Erickson BJ, Potretzke TA, Kline TL

PubMed · Aug 14 2025
Total kidney and liver volumes are key image-based biomarkers to predict the severity of kidney and liver phenotype in autosomal dominant polycystic kidney disease (ADPKD). However, MRI-based advanced biomarkers like total cyst number (TCN) and cyst parenchyma surface area (CPSA) have been shown to more accurately assess cyst burden and improve the prediction of disease progression. The main aim of this study is to extend the calculation of advanced biomarkers to other imaging modalities; thus, we propose a fully automated model to segment kidney and liver cysts in CT images. Abdominal CTs of ADPKD patients were gathered retrospectively between 2001-2018. A 3D deep-learning method using the nnU-Net architecture was trained to learn cyst edges-cores and the non-cystic kidney/liver parenchyma. Separate segmentation models were trained for kidney cysts in contrast-enhanced CTs and liver cysts in non-contrast CTs using an active learning approach. Two experienced research fellows manually generated the reference standard segmentations, which were reviewed by an expert radiologist for accuracy. Two hundred CT scans from 148 patients (mean age, 51.2 ± 14.1 years; 48% male) were utilized for model training (80%) and testing (20%). In the test set, both models showed good agreement with the reference standard segmentations, similar to the agreement between two independent human readers (model vs. reader: TCN kidney/liver r=0.96/0.97 and CPSA kidney r=0.98; inter-reader: TCN kidney/liver r=0.96/0.98 and CPSA kidney r=0.99). Our study demonstrates that automated models can segment kidney and liver cysts accurately in CT scans of patients with ADPKD.
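Instance-level segmentation supports the advanced biomarkers directly: once cysts are separated into connected components, total cyst number is simply the component count, and per-cyst volumes follow from voxel counts. A toy 3D sketch with scipy on a synthetic mask, not patient data:

```python
import numpy as np
from scipy import ndimage

# Toy binary cyst mask in a 3D volume: two disjoint 2x2x2 "cysts".
mask = np.zeros((8, 8, 8), dtype=bool)
mask[1:3, 1:3, 1:3] = True
mask[5:7, 5:7, 5:7] = True

# Connected-component labelling yields an instance map and the
# total cyst number (TCN); voxel counts give per-cyst volumes.
labels, tcn = ndimage.label(mask)
volumes = ndimage.sum(mask, labels, index=range(1, tcn + 1))
print(tcn, volumes.tolist())  # 2 [8.0, 8.0]
```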

FIND-Net -- Fourier-Integrated Network with Dictionary Kernels for Metal Artifact Reduction

Farid Tasharofi, Fuxin Fan, Melika Qahqaie, Mareike Thies, Andreas Maier

arXiv preprint · Aug 14 2025
Metal artifacts, caused by high-density metallic implants in computed tomography (CT) imaging, severely degrade image quality, complicating diagnosis and treatment planning. While existing deep learning algorithms have achieved notable success in Metal Artifact Reduction (MAR), they often struggle to suppress artifacts while preserving structural details. To address this challenge, we propose FIND-Net (Fourier-Integrated Network with Dictionary Kernels), a novel MAR framework that integrates frequency and spatial domain processing to achieve superior artifact suppression and structural preservation. FIND-Net incorporates Fast Fourier Convolution (FFC) layers and trainable Gaussian filtering, treating MAR as a hybrid task operating in both spatial and frequency domains. This approach enhances global contextual understanding and frequency selectivity, effectively reducing artifacts while maintaining anatomical structures. Experiments on synthetic datasets show that FIND-Net achieves statistically significant improvements over state-of-the-art MAR methods, with a 3.07% MAE reduction, 0.18% SSIM increase, and 0.90% PSNR improvement, confirming robustness across varying artifact complexities. Furthermore, evaluations on real-world clinical CT scans confirm FIND-Net's ability to minimize modifications to clean anatomical regions while effectively suppressing metal-induced distortions. These findings highlight FIND-Net's potential for advancing MAR performance, offering superior structural preservation and improved clinical applicability. Code is available at https://github.com/Farid-Tasharofi/FIND-Net
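The frequency-domain processing at FIND-Net's core, a Fast Fourier Convolution-style spectral branch combined with Gaussian filtering, can be sketched as an FFT, a Gaussian mask in frequency space, and an inverse FFT. A toy single-channel version, not the trained FFC layer:

```python
import numpy as np

def spectral_filter(image, sigma=0.1):
    """Toy spectral branch: FFT the image, apply a Gaussian low-pass
    in the frequency domain, and invert back to the spatial domain."""
    f = np.fft.fft2(image)
    fy = np.fft.fftfreq(image.shape[0])[:, None]   # vertical frequencies
    fx = np.fft.fftfreq(image.shape[1])[None, :]   # horizontal frequencies
    gauss = np.exp(-(fx ** 2 + fy ** 2) / (2 * sigma ** 2))
    return np.real(np.fft.ifft2(f * gauss))

img = np.ones((16, 16))       # a constant image has only a DC component
out = spectral_filter(img)
print(np.allclose(out, img))  # True: the DC term passes unattenuated
```

In FIND-Net the mask parameters are trainable and the branch runs on feature maps rather than raw images.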