
Machine learning prediction of effective radiation doses in various computed tomography applications: a virtual human phantom study.

Tanyildizi-Kokkulunk H

PubMed · Aug 26 2025
This work aimed to employ machine learning (ML) algorithms to accurately forecast radiation doses to phantoms under the most common CT protocols. Cloud-based software was used to calculate effective doses for the different CT protocols. To simulate a range of adult patients of different weights, eight whole-body mesh-based computational phantom sets were used. Head, neck, and chest-abdomen-pelvis CT scan characteristics were combined into a dataset of 33 rows per phantom and 792 rows in total. At the ML stage, linear regression (LR), random forest (RF), and support vector regression (SVR) were used, with performance evaluated by mean absolute error, mean squared error, and accuracy. Female phantoms received 7.8% higher doses than male phantoms. Furthermore, the normal-weight phantom received on average 11% more dose than the overweight phantom, the overweight more than obese I, and obese I more than obese II. Among the ML algorithms, LR predicted CT doses with zero error and 100% accuracy and was thus the best of the approaches evaluated for ML estimation of CT-induced doses.
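As a rough illustration of the regression benchmark described above, the sketch below compares the three model families named in the abstract using scikit-learn, scored by mean absolute and mean squared error. The features and dose values are synthetic placeholders, not the study's phantom dataset.

```python
# Minimal sketch (synthetic stand-in data, not the study's phantom dataset).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error, mean_squared_error

rng = np.random.default_rng(0)
# Placeholder scan/phantom features (e.g. tube voltage, tube current,
# scan length, phantom weight) and a linear stand-in for effective dose.
X = rng.uniform(size=(792, 4))
y = X @ np.array([2.0, 1.5, 0.8, -0.5]) + 0.1

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
for name, model in [("LR", LinearRegression()),
                    ("RF", RandomForestRegressor(random_state=0)),
                    ("SVR", SVR())]:
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(name, mean_absolute_error(y_te, pred), mean_squared_error(y_te, pred))
```

Note that a perfectly linear synthetic target will trivially favor LR here; the point of the sketch is the evaluation loop, not the ranking.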

Improved pulmonary embolism detection in CT pulmonary angiogram scans with hybrid vision transformers and deep learning techniques.

Abdelhamid A, El-Ghamry A, Abdelhay EH, Abo-Zahhad MM, Moustafa HE

PubMed · Aug 26 2025
Pulmonary embolism (PE) represents a severe, life-threatening cardiovascular condition and is notably the third leading cause of cardiovascular mortality, after myocardial infarction and stroke. This pathology occurs when blood clots obstruct the pulmonary arteries, impeding blood flow and oxygen exchange in the lungs. Prompt and accurate detection of PE is critical for appropriate clinical decision-making and patient survival. The complexity involved in interpreting medical images can often result in misdiagnosis. However, recent advances in Deep Learning (DL) have substantially improved the capabilities of Computer-Aided Diagnosis (CAD) systems. Despite these advancements, existing single-model DL methods are limited when handling complex, diverse, and imbalanced medical imaging datasets. Addressing this gap, our research proposes an ensemble framework for classifying PE, capitalizing on the unique capabilities of ResNet50, DenseNet121, and Swin Transformer models. This ensemble method harnesses the complementary strengths of convolutional neural networks (CNNs) and vision transformers (ViTs), leading to improved prediction accuracy and model robustness. The proposed methodology includes a sophisticated preprocessing pipeline leveraging autoencoder (AE)-based dimensionality reduction, data augmentation to avoid overfitting, discrete wavelet transform (DWT) for multiscale feature extraction, and Sobel filtering for effective edge detection and noise reduction. The proposed model was rigorously evaluated on the public Radiological Society of North America (RSNA-STR) PE dataset, demonstrating remarkable performance: 97.80% accuracy and an area under the receiver operating characteristic curve (AUROC) of 0.99. Comparative analysis demonstrated superior performance over state-of-the-art pre-trained models and recent ViT-based approaches, highlighting our method's effectiveness in improving early PE detection and providing robust support for clinical decision-making.
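A minimal sketch of the soft-voting idea behind such a CNN/ViT ensemble is shown below, assuming timm backbones with a hypothetical two-class head (PE vs. no PE); the weights are generic ImageNet ones, not the paper's fine-tuned models, and the AE/DWT/Sobel preprocessing pipeline is omitted.

```python
# Hedged sketch: averaging class probabilities from the three backbone
# families named in the abstract (not the authors' trained models).
import torch
import timm

models = [
    timm.create_model("resnet50", pretrained=True, num_classes=2),
    timm.create_model("densenet121", pretrained=True, num_classes=2),
    timm.create_model("swin_base_patch4_window7_224", pretrained=True, num_classes=2),
]
for m in models:
    m.eval()

x = torch.randn(1, 3, 224, 224)  # placeholder for one preprocessed CTPA slice
with torch.no_grad():
    probs = torch.stack([m(x).softmax(dim=-1) for m in models]).mean(dim=0)
print(probs)  # ensemble-averaged class probabilities
```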

Classifiers Combined with DenseNet Models for Lung Cancer Computed Tomography Image Classification: A Comparative Analysis.

Mahmoud MA, Wu S, Su R, Wen Y, Liu S, Guan Y

PubMed · Aug 26 2025
Lung cancer remains a leading cause of cancer-related mortality worldwide. While deep learning approaches show promise in medical imaging, comprehensive comparisons of classifier combinations with DenseNet architectures for lung cancer classification are limited. This study investigates the performance of three classifiers, Support Vector Machine (SVM), Artificial Neural Network (ANN), and Multi-Layer Perceptron (MLP), combined with DenseNet architectures for lung cancer classification on chest CT scan images. A comparative analysis was conducted on 1,000 chest CT scan images comprising adenocarcinoma, large cell carcinoma, squamous cell carcinoma, and normal tissue samples. Three DenseNet variants (DenseNet-121, DenseNet-169, DenseNet-201) were combined with the three classifiers. Performance was evaluated using accuracy, Area Under the Curve (AUC), precision, recall, specificity, and F1-score with an 80-20 train-test split. The optimal model achieved 92% training accuracy and 83% test accuracy. Performance across models ranged from 81% to 92% for training accuracy and 73% to 83% for test accuracy. The most balanced combination demonstrated robust results (training: 85% accuracy, 0.99 AUC; test: 79% accuracy, 0.95 AUC) with minimal overfitting. Deep learning approaches effectively categorize chest CT scans for lung cancer detection, and the MLP-DenseNet-169 combination's 83% test accuracy represents a promising benchmark. Limitations include the retrospective design and a limited sample size from a single source. This evaluation demonstrates the effectiveness of combining DenseNet architectures with different classifiers for lung cancer CT classification: MLP-DenseNet-169 achieved optimal performance, while SVM-DenseNet-169 showed superior stability, providing valuable benchmarks for automated lung cancer detection systems.
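The hybrid design reads as a frozen CNN feature extractor feeding a classical classifier head. A minimal sketch of one such pairing (DenseNet-169 features into an sklearn MLP, mirroring the best-performing combination) might look like the following; the images and labels are random placeholders, not the study's data.

```python
# Sketch under assumptions: DenseNet-169 as a frozen feature extractor
# feeding an sklearn MLP head. Inputs are random stand-ins for CT slices.
import torch
import torchvision.models as tvm
from sklearn.neural_network import MLPClassifier

backbone = tvm.densenet169(weights=tvm.DenseNet169_Weights.DEFAULT)
backbone.classifier = torch.nn.Identity()  # expose the 1664-d pooled features
backbone.eval()

def extract(batch):
    with torch.no_grad():
        return backbone(batch).numpy()

imgs = torch.randn(8, 3, 224, 224)          # placeholder preprocessed slices
labels = [0, 1, 2, 3, 0, 1, 2, 3]           # 3 carcinoma types + normal tissue
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300).fit(extract(imgs), labels)
print(clf.predict(extract(imgs[:2])))
```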

Random forest-based out-of-distribution detection for robust lung cancer segmentation

Aneesh Rangnekar, Harini Veeraraghavan

arXiv preprint · Aug 26 2025
Accurate detection and segmentation of cancerous lesions from computed tomography (CT) scans is essential for automated treatment planning and cancer treatment response assessment. Transformer-based models with self-supervised pretraining can produce reliably accurate segmentation from in-distribution (ID) data but degrade when applied to out-of-distribution (OOD) datasets. We address this challenge with RF-Deep, a random forest classifier that uses deep features from the pretrained transformer encoder of the segmentation model to detect OOD scans and enhance segmentation reliability. The segmentation model comprises a Swin Transformer encoder, pretrained with masked image modeling (SimMIM) on 10,432 unlabeled 3D CT scans covering cancerous and non-cancerous conditions, and a convolutional decoder trained to segment lung cancers in 317 3D scans. Independent testing was performed on 603 3D CT scans from public datasets, comprising one ID dataset and four OOD datasets: chest CTs with pulmonary embolism (PE) and COVID-19, and abdominal CTs with kidney cancers and healthy volunteers. RF-Deep detected OOD cases with an FPR95 of 18.26%, 27.66%, and less than 0.1% on PE, COVID-19, and abdominal CTs, respectively, consistently outperforming established OOD approaches. The RF-Deep classifier provides a simple and effective approach to enhance the reliability of cancer segmentation in ID and OOD scenarios.
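For readers unfamiliar with the FPR95 metric, the sketch below shows one common way to compute it, with a random forest scoring Gaussian stand-ins for the encoder embeddings; this illustrates the metric, not the RF-Deep implementation.

```python
# Illustrative sketch: FPR95 = false-positive rate on OOD cases at the
# score threshold that accepts 95% of in-distribution (ID) cases.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def fpr_at_95_tpr(id_scores, ood_scores):
    thresh = np.percentile(id_scores, 5)      # accept the top 95% of ID scores
    return float((ood_scores >= thresh).mean())

rng = np.random.default_rng(0)
feats_id = rng.normal(0.0, 1.0, (100, 64))    # stand-in encoder features
feats_ood = rng.normal(1.0, 1.0, (100, 64))
rf = RandomForestClassifier(random_state=0).fit(
    np.vstack([feats_id[:50], feats_ood[:50]]),
    np.r_[np.ones(50), np.zeros(50)])         # 1 = ID, 0 = OOD
id_s = rf.predict_proba(feats_id[50:])[:, 1]
ood_s = rf.predict_proba(feats_ood[50:])[:, 1]
print(f"FPR95 = {fpr_at_95_tpr(id_s, ood_s):.3f}")
```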

Development of a deep learning method to identify acute ischaemic stroke lesions on brain CT.

Fontanella A, Li W, Mair G, Antoniou A, Platt E, Armitage P, Trucco E, Wardlaw JM, Storkey A

PubMed · Aug 26 2025
CT is commonly used to image patients with ischaemic stroke, but radiologist interpretation may be delayed. Machine learning techniques can provide rapid automated CT assessment but are usually developed from annotated images, which necessarily limits the size and representation of development datasets. We aimed to develop a deep learning (DL) method using CT brain scans that were labelled, but not annotated, for the presence of ischaemic lesions. We designed a convolutional neural network-based DL algorithm to detect ischaemic lesions on CT. Our algorithm was trained using routinely acquired CT brain scans collected for a large multicentre international trial. These scans had previously been labelled by experts for acute and chronic appearances. We explored the impact of ischaemic lesion features, background brain appearances, and timing of CT (baseline or 24-48 hour follow-up) on DL performance. Of 5772 CT scans from 2347 patients (median age 82), 54% had visible ischaemic lesions according to experts. Our DL method achieved 72% accuracy in detecting ischaemic lesions. Detection was better for larger (80% accuracy) or multiple lesions (87% accuracy for two, 100% for three or more) and on follow-up scans (76% accuracy vs 67% at baseline). Chronic brain conditions reduced accuracy, particularly non-stroke lesions and old stroke lesions (32% and 31% error rates, respectively). DL methods can be designed for ischaemic lesion detection on CT using the vast quantities of routinely collected brain scans without the need for lesion annotation. Ultimately, this should lead to more robust and widely applicable methods.
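The key point is that training needs only scan-level labels (lesion present or absent), not voxel annotations. A toy sketch of that weak-supervision setup follows; the tiny 3D CNN is a stand-in for illustration, not the paper's architecture.

```python
# Toy sketch of training from scan-level labels only (no lesion annotation).
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool3d(1), nn.Flatten(),
    nn.Linear(8, 1))                          # one logit: lesion present?
loss_fn = nn.BCEWithLogitsLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

scans = torch.randn(2, 1, 32, 64, 64)        # placeholder CT volumes
labels = torch.tensor([[1.0], [0.0]])        # expert scan-level labels
opt.zero_grad()
loss = loss_fn(model(scans), labels)
loss.backward()
opt.step()
print(float(loss))
```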

Reducing radiomics errors in nasopharyngeal cancer via deep learning-based synthetic CT generation from CBCT.

Xiao Y, Lin W, Xie F, Liu L, Zheng G, Xiao C

PubMed · Aug 25 2025
This study investigates the impact of cone beam computed tomography (CBCT) image quality on radiomic analysis and evaluates the potential of deep learning-based enhancement to improve radiomic feature accuracy in nasopharyngeal cancer (NPC). The CBAMRegGAN model was trained on 114 paired CT and CBCT datasets from 114 nasopharyngeal cancer patients to enhance CBCT images, with CT images as ground truth. The dataset was split into 82 patients for training, 12 for validation, and 20 for testing. Radiomic features in six categories, first-order, gray-level co-occurrence matrix (GLCM), gray-level run-length matrix (GLRLM), gray-level size-zone matrix (GLSZM), neighbouring gray tone difference matrix (NGTDM), and gray-level dependence matrix (GLDM), were extracted from the gross tumor volume (GTV) of the original CBCT, enhanced CBCT, and CT. Comparing feature errors between original and enhanced CBCT showed that deep learning-based enhancement improves radiomic feature accuracy. The CBAMRegGAN model achieved improved image quality, with a peak signal-to-noise ratio (PSNR) of 29.52 ± 2.28 dB, normalized mean absolute error (NMAE) of 0.0129 ± 0.004, and structural similarity index (SSIM) of 0.910 ± 0.025 for enhanced CBCT images. This led to reduced errors in most radiomic features, with average reductions across the 20 test patients of 19.0%, 24.0%, 3.0%, 19.0%, 15.0%, and 5.0% for first-order, GLCM, GLRLM, GLSZM, NGTDM, and GLDM features, respectively. This study demonstrates that CBCT image quality significantly influences radiomic analysis and that deep learning-based enhancement techniques can effectively improve both image quality and the accuracy of radiomic features in NPC.
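The three image-quality metrics are standard and easy to reproduce; a minimal sketch with scikit-image follows, on placeholder arrays (NMAE is normalized by the intensity range here, one common convention, which may differ from the paper's).

```python
# Minimal sketch of the reported image-quality metrics on placeholder slices.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
ct = rng.random((256, 256)).astype(np.float32)           # "ground-truth" CT
enhanced = ct + 0.05 * rng.random((256, 256)).astype(np.float32)

data_range = float(ct.max() - ct.min())
psnr = peak_signal_noise_ratio(ct, enhanced, data_range=data_range)
ssim = structural_similarity(ct, enhanced, data_range=data_range)
nmae = float(np.abs(ct - enhanced).mean() / data_range)  # one common normalization
print(f"PSNR={psnr:.2f} dB  SSIM={ssim:.3f}  NMAE={nmae:.4f}")
```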

Bi-directional semi-3D network for accurate epicardial fat segmentation and quantification using reflection equivariant quantum neural networks.

S J, Perumalsamy M

PubMed · Aug 25 2025
Epicardial fat segmentation refers to detecting and measuring the fat layer surrounding the heart from medical images. Accurate segmentation is essential for assessing heart health and associated risk factors, and it plays a critical role in evaluating cardiovascular disease, requiring advanced techniques to enhance precision and effectiveness. However, resources for fat measurement are currently scarce; the Visual Lab's cardiac fat database addresses this limitation by providing a comprehensive set of high-resolution images crucial for reliable fat analysis. This study proposes a novel multi-stage framework for epicardial fat segmentation. In the preprocessing phase, window-aware guided bilateral filtering (WGBR) is applied to reduce noise while preserving structural features. For region-of-interest (ROI) selection, the White Shark Optimizer (WSO) is employed to improve exploration and exploitation accuracy. The segmentation task is handled by a bidirectional guided semi-3D network (BGSNet), which enhances robustness by extracting features in both forward and backward directions. Following segmentation, the epicardial fat volume is quantified using reflection-equivariant quantum neural networks (REQNN), which are well suited to modelling complex visual patterns. The Parrot optimizer is further used to fine-tune hyperparameters, ensuring optimal performance. The experimental results confirm the effectiveness of the proposed BGSNet-with-REQNN approach, achieving a Dice score of 99.50%, an accuracy of 99.50%, and an execution time of 1.022 s per slice. Furthermore, the Spearman correlation coefficient for fat quantification yielded an R² value of 0.9867, indicating strong agreement with the reference measurements. This integrated approach offers a reliable solution for epicardial fat segmentation and quantification, thereby supporting improved cardiovascular risk assessment and monitoring.
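As a reference for the headline numbers, the Dice overlap used to score the segmentation, and a simple mask-to-volume conversion for the quantification step, can be sketched as follows; the masks and voxel spacing are hypothetical placeholders.

```python
# Sketch of the Dice metric and a naive fat-volume estimate from a mask.
import numpy as np

def dice(pred, gt):
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + 1e-8)

mask_pred = np.zeros((64, 64), bool); mask_pred[10:40, 10:40] = True
mask_gt = np.zeros((64, 64), bool); mask_gt[12:42, 10:40] = True
print(f"Dice = {dice(mask_pred, mask_gt):.3f}")

voxel_mm3 = 0.8 * 0.8 * 3.0                   # hypothetical voxel spacing (mm)
print(f"fat volume per slice ≈ {mask_pred.sum() * voxel_mm3 / 1000.0:.2f} mL")
```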

Benchmarking Class Activation Map Methods for Explainable Brain Hemorrhage Classification on Hemorica Dataset

Z. Rafati, M. Hoseyni, J. Khoramdel, A. Nikoofard

arXiv preprint · Aug 25 2025
Explainable Artificial Intelligence (XAI) has become an essential component of medical imaging research, aiming to increase transparency and clinical trust in deep learning models. This study investigates brain hemorrhage diagnosis with a focus on explainability through Class Activation Mapping (CAM) techniques. A pipeline was developed to extract pixel-level segmentation and detection annotations from classification models using nine state-of-the-art CAM algorithms, applied across multiple network stages and quantitatively evaluated on the Hemorica dataset, which uniquely provides both slice-level labels and high-quality segmentation masks. Metrics including Dice, IoU, and pixel-wise overlap were employed to benchmark the CAM variants. Results show that the strongest localization performance occurred at stage 5 of EfficientNetV2S, with HiResCAM yielding the highest bounding-box alignment and AblationCAM achieving the best pixel-level Dice (0.57) and IoU (0.40), representing strong accuracy given that the models were trained solely for classification without segmentation supervision. To the best of current knowledge, this is among the first works to quantitatively compare CAM methods for brain hemorrhage detection, establishing a reproducible benchmark and underscoring the potential of XAI-driven pipelines for clinically meaningful AI-assisted diagnosis.
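The benchmarking step reduces to binarizing each CAM heatmap and scoring it against the ground-truth mask; a minimal sketch follows, where the top-20% activation threshold is an assumption rather than the paper's choice.

```python
# Illustrative sketch: threshold a CAM heatmap, then score Dice and IoU
# against a ground-truth hemorrhage mask (both placeholders here).
import numpy as np

def binarize_cam(cam, q=0.8):
    return cam >= np.quantile(cam, q)          # keep top 20% of activations

def dice_iou(pred, gt):
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return 2 * inter / (pred.sum() + gt.sum() + 1e-8), inter / (union + 1e-8)

cam = np.random.rand(224, 224)                 # placeholder CAM heatmap
gt = np.zeros((224, 224), bool); gt[80:140, 90:150] = True
d, i = dice_iou(binarize_cam(cam), gt)
print(f"Dice={d:.3f}  IoU={i:.3f}")
```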

UniSino: Physics-Driven Foundational Model for Universal CT Sinogram Standardization

Xingyu Ai, Shaoyu Wang, Zhiyuan Jia, Ao Xu, Hongming Shan, Jianhua Ma, Qiegen Liu

arXiv preprint · Aug 25 2025
During raw-data acquisition in CT imaging, diverse factors can degrade the collected sinograms; undersampling and noise lead to severe artifacts and noise in reconstructed images, compromising diagnostic accuracy. Conventional correction methods rely on manually designed algorithms or fixed empirical parameters, but these approaches often lack generalizability across heterogeneous artifact types. To address these limitations, we propose UniSino, a foundation model for universal CT sinogram standardization. Unlike existing foundation models that operate in the image domain, UniSino standardizes data directly in the projection domain, which enables stronger generalization across diverse undersampling scenarios. Its training framework incorporates the physical characteristics of sinograms, enhancing generalization and enabling robust performance across multiple subtasks spanning four benchmark datasets. Experimental results demonstrate that UniSino achieves superior reconstruction quality in both single and mixed undersampling cases, demonstrating exceptional robustness and generalization in sinogram enhancement for CT imaging. The code is available at: https://github.com/yqx7150/UniSino.
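The projection-domain setting is easy to reproduce in miniature: form a sinogram with the Radon transform, undersample its angles, and reconstruct with filtered back-projection to see the artifacts such a model is meant to remove. The sketch below illustrates only the degradation, not UniSino itself.

```python
# Sketch of angular undersampling in the sinogram (projection) domain.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

image = rescale(shepp_logan_phantom(), 0.5)
theta_full = np.linspace(0.0, 180.0, 180, endpoint=False)
theta_sparse = theta_full[::4]                 # 4x fewer projection angles

sino_sparse = radon(image, theta=theta_sparse)
recon = iradon(sino_sparse, theta=theta_sparse, filter_name="ramp")
print("sinogram:", sino_sparse.shape, "reconstruction:", recon.shape)
```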

Validation of automated computed tomography segmentation software to assess body composition among cancer patients.

Salehin M, Yang Chow VT, Lee H, Weltzien EK, Nguyen L, Li JM, Akella V, Caan BJ, Cespedes Feliciano EM, Ma D, Beg MF, Popuri K

PubMed · Aug 25 2025
Assessing body composition using computed tomography (CT) can help predict the clinical outcomes of cancer patients, including surgical complications, chemotherapy toxicity, and survival. However, manual segmentation of CT images is labor-intensive and can lead to significant inter-observer variability. In this study, we validate the accuracy and reliability of automatic CT-based segmentation using the Data Analysis Facilitation Suite (DAFS) Express software package, which rapidly segments single CT slices. The study analyzed single-slice images at the third lumbar vertebra (L3) level (n = 5973) of patients diagnosed with non-metastatic colorectal (n = 3098) and breast cancer (n = 2875) at Kaiser Permanente Northern California. Manual segmentation used SliceOmatic with the Alberta protocol HU ranges; automated segmentation used DAFS Express with identical HU limits. The accuracy of the automated segmentation was evaluated using the DICE index, reliability was assessed by intra-class correlation coefficients (ICC) with 95% CI, and agreement between automatic and manual segmentations was assessed by Bland-Altman analysis. DICE scores below 20% and 70% were considered failed and poor segmentations, respectively, and underwent additional review. The mortality risk associated with each tissue's area was estimated using Cox proportional hazard ratios (HR) with 95% CI, adjusted for patient-specific variables including age, sex, race/ethnicity, cancer stage and grade, treatment receipt, and smoking status. A blinded review process categorized images with various characteristics for sensitivity analysis. The mean (standard deviation, SD) ages of the colorectal and breast cancer patients were 62.6 (11.4) and 56 (11.8) years, respectively. Automatic segmentation showed high accuracy versus manual segmentation, with mean DICE scores above 96% for skeletal muscle (SKM), visceral adipose tissue (VAT), and subcutaneous adipose tissue (SAT), and above 77% for intermuscular adipose tissue (IMAT), with three failures, representing 0.05% of the cohort. Bland-Altman analysis of the 5,973 measurements showed mean cross-sectional area differences of -5.73, -0.84, -2.82, and -1.02 cm² for SKM, VAT, SAT, and IMAT, respectively, indicating good agreement with slight underestimation in SKM and SAT. Reliability coefficients ranged from 0.88 to 1.00 for colorectal and 0.95 to 1.00 for breast cancer, with simple kappa values of 0.65-0.99 and 0.67-0.97, respectively. Additionally, mortality associations for automated and manual segmentations were similar, with comparable hazard ratios, confidence intervals, and p-values; Kaplan-Meier survival estimates showed mortality differences below 2.14%. DAFS Express enables rapid, accurate body composition analysis by automating segmentation, reducing expert time and computational burden. Such rapid analysis is a prerequisite for the large-scale research that could ultimately enable use in the clinical setting. Automated CT segmentations may be used to assess markers of sarcopenia, muscle loss, and adiposity, and to predict clinical outcomes.
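For reference, the Bland-Altman statistics quoted above (mean difference and 95% limits of agreement) can be computed as in the sketch below; the paired areas are invented placeholders, not study data.

```python
# Sketch of Bland-Altman agreement statistics on placeholder paired areas.
import numpy as np

manual = np.array([150.2, 98.5, 210.7, 175.3, 120.9])   # cm^2, hypothetical
auto = np.array([145.1, 97.9, 205.6, 172.8, 119.4])

diff = auto - manual
bias = diff.mean()                     # mean difference (systematic offset)
loa = 1.96 * diff.std(ddof=1)          # half-width of 95% limits of agreement
print(f"bias = {bias:.2f} cm^2, LoA = [{bias - loa:.2f}, {bias + loa:.2f}] cm^2")
```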