
Automated engineered-stone silicosis screening and staging using Deep Learning with X-rays.

Priego-Torres B, Sanchez-Morillo D, Khalili E, Conde-Sánchez MÁ, García-Gámez A, León-Jiménez A

PubMed | Jun 1 2025
Silicosis, a debilitating occupational lung disease caused by inhaling crystalline silica, remains a significant global health issue, especially with the increasing use of engineered stone (ES) surfaces containing high silica content. Traditional diagnostic methods, which depend on radiological interpretation, have low sensitivity, especially in the early stages of the disease, and suffer from inter-evaluator variability. This study explores the efficacy of deep learning techniques in automating the screening and staging of silicosis from chest X-ray images. Using a comprehensive dataset obtained from the medical records of a cohort of workers exposed to artificial quartz conglomerates, we implemented a preprocessing stage for rib-cage segmentation, followed by classification using state-of-the-art deep learning models. The segmentation model exhibited high precision, ensuring accurate identification of thoracic structures. In the screening phase, our models achieved near-perfect accuracy, with ROC AUC values reaching 1.0, effectively distinguishing between healthy individuals and those with silicosis. The models also demonstrated remarkable precision in staging the disease. Nevertheless, differentiating simple silicosis from progressive massive fibrosis, the evolved and complicated form of the disease, presented certain difficulties, especially during the transitional period, when assessment can be highly subjective. Notwithstanding these difficulties, the models achieved an accuracy of around 81% and ROC AUC scores nearing 0.93. This study highlights the potential of deep learning to generate clinical decision support tools that increase the accuracy and effectiveness of silicosis diagnosis and staging; early detection would allow patients to be removed from all sources of occupational exposure, constituting a substantial advancement in occupational health diagnostics.

Deep learning-driven multi-class classification of brain strokes using computed tomography: A step towards enhanced diagnostic precision.

Kulathilake CD, Udupihille J, Abeysundara SP, Senoo A

PubMed | Jun 1 2025
To develop and validate deep learning models leveraging CT imaging for the prediction and classification of brain stroke conditions, with the potential to enhance accuracy and support clinical decision-making. This retrospective, bi-center study included data from 250 patients, with a dataset of 8186 CT images collected from 2017 to 2022. Two AI models were developed as a two-step model using the Expanded ResNet101 deep learning framework. Model performance was evaluated using confusion matrices, supplemented by external validation with an independent dataset; the external validation was conducted by an expert and two external members. Overall accuracy, confidence intervals, Cohen's kappa, and McNemar's test P-values were calculated. A total of 8186 CT images were incorporated: 6386 images were used for training and 900 for testing and validation in Model 01, and 1619 images for training and 600 for testing and validation in Model 02. The average accuracy, precision, and F1 score of both models were assessed: Model 01 achieved 99.6 %, 99.4 %, and 99.6 % respectively, whereas Model 02 achieved 99.2 %, 98.8 %, and 99.1 %. As evaluated by the expert, the external validation accuracies were 78.6 % (95 % CI: 0.73-0.83; P < 0.001) for Model 01 and 60.2 % (95 % CI: 0.48-0.70; P < 0.001) for Model 02. The deep learning models demonstrated high accuracy, precision, and F1 scores in predicting outcomes for brain stroke patients. With a larger cohort and more diverse radiologic mimics, these models could support clinicians in prognosis and decision-making.
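The agreement statistics used above, Cohen's kappa and McNemar's test, are standard and simple to compute by hand. A minimal sketch in plain Python, not tied to this study's data or tooling:

```python
from collections import Counter

def cohens_kappa(y_true, y_pred):
    """Chance-corrected agreement between two label sequences."""
    n = len(y_true)
    labels = sorted(set(y_true) | set(y_pred))
    p_obs = sum(t == p for t, p in zip(y_true, y_pred)) / n
    t_counts, p_counts = Counter(y_true), Counter(y_pred)
    # expected agreement under independent marginals
    p_exp = sum(t_counts[c] * p_counts[c] for c in labels) / (n * n)
    return (p_obs - p_exp) / (1 - p_exp)

def mcnemar_statistic(b, c):
    """Chi-square statistic (with continuity correction) for paired
    disagreements: b = model right / reference wrong, c = the reverse."""
    return (abs(b - c) - 1) ** 2 / (b + c)
```

The statistic is compared against a chi-square distribution with one degree of freedom to obtain the P-value.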

A radiomics approach to distinguish Progressive Supranuclear Palsy Richardson's syndrome from other phenotypes starting from MR images.

Pisani N, Abate F, Avallone AR, Barone P, Cesarelli M, Amato F, Picillo M, Ricciardi C

PubMed | Jun 1 2025
Progressive Supranuclear Palsy (PSP) is an uncommon neurodegenerative disorder with different clinical onsets, including Richardson's syndrome (PSP-RS) and other variant phenotypes (vPSP). Recognising the clinical progression of the different phenotypes would enhance the accuracy of PSP detection and treatment. The goal of this study was to identify radiomic biomarkers, extracted from T1-weighted magnetic resonance images (MRI), for distinguishing PSP phenotypes. Forty PSP patients (20 PSP-RS and 20 vPSP) took part in the present work. Radiomic features were collected from 21 regions of interest (ROIs), mainly the frontal cortex, supratentorial white matter, basal nuclei, brainstem, cerebellum, and third and fourth ventricles. After feature selection, three tree-based machine learning (ML) classifiers were implemented to classify PSP phenotypes. Ten of the 21 ROIs performed best in terms of sensitivity, specificity, accuracy, and area under the receiver operating characteristic curve (AUCROC). In particular, features extracted from the pons obtained the best accuracy (0.92) and AUCROC (0.83), while the other ROIs yielded evaluation metrics ranging from 0.67 to 0.83. Eight features of the Gray Level Dependence Matrix were recurrently selected across the 10 ROIs. Furthermore, by combining these ROIs, classification results exceeded 0.83, with the selected areas being the brainstem, pons, occipital white matter, precentral gyrus, and thalamus. Based on these results, the proposed approach could represent a promising tool for distinguishing PSP-RS from vPSP.
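As a toy illustration of the radiomics idea (one feature vector per ROI), here is a sketch computing a few first-order intensity features with NumPy. Note this is an assumption-laden simplification: the study's recurrent features came from the Gray Level Dependence Matrix, a texture family that requires a dedicated implementation (e.g., a standard radiomics library) not shown here.

```python
import numpy as np

def first_order_features(roi):
    """A few first-order radiomic features of an ROI intensity array.
    (Illustration only: the paper's key features were GLDM texture
    features, which are not implemented here.)"""
    x = np.asarray(roi, dtype=float).ravel()
    hist, _ = np.histogram(x, bins=32)
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins before the entropy sum
    return {
        "mean": float(x.mean()),
        "variance": float(x.var()),
        "skewness": float(((x - x.mean()) ** 3).mean() / (x.std() ** 3 + 1e-12)),
        "entropy": float(-(p * np.log2(p)).sum()),
    }
```

A tree-based classifier is then fit on the concatenated per-ROI feature vectors.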

BUS-M2AE: Multi-scale Masked Autoencoder for Breast Ultrasound Image Analysis.

Yu L, Gou B, Xia X, Yang Y, Yi Z, Min X, He T

PubMed | Jun 1 2025
Masked AutoEncoder (MAE) has demonstrated significant potential in medical image analysis by reducing the cost of manual annotation. However, MAE and its recent variants are not well developed for ultrasound images in breast cancer diagnosis, as they struggle to generalize to the task of distinguishing ultrasound breast tumors of varying sizes, which hinders adaptation to the diverse morphological characteristics of breast tumors. In this paper, we propose a novel Breast UltraSound Multi-scale Masked AutoEncoder (BUS-M2AE) model to address these limitations. BUS-M2AE incorporates multi-scale masking at both the token level, during the image patching stage, and the feature level, during the feature learning stage. These two multi-scale masking methods enable flexible strategies that match the explicit masked patches and the implicit features to varying tumor scales, allowing the pre-trained vision transformer to adaptively perceive and accurately distinguish breast tumors of different sizes. Comprehensive experiments demonstrate that BUS-M2AE outperforms recent MAE variants and commonly used supervised learning methods in breast cancer classification and tumor segmentation tasks.
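One way to realize token-level multi-scale masking, sketched below under our own assumptions (the abstract does not publish this exact routine), is to sample the mask at a coarser granularity and upsample it to the patch grid, so a single masking ratio can hide regions of different sizes:

```python
import numpy as np

def multiscale_mask(grid, block, ratio, rng):
    """Mask ~`ratio` of the patch tokens on a (grid x grid) layout in
    contiguous (block x block) groups. block=1 recovers MAE's per-patch
    random masking; larger blocks hide coarser regions, mimicking
    larger tumor scales."""
    assert grid % block == 0, "grid must be divisible by block"
    coarse = grid // block
    n_coarse = coarse * coarse
    n_masked = int(round(ratio * n_coarse))
    idx = rng.choice(n_coarse, size=n_masked, replace=False)
    m = np.zeros(n_coarse, dtype=int)
    m[idx] = 1
    m = m.reshape(coarse, coarse)
    # upsample the coarse masking decisions back to the full token grid
    return np.kron(m, np.ones((block, block), dtype=int)).astype(bool)
```

During pre-training, one would draw `block` from a small set of scales so the encoder sees occlusions of varying granularity.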

Keeping AI on Track: Regular monitoring of algorithmic updates in mammography.

Taib AG, James JJ, Partridge GJW, Chen Y

PubMed | Jun 1 2025
To demonstrate a method of benchmarking two consecutive software releases of the same commercial artificial intelligence (AI) product against trained human readers using the Personal Performance in Mammographic Screening (PERFORMS) external quality assurance scheme. In this retrospective study, ten PERFORMS test sets, each consisting of 60 challenging cases, were read by human readers between 2012 and 2023 and by Version 1 (V1) and Version 2 (V2) of the same AI model in 2022 and 2023, respectively. Both AI and humans considered each breast independently, taking the highest suspicion-of-malignancy score per breast for non-malignant cases and per lesion for breasts with malignancy. Sensitivity, specificity, and area under the receiver operating characteristic curve (AUC) were calculated for comparison, with the study powered to detect a medium-sized effect (odds ratio, 3.5 or 0.29) for sensitivity. The study included 1,254 human readers, with a total of 328 malignant lesions, 823 normal breasts, and 55 benign breasts analysed. No significant difference was found between the AUCs of AI V1 (0.93) and V2 (0.94) (p = 0.13). In terms of sensitivity, no difference was observed between human readers and AI V1 (83.2 % vs 87.5 %, p = 0.12); however, V2 outperformed humans (88.7 %, p = 0.04). Specificity was higher for both AI V1 (87.4 %) and V2 (88.2 %) than for human readers (79.0 %; p < 0.01 for both). The upgraded AI model showed no significant difference in diagnostic performance compared to its predecessor when evaluating mammograms from PERFORMS test sets.
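The AUC comparisons above rest on the rank interpretation of the AUC: the probability that a randomly chosen malignant case scores higher than a randomly chosen normal one (ties counting one half). A minimal, illustration-only implementation of that Mann-Whitney formulation:

```python
def roc_auc(scores_pos, scores_neg):
    """AUC as the probability that a positive case outscores a
    negative case, with ties counted as 0.5 (Mann-Whitney form)."""
    wins = 0.0
    for sp in scores_pos:
        for sn in scores_neg:
            if sp > sn:
                wins += 1.0
            elif sp == sn:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```

In practice one computes this from ranks in O(n log n) and compares paired AUCs with a dedicated test, but the quantity itself is just this pairwise probability.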

AI for fracture diagnosis in clinical practice: Four approaches to systematic AI-implementation and their impact on AI-effectiveness.

Loeffen DV, Zijta FM, Boymans TA, Wildberger JE, Nijssen EC

PubMed | Jun 1 2025
Artificial intelligence (AI) has been shown to enhance fracture detection accuracy, but the most effective way to implement AI in clinical practice is less well understood. In the current study, four approaches to AI implementation are evaluated for their impact on AI effectiveness. This retrospective single-center study was based on all consecutive, around-the-clock radiographic examinations for suspected fractures, and the accompanying clinical-practice radiologist diagnoses, between January and March 2023. These image sets were independently analysed by a dedicated bone-fracture-detection AI. Its findings were combined with the radiologists' clinical-practice diagnoses to simulate the four AI-implementation methods deemed most relevant to clinical workflows: AI standalone (radiologist findings not consulted); AI problem-solving (AI findings consulted when the radiologist is in doubt); AI triage (radiologist findings consulted when the AI is in doubt); and AI safety net (AI findings consulted when the radiologist's diagnosis is negative). Reference-standard diagnoses were established by two senior musculoskeletal radiologists (by consensus in cases of disagreement). Radiologist and radiologist + AI diagnoses were compared for false negatives (FN), false positives (FP), and their clinical consequences. Experience-level subgroups (radiologists in training, non-musculoskeletal radiologists, and dedicated musculoskeletal radiologists) were analysed separately. 1508 image sets were included (1227 unique patients; 40 radiologist readers). Radiologist results were: 2.7 % FN (40/1508), 28 with clinical consequences; 1.2 % FP (18/1508), of which 2 received full fracture treatment (11.1 %). All AI-implementation methods changed overall FN and FP with statistical significance (p < 0.001): AI standalone, 1.5 % FN (23/1508; 11 with consequences) and 6.8 % FP (103/1508); AI problem-solving, 3.2 % FN (48/1508; 31 with consequences) and 0.6 % FP (9/1508); AI triage, 2.1 % FN (32/1508; 18 with consequences) and 1.7 % FP (26/1508); AI safety net, 0.07 % FN (1/1508; 1 with consequences) and 7.6 % FP (115/1508). Subgroups showed similar trends, except that AI triage increased FN for all subgroups other than radiologists in training. Implementation method has a large impact on AI effectiveness. These results suggest AI should not be considered for problem-solving or triage at this time; AI standalone performs better than either and may assist where radiologists are unavailable. The best results were obtained by implementing AI as a safety net, which eliminated missed fractures with serious clinical consequences; even though false positives increased, unnecessary treatments remained limited.
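The four implementation methods reduce to simple decision rules over the paired radiologist and AI readings. A hypothetical sketch of those rules (the three-valued 'doubt' encoding is our simplification for illustration, not the study's operational definition):

```python
def combined_diagnosis(method, radiologist, ai):
    """Simulated diagnosis under the four AI-implementation methods.
    Each reading is 'pos', 'neg', or 'doubt' (an equivocal call)."""
    if method == "standalone":       # radiologist findings not consulted
        return ai
    if method == "problem_solving":  # AI consulted when radiologist in doubt
        return ai if radiologist == "doubt" else radiologist
    if method == "triage":           # radiologist consulted when AI in doubt
        return radiologist if ai == "doubt" else ai
    if method == "safety_net":       # AI consulted when radiologist negative
        if radiologist == "neg" and ai == "pos":
            return "pos"
        return radiologist
    raise ValueError(f"unknown method: {method}")
```

The safety-net rule can only convert negatives into positives, which is exactly why it drives false negatives toward zero at the cost of more false positives.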

The radiologist and data: Do we add value or is data just data?

Fishman EK, Soyer P, Hellmann DB, Chu LC

PubMed | Jun 1 2025
Artificial intelligence in radiology critically depends on vast amounts of quality data, and there are controversies surrounding the topic of data ownership. In the current clinical framework, the secondary use of clinical data should be treated as a form of public good to benefit future patients. In this article, we propose that the physicians' input in data curation and interpretation adds value to the data and is crucial for building clinically relevant artificial intelligence models.

Explainable deep stacking ensemble model for accurate and transparent brain tumor diagnosis.

Haque R, Khan MA, Rahman H, Khan S, Siddiqui MIH, Limon ZH, Swapno SMMR, Appaji A

PubMed | Jun 1 2025
Early detection of brain tumors in MRI images is vital for improving treatment results. However, deep learning models face challenges such as limited dataset diversity, class imbalance, and insufficient interpretability, and most studies rely on small, single-source datasets without combining different feature extraction techniques for better classification. To address these challenges, we propose a robust and explainable stacking ensemble model for multiclass brain tumor classification that combines EfficientNetB0, MobileNetV2, GoogleNet, and a Multi-level CapsuleNet, using CatBoost as the meta-learner for improved feature aggregation and classification accuracy. This ensemble approach captures complex tumor characteristics while enhancing robustness and interpretability. We created two large MRI datasets by merging data from four sources: BraTS, Msoud, Br35H, and SARTAJ. To tackle class imbalance, we applied Borderline-SMOTE and data augmentation, and we utilized feature extraction methods along with PCA and Gray Wolf Optimization (GWO). The model was validated through confidence-interval analysis and statistical tests, demonstrating superior performance; error analysis revealed misclassification trends, and computational efficiency was assessed in terms of inference speed and resource usage. The proposed ensemble achieved a 97.81% F1 score and 98.75% PR AUC on M1, and a 98.32% F1 score with 99.34% PR AUC on M2. Moreover, the model consistently surpassed state-of-the-art CNNs, Vision Transformers, and other ensemble methods in classifying brain tumors across all four individual datasets. Finally, we developed a web-based diagnostic tool that enables clinicians to interact with the model and visualize decision-critical regions in MRI scans using Explainable Artificial Intelligence (XAI). This study connects high-performing AI models with real clinical applications, providing a reliable, scalable, and efficient diagnostic solution for brain tumor classification.
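The core mechanic of stacking, producing out-of-fold base-model scores so the meta-learner never trains on leaked predictions, can be sketched with a toy base learner in NumPy. This is a minimal sketch under our own assumptions: the `Centroid` learner stands in for the paper's CNN backbones, and the meta-learner (CatBoost in the paper) would then be fit on the returned meta-features.

```python
import numpy as np

class Centroid:
    """Toy base learner: scores P(class 1) from distances to class means."""
    def fit(self, X, y):
        self.c0, self.c1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
        return self
    def predict_proba1(self, X):
        d0 = np.linalg.norm(X - self.c0, axis=1)
        d1 = np.linalg.norm(X - self.c1, axis=1)
        return d0 / (d0 + d1 + 1e-12)  # closer to class-1 mean -> higher score

def oof_meta_features(model_factories, X, y, k=5, seed=0):
    """Out-of-fold base-model scores: each sample's meta-feature comes
    from a model that never saw that sample, which is what keeps
    stacking honest."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(X))
    folds = np.array_split(order, k)
    Z = np.zeros((len(X), len(model_factories)))
    for fold in folds:
        train = np.setdiff1d(order, fold)
        for j, make in enumerate(model_factories):
            model = make().fit(X[train], y[train])
            Z[fold, j] = model.predict_proba1(X[fold])
    return Z  # the meta-learner is then fit on (Z, y)
```

Fitting the meta-learner on in-fold base predictions instead would let it exploit memorized training labels, inflating validation scores.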

Towards fast and reliable estimations of 3D pressure, velocity and wall shear stress in aortic blood flow: CFD-based machine learning approach.

Lin D, Kenjereš S

PubMed | Jun 1 2025
In this work, we developed deep neural networks for fast and comprehensive estimation of the most salient features of aortic blood flow: velocity magnitude and direction, 3D pressure, and wall shear stress. Starting from 40 subject-specific aortic geometries obtained from 4D Flow MRI, we applied statistical shape modeling to generate 1,000 synthetic aorta geometries. Complete computational fluid dynamics (CFD) simulations of these geometries were performed to obtain ground-truth values. We then trained deep neural networks for each characteristic flow feature using 900 randomly selected aorta geometries. Testing on the remaining 100 geometries resulted in average errors of 3.11% for velocity and 4.48% for pressure. For wall shear stress, we applied two prediction approaches: (i) deriving it directly from the neural-network-predicted velocity, and (ii) predicting it with a separate neural network. Both approaches yielded similar accuracy, with average errors of 4.8% and 4.7% respectively compared to complete 3D CFD results. We recommend the second approach for potential clinical use due to its significantly simpler workflow. In conclusion, this proof-of-concept analysis demonstrates the numerical robustness, rapid calculation speed (less than a few seconds), and good accuracy of the CFD-based machine learning approach in predicting velocity, pressure, and wall shear stress distributions in subject-specific aortic flows.
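Approach (i), deriving wall shear stress from a predicted velocity field, amounts to evaluating the wall-normal gradient of the tangential velocity at the no-slip boundary. A minimal one-dimensional sketch (our illustration, not the paper's pipeline; uniform grid spacing is assumed, and 0.0035 Pa·s is a typical blood viscosity, not a value from the paper):

```python
def wall_shear_stress(u, y, mu):
    """Wall shear stress tau_w = mu * du/dy at the wall (y[0], with
    u[0] = 0 by no-slip), using a second-order one-sided finite
    difference over the first three uniformly spaced velocity samples."""
    h = y[1] - y[0]  # uniform spacing assumed
    dudy = (-3.0 * u[0] + 4.0 * u[1] - u[2]) / (2.0 * h)
    return mu * dudy
```

In 3D the same idea applies per wall node, projecting the velocity gradient tensor onto the local wall tangent plane.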

Res-Net-Based Modeling and Morphologic Analysis of Deep Medullary Veins Using Multi-Echo GRE at 7 T MRI.

Li Z, Liang L, Zhang J, Fan X, Yang Y, Yang H, Wang Q, An J, Xue R, Zhuo Y, Qian H, Zhang Z

PubMed | Jun 1 2025
Pathological changes in deep medullary veins (DMVs) have been reported in various diseases; however, accurate modeling and quantification of DMVs remain challenging. We aimed to propose and assess an automated approach for modeling and quantifying DMVs at 7 Tesla (7 T) MRI. A multi-echo-input Res-Net was developed for vascular segmentation, and a minimum-path loss function was used for modeling and quantifying the geometric parameters of DMVs. Twenty-one patients diagnosed with subcortical vascular dementia (SVaD) and 20 condition-matched controls were included in this study. The amplitude and phase images of a five-echo gradient echo (GRE) sequence were acquired at 7 T. Ten GRE images were manually labeled by two neurologists and compared with the results obtained by the proposed method. Independent-samples t-tests and Pearson correlation were used for statistical analysis, with p < 0.05 considered significant. No significant offset was found between centerlines obtained by human labeling and by our algorithm (p = 0.734). The length difference between the proposed method and manual labeling was smaller than the error between different clinicians (p < 0.001). Patients with SVaD exhibited fewer DMVs (mean difference = -60.710 ± 21.810, p = 0.011) and higher curvature (mean difference = 0.12 ± 0.022, p < 0.0001), corresponding to their higher Vascular Dementia Assessment Scale-Cog (VaDAS-Cog) scores (mean difference = 4.332 ± 1.992, p = 0.036) and lower Mini-Mental State Examination (MMSE) scores (mean difference = -3.071 ± 1.443, p = 0.047). MMSE scores were positively correlated with the number of DMVs (r = 0.437, p = 0.037) and negatively correlated with curvature (r = -0.426, p = 0.042). In summary, we propose a novel framework for automated quantification of the morphologic parameters of DMVs. These characteristics of DMVs are expected to aid the research and diagnosis of cerebral small vessel diseases with DMV lesions.
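Curvature along a discrete centerline can be estimated from the circle through each consecutive point triple, using kappa = 1/R = 4 * triangle area / product of the three side lengths. A sketch of this estimator, which is one common choice and not necessarily the paper's exact definition:

```python
import numpy as np

def mean_curvature(points):
    """Mean discrete curvature of a 3D polyline (e.g., a DMV centerline).
    For each interior vertex, kappa is the reciprocal circumradius of the
    point triple: 4 * triangle area / product of the three side lengths."""
    P = np.asarray(points, dtype=float)
    ks = []
    for a, b, c in zip(P[:-2], P[1:-1], P[2:]):
        ab, ac, bc = b - a, c - a, c - b
        area = 0.5 * np.linalg.norm(np.cross(ab, ac))
        denom = np.linalg.norm(ab) * np.linalg.norm(bc) * np.linalg.norm(ac)
        ks.append(0.0 if denom == 0.0 else 4.0 * area / denom)
    return float(np.mean(ks))
```

Collinear triples get curvature 0, and points sampled from a circle of radius R recover 1/R.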