
Automated engineered-stone silicosis screening and staging using Deep Learning with X-rays.

Priego-Torres B, Sanchez-Morillo D, Khalili E, Conde-Sánchez MÁ, García-Gámez A, León-Jiménez A

PubMed · Jun 1 2025
Silicosis, a debilitating occupational lung disease caused by inhaling crystalline silica, continues to be a significant global health issue, especially with the increasing use of engineered stone (ES) surfaces containing high silica content. Traditional diagnostic methods, dependent on radiological interpretation, have low sensitivity, especially in the early stages of the disease, and present variability between evaluators. This study explores the efficacy of deep learning techniques in automating the screening and staging of silicosis using chest X-ray images. Utilizing a comprehensive dataset obtained from the medical records of a cohort of workers exposed to artificial quartz conglomerates, we implemented a preprocessing stage for rib-cage segmentation, followed by classification using state-of-the-art deep learning models. The segmentation model exhibited high precision, ensuring accurate identification of thoracic structures. In the screening phase, our models achieved near-perfect accuracy, with ROC AUC values reaching 1.0, effectively distinguishing between healthy individuals and those with silicosis. The models also demonstrated remarkable precision in staging the disease. Nevertheless, differentiating between simple silicosis and progressive massive fibrosis, the evolved and complicated form of the disease, presented certain difficulties, especially during the transitional period, when assessment can be significantly subjective. Notwithstanding these difficulties, the models achieved an accuracy of around 81% and ROC AUC scores nearing 0.93. This study highlights the potential of deep learning to generate clinical decision support tools that increase accuracy and effectiveness in the diagnosis and staging of silicosis, a disease whose early detection would allow patients to be removed from all sources of occupational exposure, constituting a substantial advancement in occupational health diagnostics.
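A minimal sketch of the two-stage pipeline the abstract describes (segment the rib cage, mask the radiograph, then classify). The model choices, class counts, and masking step below are assumptions for illustration, not the authors' implementation:

```python
import torch
import torchvision.models as models
from torchvision.models.segmentation import fcn_resnet50

# Stage 1: segmentation network producing a rib-cage mask (architecture assumed).
seg_net = fcn_resnet50(weights=None, num_classes=2).eval()  # background vs. rib cage
# Stage 2: classifier for screening (healthy vs. silicosis) on the masked image.
clf_net = models.resnet50(weights=None, num_classes=2).eval()

def screen(xray: torch.Tensor) -> torch.Tensor:
    """xray: (B, 3, H, W) chest radiograph, normalized to the training range."""
    with torch.no_grad():
        logits = seg_net(xray)["out"]              # (B, 2, H, W) per-pixel logits
        mask = logits.argmax(dim=1, keepdim=True)  # binary rib-cage mask
        masked = xray * mask                       # keep only the thoracic region
        return clf_net(masked).softmax(dim=1)      # P(healthy), P(silicosis)

probs = screen(torch.rand(1, 3, 512, 512))
```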

Pediatric chest X-ray diagnosis using neuromorphic models.

Bokhari SM, Sohaib S, Shafi M

PubMed · Jun 1 2025
This research presents an innovative neuromorphic method utilizing Spiking Neural Networks (SNNs) to analyze pediatric chest X-rays (PediCXR) and identify prevalent thoracic illnesses. We incorporate spiking-based machine learning models such as Spiking Convolutional Neural Networks (SCNN), Spiking Residual Networks (S-ResNet), and Hierarchical Spiking Neural Networks (HSNN) for pediatric chest radiographic analysis, utilizing the publicly available benchmark PediCXR dataset. These models employ spatiotemporal feature extraction, residual connections, and event-driven processing to improve diagnostic precision. The HSNN model surpasses benchmark approaches from the literature, achieving a classification accuracy of 96% across six thoracic illness categories, an F1-score of 0.95, and a specificity of 1.0 in pneumonia detection. Our research demonstrates that neuromorphic computing is a feasible and biologically inspired approach to real-time medical imaging diagnostics, significantly improving performance.
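A minimal sketch of the event-driven processing idea behind SNNs: a leaky integrate-and-fire (LIF) layer accumulating membrane potential over timesteps. The rate coding, layer sizes, timestep count, and six-class output here are illustrative assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn

class LIFLayer(nn.Module):
    """Leaky integrate-and-fire layer: integrates weighted input spikes into a
    membrane potential, emits a spike when it crosses threshold, then resets."""
    def __init__(self, in_features, out_features, beta=0.9, threshold=1.0):
        super().__init__()
        self.fc = nn.Linear(in_features, out_features)
        self.beta, self.threshold = beta, threshold

    def forward(self, spikes, mem):
        mem = self.beta * mem + self.fc(spikes)    # leaky integration
        out = (mem >= self.threshold).float()      # spike if threshold crossed
        mem = mem - out * self.threshold           # soft reset after spiking
        return out, mem

# Rate-code a flattened X-ray patch into T Bernoulli spike trains and count
# output spikes per class over time (input size and class count assumed).
T, layer = 25, LIFLayer(64 * 64, 6)
x = torch.rand(1, 64 * 64)                         # pixel intensities in [0, 1]
mem, counts = torch.zeros(1, 6), torch.zeros(1, 6)
for _ in range(T):
    spikes = torch.bernoulli(x)                    # stochastic rate coding
    out, mem = layer(spikes, mem)
    counts += out                                  # spike counts act as class scores
pred = counts.argmax(dim=1)
```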

Deep learning for multiple sclerosis lesion classification and stratification using MRI.

Umirzakova S, Shakhnoza M, Sevara M, Whangbo TK

PubMed · Jun 1 2025
Multiple sclerosis (MS) is a chronic neurological disease characterized by inflammation, demyelination, and neurodegeneration within the central nervous system. Conventional magnetic resonance imaging (MRI) techniques often struggle to detect small or subtle lesions, particularly in challenging regions such as the cortical gray matter and brainstem. This study introduces a novel deep learning-based approach, combined with a robust preprocessing pipeline and optimized MRI protocols, to improve the precision of MS lesion classification and stratification. We designed a convolutional neural network (CNN) architecture specifically tailored for high-resolution T2-weighted imaging (T2WI), augmented by deep learning-based reconstruction (DLR) techniques. The model incorporates dual attention mechanisms, including spatial and channel attention modules, to enhance feature extraction. A comprehensive preprocessing pipeline was employed, featuring bias field correction, skull stripping, image registration, and intensity normalization. The proposed framework was trained and validated on four publicly available datasets and evaluated using precision, sensitivity, specificity, and area under the curve (AUC) metrics. The model demonstrated exceptional performance, achieving a precision of 96.27%, sensitivity of 95.54%, specificity of 94.70%, and an AUC of 0.975. It outperformed existing state-of-the-art methods, particularly in detecting lesions in underdiagnosed regions such as the cortical gray matter and brainstem. The integration of advanced attention mechanisms enabled the model to focus on critical MRI features, leading to significant improvements in lesion classification and stratification. This study presents a novel and scalable approach for MS lesion detection and classification, offering a practical solution for clinical applications. By integrating advanced deep learning techniques with optimized MRI protocols, the proposed framework achieves superior diagnostic accuracy and generalizability, paving the way for enhanced patient care and more personalized treatment strategies. This work sets a new benchmark for MS diagnosis and management in both research and clinical practice.
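A minimal sketch of the dual attention idea (channel attention followed by spatial attention, in the spirit of CBAM). The exact module design in the paper is not specified here, so this form is an assumption:

```python
import torch
import torch.nn as nn

class DualAttention(nn.Module):
    """Channel attention (which feature maps matter) followed by spatial
    attention (where in the image they matter), CBAM-style."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())
        self.spatial_conv = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3), nn.Sigmoid())

    def forward(self, x):                          # x: (B, C, H, W)
        w = self.channel_mlp(x.mean(dim=(2, 3)))   # (B, C) channel weights
        x = x * w[:, :, None, None]
        s = torch.cat([x.mean(1, keepdim=True),    # mean- and max-pool over C
                       x.amax(1, keepdim=True)], dim=1)
        return x * self.spatial_conv(s)            # (B, 1, H, W) spatial map

feat = DualAttention(64)(torch.rand(2, 64, 32, 32))
```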

Performance of GPT-4 Turbo and GPT-4o in Korean Society of Radiology In-Training Examinations.

Choi A, Kim HG, Choi MH, Ramasamy SK, Kim Y, Jung SE

PubMed · Jun 1 2025
Despite the potential of large language models for radiology training, their ability to handle image-based radiological questions remains poorly understood. This study aimed to evaluate the performance of GPT-4 Turbo and GPT-4o on radiology resident examinations, to analyze differences across question types, and to compare their results with those of residents at different levels. A total of 776 multiple-choice questions from the Korean Society of Radiology In-Training Examinations were used, forming two question sets: one originally written in Korean and the other translated into English. We evaluated the performance of GPT-4 Turbo (gpt-4-turbo-2024-04-09) and GPT-4o (gpt-4o-2024-11-20) on these questions with the temperature set to zero, determining accuracy based on the majority vote from five independent trials. We analyzed the results by question type (text-only vs. image-based) and benchmarked them against nationwide radiology residents' performance. The impact of the input language (Korean or English) on model performance was also examined. GPT-4o outperformed GPT-4 Turbo on both image-based (48.2% vs. 41.8%, P = 0.002) and text-only questions (77.9% vs. 69.0%, P = 0.031). On image-based questions, GPT-4 Turbo and GPT-4o showed performance comparable to that of 1st-year residents (41.8% and 48.2%, respectively, vs. 43.3%; P = 0.608 and 0.079, respectively) but lower than that of 2nd- to 4th-year residents (vs. 56.0%-63.9%, all P ≤ 0.005). On text-only questions, GPT-4 Turbo and GPT-4o performed better than residents across all years (69.0% and 77.9%, respectively, vs. 44.7%-57.5%, all P ≤ 0.039). Performance on the English- and Korean-version questions showed no significant differences for either model (all P ≥ 0.275). GPT-4o outperformed GPT-4 Turbo on all question types. On image-based questions, both models matched the performance of 1st-year residents but fell below that of higher-year residents. Both models outperformed residents on text-only questions. The models showed consistent performance across English and Korean inputs.
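A minimal sketch of the evaluation protocol described (temperature 0, majority vote over five independent trials); the system prompt and answer parsing are assumptions, not the study's exact harness:

```python
from collections import Counter
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def majority_vote_answer(question: str, model: str = "gpt-4o", trials: int = 5) -> str:
    """Ask the same multiple-choice question `trials` times at temperature 0
    and return the most frequent answer letter."""
    answers = []
    for _ in range(trials):
        resp = client.chat.completions.create(
            model=model,
            temperature=0,
            messages=[
                {"role": "system",
                 "content": "Answer with a single option letter (A-E) only."},
                {"role": "user", "content": question},
            ],
        )
        # Parsing assumption: the first character of the reply is the option letter.
        answers.append(resp.choices[0].message.content.strip()[:1].upper())
    return Counter(answers).most_common(1)[0][0]
```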

Prediction of mammographic breast density based on clinical breast ultrasound images using deep learning: a retrospective analysis.

Bunnell A, Valdez D, Wolfgruber TK, Quon B, Hung K, Hernandez BY, Seto TB, Killeen J, Miyoshi M, Sadowski P, Shepherd JA

PubMed · Jun 1 2025
Breast density, as derived from mammographic images and defined by the Breast Imaging Reporting & Data System (BI-RADS), is one of the strongest risk factors for breast cancer. Breast ultrasound is an alternative breast cancer screening modality, particularly useful in low-resource, rural contexts. To date, breast ultrasound has not been used to inform risk models that need breast density. The purpose of this study is to explore the use of artificial intelligence (AI) to predict BI-RADS breast density category from clinical breast ultrasound imaging. We compared deep learning methods for predicting breast density directly from breast ultrasound imaging with machine learning models built from breast ultrasound image gray-level histograms alone. The use of AI-derived breast ultrasound breast density as a breast cancer risk factor was compared to clinical BI-RADS breast density. Retrospective (2009-2022) breast ultrasound data were split by individual into 70/20/10% groups for training, validation, and held-out testing. A total of 405,120 clinical breast ultrasound images from 14,066 women (mean age 53 years, range 18-99 years) with clinical breast ultrasound exams were retrospectively selected for inclusion from three institutions: 10,393 women for training (302,574 images), 2593 for validation (69,842 images), and 1074 for testing (28,616 images). The AI model achieves an AUROC of 0.854 in breast density classification and statistically significantly outperforms all image-statistic-based methods. In an existing clinical 5-year breast cancer risk model, breast ultrasound AI and clinical breast density predict 5-year breast cancer risk with 0.606 and 0.599 AUROC (DeLong's test p-value: 0.67), respectively. BI-RADS breast density can be estimated from breast ultrasound imaging with high accuracy. The AI model provided superior estimates to other machine learning approaches. Furthermore, we demonstrate that age-adjusted, AI-derived breast ultrasound breast density provides predictive power similar to mammographic breast density in our population. Estimated breast density from ultrasound may be useful for breast cancer risk assessment in areas where mammography is not available. National Cancer Institute.
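A minimal sketch of the image-statistic baseline the abstract compares against: a gray-level histogram per ultrasound image fed to a classical classifier. The bin count, classifier choice, and synthetic data are assumptions:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

def histogram_features(image: np.ndarray, bins: int = 32) -> np.ndarray:
    """Normalized gray-level histogram of a single ultrasound image."""
    hist, _ = np.histogram(image, bins=bins, range=(0, 255))
    return hist / hist.sum()

# Synthetic stand-in: 200 images, 4 BI-RADS density categories (A-D).
rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(200, 128, 128))
labels = rng.integers(0, 4, size=200)

X = np.stack([histogram_features(im) for im in images])
clf = GradientBoostingClassifier().fit(X[:150], labels[:150])
probs = clf.predict_proba(X[150:])
print(roc_auc_score(labels[150:], probs, multi_class="ovr"))  # one-vs-rest AUROC
```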

Advancing Intracranial Aneurysm Detection: A Comprehensive Systematic Review and Meta-analysis of Deep Learning Models Performance, Clinical Integration, and Future Directions.

Delfan N, Abbasi F, Emamzadeh N, Bahri A, Parvaresh Rizi M, Motamedi A, Moshiri B, Iranmehr A

PubMed · Jun 1 2025
Cerebral aneurysms pose a significant risk to patient safety, particularly when ruptured, emphasizing the need for early detection and accurate prediction. Traditional diagnostic methods, reliant on clinician-based evaluations, face challenges in sensitivity and consistency, prompting the exploration of deep learning (DL) systems for improved performance. This systematic review and meta-analysis assessed the performance of DL models in detecting and predicting intracranial aneurysms compared to clinician-based evaluations. Imaging modalities included CT angiography (CTA), digital subtraction angiography (DSA), and time-of-flight MR angiography (TOF-MRA). Data on lesion-wise sensitivity, specificity, and the impact of DL assistance on clinician performance were analyzed. Subgroup analyses evaluated DL sensitivity by aneurysm size and location, and interrater agreement was measured using Fleiss' κ. DL systems achieved an overall lesion-wise sensitivity of 90% and specificity of 94%, outperforming human diagnostics. Clinician specificity improved significantly with DL assistance, increasing from 83% to 85% in the patient-wise scenario and from 93% to 95% in the lesion-wise scenario. Similarly, clinician sensitivity also showed notable improvement with DL assistance, rising from 82% to 96% in the patient-wise scenario and from 82% to 88% in the lesion-wise scenario. Subgroup analysis showed DL sensitivity varied with aneurysm size and location, reaching 100% for aneurysms larger than 10 mm. Additionally, DL assistance improved interrater agreement among clinicians, with Fleiss' κ increasing from 0.668 to 0.862. DL models demonstrate transformative potential in managing cerebral aneurysms by enhancing diagnostic accuracy, reducing missed cases, and supporting clinical decision-making. However, further validation in diverse clinical settings and seamless integration into standard workflows are necessary to fully realize the benefits of DL-driven diagnostics.
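Fleiss' κ, used above to quantify interrater agreement, can be computed as in this minimal sketch; the rating matrix is synthetic, for illustration only:

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Synthetic ratings: 30 cases, 4 clinicians, categories 0 = no aneurysm, 1 = aneurysm.
rng = np.random.default_rng(0)
ratings = rng.integers(0, 2, size=(30, 4))    # (subjects, raters)

table, _ = aggregate_raters(ratings)          # (subjects, categories) count table
print(fleiss_kappa(table, method="fleiss"))   # agreement statistic in [-1, 1]
```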

Predicting lung cancer bone metastasis using CT and pathological imaging with a Swin Transformer model.

Li W, Zou X, Zhang J, Hu M, Chen G, Su S

PubMed · Jun 1 2025
Bone metastasis is a common and serious complication in lung cancer patients, leading to severe pain, pathological fractures, and reduced quality of life. Early prediction of bone metastasis can enable timely interventions and improve patient outcomes. In this study, we developed a multimodal Swin Transformer-based deep learning model for predicting bone metastasis risk in lung cancer patients by integrating CT imaging and pathological data. A total of 215 patients with confirmed lung cancer diagnoses, including those with and without bone metastasis, were included. The model was designed to process high-resolution CT images and digitized histopathological images, with the features extracted independently by two Swin Transformer networks. These features were then fused using decision-level fusion techniques to improve classification accuracy. The Swin-Dual Fusion Model achieved superior performance compared to single-modality models and conventional architectures such as ResNet50, with an AUC of 0.966 on the test data and 0.967 on the training data. This integrated model demonstrated high accuracy, sensitivity, and specificity, making it a promising tool for clinical application in predicting bone metastasis risk. The study emphasizes the potential of transformer-based models to revolutionize bone oncology through advanced multimodal analysis and early prediction of metastasis, ultimately improving patient care and treatment outcomes.
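A minimal sketch of decision-level fusion of two Swin Transformer branches (one per modality), here via averaged class probabilities using timm; the backbone variant and fusion rule are assumptions, not the paper's specification:

```python
import torch
import timm

# One Swin branch per modality (variant and class count assumed).
ct_branch = timm.create_model("swin_tiny_patch4_window7_224", num_classes=2).eval()
path_branch = timm.create_model("swin_tiny_patch4_window7_224", num_classes=2).eval()

def predict_bone_metastasis(ct_img: torch.Tensor, path_img: torch.Tensor) -> torch.Tensor:
    """Decision-level fusion: each branch votes with its softmax probabilities."""
    with torch.no_grad():
        p_ct = ct_branch(ct_img).softmax(dim=1)
        p_path = path_branch(path_img).softmax(dim=1)
    return (p_ct + p_path) / 2                # averaged class probabilities

probs = predict_bone_metastasis(torch.rand(1, 3, 224, 224),
                                torch.rand(1, 3, 224, 224))
```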

Accuracy of an Automated Bone Scan Index Measurement System Enhanced by Deep Learning of the Female Skeletal Structure in Patients with Breast Cancer.

Fukai S, Daisaki H, Yamashita K, Kuromori I, Motegi K, Umeda T, Shimada N, Takatsu K, Terauchi T, Koizumi M

PubMed · Jun 1 2025
VSBONE® BSI (VSBONE), an automated bone scan index (BSI) measurement system, was updated from version 2.1 (ver.2) to 3.0 (ver.3). VSBONE ver.3 incorporates deep learning of the skeletal structures of 957 new women, and it can be applied in patients with breast cancer. However, the performance of the updated VSBONE remains unclear. This study aimed to validate the diagnostic accuracy of the VSBONE system in patients with breast cancer. In total, 220 Japanese patients with breast cancer who underwent bone scintigraphy with single-photon emission computed tomography/computed tomography (SPECT/CT) were retrospectively analyzed. The patients were diagnosed with active bone metastases (n = 20) or non-bone metastases (n = 200) according to the physician's radiographic image interpretation. The patients were assessed using VSBONE ver.2 and VSBONE ver.3, and the BSI findings were compared with the physicians' interpretation results. The occurrence of segmentation errors, the correlation of BSI between VSBONE ver.2 and VSBONE ver.3, and the diagnostic accuracy of the systems were evaluated. VSBONE ver.2 and VSBONE ver.3 had segmentation errors in four and two patients, respectively. A significant positive linear correlation between the two versions of the BSI was confirmed (r = 0.92). The diagnostic accuracy was 54.1% with VSBONE ver.2 and 80.5% with VSBONE ver.3 (P < 0.001). The diagnostic accuracy of VSBONE was improved through deep learning of female skeletal structures. The updated VSBONE ver.3 can be a reliable automated system for measuring BSI in patients with breast cancer.
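A minimal sketch of the version comparison reported above (Pearson correlation between the two versions' BSI values and diagnostic accuracy against the physicians' read); the arrays and the decision threshold are synthetic placeholders:

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
bsi_v2 = rng.random(220)                      # BSI per patient, version 2
bsi_v3 = bsi_v2 + rng.normal(0, 0.05, 220)    # version 3, correlated placeholder

r, _ = pearsonr(bsi_v2, bsi_v3)               # study reported r = 0.92

truth = rng.integers(0, 2, 220)               # physician read: metastasis yes/no
pred_v3 = (bsi_v3 > 0.5).astype(int)          # threshold assumed for illustration
print(f"r = {r:.2f}, accuracy = {accuracy_score(truth, pred_v3):.3f}")
```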

Prediction of Lymph Node Metastasis in Lung Cancer Using Deep Learning of Endobronchial Ultrasound Images With Size on CT and PET-CT Findings.

Oh JE, Chung HS, Gwon HR, Park EY, Kim HY, Lee GK, Kim TS, Hwangbo B

PubMed · Jun 1 2025
Echo features of lymph nodes (LNs) influence target selection during endobronchial ultrasound-guided transbronchial needle aspiration (EBUS-TBNA). This study evaluates deep learning's diagnostic capabilities on EBUS images for detecting mediastinal LN metastasis in lung cancer, emphasising the added value of integrating a region of interest (ROI), LN size on CT, and PET-CT findings. We analysed 2901 EBUS images from 2055 mediastinal LN stations in 1454 lung cancer patients. ResNet18-based deep learning models were developed to classify images of true positive malignant and true negative benign LNs diagnosed by EBUS-TBNA using different inputs: original images, ROI images, and CT size and PET-CT data. Model performance was assessed using the area under the receiver operating characteristic curve (AUROC) and other diagnostic metrics. The model using only original EBUS images showed the lowest AUROC (0.870) and accuracy (80.7%) in classifying LN images. Adding ROI information slightly increased the AUROC (0.896) without a significant difference (p = 0.110). Further adding CT size resulted in a minimal change in AUROC (0.897), while adding PET-CT (original + ROI + PET-CT) showed a significant improvement (0.912, p = 0.008 vs. original; p = 0.002 vs. original + ROI + CT size). The model combining original and ROI EBUS images with CT size and PET-CT findings achieved the highest AUROC (0.914, p = 0.005 vs. original; p = 0.018 vs. original + ROI + PET-CT) and accuracy (82.3%). Integrating an ROI, LN size on CT, and PET-CT findings into the deep learning analysis of EBUS images significantly enhances the diagnostic capability of models for detecting mediastinal LN metastasis in lung cancer, with the integration of PET-CT data having a substantial impact.
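A minimal sketch of the input combination described (EBUS image features from a ResNet18 backbone concatenated with LN size on CT and PET-CT findings); the layer sizes and the encoding of the clinical variables are assumptions:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class EbusFusionNet(nn.Module):
    """ResNet18 image branch fused with tabular features (LN size on CT in mm,
    PET-CT positivity as 0/1) for benign-vs-malignant LN classification."""
    def __init__(self):
        super().__init__()
        backbone = resnet18(weights=None)
        backbone.fc = nn.Identity()           # expose the 512-d image embedding
        self.backbone = backbone
        self.head = nn.Sequential(
            nn.Linear(512 + 2, 64), nn.ReLU(), nn.Linear(64, 2))

    def forward(self, image, ct_size_mm, pet_positive):
        feat = self.backbone(image)                           # (B, 512)
        tab = torch.stack([ct_size_mm, pet_positive], dim=1)  # (B, 2)
        return self.head(torch.cat([feat, tab], dim=1))

logits = EbusFusionNet()(torch.rand(2, 3, 224, 224),
                         torch.tensor([8.0, 14.0]), torch.tensor([0.0, 1.0]))
```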

Deep Learning in Knee MRI: A Prospective Study to Enhance Efficiency, Diagnostic Confidence and Sustainability.

Reschke P, Gotta J, Gruenewald LD, Bachir AA, Strecker R, Nickel D, Booz C, Martin SS, Scholtz JE, D'Angelo T, Dahm D, Solim LA, Konrad P, Mahmoudi S, Bernatz S, Al-Saleh S, Hong QAL, Sommer CM, Eichler K, Vogl TJ, Haberkorn SM, Koch V

PubMed · Jun 1 2025
The objective of this study was to evaluate a combination of deep learning (DL)-reconstructed parallel acquisition technique (PAT) and simultaneous multislice (SMS) acceleration imaging in comparison to conventional knee imaging. Adults undergoing knee magnetic resonance imaging (MRI) with DL-enhanced acquisitions were prospectively analyzed from December 2023 to April 2024. The participants received T1-weighted sequences without fat saturation and fat-suppressed PD-weighted TSE pulse sequences using conventional two-fold PAT (P2) and either DL-enhanced four-fold PAT (P4) or a combination of DL-enhanced four-fold PAT with two-fold SMS acceleration (P4S2). Three independent readers assessed image quality, signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and radiomics features. A total of 34 participants (mean age 45 ± 17 years; 14 women) underwent P4S2, P4, and P2 imaging. Both P4S2 and P4 demonstrated higher CNR and SNR values than P2 (P<.001). P4 was diagnostically inferior to P2 only in the visualization of cartilage damage (P<.005), while P4S2 consistently outperformed P2 in anatomical delineation across all evaluated structures and raters (P<.05). Radiomics analysis revealed significant differences in contrast and gray-level characteristics among P2, P4, and P4S2 (P<.05). P4 reduced acquisition time by 31% and P4S2 by 41% compared to P2 (P<.05). P4S2 DL acceleration offers significant advancements over P4 and P2 in knee MRI, combining superior image quality and improved anatomical delineation with a substantial time reduction. Its gains in anatomical delineation, together with reduced energy consumption and workforce demands, make P4S2 a significant step forward.
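SNR and CNR as assessed above are commonly computed from region-of-interest statistics; a minimal sketch under the usual definitions (signal mean over background noise standard deviation), with placeholder ROI values:

```python
import numpy as np

def snr(signal_roi: np.ndarray, noise_roi: np.ndarray) -> float:
    """Signal-to-noise ratio: mean signal over background standard deviation."""
    return signal_roi.mean() / noise_roi.std()

def cnr(roi_a: np.ndarray, roi_b: np.ndarray, noise_roi: np.ndarray) -> float:
    """Contrast-to-noise ratio between two tissues, e.g., cartilage vs. fluid."""
    return abs(roi_a.mean() - roi_b.mean()) / noise_roi.std()

rng = np.random.default_rng(0)
cartilage = rng.normal(300, 20, (32, 32))     # placeholder ROI intensities
fluid = rng.normal(500, 25, (32, 32))
background = rng.normal(0, 10, (32, 32))      # air/background noise ROI
print(f"SNR = {snr(cartilage, background):.1f}, "
      f"CNR = {cnr(cartilage, fluid, background):.1f}")
```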