Page 77 of 91901 results

Pediatric chest X-ray diagnosis using neuromorphic models.

Bokhari SM, Sohaib S, Shafi M

PubMed · Jun 1 2025
This research presents an innovative neuromorphic method utilizing Spiking Neural Networks (SNNs) to analyze pediatric chest X-rays (PediCXR) to identify prevalent thoracic illnesses. We incorporate spiking-based machine learning models such as Spiking Convolutional Neural Networks (SCNN), Spiking Residual Networks (S-ResNet), and Hierarchical Spiking Neural Networks (HSNN) for pediatric chest radiographic analysis utilizing the publicly available benchmark PediCXR dataset. These models employ spatiotemporal feature extraction, residual connections, and event-driven processing to improve diagnostic precision. The HSNN model surpasses benchmark approaches from the literature, achieving a classification accuracy of 96% across six thoracic illness categories, an F1-score of 0.95, and a specificity of 1.0 in pneumonia detection. Our research demonstrates that neuromorphic computing is a feasible and biologically inspired approach to real-time medical imaging diagnostics, significantly improving performance.
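The abstract gives no implementation details, but the event-driven processing it refers to is built from spiking units such as the leaky integrate-and-fire (LIF) neuron. A minimal sketch of one such unit (the time constant, threshold, and reset values here are illustrative assumptions, not parameters from the paper):

```python
def lif_simulate(inputs, tau=10.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """Simulate a leaky integrate-and-fire neuron over a list of input currents.

    Returns a list of 0/1 spikes, one per time step. The membrane
    potential leaks toward the reset value with time constant `tau`,
    integrates the input, and fires when it crosses `v_thresh`.
    """
    v = v_reset
    spikes = []
    for i in inputs:
        # Euler update: leak toward reset plus injected current
        v += (-(v - v_reset) / tau + i) * dt
        if v >= v_thresh:
            spikes.append(1)
            v = v_reset  # hard reset after a spike
        else:
            spikes.append(0)
    return spikes
```

Networks such as the SCNN or HSNN stack layers of units like this, so information is carried by the timing and count of spikes rather than by continuous activations.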

Deep learning for multiple sclerosis lesion classification and stratification using MRI.

Umirzakova S, Shakhnoza M, Sevara M, Whangbo TK

PubMed · Jun 1 2025
Multiple sclerosis (MS) is a chronic neurological disease characterized by inflammation, demyelination, and neurodegeneration within the central nervous system. Conventional magnetic resonance imaging (MRI) techniques often struggle to detect small or subtle lesions, particularly in challenging regions such as the cortical gray matter and brainstem. This study introduces a novel deep learning-based approach, combined with a robust preprocessing pipeline and optimized MRI protocols, to improve the precision of MS lesion classification and stratification. We designed a convolutional neural network (CNN) architecture specifically tailored for high-resolution T2-weighted imaging (T2WI), augmented by deep learning-based reconstruction (DLR) techniques. The model incorporates dual attention mechanisms, including spatial and channel attention modules, to enhance feature extraction. A comprehensive preprocessing pipeline was employed, featuring bias field correction, skull stripping, image registration, and intensity normalization. The proposed framework was trained and validated on four publicly available datasets and evaluated using precision, sensitivity, specificity, and area under the curve (AUC) metrics. The model demonstrated exceptional performance, achieving a precision of 96.27 %, sensitivity of 95.54 %, specificity of 94.70 %, and an AUC of 0.975. It outperformed existing state-of-the-art methods, particularly in detecting lesions in underdiagnosed regions such as the cortical gray matter and brainstem. The integration of advanced attention mechanisms enabled the model to focus on critical MRI features, leading to significant improvements in lesion classification and stratification. This study presents a novel and scalable approach for MS lesion detection and classification, offering a practical solution for clinical applications. 
By integrating advanced deep learning techniques with optimized MRI protocols, the proposed framework achieves superior diagnostic accuracy and generalizability, paving the way for enhanced patient care and more personalized treatment strategies. This work sets a new benchmark for MS diagnosis and management in both research and clinical practice.
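The dual attention mechanism described above combines spatial and channel attention modules. As an illustration of the channel half, here is a heavily simplified squeeze-and-excitation-style gate in plain Python; the learned two-layer MLP of a real SE block is replaced by a direct sigmoid on the pooled value, an assumption made purely for brevity:

```python
import math

def channel_attention(feature_maps):
    """Toy channel-attention gate over a list of 2-D feature maps
    (one nested list of floats per channel).

    Squeeze: global average pool each channel to a scalar.
    Excite:  map that scalar through a sigmoid to a weight in (0, 1).
    Scale:   multiply every value in the channel by its weight.
    """
    def gap(fm):  # global average pooling over one channel
        vals = [v for row in fm for v in row]
        return sum(vals) / len(vals)

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    weights = [sigmoid(gap(fm)) for fm in feature_maps]
    return [[[v * w for v in row] for row in fm]
            for fm, w in zip(feature_maps, weights)]
```

The effect is that channels whose pooled response is strong are passed through nearly unchanged while weak channels are suppressed, which is the intuition behind letting the model "focus on critical MRI features".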

Performance of GPT-4 Turbo and GPT-4o in Korean Society of Radiology In-Training Examinations.

Choi A, Kim HG, Choi MH, Ramasamy SK, Kim Y, Jung SE

PubMed · Jun 1 2025
Despite the potential of large language models for radiology training, their ability to handle image-based radiological questions remains poorly understood. This study aimed to evaluate the performance of GPT-4 Turbo and GPT-4o in radiology resident examinations, to analyze differences across question types, and to compare their results with those of residents at different levels. A total of 776 multiple-choice questions from the Korean Society of Radiology In-Training Examinations were used, forming two question sets: one originally written in Korean and the other translated into English. We evaluated the performance of GPT-4 Turbo (gpt-4-turbo-2024-04-09) and GPT-4o (gpt-4o-2024-11-20) on these questions with the temperature set to zero, determining the accuracy based on the majority vote from five independent trials. We analyzed their results using the question type (text-only vs. image-based) and benchmarked them against nationwide radiology residents' performance. The impact of the input language (Korean or English) on model performance was examined. GPT-4o outperformed GPT-4 Turbo for both image-based (48.2% vs. 41.8%, <i>P</i> = 0.002) and text-only questions (77.9% vs. 69.0%, <i>P</i> = 0.031). On image-based questions, GPT-4 Turbo and GPT-4o showed comparable performance to that of 1st-year residents (41.8% and 48.2%, respectively, vs. 43.3%, <i>P</i> = 0.608 and 0.079, respectively) but lower performance than that of 2nd- to 4th-year residents (vs. 56.0%-63.9%, all <i>P</i> ≤ 0.005). For text-only questions, GPT-4 Turbo and GPT-4o performed better than residents across all years (69.0% and 77.9%, respectively, vs. 44.7%-57.5%, all <i>P</i> ≤ 0.039). Performance on the English- and Korean-version questions showed no significant differences for either model (all <i>P</i> ≥ 0.275). GPT-4o outperformed GPT-4 Turbo on all question types.
On image-based questions, both models' performance matched that of 1st-year residents but was lower than that of higher-year residents. Both models demonstrated superior performance compared to residents for text-only questions. The models showed consistent performances across English and Korean inputs.
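The evaluation protocol above determines each model's answer by majority vote over five independent trials at temperature zero. A minimal sketch of that aggregation and the resulting accuracy; the tie-breaking rule (first-seen answer wins) is an assumption, as the paper does not state one:

```python
from collections import Counter

def majority_vote(trial_answers):
    """Aggregate answer choices (e.g. 'A'..'E') from repeated trials of
    one question; the most frequent choice wins, ties broken by the
    order of first appearance."""
    counts = Counter(trial_answers)
    best = max(counts.values())
    for ans in trial_answers:  # first-seen tie-break
        if counts[ans] == best:
            return ans

def accuracy(predicted, gold):
    """Fraction of questions whose voted answer matches the key."""
    correct = sum(p == g for p, g in zip(predicted, gold))
    return correct / len(gold)
```

Majority voting over repeated runs smooths out residual nondeterminism in model outputs, which is why it is a common evaluation choice even at temperature zero.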

Prediction of mammographic breast density based on clinical breast ultrasound images using deep learning: a retrospective analysis.

Bunnell A, Valdez D, Wolfgruber TK, Quon B, Hung K, Hernandez BY, Seto TB, Killeen J, Miyoshi M, Sadowski P, Shepherd JA

PubMed · Jun 1 2025
Breast density, as derived from mammographic images and defined by the Breast Imaging Reporting & Data System (BI-RADS), is one of the strongest risk factors for breast cancer. Breast ultrasound is an alternative breast cancer screening modality, particularly useful in low-resource, rural contexts. To date, breast ultrasound has not been used to inform risk models that need breast density. The purpose of this study is to explore the use of artificial intelligence (AI) to predict BI-RADS breast density category from clinical breast ultrasound imaging. We compared deep learning methods for predicting breast density directly from breast ultrasound imaging, as well as machine learning models from breast ultrasound image gray-level histograms alone. The use of AI-derived breast ultrasound breast density as a breast cancer risk factor was compared to clinical BI-RADS breast density. Retrospective (2009-2022) breast ultrasound data were split by individual into 70/20/10% groups for training, validation, and held-out testing for reporting results. 405,120 clinical breast ultrasound images from 14,066 women (mean age 53 years, range 18-99 years) with clinical breast ultrasound exams were retrospectively selected for inclusion from three institutions: 10,393 training (302,574 images), 2593 validation (69,842), and 1074 testing (28,616). The AI model achieves AUROC 0.854 in breast density classification and statistically significantly outperforms all image statistic-based methods. In an existing clinical 5-year breast cancer risk model, breast ultrasound AI and clinical breast density predict 5-year breast cancer risk with 0.606 and 0.599 AUROC (DeLong's test p-value: 0.67), respectively. BI-RADS breast density can be estimated from breast ultrasound imaging with high accuracy. The AI model provided superior estimates to other machine learning approaches. 
Furthermore, we demonstrate that age-adjusted, AI-derived breast ultrasound breast density provides similar predictive power to mammographic breast density in our population. Estimated breast density from ultrasound may be useful in performing breast cancer risk assessment in areas where mammography may not be available. National Cancer Institute.
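The image statistic-based baselines above operate on gray-level histograms of the ultrasound images rather than on raw pixels. A sketch of the feature extraction such a baseline might start from (the bin count and 8-bit intensity range are illustrative assumptions):

```python
def gray_level_histogram(pixels, n_bins=16, max_val=255):
    """Normalized gray-level histogram of a flat list of integer pixel
    intensities in [0, max_val]; returns n_bins fractions summing to 1."""
    hist = [0] * n_bins
    for p in pixels:
        # Map intensity to a bin; clamp guards the max_val edge case.
        idx = min(p * n_bins // (max_val + 1), n_bins - 1)
        hist[idx] += 1
    total = len(pixels)
    return [h / total for h in hist]
```

A classical model (e.g. gradient boosting) fit on such histogram vectors is the kind of comparator the deep learning model is reported to outperform.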

Advancing Intracranial Aneurysm Detection: A Comprehensive Systematic Review and Meta-analysis of Deep Learning Models Performance, Clinical Integration, and Future Directions.

Delfan N, Abbasi F, Emamzadeh N, Bahri A, Parvaresh Rizi M, Motamedi A, Moshiri B, Iranmehr A

PubMed · Jun 1 2025
Cerebral aneurysms pose a significant risk to patient safety, particularly when ruptured, emphasizing the need for early detection and accurate prediction. Traditional diagnostic methods, reliant on clinician-based evaluations, face challenges in sensitivity and consistency, prompting the exploration of deep learning (DL) systems for improved performance. This systematic review and meta-analysis assessed the performance of DL models in detecting and predicting intracranial aneurysms compared to clinician-based evaluations. Imaging modalities included CT angiography (CTA), digital subtraction angiography (DSA), and time-of-flight MR angiography (TOF-MRA). Data on lesion-wise sensitivity, specificity, and the impact of DL assistance on clinician performance were analyzed. Subgroup analyses evaluated DL sensitivity by aneurysm size and location, and interrater agreement was measured using Fleiss' κ. DL systems achieved an overall lesion-wise sensitivity of 90 % and specificity of 94 %, outperforming human diagnostics. Clinician specificity improved significantly with DL assistance, increasing from 83 % to 85 % in the patient-wise scenario and from 93 % to 95 % in the lesion-wise scenario. Similarly, clinician sensitivity also showed notable improvement with DL assistance, rising from 82 % to 96 % in the patient-wise scenario and from 82 % to 88 % in the lesion-wise scenario. Subgroup analysis showed DL sensitivity varied with aneurysm size and location, reaching 100 % for aneurysms larger than 10 mm. Additionally, DL assistance improved interrater agreement among clinicians, with Fleiss' κ increasing from 0.668 to 0.862. DL models demonstrate transformative potential in managing cerebral aneurysms by enhancing diagnostic accuracy, reducing missed cases, and supporting clinical decision-making. However, further validation in diverse clinical settings and seamless integration into standard workflows are necessary to fully realize the benefits of DL-driven diagnostics.
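Interrater agreement above is summarized with Fleiss' κ, which corrects raw agreement for the agreement expected by chance. A compact implementation of the standard formula (the input layout, one row of per-category rating counts per subject, is an assumed convention):

```python
def fleiss_kappa(ratings):
    """Fleiss' kappa for inter-rater agreement.

    `ratings` is a list of subjects; each subject is a list of counts,
    one per category, summing to the number of raters n.
    """
    N = len(ratings)       # number of subjects
    n = sum(ratings[0])    # raters per subject

    # Mean per-subject agreement P_bar
    P_bar = sum(
        (sum(c * c for c in row) - n) / (n * (n - 1)) for row in ratings
    ) / N

    # Chance agreement P_e from category marginals
    k = len(ratings[0])
    p_j = [sum(row[j] for row in ratings) / (N * n) for j in range(k)]
    P_e = sum(p * p for p in p_j)

    return (P_bar - P_e) / (1 - P_e)
```

On this scale the reported improvement from 0.668 to 0.862 with DL assistance corresponds to moving from "substantial" to "almost perfect" agreement in the usual Landis-Koch interpretation.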

Predicting lung cancer bone metastasis using CT and pathological imaging with a Swin Transformer model.

Li W, Zou X, Zhang J, Hu M, Chen G, Su S

PubMed · Jun 1 2025
Bone metastasis is a common and serious complication in lung cancer patients, leading to severe pain, pathological fractures, and reduced quality of life. Early prediction of bone metastasis can enable timely interventions and improve patient outcomes. In this study, we developed a multimodal Swin Transformer-based deep learning model for predicting bone metastasis risk in lung cancer patients by integrating CT imaging and pathological data. A total of 215 patients with confirmed lung cancer diagnoses, including those with and without bone metastasis, were included. The model was designed to process high-resolution CT images and digitized histopathological images, with the features extracted independently by two Swin Transformer networks. These features were then fused using decision-level fusion techniques to improve classification accuracy. The Swin-Dual Fusion Model achieved superior performance compared to single-modality models and conventional architectures such as ResNet50, with an AUC of 0.966 on the test data and 0.967 on the training data. This integrated model demonstrated high accuracy, sensitivity, and specificity, making it a promising tool for clinical application in predicting bone metastasis risk. The study emphasizes the potential of transformer-based models to revolutionize bone oncology through advanced multimodal analysis and early prediction of metastasis, ultimately improving patient care and treatment outcomes.
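Decision-level fusion, as used above, combines the outputs of the two unimodal Swin Transformer branches rather than their intermediate features. One common form is a weighted average of per-class probabilities; the paper does not spell out its exact rule, so the equal weighting below is an assumption:

```python
def decision_level_fusion(prob_ct, prob_path, w_ct=0.5):
    """Fuse per-class probability vectors from a CT branch and a
    pathology branch by weighted averaging, then pick the argmax class.

    Returns (fused_probabilities, predicted_class_index).
    """
    w_path = 1.0 - w_ct
    fused = [w_ct * a + w_path * b for a, b in zip(prob_ct, prob_path)]
    pred = max(range(len(fused)), key=fused.__getitem__)
    return fused, pred
```

Fusing at the decision level keeps each branch independently trainable on its own modality, which is convenient when CT and histopathology inputs have very different resolutions and preprocessing.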

Prediction of Lymph Node Metastasis in Lung Cancer Using Deep Learning of Endobronchial Ultrasound Images With Size on CT and PET-CT Findings.

Oh JE, Chung HS, Gwon HR, Park EY, Kim HY, Lee GK, Kim TS, Hwangbo B

PubMed · Jun 1 2025
Echo features of lymph nodes (LNs) influence target selection during endobronchial ultrasound-guided transbronchial needle aspiration (EBUS-TBNA). This study evaluates deep learning's diagnostic capabilities on EBUS images for detecting mediastinal LN metastasis in lung cancer, emphasising the added value of integrating a region of interest (ROI), LN size on CT, and PET-CT findings. We analysed 2901 EBUS images from 2055 mediastinal LN stations in 1454 lung cancer patients. ResNet18-based deep learning models were developed to classify images of true positive malignant and true negative benign LNs diagnosed by EBUS-TBNA using different inputs: original images, ROI images, and CT size and PET-CT data. Model performance was assessed using the area under the receiver operating characteristic curve (AUROC) and other diagnostic metrics. The model using only original EBUS images showed the lowest AUROC (0.870) and accuracy (80.7%) in classifying LN images. Adding ROI information slightly increased the AUROC (0.896) without a significant difference (p = 0.110). Further adding CT size resulted in a minimal change in AUROC (0.897), while adding PET-CT (original + ROI + PET-CT) showed a significant improvement (0.912, p = 0.008 vs. original; p = 0.002 vs. original + ROI + CT size). The model combining original and ROI EBUS images with CT size and PET-CT findings achieved the highest AUROC (0.914, p = 0.005 vs. original; p = 0.018 vs. original + ROI + PET-CT) and accuracy (82.3%). Integrating an ROI, LN size on CT, and PET-CT findings into the deep learning analysis of EBUS images significantly enhances the diagnostic capability of models for detecting mediastinal LN metastasis in lung cancer, with the integration of PET-CT data having a substantial impact.
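AUROC, the primary metric above, equals the probability that a randomly chosen malignant LN receives a higher model score than a randomly chosen benign one, with ties counted as one half. A direct (O(n²)) implementation of that rank-based definition, offered as a sketch rather than the authors' evaluation code:

```python
def auroc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U formulation:
    fraction of (positive, negative) pairs ranked correctly."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5  # ties count as half a correct ranking
    return wins / (len(pos) * len(neg))
```

Reading the paper's numbers through this lens, an AUROC of 0.914 means the fully combined model ranks a metastatic node above a benign one about 91% of the time.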

Deep Learning in Knee MRI: A Prospective Study to Enhance Efficiency, Diagnostic Confidence and Sustainability.

Reschke P, Gotta J, Gruenewald LD, Bachir AA, Strecker R, Nickel D, Booz C, Martin SS, Scholtz JE, D'Angelo T, Dahm D, Solim LA, Konrad P, Mahmoudi S, Bernatz S, Al-Saleh S, Hong QAL, Sommer CM, Eichler K, Vogl TJ, Haberkorn SM, Koch V

PubMed · Jun 1 2025
The objective of this study was to evaluate a combination of deep learning (DL)-reconstructed parallel acquisition technique (PAT) and simultaneous multislice (SMS) acceleration imaging in comparison to conventional knee imaging. Adults undergoing knee magnetic resonance imaging (MRI) with DL-enhanced acquisitions were prospectively analyzed from December 2023 to April 2024. The participants received T1-weighted (without fat saturation) and fat-suppressed PD-weighted TSE pulse sequences using conventional two-fold PAT (P2) and either DL-enhanced four-fold PAT (P4) or a combination of DL-enhanced four-fold PAT with two-fold SMS acceleration (P4S2). Three independent readers assessed image quality, signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and radiomics features. 34 participants (mean age 45±17 years; 14 women) were included who underwent P4S2, P4, and P2 imaging. Both P4S2 and P4 demonstrated higher CNR and SNR values compared to P2 (P<.001). P4 was diagnostically inferior to P2 only in the visualization of cartilage damage (P<.005), while P4S2 consistently outperformed P2 in anatomical delineation across all evaluated structures and raters (P<.05). Radiomics analysis revealed significant differences in contrast and gray-level characteristics among P2, P4, and P4S2 (P<.05). P4 reduced acquisition time by 31% and P4S2 by 41% compared to P2 (P<.05). P4S2 DL acceleration offers significant advancements over P4 and P2 in knee MRI, combining superior image quality and improved anatomical delineation at a significant time reduction. Its improvements in anatomical delineation, together with reduced energy consumption and workforce demands, make P4S2 a significant step forward.
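The SNR and CNR assessments above follow the standard region-of-interest (ROI) definitions. A sketch of those two quantities; using the standard deviation of a background ROI as the noise estimate is the conventional choice, not a detail stated in the abstract:

```python
import statistics

def snr(signal_roi, noise_roi):
    """Signal-to-noise ratio: mean signal intensity in a tissue ROI
    divided by the standard deviation of a background (noise) ROI."""
    return statistics.mean(signal_roi) / statistics.stdev(noise_roi)

def cnr(roi_a, roi_b, noise_roi):
    """Contrast-to-noise ratio: absolute difference of two tissue ROI
    means divided by the background noise standard deviation."""
    contrast = abs(statistics.mean(roi_a) - statistics.mean(roi_b))
    return contrast / statistics.stdev(noise_roi)
```

With these definitions, higher SNR/CNR for P4S2 and P4 versus P2 means the accelerated, DL-reconstructed images preserve signal while suppressing noise despite the shorter acquisition.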

Accelerated proton resonance frequency-based magnetic resonance thermometry by optimized deep learning method.

Xu S, Zong S, Mei CS, Shen G, Zhao Y, Wang H

PubMed · May 31 2025
Proton resonance frequency (PRF)-based magnetic resonance (MR) thermometry plays a critical role in thermal ablation therapies through focused ultrasound (FUS). For clinical applications, accurate and rapid temperature feedback is essential to ensure both the safety and effectiveness of these treatments. This work aims to improve temporal resolution in dynamic MR temperature map reconstructions using an enhanced deep-learning method, thereby supporting the real-time monitoring required for effective FUS treatments. Five classical neural network architectures, namely cascade net, complex-valued U-Net, shifted-window (Swin) transformer for MRI, real-valued U-Net, and U-Net with residual blocks, along with training-optimized methods, were applied to reconstruct temperature maps from 2-fold and 4-fold undersampled k-space data. The training enhancements included pre-training/training-phase data augmentations, knowledge distillation, and a novel amplitude-phase decoupling loss function. Phantom and ex vivo tissue heating experiments were conducted using a FUS transducer. Ground truth consisted of the fully sampled complex MR images with accurate temperature changes, and the datasets were manually undersampled to simulate acceleration. Separate testing datasets were used to evaluate real-time performance and temperature accuracy. Furthermore, our proposed deep learning-based rapid reconstruction approach was validated on a clinical dataset obtained from patients with uterine fibroids, demonstrating its clinical applicability. Acceleration factors of 1.9 and 3.7 were achieved for the 2× and 4× k-space undersamplings, respectively. The deep learning-based reconstruction using ResUNet, incorporating the four optimizations, showed superior performance. For 2-fold acceleration, the RMSEs of temperature map patches were 0.89°C and 1.15°C for the phantom and ex vivo testing datasets, respectively.
The DICE coefficient for the 43°C isotherm-enclosed regions was 0.81, and the Bland-Altman analysis indicated a bias of -0.25°C with limits of agreement of ±2.16°C. In the 4-fold undersampling case, these evaluation metrics showed approximately a 10% reduction in accuracy. Additionally, the DICE coefficients measuring the overlap between the reconstructed temperature maps (using the optimized ResUNet) and the ground truth, specifically in regions where the temperature exceeded the 43°C threshold, were 0.77 and 0.74 for the 2× and 4× undersampling scenarios, respectively. This study demonstrates that deep learning-based reconstruction significantly enhances the accuracy and efficiency of MR thermometry, particularly in the context of FUS-based clinical treatments for uterine fibroids. This approach could also be extended to other applications such as essential tremor and prostate cancer treatments, where MRI-guided FUS plays a critical role.
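The DICE coefficient and Bland-Altman statistics reported above are standard and can be computed as follows; representing the isotherm regions as flat binary masks and the temperatures as paired lists is an assumed input format:

```python
import statistics

def dice_coefficient(mask_a, mask_b):
    """Dice overlap of two binary masks (flat 0/1 lists), as applied
    here to the regions enclosed by the 43°C isotherm."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    return 2.0 * inter / (sum(mask_a) + sum(mask_b))

def bland_altman(measured, reference):
    """Bland-Altman bias and 95% limits of agreement between paired
    temperature measurements. Returns (bias, (lower, upper))."""
    diffs = [m - r for m, r in zip(measured, reference)]
    bias = statistics.mean(diffs)
    half_width = 1.96 * statistics.stdev(diffs)
    return bias, (bias - half_width, bias + half_width)
```

The reported bias of -0.25°C with limits of ±2.16°C says the accelerated reconstruction slightly underestimates temperature on average, with 95% of individual differences expected inside roughly a ±2°C band around that bias.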

Development and validation of a 3-D deep learning system for diabetic macular oedema classification on optical coherence tomography images.

Zhu H, Ji J, Lin JW, Wang J, Zheng Y, Xie P, Liu C, Ng TK, Huang J, Xiong Y, Wu H, Lin L, Zhang M, Zhang G

PubMed · May 31 2025
To develop and validate an automated diabetic macular oedema (DME) classification system based on the images from different three-dimensional optical coherence tomography (3-D OCT) devices. A multicentre, platform-based development study using retrospective and cross-sectional data. Data were subjected to a two-level grading system by trained graders and a retina specialist, and categorised into three types: no DME, non-centre-involved DME and centre-involved DME (CI-DME). A 3-D convolutional neural network algorithm was used for DME classification system development. The deep learning (DL) performance was compared with that of diabetic retinopathy experts. Data were collected from the Joint Shantou International Eye Center of Shantou University and the Chinese University of Hong Kong, Chaozhou People's Hospital and The Second Affiliated Hospital of Shantou University Medical College from January 2010 to December 2023. 7790 volumes of 7146 eyes from 4254 patients were annotated, of which 6281 images were used as the development set and 1509 images were used as the external validation set, split based on the centres. Accuracy, F1-score, sensitivity, specificity, area under the receiver operating characteristic curve (AUROC) and Cohen's kappa were calculated to evaluate the performance of the DL algorithm. In classifying DME versus non-DME, our model achieved AUROCs of 0.990 (95% CI 0.983 to 0.996) and 0.916 (95% CI 0.902 to 0.930) for the hold-out testing dataset and external validation dataset, respectively. To distinguish CI-DME from non-centre-involved DME, our model achieved AUROCs of 0.859 (95% CI 0.812 to 0.906) and 0.881 (95% CI 0.859 to 0.902), respectively. In addition, our system showed comparable performance (Cohen's κ: 0.85 and 0.75) to the retina experts (Cohen's κ: 0.58-0.92 and 0.70-0.71). Our DL system achieved high accuracy in multiclassification tasks on DME classification with 3-D OCT images, which can be applied to population-based DME screening.
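Cohen's κ, used above to compare the system with the retina experts, corrects the observed agreement between two raters for the agreement expected by chance from their marginal label frequencies. A compact implementation of the standard formula:

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for agreement between two raters over the same
    items (equal-length lists of category labels)."""
    n = len(rater_a)
    cats = sorted(set(rater_a) | set(rater_b))

    # Observed agreement: fraction of items labelled identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n

    # Chance agreement: product of each rater's marginal frequencies.
    p_e = sum(
        (rater_a.count(c) / n) * (rater_b.count(c) / n) for c in cats
    )
    return (p_o - p_e) / (1 - p_e)
```

Here one "rater" would be the DL system and the other the grading-panel ground truth, so the reported κ of 0.85 and 0.75 places the system inside the experts' 0.58-0.92 and 0.70-0.71 ranges.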