
CheX-DS: Improving Chest X-ray Image Classification with Ensemble Learning Based on DenseNet and Swin Transformer

Xinran Li, Yu Liu, Xiujuan Xu, Xiaowei Zhao

arXiv preprint · May 16, 2025
The automatic diagnosis of chest diseases is a popular and challenging task. Most current methods are based on convolutional neural networks (CNNs), which focus on local features while neglecting global ones. Recently, self-attention mechanisms have been introduced into computer vision, demonstrating superior performance. This paper therefore proposes an effective model, CheX-DS, for classifying long-tail, multi-label chest X-ray data. The model combines DenseNet, a CNN with a strong record in medical imaging, and the Swin Transformer, using ensemble deep learning to exploit the complementary strengths of CNNs and Transformers. The loss function of CheX-DS combines weighted binary cross-entropy loss with asymmetric loss, effectively addressing data imbalance. Evaluated on the NIH ChestX-ray14 dataset, the model outperforms previous studies with an average AUC of 83.76%, demonstrating its superior performance.
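To make the loss design concrete, here is a minimal PyTorch sketch of a DenseNet/Swin logit ensemble trained with a weighted-BCE-plus-asymmetric loss, in the spirit of the abstract. It is not the authors' code: the timm model variant, the 0.5 logit weighting, the per-class pos_weight, the alpha mixing factor, and the simplified asymmetric loss (no probability margin) are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torchvision.models as tvm
import timm

NUM_CLASSES = 14  # NIH ChestX-ray14 label count

# Two backbones, one CNN and one Transformer (variants are assumptions).
densenet = tvm.densenet121(weights=None)
densenet.classifier = nn.Linear(densenet.classifier.in_features, NUM_CLASSES)
swin = timm.create_model("swin_tiny_patch4_window7_224",
                         pretrained=False, num_classes=NUM_CLASSES)

def ensemble_logits(x):
    # Plain logit averaging; the paper's exact fusion rule may differ.
    return 0.5 * (densenet(x) + swin(x))

class AsymmetricLoss(nn.Module):
    # Simplified multi-label asymmetric loss: the negative term is
    # focused more aggressively (gamma_neg > gamma_pos), which suits
    # long-tail label distributions. No probability margin here.
    def __init__(self, gamma_pos=0.0, gamma_neg=4.0, eps=1e-8):
        super().__init__()
        self.gp, self.gn, self.eps = gamma_pos, gamma_neg, eps

    def forward(self, logits, targets):
        p = torch.sigmoid(logits)
        pos = targets * (1 - p) ** self.gp * torch.log(p.clamp(min=self.eps))
        neg = (1 - targets) * p ** self.gn * torch.log((1 - p).clamp(min=self.eps))
        return -(pos + neg).mean()

wbce = nn.BCEWithLogitsLoss(pos_weight=torch.full((NUM_CLASSES,), 3.0))
asl = AsymmetricLoss()

def chex_ds_style_loss(logits, targets, alpha=0.5):
    return alpha * wbce(logits, targets) + (1 - alpha) * asl(logits, targets)
```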

Application of Quantitative CT and Machine Learning in the Evaluation and Diagnosis of Polymyositis/Dermatomyositis-Associated Interstitial Lung Disease.

Yang K, Chen Y, He L, Sheng Y, Hei H, Zhang J, Jin C

PubMed · May 16, 2025
To investigate lung changes in patients with polymyositis/dermatomyositis-associated interstitial lung disease (PM/DM-ILD) using quantitative CT, and to construct a diagnostic model to evaluate the application of quantitative CT and machine learning in diagnosing PM/DM-ILD. Chest CT images from 348 PM/DM patients were quantitatively analyzed to obtain the lung volume (LV), mean lung density (MLD), and intrapulmonary vascular volume (IPVV) of the whole lung and each lung lobe. The percentage of high attenuation area (HAA%) was determined from the lung density histogram. Patients hospitalized from 2016 to 2021 formed the training set (n=258), and those hospitalized from 2022 to 2023 formed the temporal test set (n=90). Seven classification models were established, and their performance was evaluated through ROC analysis, decision curve analysis, calibration, and precision-recall curves. The optimal model was selected and interpreted with the Python SHAP package. Compared to the non-ILD group, the mean lung density and percentage of high attenuation area in the whole lung and each lung lobe were significantly increased in the ILD group, while lung volume and intrapulmonary vascular volume were significantly decreased. The Random Forest (RF) model demonstrated superior performance, with a test-set area under the curve of 0.843 (95% CI: 0.821-0.865), accuracy of 0.778, sensitivity of 0.784, and specificity of 0.750. Quantitative CT serves as an objective and precise method to assess pulmonary changes in PM/DM-ILD patients. The RF model based on quantitative CT parameters displayed strong diagnostic efficiency in identifying ILD, offering a new and convenient approach for evaluating and diagnosing PM/DM-ILD patients.
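As a rough illustration of the modeling stage (not the authors' pipeline), the sketch below trains a random forest on the named quantitative CT parameters and inspects it with SHAP; the file names, column names, and hyperparameters are placeholders.

```python
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

train = pd.read_csv("train_qct.csv")   # hypothetical feature tables
test = pd.read_csv("test_qct.csv")
features = ["LV", "MLD", "IPVV", "HAA_pct"]  # whole-lung parameters;
                                             # lobar variants would be appended

rf = RandomForestClassifier(n_estimators=500, random_state=0)
rf.fit(train[features], train["ILD"])
probs = rf.predict_proba(test[features])[:, 1]
print("temporal test AUC:", roc_auc_score(test["ILD"], probs))

# SHAP ranks which quantitative parameters drive each prediction.
sv = shap.TreeExplainer(rf).shap_values(test[features])
# Depending on the SHAP version, binary classifiers yield a per-class list
# or a stacked array; take the positive-class slice either way.
sv_pos = sv[1] if isinstance(sv, list) else sv[..., 1]
shap.summary_plot(sv_pos, test[features])
```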

Machine learning prediction of pathological complete response to neoadjuvant chemotherapy with peritumoral breast tumor ultrasound radiomics: compare with intratumoral radiomics and clinicopathologic predictors.

Yao J, Zhou W, Jia X, Zhu Y, Chen X, Zhan W, Zhou J

PubMed · May 16, 2025
Noninvasive, accurate, and novel approaches to predicting which patients will achieve pathological complete response (pCR) after neoadjuvant chemotherapy (NAC) could assist treatment strategies. The aim of this study was to explore machine learning (ML)-based peritumoral ultrasound radiomics signatures (PURS) for early prediction of pCR, compared with intratumoral radiomics (IURS) and clinicopathologic factors. We analyzed 358 locally advanced breast cancer patients (250 in the training set and 108 in the test set) who received NAC followed by surgery at our institution. Clinical and pathological data were analyzed using the independent t test and the chi-square test to determine the factors associated with pCR. The PURS and IURS of baseline breast tumors were extracted using 3D Slicer and PyRadiomics. Five ML classifiers were applied to construct radiomics predictive models: linear discriminant analysis (LDA), support vector machine (SVM), random forest (RF), logistic regression (LR), and adaptive boosting (AdaBoost). The performance of the PURS and IURS models and the clinicopathologic predictors was assessed with respect to sensitivity, specificity, accuracy, and area under the curve (AUC). Ninety-seven patients achieved pCR. The clinicopathologic predictors obtained an AUC of 0.759. Among the PURS models, the RF classifier achieved the best efficacy (AUC 0.889), ahead of LR (0.849), AdaBoost (0.823), SVM (0.746), and LDA (0.732). The RF classifier also obtained the highest test-set AUC among the IURS models (0.931), ahead of AdaBoost (0.920), LR (0.875), SVM (0.825), and LDA (0.798). The RF-based PURS yielded higher predictive ability (AUC 0.889; 95% CI 0.814-0.947) than the clinicopathologic factors (AUC 0.759; 95% CI 0.657-0.861; p < 0.05), but lower efficacy than the IURS (AUC 0.931; 95% CI 0.865-0.980; p < 0.05). Peritumoral US radiomics, as a novel potential biomarker, can assist clinical therapy decisions.
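The extraction step can be pictured with PyRadiomics. The sketch below derives a peritumoral ring by dilating the intratumoral mask, which is one common construction; the paper's actual ring definition, the file formats, and the dilation radius are not specified here and are assumed.

```python
import SimpleITK as sitk
from radiomics import featureextractor

def peritumoral_mask(mask, radius_vox=3):
    # Dilate the intratumoral mask, then subtract it, leaving a ring.
    # Assumes the tumor label is 1 (BinaryDilate's default foreground).
    dilated = sitk.BinaryDilate(mask, [radius_vox] * mask.GetDimension())
    return sitk.And(dilated, sitk.Not(mask))

extractor = featureextractor.RadiomicsFeatureExtractor()

image = sitk.ReadImage("breast_us.nrrd")   # hypothetical paths
itum = sitk.ReadImage("tumor_mask.nrrd")

iurs_feats = extractor.execute(image, itum)                    # intratumoral
purs_feats = extractor.execute(image, peritumoral_mask(itum))  # peritumoral

# Per-patient feature dicts like these would then be stacked into a matrix
# and fed to the five classifiers (LDA, SVM, RF, LR, AdaBoost).
```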

Privacy-Protecting Image Classification Within the Web Browser Using Deep Learning Models from Zenodo.

Auer F, Mayer S, Kramer F

PubMed · May 15, 2025
Integrating deep learning into clinical workflows for medical image analysis holds promise for improving diagnostic accuracy. However, strict data privacy regulations and the sensitivity of clinical IT infrastructure limit the deployment of cloud-based solutions. This paper introduces WebIPred, a web-based application that loads deep learning models directly within the client's web browser, protecting patient privacy while maintaining compatibility with clinical IT environments. WebIPred supports pre-trained models published on Zenodo and other repositories, allowing clinicians to apply them to real patient data without extensive technical knowledge. This paper outlines WebIPred's model integration system, prediction workflow, and privacy features. Our results show that WebIPred offers a privacy-protecting and flexible application for image classification that relies solely on client-side processing. WebIPred combines a strong commitment to data privacy and security with a user-friendly interface that makes it easy for clinicians to integrate AI into their workflows.
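As an aside on the model source rather than the app itself: records on Zenodo can be enumerated through its public REST API. The record ID below is a placeholder, and the JSON field names reflect the current Zenodo API as commonly documented; treat both as assumptions.

```python
import requests

# Placeholder record ID; real model records have their own IDs.
rec = requests.get("https://zenodo.org/api/records/1234567", timeout=30).json()
for f in rec.get("files", []):
    # Each entry carries the file name and a direct download link.
    print(f["key"], f["links"]["self"])
```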

CLIF-Net: Intersection-guided Cross-view Fusion Network for Infection Detection from Cranial Ultrasound.

Yu M, Peterson MR, Burgoine K, Harbaugh T, Olupot-Olupot P, Gladstone M, Hagmann C, Cowan FM, Weeks A, Morton SU, Mulondo R, Mbabazi-Kabachelor E, Schiff SJ, Monga V

PubMed · May 15, 2025
This paper addresses the problem of detecting possible serious bacterial infection (pSBI) of infancy, i.e., a clinical presentation consistent with bacterial sepsis in newborn infants, using cranial ultrasound (cUS) images. The captured image set for each patient enables multi-view imagery: coronal and sagittal, with geometric overlap. To exploit this geometric relation, we develop a new learning framework, the intersection-guided Cross-view Local- and Image-level Fusion Network (CLIF-Net). Our technique employs two distinct convolutional neural network branches to extract features from coronal and sagittal images with newly developed multi-level fusion blocks. Specifically, we leverage the spatial position of these images to locate the intersecting region. We then identify and enhance the semantic features from this region across multiple levels using cross-attention modules, facilitating the acquisition of mutually beneficial and more representative features from both views. The final enhanced features from the two views are then integrated and projected through the image-level fusion layer, outputting pSBI and non-pSBI class probabilities. We contend that our method of exploiting multi-view cUS images enables a first-of-its-kind, robust 3D representation tailored for pSBI detection. When evaluated on a dataset of 302 cUS scans from Mbale Regional Referral Hospital in Uganda, CLIF-Net demonstrates substantially enhanced performance, surpassing prevailing state-of-the-art infection detection techniques.
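A stripped-down sketch of the two-branch, cross-attention idea (not CLIF-Net itself): both views are encoded by separate CNNs, and each attends over the other's tokens. CLIF-Net restricts attention to the geometric intersection region and fuses at multiple levels; here a single shared attention layer over all tokens stands in, and the ResNet-18 backbones and 3-channel inputs are assumptions.

```python
import torch
import torch.nn as nn
import torchvision.models as tvm

class TwoViewFusion(nn.Module):
    def __init__(self, dim=512, heads=8, num_classes=2):
        super().__init__()
        # One CNN branch per cUS view (ResNet-18 trunks, classifier removed).
        self.coronal = nn.Sequential(*list(tvm.resnet18(weights=None).children())[:-2])
        self.sagittal = nn.Sequential(*list(tvm.resnet18(weights=None).children())[:-2])
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.head = nn.Linear(2 * dim, num_classes)

    def forward(self, cor, sag):
        # Feature maps (B, 512, H, W) -> token sequences (B, H*W, 512).
        fc = self.coronal(cor).flatten(2).transpose(1, 2)
        fs = self.sagittal(sag).flatten(2).transpose(1, 2)
        # Each view queries the other; CLIF-Net would restrict the keys and
        # values to tokens from the intersecting region only.
        fc2, _ = self.attn(fc, fs, fs)
        fs2, _ = self.attn(fs, fc, fc)
        pooled = torch.cat([fc2.mean(dim=1), fs2.mean(dim=1)], dim=-1)
        return self.head(pooled)  # pSBI vs. non-pSBI logits

logits = TwoViewFusion()(torch.randn(2, 3, 224, 224), torch.randn(2, 3, 224, 224))
```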

Deep Learning-Based Chronic Obstructive Pulmonary Disease Exacerbation Prediction Using Flow-Volume and Volume-Time Curve Imaging: Retrospective Cohort Study.

Jeon ET, Park H, Lee JK, Heo EY, Lee CH, Kim DK, Kim DH, Lee HW

PubMed · May 15, 2025
Chronic obstructive pulmonary disease (COPD) is a common and progressive respiratory condition characterized by persistent airflow limitation and symptoms such as dyspnea, cough, and sputum production. Acute exacerbations of COPD (AE-COPD) are key determinants of disease progression, yet existing predictive models, relying mainly on spirometric measurements such as forced expiratory volume in 1 second, reflect only a fraction of the physiological information embedded in respiratory function tests. Recent advances in artificial intelligence (AI) have enabled more sophisticated analyses of full spirometric curves, including flow-volume loops and volume-time curves, facilitating the identification of complex patterns associated with increased exacerbation risk. This study aimed to determine whether a predictive model that integrates clinical data and spirometry images using AI improves accuracy in predicting moderate-to-severe and severe AE-COPD events compared to a clinical-only model. A retrospective cohort study was conducted using COPD registry data from 2 teaching hospitals from January 2004 to December 2020. The study included a total of 10,492 COPD cases, divided into a development cohort (6870 cases) and an external validation cohort (3622 cases). The AI-enhanced model (AI-PFT-Clin) used a combination of clinical variables (eg, history of AE-COPD, dyspnea, and inhaled treatments) and spirometry image data (flow-volume loops and volume-time curves); the Clin model used only the clinical variables. The primary outcomes were moderate-to-severe and severe AE-COPD events within a year of spirometry. In the external validation cohort, the AI-PFT-Clin model outperformed the Clin model, with an area under the receiver operating characteristic curve of 0.755 versus 0.730 (P<.05) for moderate-to-severe AE-COPD and 0.713 versus 0.675 (P<.05) for severe AE-COPD. The AI-PFT-Clin model demonstrated reliable predictive capability across subgroups, including younger patients and those without previous exacerbations. Higher AI-PFT-Clin scores correlated with elevated AE-COPD risk (adjusted hazard ratio for Q4 vs Q1: 4.21, P<.001), with sustained predictive stability over a 10-year follow-up period. By integrating clinical data with spirometry images, the AI-PFT-Clin model offers enhanced predictive accuracy for AE-COPD events compared to a clinical-only approach, enabling early identification of high-risk individuals through the detection of physiological abnormalities not captured by conventional metrics. Its robust performance and long-term predictive stability suggest utility in proactive COPD management and personalized intervention planning, particularly in populations traditionally seen as lower risk.
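The spirometry-image input can be pictured as follows: each test's flow-volume loop and volume-time curve are rendered to small images that a CNN consumes alongside the clinical variables. This is a plausible reconstruction of the preprocessing, not the authors' code; the sampling interval, figure size, and file path are assumptions.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless rendering
import matplotlib.pyplot as plt

def curves_to_image(volume, flow, dt=0.01, path="pft.png"):
    """Render a flow-volume loop and a volume-time curve side by side."""
    t = np.arange(len(volume)) * dt          # assumed sampling interval (s)
    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(4, 2), dpi=64)
    ax1.plot(volume, flow)                   # flow-volume loop
    ax2.plot(t, volume)                      # volume-time curve
    for ax in (ax1, ax2):
        ax.axis("off")                       # the CNN sees curve shape only
    fig.savefig(path, bbox_inches="tight")
    plt.close(fig)
```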

Measuring the severity of knee osteoarthritis with an aberration-free fast line scanning Raman imaging system.

Jiao C, Ye J, Liao J, Li J, Liang J, He S

PubMed · May 15, 2025
Osteoarthritis (OA) is a major cause of disability worldwide, with symptoms such as joint pain, limited function, and decreased quality of life, potentially leading to deformity and irreversible damage. Chemical changes in joint tissues precede imaging alterations, making early diagnosis challenging for conventional methods like X-rays. Raman imaging provides detailed chemical information but is time-consuming. This paper aims to achieve rapid osteoarthritis diagnosis and grading using a self-developed Raman imaging system combined with deep learning denoising and acceleration algorithms. Our aberration-corrected line-scanning confocal Raman imaging device acquires a line of Raman spectra (hundreds of points) per scan using a galvanometer or displacement stage, achieving spatial and spectral resolutions of 2 μm and 0.2 nm, respectively. Deep learning denoising improves the signal-to-noise ratio (SNR) enough that high-quality spectra can be acquired at reduced integration times, increasing imaging speed by more than 4 times. Experiments on the tibial plateau of osteoarthritis patients compared three excitation wavelengths (532, 671, and 785 nm), with 671 nm chosen for its optimal SNR and minimal fluorescence. Machine learning algorithms achieved 98% accuracy in distinguishing articular from calcified cartilage and 97% accuracy in differentiating osteoarthritis grades I to IV. Our fast Raman imaging system, combining an aberration-corrected line-scanning confocal Raman imager with deep learning denoising, offers improved imaging speed and enhanced spectral and spatial resolutions. It enables rapid, label-free detection of osteoarthritis severity and can identify early compositional changes before they appear on clinical imaging, allowing precise grading and tailored treatment, thus advancing orthopedic diagnostics and improving patient outcomes.
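The denoising component is unspecified in the abstract. As one plausible stand-in, a small 1-D residual convolutional network mapping short-integration (noisy) spectra toward long-integration (clean) targets would let integration time, and thus imaging time, drop; all sizes below are assumptions.

```python
import torch
import torch.nn as nn

class SpectrumDenoiser(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, channels, 9, padding=4), nn.ReLU(),
            nn.Conv1d(channels, channels, 9, padding=4), nn.ReLU(),
            nn.Conv1d(channels, 1, 9, padding=4),
        )

    def forward(self, x):        # x: (batch, 1, n_wavenumbers)
        return x + self.net(x)   # residual learning: predict a correction

# Training pairs would be (short-integration noisy, long-integration clean)
# spectra from the same sample positions.
model = SpectrumDenoiser()
clean_est = model(torch.randn(8, 1, 1024))
```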

Artificial intelligence algorithm improves radiologists' bone age assessment accuracy.

Chang TY, Chou TY, Jen IA, Yuh YS

PubMed · May 15, 2025
Artificial intelligence (AI) algorithms can provide rapid and precise radiographic bone age (BA) assessment. This study assessed the effects of an AI algorithm on the BA assessment performance of radiologists and evaluated how automation bias could affect them. In this prospective randomized crossover study, six radiologists with varying levels of experience (senior, mid-level, and junior) assessed cases from a test set of 200 standard BA radiographs. The test set was equally divided into two subsets, datasets A and B. Each radiologist assessed BA independently without AI assistance (A-, B-) and with AI assistance (A+, B+). We used the mean of assessments made by two experts as the ground truth; we then calculated the mean absolute difference (MAD) between the radiologists' BA predictions and the ground-truth BA and evaluated the proportion of estimates for which the MAD exceeded one year. Additionally, we compared the radiologists' performance under early versus delayed AI assistance; the radiologists were allowed to reject AI interpretations. The overall accuracy of senior, mid-level, and junior radiologists improved significantly with AI assistance (MAD: 0.74 years without AI vs. 0.46 years with AI, p < 0.001; proportion of assessments for which MAD exceeded 1 year: 24.0% vs. 8.4%, p < 0.001). The proportion of BA predictions improved by AI assistance (16.8%) was significantly higher than the proportion made less accurate (2.3%; p < 0.001). No consistent timing effect was observed between early and delayed AI assistance. Most disagreements between radiologists and AI occurred over images of patients aged ≤8 years, and senior radiologists had more disagreements than the others. The AI algorithm improved the BA assessment accuracy of radiologists across experience levels; automation bias was more likely to affect less experienced radiologists.
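The evaluation metric is simple enough to state exactly: MAD is the mean absolute difference between predicted and ground-truth bone age, reported alongside the fraction of cases off by more than one year. A tiny worked example with made-up numbers:

```python
import numpy as np

pred = np.array([10.2, 7.8, 13.1, 9.0])   # illustrative BA estimates (years)
truth = np.array([9.5, 8.0, 11.6, 9.3])   # expert-consensus ground truth

abs_diff = np.abs(pred - truth)            # [0.7, 0.2, 1.5, 0.3]
mad = abs_diff.mean()                      # 0.675 years
over_1y = (abs_diff > 1.0).mean()          # 25% of cases off by > 1 year
print(f"MAD = {mad:.2f} y, >1 y in {over_1y:.0%} of cases")
```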

MIMI-ONET: Multi-Modal Image Augmentation via Butterfly-Optimized Neural Network for Huntington Disease Detection.

Amudaria S, Jawhar SJ

PubMed · May 15, 2025
Huntington's disease (HD) is a chronic neurodegenerative ailment marked by cognitive decline, motor impairment, and psychiatric symptoms. Existing HD detection methods, however, struggle with limited annotated datasets, which restricts their generalization performance. This work proposes a novel MIMI-ONET for primary detection of HD using augmented multi-modal brain MRI images. The two-dimensional stationary wavelet transform (2DSWT) decomposes the MRI images into frequency wavelet sub-bands, which are enhanced with Contrast Stretching Adaptive Histogram Equalization (CSAHE) and Multi-scale Adaptive Retinex (MSAR) to reduce irrelevant distortions. The proposed MIMI-ONET introduces a Hepta Generative Adversarial Network (Hepta-GAN) to generate noise-free HD images at seven azimuth angles (45°, 90°, 135°, 180°, 225°, 270°, 315°). Hepta-GAN incorporates an Affine Estimation Module (AEM) that extracts multi-scale features using dilated convolutional layers for efficient HD image generation, and it is tuned with the Butterfly Optimization (BO) algorithm, which balances its parameters to enhance augmentation performance. Finally, the generated images are passed to a deep neural network (DNN) for classification into normal control (NC), adult-onset HD (AHD), and juvenile HD (JHD) cases. The proposed MIMI-ONET is evaluated with precision, specificity, F1 score, recall, accuracy, PSNR, and MSE. In experiments on the gathered Image-HD dataset, MIMI-ONET attains an accuracy of 98.85% and a PSNR of 48.05, improving overall accuracy by 9.96%, 1.85%, 5.91%, 13.80%, and 13.5% over 3DCNN, KNN, FCN, RNN, and an ML framework, respectively.
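The preprocessing front end can be sketched with standard tools: PyWavelets provides the 2-D stationary wavelet transform, and OpenCV's CLAHE stands in here for the paper's CSAHE step (the MSAR step and Hepta-GAN are omitted). The wavelet choice, CLAHE parameters, and file path are assumptions, and level-1 SWT requires even image dimensions.

```python
import numpy as np
import pywt
import cv2

# Hypothetical slice path; dimensions must be even for a level-1 SWT.
mri = cv2.imread("hd_slice.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

# One-level 2D SWT: approximation + (horizontal, vertical, diagonal) bands.
(cA, (cH, cV, cD)), = pywt.swt2(mri, wavelet="db2", level=1)

# CLAHE-style enhancement of the approximation band (proxy for CSAHE).
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
cA_u8 = cv2.normalize(cA, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
cA_enhanced = clahe.apply(cA_u8)
```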

Predicting Risk of Pulmonary Fibrosis Formation in PASC Patients

Wanying Dou, Gorkem Durak, Koushik Biswas, Ziliang Hong, Andrea Mia Bejar, Elif Keles, Kaan Akin, Sukru Mehmet Erturk, Alpay Medetalibeyoglu, Marc Sala, Alexander Misharin, Hatice Savas, Mary Salvatore, Sachin Jambawalikar, Drew Torigian, Jayaram K. Udupa, Ulas Bagci

arXiv preprint · May 15, 2025
While the acute phase of the COVID-19 pandemic has subsided, its long-term effects persist through Post-Acute Sequelae of COVID-19 (PASC), commonly known as Long COVID. There remains substantial uncertainty regarding both its duration and optimal management strategies. PASC manifests as a diverse array of persistent or newly emerging symptoms--ranging from fatigue, dyspnea, and neurologic impairments (e.g., brain fog), to cardiovascular, pulmonary, and musculoskeletal abnormalities--that extend beyond the acute infection phase. This heterogeneous presentation poses substantial challenges for clinical assessment, diagnosis, and treatment planning. In this paper, we focus on imaging findings that may suggest fibrotic damage in the lungs, a critical manifestation characterized by scarring of lung tissue, which can potentially affect long-term respiratory function in patients with PASC. This study introduces a novel multi-center chest CT analysis framework that combines deep learning and radiomics for fibrosis prediction. Our approach leverages convolutional neural networks (CNNs) and interpretable feature extraction, achieving 82.2% accuracy and 85.5% AUC in classification tasks. We demonstrate the effectiveness of Grad-CAM visualization and radiomics-based feature analysis in providing clinically relevant insights for PASC-related lung fibrosis prediction. Our findings highlight the potential of deep learning-driven computational methods for early detection and risk assessment of PASC-related lung fibrosis--presented for the first time in the literature.
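One way to picture the "deep learning plus radiomics" combination (the paper's exact fusion strategy is not given here) is late fusion: concatenate CNN embeddings with radiomics features and fit a simple classifier on top. The file names and array shapes below are placeholders.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

cnn_feats = np.load("cnn_embeddings.npy")   # (n_patients, d1), hypothetical
rad_feats = np.load("radiomics.npy")        # (n_patients, d2), hypothetical
labels = np.load("fibrosis_labels.npy")     # 0 = no fibrosis, 1 = fibrosis

# Late fusion: one feature vector per patient from both sources.
X = np.concatenate([cnn_feats, rad_feats], axis=1)
clf = LogisticRegression(max_iter=1000)
print("CV AUC:", cross_val_score(clf, X, labels, cv=5, scoring="roc_auc").mean())
```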