Automatic opportunistic osteoporosis screening using chest X-ray images via deep neural networks.

Tang J, Yin X, Lai J, Luo K, Wu D

PubMed · Aug 27, 2025
Osteoporosis is a bone disease characterized by reduced bone mineral density and quality, which increases the risk of fragility fractures. The current diagnostic gold standard, dual-energy X-ray absorptiometry (DXA), faces limitations such as limited equipment availability, high testing costs, and radiation exposure, restricting its feasibility as a screening tool. To address these limitations, we retrospectively collected data from 1995 patients who visited Daping Hospital in Chongqing from January 2019 to August 2024 and developed an opportunistic screening method using chest X-rays. We designed three deep neural network models using transfer learning, based on the Inception v3, VGG16, and ResNet50 architectures, and evaluated their classification performance for osteoporosis on chest X-ray images, with external validation on multi-center data. The ResNet50 model demonstrated superior performance, achieving average accuracies of 87.85% and 90.38% on the internal test dataset across two experiments, with AUC values of 0.945 and 0.957, respectively, outperforming traditional convolutional neural networks. In external validation, the ResNet50 model achieved an AUC of 0.904, accuracy of 89%, sensitivity of 90%, and specificity of 88.57%, demonstrating strong generalization ability; the model also remained robust in the presence of concurrent pulmonary pathologies. This study provides an automatic screening method for osteoporosis using chest X-rays, without additional radiation exposure or cost. The ResNet50 model's high performance supports clinicians in the early identification and treatment of osteoporosis patients.
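
For readers unfamiliar with the transfer-learning setup such studies typically use, the sketch below fine-tunes an ImageNet-pretrained ResNet50 for binary osteoporosis classification from chest X-rays. It is a minimal PyTorch illustration; the folder layout, preprocessing, and hyperparameters are assumptions, not details reported by the authors.

```python
# Minimal transfer-learning sketch (PyTorch) for binary osteoporosis
# classification from chest X-rays. Paths, hyperparameters, and the
# data pipeline are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models, transforms, datasets
from torch.utils.data import DataLoader

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Start from ImageNet weights and replace the classification head.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, 2)  # osteoporosis vs. normal
model = model.to(device)

preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # X-rays are single-channel
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical folder layout: cxr_train/osteoporosis, cxr_train/normal
train_ds = datasets.ImageFolder("cxr_train", transform=preprocess)
train_dl = DataLoader(train_ds, batch_size=32, shuffle=True)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for epoch in range(10):
    for images, labels in train_dl:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```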

A robust deep learning framework for cerebral microbleeds recognition in GRE and SWI MRI.

Hassanzadeh T, Sachdev S, Wen W, Sachdev PS, Sowmya A

PubMed · Aug 27, 2025
Cerebral microbleeds (CMB) are small hypointense lesions visible on gradient echo (GRE) or susceptibility-weighted (SWI) MRI, serving as critical biomarkers for various cerebrovascular and neurological conditions. Accurate quantification of CMB is essential, as their number correlates with the severity of conditions such as small vessel disease, stroke risk, and cognitive decline. Current detection methods depend on manual inspection, which is time-consuming and prone to variability. Automated detection using deep learning presents a transformative solution but faces challenges due to the heterogeneous appearance of CMB, high false-positive rates, and similarity to other artefacts. This study applies deep learning to public (ADNI and AIBL) and private (OATS and MAS) datasets, leveraging GRE and SWI MRI modalities to improve CMB detection accuracy, reduce false positives, and ensure robustness in both clinical and normal cases (i.e., scans without cerebral microbleeds). A 3D convolutional neural network (CNN) was developed for automated detection, complemented by a You Only Look Once (YOLO)-based approach to address false-positive cases in more complex scenarios. The pipeline incorporates extensive preprocessing and validation, demonstrating robust performance across a diverse range of datasets. The proposed method performed strongly across all four datasets: ADNI: balanced accuracy 0.953, AUC 0.955, precision 0.954, sensitivity 0.920, F1-score 0.930; AIBL: balanced accuracy 0.968, AUC 0.956, precision 0.956, sensitivity 0.938, F1-score 0.946; MAS: balanced accuracy 0.889, AUC 0.889, precision 0.948, sensitivity 0.779, F1-score 0.851; OATS: balanced accuracy 0.930, AUC 0.930, precision 0.949, sensitivity 0.862, F1-score 0.900. These results highlight the potential of deep learning models to improve early diagnosis and support treatment planning for conditions associated with CMB.
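
As an illustration of the kind of 3D CNN the pipeline describes, the sketch below classifies small candidate patches as CMB versus mimic. The patch size, channel widths, and layer choices are assumptions for demonstration; the authors' exact network and the YOLO false-positive stage are not reproduced here.

```python
# Minimal 3D CNN sketch (PyTorch) for classifying candidate patches as
# CMB vs. non-CMB. Patch size and architecture details are assumptions.
import torch
import torch.nn as nn

class CMBPatchNet(nn.Module):
    """Binary classifier for small 3D patches (e.g., 16x16x16 voxels)
    extracted around candidate hypointensities on GRE/SWI MRI."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1),
            nn.BatchNorm3d(16), nn.ReLU(),
            nn.MaxPool3d(2),                      # 16 -> 8
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.BatchNorm3d(32), nn.ReLU(),
            nn.MaxPool3d(2),                      # 8 -> 4
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 4 * 4 * 4, 64), nn.ReLU(),
            nn.Dropout(0.5),
            nn.Linear(64, 2),                     # CMB vs. non-CMB
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# One forward pass on a dummy batch of eight 16^3 patches.
net = CMBPatchNet()
logits = net(torch.randn(8, 1, 16, 16, 16))
print(logits.shape)  # torch.Size([8, 2])
```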

A Hybrid CNN-Transformer Deep Learning Model for Differentiating Benign and Malignant Breast Tumors Using Multi-View Ultrasound Images

Qi, Z., Jianxing, Z., Pan, T., Miao, C.

medRxiv preprint · Aug 27, 2025
Breast cancer is a leading malignancy threatening women's health globally, making early and accurate diagnosis crucial. Ultrasound is a key screening and diagnostic tool due to its non-invasive, real-time, and cost-effective nature. However, its diagnostic accuracy is highly dependent on operator experience, and conventional single-image analysis often fails to capture the comprehensive features of a lesion. This study introduces a computer-aided diagnosis (CAD) system that emulates a clinician's multi-view diagnostic process. We developed a novel hybrid deep learning model that integrates a Convolutional Neural Network (CNN) with a Transformer architecture. The model uses a pretrained EfficientNetV2 to extract spatial features from multiple, unordered ultrasound images of a single lesion. These features are then processed by a Transformer encoder, whose self-attention mechanism globally models and fuses their intrinsic correlations. A strict lesion-level data partitioning strategy ensured a rigorous evaluation. On an internal test set, our CNN-Transformer model achieved an accuracy of 0.93, a sensitivity of 0.92, a specificity of 0.94, and an Area Under the Curve (AUC) of 0.98. On an external test set, it demonstrated an accuracy of 0.93, a sensitivity of 0.94, a specificity of 0.91, and an AUC of 0.97. These results significantly outperform those of a baseline single-image model, which achieved accuracies of 0.88 and 0.89 and AUCs of 0.95 and 0.94 on the internal and external test sets, respectively. This study confirms that combining CNNs with Transformers yields a highly accurate and robust diagnostic system for breast ultrasound. By effectively fusing multi-view information, our model aligns with clinical logic and shows strong potential for improving diagnostic reliability.
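
The core architectural idea, a CNN backbone embedding each view and a Transformer encoder fusing the unordered set of view embeddings, can be sketched in PyTorch as below. The embedding dimension, mean pooling, and use of torchvision's EfficientNetV2-S are assumptions; notably, omitting positional encodings keeps self-attention permutation-invariant, which matches the unordered-views setting the abstract describes.

```python
# Sketch (PyTorch): CNN backbone per view + Transformer fusion over the
# set of view embeddings. Dimensions and pooling are assumptions.
import torch
import torch.nn as nn
from torchvision import models

class MultiViewNet(nn.Module):
    def __init__(self, embed_dim=256, num_heads=4, num_layers=2):
        super().__init__()
        backbone = models.efficientnet_v2_s(
            weights=models.EfficientNet_V2_S_Weights.IMAGENET1K_V1)
        backbone.classifier = nn.Identity()       # keep the 1280-d features
        self.backbone = backbone
        self.project = nn.Linear(1280, embed_dim)
        # No positional encoding: views form an unordered set, and
        # self-attention without it is permutation-invariant.
        layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=num_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.head = nn.Linear(embed_dim, 2)       # benign vs. malignant

    def forward(self, views):                     # (B, V, 3, H, W)
        b, v = views.shape[:2]
        feats = self.backbone(views.flatten(0, 1))    # (B*V, 1280)
        tokens = self.project(feats).view(b, v, -1)   # (B, V, D)
        fused = self.encoder(tokens).mean(dim=1)      # pool over views
        return self.head(fused)

model = MultiViewNet()
logits = model(torch.randn(2, 4, 3, 224, 224))    # 2 lesions, 4 views each
print(logits.shape)                               # torch.Size([2, 2])
```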

Machine learning prediction of effective radiation doses in various computed tomography applications: a virtual human phantom study.

Tanyildizi-Kokkulunk H

PubMed · Aug 26, 2025
This work aimed to employ machine learning (ML) algorithms to accurately forecast the radiation doses received by phantoms across the most common CT protocols. A cloud-based software tool was used to calculate the effective doses from different CT protocols. To simulate a range of adult patients with different weights, eight whole-body mesh-based computational phantom sets were used. Head, neck, and chest-abdomen-pelvis CT scan characteristics were combined to create a dataset with 33 rows per phantom and 792 rows in total. At the ML stage, linear regression (LR), random forest (RF), and support vector regression (SVR) were used. Mean absolute error, mean squared error, and accuracy were used to evaluate performance. The female phantoms received 7.8% higher doses than the male phantoms. Furthermore, the normal-weight phantom received on average 11% more dose than the overweight phantom, the overweight more than obese I, and obese I more than obese II. Among the ML algorithms, LR showed zero error and 100% accuracy in predicting CT doses, making it the best of the evaluated approaches for ML estimation of CT-induced doses.
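
A minimal scikit-learn sketch of the regressor comparison follows. The feature set and synthetic dose targets are invented for demonstration, so the metrics will not match the reported values; the paper's zero-error result for linear regression plausibly reflects a dose calculation that is linear in the protocol inputs.

```python
# Illustrative comparison (scikit-learn) of the three regressors evaluated
# for predicting effective dose from scan/phantom parameters.
# Feature names and synthetic data are assumptions for demonstration.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error, mean_squared_error

rng = np.random.default_rng(0)
# Hypothetical features: [tube voltage (kVp), tube current-time (mAs),
# scan length (cm), phantom weight (kg), protocol code]
X = rng.uniform([80, 50, 10, 50, 0], [140, 400, 60, 130, 2], size=(792, 5))
# Synthetic effective dose (mSv), roughly kVp^2 * mAs * scan length
y = 1e-7 * X[:, 0] ** 2 * X[:, 1] * X[:, 2] + rng.normal(0, 0.1, 792)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

for name, reg in [("LR", LinearRegression()),
                  ("RF", RandomForestRegressor(random_state=0)),
                  ("SVR", SVR(kernel="rbf", C=10.0))]:
    pred = reg.fit(X_tr, y_tr).predict(X_te)
    print(f"{name}: MAE={mean_absolute_error(y_te, pred):.3f} "
          f"MSE={mean_squared_error(y_te, pred):.3f}")
```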

Application of artificial intelligence in medical imaging for tumor diagnosis and treatment: a comprehensive approach.

Huang J, Xiang Y, Gan S, Wu L, Yan J, Ye D, Zhang J

PubMed · Aug 26, 2025
This narrative review provides a comprehensive and structured overview of recent advances in the application of artificial intelligence (AI) to medical imaging for tumor diagnosis and treatment. By synthesizing evidence from recent literature and clinical reports, we highlight the capabilities, limitations, and translational potential of AI techniques across key imaging modalities such as CT, MRI, and PET. Deep learning (DL) and radiomics have facilitated automated lesion detection, tumor segmentation, and prognostic assessments, improving early cancer detection across various malignancies, including breast, lung, and prostate cancers. AI-driven multi-modal imaging fusion integrates radiomics, genomics, and clinical data, refining precision oncology strategies. Additionally, AI-assisted radiotherapy planning and adaptive dose optimization have enhanced therapeutic efficacy while minimizing toxicity. However, challenges persist regarding data heterogeneity, model generalizability, regulatory constraints, and ethical concerns. The lack of standardized datasets and explainable AI (XAI) frameworks hinders clinical adoption. Future research should focus on improving AI interpretability, fostering multi-center dataset interoperability, and integrating AI with molecular imaging and real-time clinical decision support. Addressing these challenges will ensure AI's seamless integration into clinical oncology, optimizing cancer diagnosis, prognosis, and treatment outcomes.

ESR Essentials: artificial intelligence in breast imaging – practice recommendations by the European Society of Breast Imaging.

Schiaffino S, Bernardi D, Healy N, Marino MA, Romeo V, Sechopoulos I, Mann RM, Pinker K

PubMed · Aug 26, 2025
Artificial intelligence (AI) can enhance the diagnostic performance of breast cancer imaging and improve workflow optimization, potentially mitigating excessive radiologist workload and suboptimal diagnostic accuracy. AI can also extend imaging capabilities through individual risk prediction, molecular subtyping, and prediction of response to neoadjuvant therapy. Evidence demonstrates AI's potential across multiple modalities. The most robust data come from mammographic screening, where AI models improve diagnostic accuracy and optimize workflow, but rigorous post-market surveillance is required before any implementation strategy in this field. Commercial tools for digital breast tomosynthesis and ultrasound, potentially able to reduce interpretation time and improve accuracy, are also available, but post-implementation evaluation studies are likewise lacking. Apart from basic tools for breast MRI with limited proven clinical benefit, AI applications for other modalities are not yet commercially available. Applications in contrast-enhanced mammography are still in the research stage, especially radiomics-based molecular subtype classification. Large Language Models (LLMs) are in their infancy in this domain, with no clinical applications to date. Consequently, and despite their promise, all commercially available AI tools for breast imaging should currently still be regarded as techniques that, at best, aid radiologists in image evaluation. Their use is therefore optional, and their findings may always be overruled. KEY POINTS: AI systems improve the diagnostic accuracy and efficiency of mammography screening, but long-term outcome data are lacking. Commercial tools for digital breast tomosynthesis and ultrasound are available, but post-implementation evaluation studies are lacking. AI tools for breast imaging should still be regarded as a non-obligatory aid to radiologists for image interpretation.

[Comparison of diagnostic performance between artificial intelligence-assisted automated breast ultrasound and handheld ultrasound in breast cancer screening].

Yi DS, Sun WY, Song HP, Zhao XL, Hu SY, Gu X, Gao Y, Zhao FH

PubMed · Aug 26, 2025
Objective: To compare the diagnostic performance of artificial intelligence-assisted automated breast ultrasound (AI-ABUS) with traditional handheld ultrasound (HHUS) in breast cancer screening. Methods: A total of 36 171 women undergoing breast cancer ultrasound screening in Futian District, Shenzhen, between July 1, 2023 and June 30, 2024 were prospectively recruited and assigned to either the AI-ABUS or HHUS group based on the screening modality used. In the AI-ABUS group, image acquisition was performed on-site by technicians, and two ultrasound physicians conducted remote diagnoses with AI assistance, supported by a follow-up management system. In the HHUS group, one ultrasound physician conducted both image acquisition and diagnosis on-site, and follow-up was led by clinical physicians. Based on the reported malignancy rates of different BI-RADS categories, the number of undiagnosed breast cancer cases in individuals without pathology was estimated, and adjusted detection rates were calculated. Primary outcomes included screening positive rate, biopsy rate, cancer detection rate, loss-to-follow-up rate, specificity, and sensitivity. Results: The median age [interquartile range, M (Q1, Q3)] of the 36 171 women was 43.8 (36.6, 50.8) years. A total of 14 766 women (40.82%) were screened with AI-ABUS and 21 405 (59.18%) with HHUS. Baseline characteristics showed no significant differences between the groups (all P>0.05). The AI-ABUS group had a lower screening positive rate [0.59% (87/14 766) vs 1.94% (416/21 405)], but a higher biopsy rate [47.13% (41/87) vs 16.10% (67/416)], a higher cancer detection rate [1.69‰ (25/14 766) vs 0.47‰ (10/21 428)], and a lower loss-to-follow-up rate (6.90% vs 71.39%) compared with the HHUS group (all P<0.05). There was no statistically significant difference in the distribution of breast cancer pathological stages among those who underwent biopsy between the two groups (P>0.05). The specificity of AI-ABUS was higher than that of HHUS [89.77% (13 231/14 739) vs 74.12% (15 858/21 394), P<0.05], while sensitivity did not differ significantly [92.59% (25/27) vs 90.91% (10/11), P>0.05]. After estimating undiagnosed cancer cases among participants without pathology, the adjusted detection rate was 2.30‰ (34/14 766) in the AI-ABUS group and ranged from 1.17‰ to 2.75‰ [(25-59)/21 428] in the HHUS group. Under the minimum estimate, the detection rate in the AI-ABUS group was significantly higher (P<0.05); under the maximum estimate, the difference was not statistically significant (P>0.05). Conclusions: The AI-ABUS model, combined with an intelligent follow-up management system, enables a higher breast cancer detection rate with a lower screening positive rate, improved specificity, and reduced loss to follow-up. This suggests AI-ABUS is a promising alternative model for breast cancer screening.
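
The adjustment the study describes, imputing likely cancers among screen-positive women who never received pathology, can be approximated by applying BI-RADS category malignancy rates to the unbiopsied counts. The sketch below is a back-of-envelope illustration; the category counts and rate ranges are assumptions in the spirit of commonly cited ACR figures, not the study's data.

```python
# Back-of-envelope sketch: estimate undiagnosed cancers among screen-positive
# women without pathology using assumed BI-RADS malignancy-rate ranges,
# then compute an adjusted per-mille detection rate. All numbers are
# hypothetical illustrations, not the study's data.
birads_malignancy_rate = {"4A": (0.02, 0.10), "4B": (0.10, 0.50),
                          "4C": (0.50, 0.95), "5": (0.95, 0.99)}
# Hypothetical screen-positive women without pathological confirmation:
unbiopsied = {"4A": 200, "4B": 60, "4C": 10, "5": 2}

low = sum(n * birads_malignancy_rate[c][0] for c, n in unbiopsied.items())
high = sum(n * birads_malignancy_rate[c][1] for c, n in unbiopsied.items())

confirmed, screened = 10, 21405  # e.g., pathologically confirmed / screened
print(f"Adjusted detection rate: {1000 * (confirmed + low) / screened:.2f}"
      f"\u2030 to {1000 * (confirmed + high) / screened:.2f}\u2030")
```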

Optimizing meningioma grading with radiomics and deep features integration, attention mechanisms, and reproducibility analysis.

Albadr RJ, Sur D, Yadav A, Rekha MM, Jain B, Jayabalan K, Kubaev A, Taher WM, Alwan M, Jawad MJ, Al-Nuaimi AMA, Mohammadifard M, Farhood B, Akhavan-Sigari R

PubMed · Aug 26, 2025
This study aims to develop a robust and clinically applicable framework for preoperative grading of meningiomas using T1-contrast-enhanced and T2-weighted MRI images. The approach integrates radiomic feature extraction, attention-guided deep learning models, and reproducibility assessment to achieve high diagnostic accuracy, model interpretability, and clinical reliability. We analyzed MRI scans from 2546 patients with histopathologically confirmed meningiomas (1560 low-grade, 986 high-grade). High-quality T1-contrast and T2-weighted images were preprocessed through harmonization, normalization, resizing, and augmentation. Tumor segmentation was performed using ITK-SNAP, and inter-rater reliability of radiomic features was evaluated using the intraclass correlation coefficient (ICC). Radiomic features were extracted via the SERA software, while deep features were derived from pre-trained models (ResNet50 and EfficientNet-B0), with attention mechanisms enhancing focus on tumor-relevant regions. Feature fusion and dimensionality reduction were conducted using PCA and LASSO. Ensemble models employing Random Forest, XGBoost, and LightGBM were implemented to optimize classification performance using both radiomic and deep features. Reproducibility analysis showed that 52% of radiomic features demonstrated excellent reliability (ICC > 0.90). Deep features from EfficientNet-B0 outperformed those from ResNet50, achieving AUCs of 94.12% (T1) and 93.17% (T2). Hybrid models combining radiomic and deep features further improved performance, with XGBoost reaching AUCs of 95.19% (T2) and 96.87% (T1). Ensemble models incorporating both deep architectures achieved the highest classification performance, with AUCs of 96.12% (T2) and 96.80% (T1), demonstrating superior robustness and accuracy. This work introduces a comprehensive and clinically meaningful AI framework that significantly enhances the preoperative grading of meningiomas. The model's high accuracy, interpretability, and reproducibility support its potential to inform surgical planning, reduce reliance on invasive diagnostics, and facilitate more personalized therapeutic decision-making in routine neuro-oncology practice.
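
A compact sketch of the described fusion pipeline: concatenate radiomic and deep features, reduce dimensionality with PCA, select features with an L1-penalized (LASSO-style) model, and classify with XGBoost. The shapes, hyperparameters, and synthetic data are illustrative assumptions, not the authors' configuration.

```python
# Sketch (scikit-learn + XGBoost) of radiomic + deep feature fusion with
# PCA reduction and LASSO-style selection. Data here are synthetic.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
radiomic = rng.normal(size=(400, 107))   # e.g., SERA-style radiomic features
deep = rng.normal(size=(400, 1280))      # e.g., EfficientNet-B0 embeddings
X = np.hstack([radiomic, deep])          # early fusion by concatenation
y = rng.integers(0, 2, size=400)         # low- vs. high-grade labels

clf = Pipeline([
    ("scale", StandardScaler()),
    ("pca", PCA(n_components=0.95)),     # keep 95% of variance
    ("lasso", SelectFromModel(
        LogisticRegression(penalty="l1", solver="liblinear", C=0.1))),
    ("xgb", XGBClassifier(n_estimators=300, max_depth=3,
                          learning_rate=0.05, eval_metric="logloss")),
])
clf.fit(X, y)
print(clf.predict_proba(X[:5]))
```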

Validation of an Automated CT Image Analysis in the Prevention of Urinary Stones with Hydration Trial.

Tasian GE, Maalouf NM, Harper JD, Sivalingam S, Logan J, Al-Khalidi HR, Lieske JC, Selman-Fermin A, Desai AC, Lai H, Kirkali Z, Scales CD, Fan Y

PubMed · Aug 26, 2025
Introduction and Objective: Kidney stone growth and new stone formation are common clinical trial endpoints and are associated with future symptomatic events. To date, a manual review of CT scans has been required to assess stone growth and new stone formation, which is laborious. We validated the performance of a software algorithm that automatically identified, registered, and measured stones over longitudinal CT studies. Methods: We validated the performance of a pretrained machine learning algorithm to classify stone outcomes on longitudinal CT scan images at baseline and at the end of the 2-year follow-up period for 62 participants aged >18 years in the Prevention of Urinary Stones with Hydration (PUSH) randomized controlled trial. Stones were defined as an area of voxels with a minimum linear dimension of 2 mm that was higher in density than the mean plus 4 standard deviations of all nonnegative HU values within the kidney. The four outcomes assessed were: (1) growth of at least one existing stone by ≥2 mm, (2) formation of at least one new ≥2 mm stone, (3) no stone growth or new stone formation, and (4) loss of at least one stone. The accuracy of the algorithm was determined by comparing its outcomes to the gold standard of independent review of the CT images by at least two expert clinicians. Results: The algorithm correctly classified outcomes for 61 paired scans (98.4%). The one pair the algorithm misclassified as stone growth was a new renal artery calcification on the end-of-study CT. Conclusions: An automated image analysis method validated for the prospective PUSH trial was highly accurate in determining the clinical outcomes of new stone formation, stone growth, stable stone size, and stone loss on longitudinal CT images. This method has the potential to improve the accuracy and efficiency of clinical care and endpoint determination for future clinical trials.
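
The stone definition quoted above translates naturally into a thresholding rule. The NumPy/SciPy sketch below is one plausible reading, intensity above the kidney's mean-plus-4-SD of nonnegative HU, keeping connected components whose smallest bounding-box extent is at least 2 mm; the validated algorithm's actual implementation may differ.

```python
# NumPy/SciPy sketch of the stone-candidate rule described above.
# Array shapes, spacing handling, and the component filter are assumptions.
import numpy as np
from scipy import ndimage

def stone_candidates(hu, kidney_mask, spacing_mm):
    """hu: 3D HU volume; kidney_mask: boolean 3D mask; spacing_mm: (z, y, x)."""
    vals = hu[kidney_mask & (hu >= 0)]            # nonnegative HU in kidney
    threshold = vals.mean() + 4 * vals.std()
    binary = kidney_mask & (hu > threshold)

    labels, _ = ndimage.label(binary)             # 3D connected components
    keep = np.zeros_like(binary)
    for i, sl in enumerate(ndimage.find_objects(labels), start=1):
        # Minimum linear dimension: smallest bounding-box extent in mm.
        extents = [(s.stop - s.start) * d for s, d in zip(sl, spacing_mm)]
        if min(extents) >= 2.0:
            keep |= labels == i
    return keep, threshold
```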

Machine Learning-Driven radiomics on 18F-FDG PET for glioma diagnosis: a systematic review and meta-analysis.

Shahriari A, Ghazanafar Ahari S, Mousavi A, Sadeghi M, Abbasi M, Hosseinpour M, Mir A, Zohouri Zanganeh D, Gharedaghi H, Ezati S, Sareminia A, Seyedi D, Shokouhfar M, Darzi A, Ghaedamini A, Zamani S, Khosravi F, Asadi Anar M

PubMed · Aug 26, 2025
Machine learning (ML) applied to radiomics has revolutionized neuro-oncological imaging, yet the diagnostic performance of ML models based specifically on 18F-FDG PET features in glioma remains poorly characterized. To systematically evaluate and quantitatively synthesize the diagnostic accuracy of ML models trained on 18F-FDG PET radiomics for glioma classification, we conducted a PRISMA-compliant systematic review and meta-analysis registered on OSF (https://doi.org/10.17605/OSF.IO/XJG6P). PubMed, Scopus, and Web of Science were searched up to January 2025. Studies were included if they applied ML algorithms to 18F-FDG PET radiomic features for glioma classification and reported at least one performance metric. Data extraction included demographics, imaging protocols, feature types, ML models, and validation design. Meta-analysis was performed using random-effects models with pooled estimates of accuracy, sensitivity, specificity, AUC, F1 score, and precision. Heterogeneity was explored via meta-regression and Galbraith plots. Twelve studies comprising 2,321 patients were included. Pooled diagnostic metrics were: accuracy 92.6% (95% CI: 91.3-93.9%), AUC 0.95 (95% CI: 0.94-0.95), sensitivity 85.4%, specificity 89.7%, F1 score 0.78, and precision 0.90. Heterogeneity was high across all domains (I² > 75%). Meta-regression identified ML model type and validation strategy as partial moderators. Models using CNNs or PET/MRI integration achieved superior performance. ML models based on 18F-FDG PET radiomics demonstrate strong and balanced diagnostic performance for glioma classification. However, methodological heterogeneity underscores the need for standardized pipelines, external validation, and transparent reporting before clinical integration.
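
The pooled estimates quoted above come from random-effects models; the standard DerSimonian-Laird estimator behind such pooling can be sketched as follows. The per-study accuracies and sample sizes are made-up numbers for illustration, not data from the included studies.

```python
# Minimal DerSimonian-Laird random-effects pooling sketch.
# Study-level inputs below are hypothetical illustrations.
import numpy as np

def dersimonian_laird(effects, variances):
    """Pool study-level effects under a random-effects model."""
    w = 1.0 / variances                          # fixed-effect weights
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed) ** 2)       # Cochran's Q
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                # between-study variance
    w_star = 1.0 / (variances + tau2)            # random-effects weights
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), i2

# Hypothetical per-study accuracies with binomial variances p(1-p)/n.
acc = np.array([0.91, 0.94, 0.89, 0.95, 0.93])
n = np.array([120, 300, 85, 410, 150])
var = acc * (1 - acc) / n
pooled, ci, i2 = dersimonian_laird(acc, var)
print(f"pooled={pooled:.3f}, 95% CI=({ci[0]:.3f}, {ci[1]:.3f}), I²={i2:.1f}%")
```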