
Total radius BMD correlates with the hip and lumbar spine BMD among post-menopausal patients with fragility wrist fracture in a machine learning model.

Ruotsalainen T, Panfilov E, Thevenot J, Tiulpin A, Saarakkala S, Niinimäki J, Lehenkari P, Valkealahti M

PubMed · May 14, 2025
Osteoporosis screening should be systematic in females over 50 years old with a radius fracture. We tested a custom phantom combined with a machine learning model and studied osteoporosis-related variables. The main purpose of this study was to improve osteoporosis screening, especially in post-menopausal patients with fragility wrist fractures; the secondary objective was to increase understanding of the connection between osteoporosis, aging, and other risk factors. We collected data on 83 females over 50 years old with a distal radius fracture treated at Oulu University Hospital in 2019-2020. The data included basic patient information, the WHO FRAX tool, blood tests, X-ray imaging of the fractured wrist, and DXA scanning of the non-fractured forearm, both hips, and the lumbar spine. Machine learning was used in combination with a custom phantom. Eighty-five percent of the study population had osteopenia or osteoporosis, but only 28.4% of patients had increased bone resorption activity as measured by ICTP values. Total radius BMD correlated with other osteoporosis-related variables (age r = -0.494, BMI r = 0.273, FRAX osteoporotic fracture risk r = -0.419, FRAX hip fracture risk r = -0.433, hip BMD r = 0.435, and lumbar spine BMD r = 0.645), whereas ultra-distal (UD) radius BMD did not. The custom phantom combined with the machine learning model showed potential for screening osteoporosis, with class-wise accuracies of 76% for "Osteoporotic" and 75% for "Osteopenic & normal bone". We suggest osteoporosis screening for all females over 50 years old with wrist fractures, and we found that total radius BMD correlates with central (hip and lumbar spine) BMD. Because of the limited sample size in the phantom and machine learning parts of the study, further research in larger cohorts is needed to develop a clinically useful screening tool and to assess whether plain-radiograph-based screening could replace DXA in settings where DXA is not available.
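As a minimal sketch of the correlation analysis behind the r values quoted above (the DataFrame below is random placeholder data and the column names are illustrative, not the authors' dataset), Pearson coefficients could be computed along these lines:

```python
# Hypothetical sketch of the correlation analysis behind the r values quoted above.
# The DataFrame here is random placeholder data; column names are illustrative.
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "total_radius_bmd": rng.normal(0.55, 0.08, 83),
    "age": rng.normal(68, 9, 83),
    "bmi": rng.normal(27, 4, 83),
    "hip_bmd": rng.normal(0.85, 0.12, 83),
    "lumbar_spine_bmd": rng.normal(1.0, 0.15, 83),
})

for var in ["age", "bmi", "hip_bmd", "lumbar_spine_bmd"]:
    r, p = pearsonr(df["total_radius_bmd"], df[var])
    print(f"total radius BMD vs {var}: r = {r:.3f}, p = {p:.3f}")
```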

Privacy-preserving Federated Learning and Uncertainty Quantification in Medical Imaging.

Koutsoubis N, Waqas A, Yilmaz Y, Ramachandran RP, Schabath MB, Rasool G

PubMed · May 14, 2025
<i>"Just Accepted" papers have undergone full peer review and have been accepted for publication in <i>Radiology: Artificial Intelligence</i>. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content.</i> Artificial Intelligence (AI) has demonstrated strong potential in automating medical imaging tasks, with potential applications across disease diagnosis, prognosis, treatment planning, and posttreatment surveillance. However, privacy concerns surrounding patient data remain a major barrier to the widespread adoption of AI in clinical practice, as large and diverse training datasets are essential for developing accurate, robust, and generalizable AI models. Federated Learning offers a privacy-preserving solution by enabling collaborative model training across institutions without sharing sensitive data. Instead, model parameters, such as model weights, are exchanged between participating sites. Despite its potential, federated learning is still in its early stages of development and faces several challenges. Notably, sensitive information can still be inferred from the shared model parameters. Additionally, postdeployment data distribution shifts can degrade model performance, making uncertainty quantification essential. In federated learning, this task is particularly challenging due to data heterogeneity across participating sites. This review provides a comprehensive overview of federated learning, privacy-preserving federated learning, and uncertainty quantification in federated learning. Key limitations in current methodologies are identified, and future research directions are proposed to enhance data privacy and trustworthiness in medical imaging applications. ©RSNA, 2025.

The Future of Urodynamics: Innovations, Challenges, and Possibilities.

Chew LE, Hannick JH, Woo LL, Weaver JK, Damaser MS

PubMed · May 14, 2025
Urodynamic studies (UDS) are essential for evaluating lower urinary tract function but are limited by patient discomfort, lack of standardization, and diagnostic variability. Advances in technology aim to address these challenges and improve diagnostic accuracy and patient comfort. Ambulatory urodynamic monitoring (AUM) offers physiological assessment by allowing natural bladder filling and monitoring during daily activities. Compared with conventional UDS, AUM demonstrates higher sensitivity for detecting detrusor overactivity and underlying pathophysiology. However, it faces challenges such as motion artifacts, catheter-related discomfort, and difficulty measuring bladder volume continuously. Emerging devices such as the Urodynamics Monitor and UroSound offer more patient-friendly alternatives. These tools have the potential to improve diagnostic accuracy for bladder pressure and voiding metrics but remain limited and require further validation and testing. Ultrasound-based modalities, including dynamic ultrasonography and shear wave elastography, provide real-time, noninvasive assessment of bladder structure and function; these modalities are promising but will require further development of standardized protocols. AI and machine learning models enhance diagnostic accuracy and reduce variability in UDS interpretation, with applications including detecting detrusor overactivity and distinguishing bladder outlet obstruction from detrusor underactivity, though further validation is required for clinical adoption. Advances in AUM, wearable technologies, ultrasonography, and AI demonstrate potential for transforming UDS into a more accurate, patient-centered tool. Despite significant progress, challenges such as technical complexity, standardization, and cost-effectiveness must be addressed before these innovations can be integrated into routine practice. Nonetheless, these technologies hold the promise of improved diagnosis and treatment of lower urinary tract dysfunction.

Synthetic Data-Enhanced Classification of Prevalent Osteoporotic Fractures Using Dual-Energy X-Ray Absorptiometry-Based Geometric and Material Parameters.

Quagliato L, Seo J, Hong J, Lee T, Chung YS

PubMed · May 14, 2025
Bone fracture risk assessment for osteoporotic patients is essential for implementing early countermeasures and preventing discomfort and hospitalization. Current methodologies, such as the Fracture Risk Assessment Tool (FRAX), provide a risk estimate over a 5- to 10-year period rather than evaluating the bone's current health status. The database was collected by Ajou University Medical Center from 2017 to 2021. It included 9,260 patients, aged 55 to 99, comprising 242 femur fracture (FX) cases and 9,018 non-fracture (NFX) cases. To model the association of the bone's current health status with prevalent FXs, three prediction algorithms were trained on two-dimensional dual-energy X-ray absorptiometry (2D-DXA) analysis results and subsequently benchmarked: extreme gradient boosting (XGB), support vector machine, and multilayer perceptron. The XGB classifier, which proved most effective, was then further refined using synthetic data generated by the adaptive synthetic (ADASYN) oversampler to balance the FX and NFX classes and sharpen the decision boundary for better classification accuracy. The XGB model trained on raw data demonstrated good prediction capability, with an area under the curve (AUC) of 0.78 and an F1 score of 0.71 on test cases. The inclusion of synthetic data improved classification accuracy in terms of both specificity and sensitivity, resulting in an AUC of 0.99 and an F1 score of 0.98. The proposed methodology demonstrates that current bone health can be assessed through post-processed results from 2D-DXA analysis. Moreover, synthetic data were shown to help stabilize imbalanced datasets by balancing majority and minority classes, thereby significantly improving classification performance.
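As a hedged sketch of the class-balancing step described above, the standard ADASYN and XGBoost implementations can be combined as follows; the synthetic feature matrix stands in for the study's 2D-DXA geometric and material parameters and is not the authors' data:

```python
# Illustrative sketch: balance FX/NFX classes with ADASYN, then train an XGBoost
# classifier and report AUC and F1. The generated data are placeholders for the
# 2D-DXA features (1 = fracture, 0 = non-fracture).
from imblearn.over_sampling import ADASYN
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, f1_score
from xgboost import XGBClassifier

# Placeholder imbalanced cohort (roughly 242 FX out of 9,260 patients)
X, y = make_classification(n_samples=9260, n_features=20, weights=[0.974, 0.026],
                           random_state=42)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

# Oversample the minority (fracture) class on the training split only
X_res, y_res = ADASYN(random_state=42).fit_resample(X_train, y_train)

clf = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.05,
                    eval_metric="logloss")
clf.fit(X_res, y_res)

proba = clf.predict_proba(X_test)[:, 1]
print("AUC:", roc_auc_score(y_test, proba))
print("F1 :", f1_score(y_test, clf.predict(X_test)))
```

Oversampling only the training split, as sketched here, avoids leaking synthetic minority samples into the evaluation data.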

[Radiosurgery of benign intracranial lesions. Indications, results, and perspectives].

Danthez N, De Cournuaud C, Pistocchi S, Aureli V, Giammattei L, Hottinger AF, Schiappacasse L

PubMed · May 14, 2025
Stereotactic radiosurgery (SRS) is a non-invasive technique that is transforming the management of benign intracranial lesions through its precision and preservation of healthy tissue. It is effective for meningiomas, trigeminal neuralgia (TN), pituitary adenomas, vestibular schwannomas, and arteriovenous malformations. SRS ensures high tumor control rates, particularly for grade I meningiomas and vestibular schwannomas, and for refractory TN it provides initial pain relief in more than 80% of cases. The advent of technologies such as PET-MRI, hypofractionation, and artificial intelligence is further improving treatment precision, but challenges remain, including the management of late side effects and the standardization of practice.

Optimizing breast lesions diagnosis and decision-making with a deep learning fusion model integrating ultrasound and mammography: a dual-center retrospective study.

Xu Z, Zhong S, Gao Y, Huo J, Xu W, Huang W, Huang X, Zhang C, Zhou J, Dan Q, Li L, Jiang Z, Lang T, Xu S, Lu J, Wen G, Zhang Y, Li Y

PubMed · May 14, 2025
This study aimed to develop a Breast Imaging Reporting and Data System (BI-RADS) network (DL-UM) by integrating ultrasound (US) and mammography (MG) images and to explore its performance in improving breast lesion diagnosis and management when collaborating with radiologists, particularly in cases with discordant US and MG BI-RADS classifications. We retrospectively collected image data from 1283 women with breast lesions who underwent both US and MG within one month at two medical centres and categorised them into concordant and discordant BI-RADS classification subgroups. We developed a DL-UM network integrating US and MG images, as well as DL networks using US (DL-U) or MG (DL-M) alone. The performance of the DL-UM network for breast lesion diagnosis was evaluated using ROC curves and compared with the DL-U and DL-M networks in the external testing dataset. The diagnostic performance of radiologists with different levels of experience, assisted by the DL-UM network, was also evaluated. In the external testing dataset, DL-UM outperformed DL-M in sensitivity (0.962 vs. 0.833, P = 0.016) and DL-U in specificity (0.667 vs. 0.526, P = 0.030). In the discordant BI-RADS classification subgroup, DL-UM achieved an AUC of 0.910. The diagnostic performance of four radiologists improved when collaborating with the DL-UM network, with AUCs increasing from 0.674-0.772 to 0.889-0.910, specificities increasing from 52.1-75.0% to 81.3-87.5%, and unnecessary biopsies reduced by 16.1-24.6%, particularly for junior radiologists. Meanwhile, DL-UM outputs and heatmaps enhanced radiologists' trust and improved interobserver agreement between US and MG, with the weighted kappa increasing from 0.048 to 0.713 (P < 0.05). The DL-UM network, integrating complementary US and MG features, assisted radiologists in improving breast lesion diagnosis and management, potentially reducing unnecessary biopsies.
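A minimal late-fusion sketch of the dual-modality idea follows: two image encoders whose features are concatenated before a shared classification head. The ResNet backbones, feature sizes, and two-class output are assumptions for illustration, not the published DL-UM architecture:

```python
# Illustrative dual-branch fusion classifier: one encoder per modality (US, MG),
# concatenated features feed a shared head. Backbones and class count are
# assumptions, not the published DL-UM design.
import torch
import torch.nn as nn
from torchvision import models

class FusionNet(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.us_encoder = models.resnet18(weights=None)
        self.mg_encoder = models.resnet18(weights=None)
        feat = self.us_encoder.fc.in_features          # 512 for resnet18
        self.us_encoder.fc = nn.Identity()
        self.mg_encoder.fc = nn.Identity()
        self.head = nn.Sequential(
            nn.Linear(2 * feat, 256), nn.ReLU(), nn.Dropout(0.3),
            nn.Linear(256, num_classes))

    def forward(self, us_img, mg_img):
        fused = torch.cat([self.us_encoder(us_img), self.mg_encoder(mg_img)], dim=1)
        return self.head(fused)

model = FusionNet()
logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 2])
```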

Application of artificial intelligence medical imaging aided diagnosis system in the diagnosis of pulmonary nodules.

Yang Y, Wang P, Yu C, Zhu J, Sheng J

PubMed · May 14, 2025
The application of artificial intelligence (AI) has transformed many aspects of daily life and has also driven rapid development in the medical field, where intelligent applications are increasingly common. Drawing on advanced AI methods and on the principles of image segmentation, this paper integrates AI into a medical imaging-aided diagnosis system, with the aim of addressing the gaps and errors of traditional manual diagnosis of pulmonary nodules. The medical imaging-aided diagnosis system is constructed and optimized to improve the precision of pulmonary nodule diagnosis. In a comparison of traditional manual reading and the medical imaging-aided diagnosis system, 231 nodules confirmed by pathology or showing no change over more than two years of follow-up were evaluated in 200 cases. The AI software detected a total of 881 true nodules, with a sensitivity of 99.10% (881/889), whereas the radiologists detected 385 true nodules, with a sensitivity of 43.31% (385/889). The sensitivity of the AI software for detecting non-calcified nodules was significantly higher than that of the radiologists (99.01% vs 43.30%, P < 0.001).
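For reference, the sensitivities quoted above are simple proportions of detected true nodules. A hedged sketch of that arithmetic follows; the abstract does not state which statistical test produced P < 0.001, so the two-proportion z-test below is purely an illustrative stand-in:

```python
# Sensitivity = detected true nodules / total true nodules.
# The significance test here (two-proportion z-test) is an illustrative stand-in;
# the abstract does not specify which test was actually used.
from statsmodels.stats.proportion import proportions_ztest

detected = [881, 385]   # AI software, radiologists
total = [889, 889]

for name, d, n in zip(["AI software", "radiologists"], detected, total):
    print(f"{name}: sensitivity = {d / n:.2%} ({d}/{n})")

stat, p_value = proportions_ztest(count=detected, nobs=total)
print(f"two-proportion z-test: z = {stat:.2f}, p = {p_value:.3g}")
```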

A multi-layered defense against adversarial attacks in brain tumor classification using ensemble adversarial training and feature squeezing.

Yinusa A, Faezipour M

PubMed · May 14, 2025
Deep learning, particularly convolutional neural networks (CNNs), has proven valuable for brain tumor classification, aiding diagnostic and therapeutic decisions in medical imaging. Despite their accuracy, these models are vulnerable to adversarial attacks, compromising their reliability in clinical settings. In this research, we utilized a VGG16-based CNN model to classify brain tumors, achieving 96% accuracy on clean magnetic resonance imaging (MRI) data. To assess robustness, we exposed the model to Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD) attacks, which reduced accuracy to 32% and 13%, respectively. We then applied a multi-layered defense strategy, including adversarial training with FGSM and PGD examples and feature squeezing techniques such as bit-depth reduction and Gaussian blurring. This approach improved model resilience, achieving 54% accuracy on FGSM and 47% on PGD adversarial examples. Our results highlight the importance of proactive defense strategies for maintaining the reliability of AI in medical imaging under adversarial conditions.
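As a minimal sketch of the two defense ingredients named above, FGSM perturbation (used for adversarial training) and bit-depth-reduction feature squeezing; the tiny model, random batch, and epsilon are placeholders, not the paper's VGG16 setup on MRI data:

```python
# Illustrative FGSM attack and bit-depth-reduction feature squeezing.
# The tiny model and random "MRI" batch are placeholders, not the paper's VGG16 setup.
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_example(model, x, y, epsilon=0.03):
    """Generate an FGSM adversarial example (inputs assumed scaled to [0, 1])."""
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    return torch.clamp(x_adv + epsilon * x_adv.grad.sign(), 0.0, 1.0).detach()

def bit_depth_squeeze(x, bits=4):
    """Feature squeezing: quantize inputs to 2**bits intensity levels."""
    levels = 2 ** bits - 1
    return torch.round(x * levels) / levels

# Placeholder classifier and batch (4 tumor classes, 64x64 grayscale slices)
model = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 4))
x = torch.rand(8, 1, 64, 64)
y = torch.randint(0, 4, (8,))

x_adv = fgsm_example(model, x, y)        # attack
x_defended = bit_depth_squeeze(x_adv)    # squeeze before re-classifying
# Adversarial training (sketch): include x_adv alongside x in each training batch.
print(x_adv.shape, x_defended.shape)
```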

CT-based AI framework leveraging multi-scale features for predicting pathological grade and Ki67 index in clear cell renal cell carcinoma: a multicenter study.

Yang H, Zhang Y, Li F, Liu W, Zeng H, Yuan H, Ye Z, Huang Z, Yuan Y, Xiang Y, Wu K, Liu H

PubMed · May 14, 2025
To explore whether a CT-based AI framework, leveraging multi-scale features, can offer a non-invasive approach to accurately predict pathological grade and Ki67 index in clear cell renal cell carcinoma (ccRCC). In this multicenter retrospective study, a total of 1073 pathologically confirmed ccRCC patients from seven cohorts were split into internal cohorts (training and validation sets) and an external test set. The AI framework comprised an image processor, a 3D kidney and tumor segmentation model based on 3D-UNet, a multi-scale feature extractor built upon unsupervised learning, and a multi-task classifier utilizing XGBoost. A quantitative model interpretation technique, SHapley Additive exPlanations (SHAP), was employed to explore the contribution of the multi-scale features. The 3D-UNet model showed excellent performance in segmenting both the kidney and tumor regions, with Dice coefficients exceeding 0.92. The proposed multi-scale feature model exhibited strong predictive capability for pathological grading and Ki67 index, with AUROC values of 0.84 and 0.87, respectively, in the internal validation set, and 0.82 and 0.82, respectively, in the external test set. The SHAP results demonstrated that features from radiomics, the 3D auto-encoder, and dimensionality reduction all made significant contributions to both prediction tasks. The proposed AI framework, leveraging multi-scale features, accurately predicts the pathological grade and Ki67 index of ccRCC, offering a promising avenue for non-invasive preoperative assessment. Non-invasively determining pathological grade and Ki67 index in ccRCC could guide treatment decisions. The AI framework integrates segmentation, classification, and model interpretation, enabling fully automated analysis and non-invasive preoperative detection of high-risk tumors to assist clinical decision-making.
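A hedged sketch of the final classification-and-interpretation stage described above (XGBoost on concatenated multi-scale features with SHAP attribution); the random feature blocks and binary grading labels below are illustrative assumptions, not the study's data or code:

```python
# Illustrative sketch of the final stage: XGBoost on concatenated multi-scale
# features (radiomics + 3D auto-encoder + reduced dimensions), explained with SHAP.
# The feature blocks and binary grading labels are random placeholders.
import numpy as np
import shap
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
radiomics_feats = rng.normal(size=(300, 50))
autoencoder_feats = rng.normal(size=(300, 32))
reduced_feats = rng.normal(size=(300, 8))
features = np.hstack([radiomics_feats, autoencoder_feats, reduced_feats])
labels = rng.integers(0, 2, size=300)   # 1 = high pathological grade (placeholder)

clf = XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.05,
                    eval_metric="auc")
clf.fit(features, labels)

# SHAP attributes each prediction to individual features across the three blocks
explainer = shap.TreeExplainer(clf)
shap_values = explainer.shap_values(features)
print(shap_values.shape)   # (n_samples, n_features)
```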

Whole-body CT-to-PET synthesis using a customized transformer-enhanced GAN.

Xu B, Nie Z, He J, Li A, Wu T

PubMed · May 14, 2025
Positron emission tomography with 2-deoxy-2-[fluorine-18]fluoro-D-glucose integrated with computed tomography (18F-FDG PET-CT) is a multi-modality medical imaging technique widely used for screening and diagnosis of lesions and tumors: CT provides detailed anatomical structures, while PET shows metabolic activity. Nevertheless, it has disadvantages such as long scanning times, high cost, and relatively high radiation doses. Purpose: We propose a deep learning model for the whole-body CT-to-PET synthesis task, generating high-quality synthetic PET images that are comparable to real ones in both clinical relevance and diagnostic value. Material: We collected 102 pairs of 3D CT and PET scans, which were sliced into 27,240 pairs of 2D CT and PET images (training: 21,855 pairs, validation: 2,810 pairs, testing: 2,575 pairs). Methods: We propose CPGAN, a Transformer-enhanced generative adversarial network (GAN), for the whole-body CT-to-PET synthesis task. The CPGAN model uses residual blocks and Fully Connected Transformer Residual (FCTR) blocks to capture both local features and global contextual information. A customized loss function incorporating structural consistency is designed to improve the quality of the synthesized PET images. Results: Both quantitative and qualitative evaluation results demonstrate the effectiveness of the CPGAN model. The mean and standard deviation of the NRMSE, PSNR, and SSIM values on the test set are (16.90 ± 12.27) × 10⁻⁴, 28.71 ± 2.67, and 0.926 ± 0.033, respectively, outperforming seven other state-of-the-art models. Three radiologists independently and blindly evaluated and scored 100 randomly chosen PET images (50 real and 50 synthetic); by the Wilcoxon signed-rank test, there were no statistically significant differences between the synthetic PET images and the real ones. Conclusions: Despite the inherent limitation that CT images do not directly reflect the biological information of metabolic tissues, the CPGAN model effectively synthesizes convincing PET images from CT scans, which has potential for reducing reliance on actual PET-CT scans.
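As a hedged illustration of the quantitative evaluation above, the three reported image-quality metrics can be computed with standard scikit-image routines; the real and synthetic PET arrays here are random placeholders, not study data:

```python
# Illustrative computation of NRMSE, PSNR, and SSIM between a real PET slice
# and its CT-derived synthetic counterpart; the arrays are placeholders.
import numpy as np
from skimage.metrics import (normalized_root_mse, peak_signal_noise_ratio,
                             structural_similarity)

rng = np.random.default_rng(0)
real_pet = rng.random((256, 256)).astype(np.float32)   # stand-in for a real slice
synthetic_pet = np.clip(real_pet + 0.02 * rng.standard_normal((256, 256)),
                        0, 1).astype(np.float32)        # stand-in for a generated slice

print("NRMSE:", normalized_root_mse(real_pet, synthetic_pet))
print("PSNR :", peak_signal_noise_ratio(real_pet, synthetic_pet, data_range=1.0))
print("SSIM :", structural_similarity(real_pet, synthetic_pet, data_range=1.0))
```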