Page 2 of 33326 results

Automated 3D segmentation of rotator cuff muscle and fat from longitudinal CT for shoulder arthroplasty evaluation.

Yang M, Jun BJ, Owings T, Subhas N, Polster J, Winalski CS, Ho JC, Entezari V, Derwin KA, Ricchetti ET, Li X

PubMed | Aug 9 2025
To develop and validate a deep learning model for automated 3D segmentation of rotator cuff muscles on longitudinal CT scans to quantify muscle volume and fat fraction in patients undergoing total shoulder arthroplasty (TSA). The proposed segmentation models adopted DeepLabV3+ with ResNet50 as the backbone. The models were trained, validated, and tested on preoperative or minimum 2-year follow-up CT scans from 53 TSA subjects. 3D Dice similarity scores, average symmetric surface distance (ASSD), 95th percentile Hausdorff distance (HD95), and relative absolute volume difference (RAVD) were used to evaluate model performance on hold-out test sets. The trained models were applied to a cohort of 172 patients to quantify rotator cuff muscle volumes and fat fractions across preoperative and minimum 2- and 5-year follow-ups. Compared to the ground truth, the models achieved mean Dice of 0.928 and 0.916, mean ASSD of 0.844 mm and 1.028 mm, mean HD95 of 3.071 mm and 4.173 mm, and mean RAVD of 0.025 and 0.068 on the hold-out test sets for the preoperative and the minimum 2-year follow-up CT scans, respectively. This study developed accurate and reliable deep learning models for automated 3D segmentation of rotator cuff muscles on clinical CT scans in TSA patients. These models substantially reduce the time required for muscle volume and fat fraction analysis and provide a practical tool for investigating how rotator cuff muscle health relates to surgical outcomes. This has the potential to inform patient selection, rehabilitation planning, and surgical decision-making in TSA and rotator cuff repair (RCR).
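The overlap and volume metrics quoted above are standard and easy to reproduce. A minimal sketch (not the authors' code) of Dice and RAVD for binary 3D masks; the toy cube masks and function names are illustrative assumptions:

```python
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """3D Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def ravd(pred: np.ndarray, gt: np.ndarray) -> float:
    """Relative absolute volume difference: |V_pred - V_gt| / V_gt."""
    return abs(int(pred.sum()) - int(gt.sum())) / gt.sum()

# Toy example: two overlapping cubes inside a 3D volume.
gt = np.zeros((32, 32, 32), dtype=bool)
pred = np.zeros_like(gt)
gt[8:24, 8:24, 8:24] = True      # "ground-truth" muscle mask
pred[10:24, 8:24, 8:24] = True   # prediction misses two slabs

print(round(dice(pred, gt), 3))  # → 0.933
print(round(ravd(pred, gt), 3))  # → 0.125
```

ASSD and HD95 additionally require surface-distance computations (e.g. via a distance transform of each mask boundary), which is why they are usually taken from a dedicated evaluation library rather than written inline.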

Parental and carer views on the use of AI in imaging for children: a national survey.

Agarwal G, Salami RK, Lee L, Martin H, Shantharam L, Thomas K, Ashworth E, Allan E, Yung KW, Pauling C, Leyden D, Arthurs OJ, Shelmerdine SC

PubMed | Aug 9 2025
Although the use of artificial intelligence (AI) in healthcare is increasing, stakeholder engagement remains poor, particularly relating to understanding parent/carer acceptance of AI tools in paediatric imaging. We explore these perceptions and compare them to the opinions of children and young people (CYAP). A UK national online survey was conducted, inviting parents, carers and guardians of children to participate. The survey was "live" from June 2022 to 2023. The survey included questions asking about respondents' views of AI in general, as well as in specific circumstances (e.g. fractures) with respect to children's healthcare. One hundred forty-six parents/carers (mean age = 45; range = 21-80) from all four nations of the UK responded. Most respondents (93/146, 64%) believed that AI would be more accurate at interpreting paediatric musculoskeletal radiographs than healthcare professionals, but had a strong preference for human supervision (66%). Whilst male respondents were more likely to believe that AI would be more accurate (55/72, 76%), they were twice as likely as female parents/carers to believe that AI use could result in their child's data falling into the wrong hands. Most respondents would like to be asked permission before AI is used for the interpretation of their child's scans (104/146, 71%). Notably, 79% of parents/carers prioritised accuracy over speed compared to 66% of CYAP. Parents/carers feel positively about AI for paediatric imaging but strongly discourage autonomous use. Acknowledging the diverse opinions of the patient population is vital in aiding the successful integration of AI for paediatric imaging. Parents/carers demonstrate a preference for AI use with human supervision that prioritises accuracy, transparency and institutional accountability. AI is welcomed as a supportive tool, but not as a substitute for human expertise. Parents/carers are accepting of AI use, with human supervision. Over half believe AI would replace doctors/nurses looking at bone X-rays within 5 years. Parents/carers are more likely than CYAP to trust AI's accuracy. Parents/carers are also more sceptical about AI data misuse.

Towards MR-Based Trochleoplasty Planning

Michael Wehrli, Alicia Durrer, Paul Friedrich, Sidaty El Hadramy, Edwin Li, Luana Brahaj, Carol C. Hasler, Philippe C. Cattin

arXiv preprint | Aug 8 2025
To treat Trochlear Dysplasia (TD), current approaches rely mainly on low-resolution clinical Magnetic Resonance (MR) scans and surgical intuition. The surgeries are planned based on surgeons' experience, have limited adoption of minimally invasive techniques, and lead to inconsistent outcomes. We propose a pipeline that generates super-resolved, patient-specific 3D pseudo-healthy target morphologies from conventional clinical MR scans. First, we compute an isotropic super-resolved MR volume using an Implicit Neural Representation (INR). Next, we segment femur, tibia, patella, and fibula with a multi-label custom-trained network. Finally, we train a Wavelet Diffusion Model (WDM) to generate pseudo-healthy target morphologies of the trochlear region. In contrast to prior work producing pseudo-healthy low-resolution 3D MR images, our approach enables the generation of sub-millimeter-resolved 3D shapes compatible with pre- and intraoperative use. These can serve as preoperative blueprints for reshaping the femoral groove while preserving the native patella articulation. Furthermore, and in contrast to other work, our pipeline does not require a CT, reducing the patient's radiation exposure. We evaluated our approach on 25 TD patients and show that our target morphologies significantly improve the sulcus angle (SA) and trochlear groove depth (TGD). The code and interactive visualization are available at https://wehrlimi.github.io/sr-3d-planning/.
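The paper's first stage uses an INR for super-resolution; as a much simpler stand-in that illustrates what "isotropic super-resolved volume" means in practice, anisotropic clinical voxels can be resampled to an isotropic sub-millimeter grid with trilinear interpolation. The spacings, array shapes, and target resolution below are assumed values, not the paper's data:

```python
import numpy as np
from scipy.ndimage import zoom

def to_isotropic(vol: np.ndarray, spacing_mm, target_mm: float = 0.5) -> np.ndarray:
    """Resample a 3D volume so every voxel edge is target_mm long."""
    factors = [s / target_mm for s in spacing_mm]
    return zoom(vol, factors, order=1)  # order=1 -> trilinear interpolation

# Typical clinical MR geometry: 0.5 x 0.5 mm in-plane, 3 mm slice thickness.
vol = np.random.rand(40, 256, 256).astype(np.float32)
iso = to_isotropic(vol, spacing_mm=(3.0, 0.5, 0.5))
print(iso.shape)  # → (240, 256, 256)
```

The INR approach replaces this fixed interpolation with a learned continuous representation, which is what allows recovery of detail beyond what trilinear resampling can provide.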

Application of Artificial Intelligence in Bone Quality and Quantity Assessment for Dental Implant Planning: A Scoping Review.

Qiu S, Yu X, Wu Y

PubMed | Aug 8 2025
To assess how artificial intelligence (AI) models perform in evaluating bone quality and quantity in the preoperative planning process for dental implants. This review included studies that utilized AI-based assessments of bone quality and/or quantity based on radiographic images in the preoperative phase. Studies published in English before April 2025 were included, obtained from searches of PubMed/MEDLINE, Embase, Web of Science, Scopus, and the Cochrane Library, as well as from manual searches. Eleven studies met the inclusion criteria. Five studies focused on bone quality evaluation and six studies included volumetric assessments using AI models. The performance measures included accuracy, sensitivity, specificity, precision, F1 score, and Dice coefficient, and were compared with human expert evaluations. AI models demonstrated high accuracy (76.2%-99.84%), high sensitivity (78.9%-100%), and high specificity (66.2%-99%). AI models have potential for the evaluation of bone quality and quantity, although standardization and external validation studies are lacking. Future studies should propose multicenter datasets, integration into clinical workflows, and the development of refined models to better reflect real-life conditions. AI has the potential to offer clinicians reliable automated evaluations of bone quality and quantity, with the promise of a fully automated system of implant planning. It may also support preoperative workflows for clinical decision-making based on evidence more efficiently.
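The accuracy, sensitivity, and specificity ranges reported across these studies all derive from a 2x2 confusion matrix. A small illustrative sketch (the counts are made up, not taken from any of the reviewed studies):

```python
def diagnostic_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Standard diagnostic metrics from confusion-matrix counts."""
    return {
        "accuracy":    (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),   # recall on the positive class
        "specificity": tn / (tn + fp),   # recall on the negative class
    }

# Hypothetical counts for an AI bone-quality classifier vs. expert labels.
m = diagnostic_metrics(tp=95, fp=14, tn=66, fn=5)
print({k: round(v, 3) for k, v in m.items()})
# → {'accuracy': 0.894, 'sensitivity': 0.95, 'specificity': 0.825}
```

Reporting all three together matters because, as the ranges above show, a model can pair high sensitivity with comparatively weak specificity.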

A Co-Plane Machine Learning Model Based on Ultrasound Radiomics for the Evaluation of Diabetic Peripheral Neuropathy.

Jiang Y, Peng R, Liu X, Xu M, Shen H, Yu Z, Jiang Z

PubMed | Aug 8 2025
Detection of diabetic peripheral neuropathy (DPN) is critical for preventing severe complications. Machine learning (ML) and radiomics offer promising approaches for the diagnosis of DPN; however, their application in ultrasound-based detection of DPN remains limited. Moreover, there is no consensus on whether longitudinal or transverse ultrasound planes provide more robust radiomic features for nerve evaluation. This study aimed to analyze and compare radiomic features from different ultrasound planes of the tibial nerve and to develop a co-plane fusion ML model to enhance the diagnostic accuracy of DPN. In our study, a total of 516 feet from 262 patients with diabetes across two institutions were analyzed and stratified into a training cohort (n = 309), an internal testing cohort (n = 133), and an external testing cohort (n = 74). A total of 1316 radiomic features were extracted from both transverse and longitudinal planes of the tibial nerve. After feature selection, six ML algorithms were utilized to construct radiomics models based on transverse, longitudinal, and combined planes. The performance of these models was assessed using receiver operating characteristic curves, calibration curves, and decision curve analysis (DCA). Shapley Additive exPlanations (SHAP) were employed to elucidate the key features and their contributions to predictions within the optimal model. The co-plane Support Vector Machine (SVM) model exhibited superior performance, achieving AUC values of 0.90 (95% CI: 0.86-0.93), 0.88 (95% CI: 0.84-0.91), and 0.70 (95% CI: 0.64-0.76) in the training, internal testing, and external testing cohorts, respectively. These results significantly exceeded those of the single-plane models, as determined by the DeLong test (P < 0.05). Calibration and DCA curves indicated a good model fit and suggested potential clinical utility. The co-plane SVM model, which integrates transverse and longitudinal radiomic features of the tibial nerve, demonstrated optimal performance in DPN prediction, thereby significantly enhancing the efficacy of DPN diagnosis. This model may serve as a robust tool for noninvasive assessment of DPN, highlighting its promising applicability in clinical settings.
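The "co-plane" idea can be reduced to concatenating per-plane feature vectors before model fitting. A minimal sketch, not the study's pipeline: features are synthetic random data, the toy label rule and all parameters are assumptions:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 200
X_trans = rng.normal(size=(n, 20))   # transverse-plane radiomic features
X_long = rng.normal(size=(n, 20))    # longitudinal-plane radiomic features
y = (X_trans[:, 0] + X_long[:, 0] > 0).astype(int)  # toy DPN label

# Co-plane fusion: concatenate per-plane features for each nerve.
X = np.hstack([X_trans, X_long])
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", probability=True, random_state=0).fit(Xtr, ytr)
auc = roc_auc_score(yte, clf.predict_proba(Xte)[:, 1])
print(round(auc, 2))
```

Because the toy label depends on one feature from each plane, a model trained on either plane alone would underperform the fused model, mirroring the study's finding that the co-plane model beat both single-plane models.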

XAG-Net: A Cross-Slice Attention and Skip Gating Network for 2.5D Femur MRI Segmentation

Byunghyun Ko, Anning Tian, Jeongkyu Lee

arXiv preprint | Aug 8 2025
Accurate segmentation of femur structures from Magnetic Resonance Imaging (MRI) is critical for orthopedic diagnosis and surgical planning but remains challenging due to the limitations of existing 2D and 3D deep learning-based segmentation approaches. In this study, we propose XAG-Net, a novel 2.5D U-Net-based architecture that incorporates pixel-wise cross-slice attention (CSA) and skip attention gating (AG) mechanisms to enhance inter-slice contextual modeling and intra-slice feature refinement. Unlike previous CSA-based models, XAG-Net applies pixel-wise softmax attention across adjacent slices at each spatial location for fine-grained inter-slice modeling. Extensive evaluations demonstrate that XAG-Net surpasses baseline 2D, 2.5D, and 3D U-Net models in femur segmentation accuracy while maintaining computational efficiency. Ablation studies further validate the critical role of the CSA and AG modules, establishing XAG-Net as a promising framework for efficient and accurate femur MRI segmentation.
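The abstract's key mechanism, pixel-wise softmax attention across adjacent slices, can be sketched in a few lines. This is a hedged numpy illustration of the idea, not XAG-Net's implementation: the dot-product scoring, 3-slice window, and tensor shapes are all assumptions:

```python
import numpy as np

def cross_slice_attention(feats: np.ndarray) -> np.ndarray:
    """feats: (S, C, H, W) features for S adjacent slices.
    Returns (C, H, W) attended features for the center slice, where at each
    spatial location a softmax over the slice axis weights the neighbors."""
    center = feats[feats.shape[0] // 2]                 # (C, H, W)
    scores = np.einsum("chw,schw->shw", center, feats)  # per-pixel dot products
    scores = scores / np.sqrt(feats.shape[1])           # scale by sqrt(C)
    scores -= scores.max(axis=0, keepdims=True)         # numerical stability
    attn = np.exp(scores) / np.exp(scores).sum(axis=0)  # softmax over slices
    return np.einsum("shw,schw->chw", attn, feats)      # weighted slice sum

feats = np.random.default_rng(1).normal(size=(3, 8, 16, 16))
out = cross_slice_attention(feats)
print(out.shape)  # → (8, 16, 16)
```

The point of doing this per pixel, rather than with one global weight per slice, is that anatomy can continue across slices in some regions and not others; each spatial location decides independently how much context to borrow.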

A Multimodal Deep Learning Ensemble Framework for Building a Spine Surgery Triage System.

Siavashpour M, McCabe E, Nataraj A, Pareek N, Zaiane O, Gross D

PubMed | Aug 7 2025
Spinal radiology reports and physician-completed questionnaires serve as crucial resources for medical decision-making for patients experiencing low back and neck pain. However, because reviewing them is time-consuming, individuals with severe conditions may experience a deterioration in their health before receiving professional care. In this work, we propose an ensemble framework built on top of pre-trained BERT-based models that classifies patients by their need for surgery from multiple data modalities, including radiology reports and questionnaires. Our results demonstrate that our approach outperforms previous studies, effectively integrating information from multiple data modalities and serving as a valuable tool to assist physicians in decision-making.
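One common way to combine per-modality models into an ensemble like the one described is soft voting: average the predicted surgery probabilities from each modality-specific classifier. An illustrative sketch, not the paper's model; the probabilities below are dummy stand-ins for real BERT-head outputs:

```python
import numpy as np

def ensemble_predict(prob_by_modality: np.ndarray, threshold: float = 0.5):
    """prob_by_modality: (n_modalities, n_patients) predicted P(surgery).
    Returns the fused probabilities and a binary triage decision."""
    fused = prob_by_modality.mean(axis=0)        # soft-voting fusion
    return fused, (fused >= threshold).astype(int)

probs = np.array([
    [0.9, 0.2, 0.6],   # e.g. report-based model's outputs
    [0.7, 0.4, 0.3],   # e.g. questionnaire-based model's outputs
])
fused, decision = ensemble_predict(probs)
print(fused)      # → [0.8  0.3  0.45]
print(decision)   # → [1 0 0]
```

In a triage setting the threshold would typically be tuned toward high sensitivity, since the cost of missing a surgical candidate exceeds that of an unnecessary referral.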

Automated detection of wrist ganglia in MRI using convolutional neural networks.

Hämäläinen M, Sormaala M, Kaseva T, Salli E, Savolainen S, Kangasniemi M

PubMed | Aug 7 2025
To investigate the feasibility of a method that combines segmenting convolutional neural networks (CNNs) for the automated detection of ganglion cysts in 2D MRI of the wrist. The study serves as proof-of-concept, demonstrating a method to decrease false positives and offering an efficient solution for ganglia detection. We retrospectively analyzed 58 MRI studies with wrist ganglia, each including 2D axial, sagittal, and coronal series. Manual segmentations were performed by a radiologist and used to train CNNs for automatic segmentation of each orthogonal series. Predictions were fused into a single 3D volume using a proposed prediction fusion method. Performance was evaluated over all studies using six-fold cross-validation, comparing method variations with metrics including true positive rate (TPR), number of false positives (FP), and F-score. The proposed method reached a mean TPR of 0.57, mean FP count of 0.4, and mean F-score of 0.53. Fusion of series predictions significantly decreased the number of false positives but also decreased TPR values. CNNs can detect ganglion cysts in wrist MRI, and the number of false positives can be decreased by fusing predictions from multiple CNNs.
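A hedged sketch of one way such prediction fusion could work (the paper's exact method is not specified here): axial, sagittal, and coronal probability volumes are averaged voxel-wise, and a voxel is kept only if the fused probability clears a threshold, so that detections seen in only one plane are suppressed. Shapes, probabilities, and the threshold are invented for illustration:

```python
import numpy as np

def fuse_planes(p_ax, p_sag, p_cor, thresh: float = 0.5):
    """Each input: (D, H, W) probability volume resampled from one series."""
    fused = (p_ax + p_sag + p_cor) / 3.0
    return fused >= thresh

shape = (8, 8, 8)
p_ax, p_sag, p_cor = np.zeros(shape), np.zeros(shape), np.zeros(shape)
p_ax[2:5, 2:5, 2:5] = 0.9    # "cyst" detected in all three planes
p_sag[2:5, 2:5, 2:5] = 0.8
p_cor[2:5, 2:5, 2:5] = 0.7
p_ax[6, 6, 6] = 0.9          # spurious detection in one plane only

mask = fuse_planes(p_ax, p_sag, p_cor)
print(int(mask.sum()))  # → 27: the single-plane false positive is rejected
```

This illustrates the trade-off the abstract reports: requiring cross-plane agreement removes false positives, but a true cyst missed in one series also has its fused probability pulled down, lowering the true positive rate.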

Predictive Modeling of Osteonecrosis of the Femoral Head Progression Using MobileNetV3_Large and Long Short-Term Memory Network: Novel Approach.

Kong G, Zhang Q, Liu D, Pan J, Liu K

PubMed | Aug 6 2025
The assessment of osteonecrosis of the femoral head (ONFH) often presents challenges in accuracy and efficiency. Traditional methods rely on imaging studies and clinical judgment, prompting the need for advanced approaches. This study aims to use deep learning algorithms to enhance disease assessment and prediction in ONFH, optimizing treatment strategies. The primary objective of this research is to analyze pathological images of ONFH using advanced deep learning algorithms to evaluate treatment response, vascular reconstruction, and disease progression. By identifying the most effective algorithm, this study seeks to equip clinicians with precise tools for disease assessment and prediction. Magnetic resonance imaging (MRI) data from 30 patients diagnosed with ONFH were collected, totaling 1200 slices, which included 675 slices with lesions and 225 normal slices. The dataset was divided into training (630 slices), validation (135 slices), and test (135 slices) sets. A total of 10 deep learning algorithms were tested for training and optimization, and MobileNetV3_Large was identified as the optimal model for subsequent analyses. This model was applied for quantifying vascular reconstruction, evaluating treatment responses, and assessing lesion progression. In addition, a long short-term memory (LSTM) model was integrated for the dynamic prediction of time-series data. The MobileNetV3_Large model demonstrated an accuracy of 96.5% (95% CI 95.1%-97.8%) and a recall of 94.8% (95% CI 93.2%-96.4%) in ONFH diagnosis, significantly outperforming DenseNet201 (87.3%; P<.05). Quantitative evaluation of treatment responses showed that vascularized bone grafting resulted in an average increase of 12.4 mm in vascular length (95% CI 11.2-13.6 mm; P<.01) and an increase of 2.7 in branch count (95% CI 2.3-3.1; P<.01) among the 30 patients. The model achieved an AUC of 0.92 (95% CI 0.90-0.94) for predicting lesion progression, outperforming traditional methods like ResNet50 (AUC=0.85; P<.01). Predictions were consistent with clinical observations in 92.5% of cases (24/26). The application of deep learning algorithms in examining treatment response, vascular reconstruction, and disease progression in ONFH presents notable advantages. This study offers clinicians a precise tool for disease assessment and highlights the significance of using advanced technological solutions in health care practice.
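The AUC figures quoted above have a direct probabilistic reading: the chance the model ranks a randomly chosen progressing lesion above a randomly chosen stable one. A small sketch computing AUC from that definition (the Mann-Whitney statistic); the score lists are invented for illustration:

```python
def auc_from_scores(pos_scores, neg_scores) -> float:
    """AUC as the fraction of (positive, negative) pairs ranked correctly,
    counting ties as half."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

pos = [0.9, 0.8, 0.75, 0.6]   # hypothetical scores for progressing lesions
neg = [0.7, 0.4, 0.3, 0.2]    # hypothetical scores for stable lesions
print(auc_from_scores(pos, neg))  # → 0.9375
```

An AUC of 0.92 thus means the model orders roughly 92% of progressor/non-progressor pairs correctly, independent of any particular decision threshold.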

Current applications of deep learning in vertebral fracture diagnosis.

Gu Y, Wang Y, Li M, Wang R

PubMed | Aug 6 2025
Deep learning is a machine learning method that uses artificial neural networks to build decision-making models. Recent advances in computing power and algorithms have enhanced deep learning's potential for vertebral fracture diagnosis in medical imaging. The application of deep learning in vertebral fracture diagnosis, including the identification of vertebrae and the classification of vertebral fracture types, might significantly reduce the workload of radiologists and orthopedic surgeons as well as greatly improve the accuracy of vertebral fracture diagnosis. In this narrative review, we summarize the application of deep learning models in the diagnosis of vertebral fractures.