
Artificial intelligence in pediatric osteopenia diagnosis: evaluating deep network classification and model interpretability using wrist X-rays.

Harris CE, Liu L, Almeida L, Kassick C, Makrogiannis S

PubMed · Jun 1, 2025
Osteopenia is a bone disorder that causes low bone density and affects millions of people worldwide. Diagnosis of this condition is commonly achieved through clinical assessment of bone mineral density (BMD). State-of-the-art machine learning (ML) techniques, such as convolutional neural networks (CNNs) and transformer models, have gained increasing popularity in medicine. In this work, we employ six deep networks for osteopenia vs. healthy bone classification using X-ray imaging from the pediatric wrist dataset GRAZPEDWRI-DX. We apply two explainable AI techniques to analyze and interpret visual explanations for network decisions. Experimental results show that deep networks are able to effectively learn osteopenic and healthy bone features, achieving high classification accuracy. Among the six evaluated networks, DenseNet201 with transfer learning yielded the top classification accuracy of 95.2%. Furthermore, visual explanations of CNN decisions provide valuable insight into the black-box inner workings of the networks and present interpretable results. Our evaluation of deep network classification results highlights their capability to accurately differentiate between osteopenic and healthy bones in pediatric wrist X-rays. The combination of high classification accuracy and interpretable visual explanations underscores the promise of incorporating machine learning techniques into clinical workflows for the early and accurate diagnosis of osteopenia.
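
As a rough illustration of the transfer-learning setup described above, the sketch below repurposes an ImageNet-pretrained DenseNet201 for two-class (osteopenic vs. healthy) wrist X-ray classification in PyTorch. The frozen-backbone strategy, optimizer, learning rate, and the train_step helper are illustrative assumptions, not the authors' exact configuration.

import torch
import torch.nn as nn
from torchvision import models

# DenseNet201 with ImageNet weights; replace the classifier head with a
# 2-class head for osteopenic vs. healthy bone.
model = models.densenet201(weights=models.DenseNet201_Weights.IMAGENET1K_V1)
model.classifier = nn.Linear(model.classifier.in_features, 2)

# Optionally freeze the convolutional features and fine-tune only the new head.
for param in model.features.parameters():
    param.requires_grad = False

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-4)

def train_step(images, labels):
    # One optimization step on a batch of radiographs (dataloader not shown).
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()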

Advanced Three-Dimensional Assessment and Planning for Hallux Valgus.

Forin Valvecchi T, Marcolli D, De Cesar Netto C

PubMed · Jun 1, 2025
The article discusses advanced three-dimensional evaluation of hallux valgus deformity using weightbearing computed tomography. Conventional two-dimensional radiographs fall short in assessing the complexity of hallux valgus deformities, whereas weightbearing computed tomography provides detailed insights into bone alignment and joint stability in a weightbearing state. Recent studies have highlighted the significance of first ray hypermobility and intrinsic metatarsal rotation in hallux valgus, influencing surgical planning and outcomes. The integration of semiautomatic and artificial intelligence-assisted tools with weightbearing computed tomography is enhancing the precision of deformity assessment, leading to more personalized and effective hallux valgus management.

Accuracy of an Automated Bone Scan Index Measurement System Enhanced by Deep Learning of the Female Skeletal Structure in Patients with Breast Cancer.

Fukai S, Daisaki H, Yamashita K, Kuromori I, Motegi K, Umeda T, Shimada N, Takatsu K, Terauchi T, Koizumi M

PubMed · Jun 1, 2025
VSBONE® BSI (VSBONE), an automated bone scan index (BSI) measurement system, was updated from version 2.1 (ver.2) to 3.0 (ver.3). VSBONE ver.3 incorporates deep learning of the skeletal structures of 957 new women, and it can be applied in patients with breast cancer. However, the performance of the updated VSBONE remains unclear. This study aimed to validate the diagnostic accuracy of the VSBONE system in patients with breast cancer. In total, 220 Japanese patients with breast cancer who underwent bone scintigraphy with single-photon emission computed tomography/computed tomography (SPECT/CT) were retrospectively analyzed. The patients were diagnosed with active bone metastases (n = 20) or non-bone metastases (n = 200) according to the physicians' radiographic image interpretation. The patients were assessed using VSBONE ver.2 and VSBONE ver.3, and the BSI findings were compared with the physicians' interpretation results. The occurrence of segmentation errors, the association of BSI between VSBONE ver.2 and VSBONE ver.3, and the diagnostic accuracy of the systems were evaluated. VSBONE ver.2 and VSBONE ver.3 had segmentation errors in four and two patients, respectively. A significant positive linear correlation was confirmed between the BSI values of the two versions (r = 0.92). The diagnostic accuracy was 54.1% for VSBONE ver.2 and 80.5% for VSBONE ver.3 (P < 0.001). The diagnostic accuracy of VSBONE was improved through deep learning of female skeletal structures. The updated VSBONE ver.3 can be a reliable automated system for measuring BSI in patients with breast cancer.
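
A minimal sketch of the kind of agreement and accuracy analysis reported above is given below, assuming per-patient BSI values from both software versions and the physicians' reference labels are available as arrays; the function name, arguments, and the BSI cut-off are placeholders rather than values from the study.

import numpy as np
from scipy.stats import pearsonr

def compare_bsi_versions(bsi_v2, bsi_v3, has_mets, threshold=0.5):
    # Linear agreement between the two software versions
    # (the abstract reports r = 0.92).
    r, _ = pearsonr(bsi_v2, bsi_v3)

    # Diagnostic accuracy of a simple threshold rule on the newer version's BSI;
    # the cut-off of 0.5 is illustrative only.
    pred = (np.asarray(bsi_v3) > threshold).astype(int)
    accuracy = (pred == np.asarray(has_mets)).mean()
    return r, accuracy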

Prognostic assessment of osteolytic lesions and mechanical properties of bones bearing breast cancer using neural network and finite element analysis.

Wang S, Chu T, Wasi M, Guerra RM, Yuan X, Wang L

PubMed · Jun 1, 2025
The management of skeletal-related events (SREs), particularly the prevention of pathological fractures, is crucial for cancer patients. Current clinical assessment of fracture risk is mostly based on medical images, but incorporating sequential images into the assessment remains challenging. This study addressed this issue by leveraging a comprehensive dataset consisting of 260 longitudinal micro-computed tomography (μCT) scans acquired in normal and breast cancer-bearing mice. A machine learning (ML) model based on a spatial-temporal neural network was built to forecast bone structures from previous μCT scans, which were found to have an overall similarity coefficient (Dice) of 0.814 with the ground truths. Although the predicted lesion volumes (18.5% ± 15.3%) underestimated the ground truths (22.1% ± 14.8%) by about 21%, the time course of lesion growth was better represented in the predicted images than in the preceding scans (10.8% ± 6.5%). Under virtual biomechanical testing using finite element analysis (FEA), the predicted bone structures recapitulated the load-carrying behaviors of the ground truth structures with a positive correlation (y = 0.863x) and a high coefficient of determination (R² = 0.955). Interestingly, the compliances of the predicted and ground truth structures demonstrated nearly identical linear relationships with the lesion volumes. In summary, we have demonstrated that bone deterioration could be proficiently predicted using machine learning in our preclinical dataset, suggesting the importance of large longitudinal clinical imaging datasets in fracture risk assessment for cancer bone metastasis.
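
The overlap between predicted and ground-truth bone structures is summarized above with the Dice similarity coefficient; the sketch below shows that metric for binary μCT volumes, with the probability threshold and array handling as assumptions rather than details from the paper.

import numpy as np

def dice_coefficient(pred, truth, threshold=0.5):
    # Binarize the predicted volume (it may be a probability map) and the
    # ground-truth volume, then compute 2|A ∩ B| / (|A| + |B|).
    pred_bin = np.asarray(pred) > threshold
    truth_bin = np.asarray(truth) > 0.5
    intersection = np.logical_and(pred_bin, truth_bin).sum()
    denom = pred_bin.sum() + truth_bin.sum()
    if denom == 0:
        return 1.0  # both volumes empty: treat as perfect agreement
    return 2.0 * intersection / denom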

Ensemble learning of deep CNN models and two stage level prediction of Cobb angle on surface topography in adolescents with idiopathic scoliosis.

Hassan M, Gonzalez Ruiz JM, Mohamed N, Burke TN, Mei Q, Westover L

PubMed · Jun 1, 2025
This study employs Convolutional Neural Networks (CNNs) as feature extractors with appended regression layers for the non-invasive prediction of the Cobb Angle (CA) from Surface Topography (ST) scans in adolescents with Idiopathic Scoliosis (AIS). The aim is to minimize radiation exposure during critical growth periods by offering a reliable, non-invasive assessment tool. The efficacy of various CNN-based feature extractors (DenseNet121, EfficientNetB0, ResNet18, SqueezeNet, and a modified U-Net) was evaluated on a dataset of 654 ST scans using a regression analysis framework for accurate CA prediction. The dataset comprised 590 training and 64 testing scans. Performance was evaluated using Mean Absolute Error (MAE), Root Mean Square Error (RMSE), and accuracy in classifying scoliosis severity (mild, moderate, severe) based on CA measurements. The EfficientNetB0 feature extractor outperformed the other models, demonstrating strong performance on the training set (R = 0.96, R² = 0.93) and achieving an MAE of 6.13° and an RMSE of 7.5° on the test set. In terms of scoliosis severity classification, it achieved high precision (84.62%) and specificity (95.65% for mild cases and 82.98% for severe cases), highlighting its clinical applicability in AIS management. The regression-based approach using EfficientNetB0 as a feature extractor presents a significant advancement for accurately determining CA from ST scans, offering a promising tool for improving scoliosis severity categorization and management in adolescents.
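
The feature-extractor-plus-regression design can be sketched as follows in PyTorch, with an EfficientNetB0 backbone feeding a small regression head that outputs a single Cobb angle; the head architecture, pooling, and loss choice are assumptions and not necessarily the authors' configuration.

import torch.nn as nn
from torchvision import models

class CobbAngleRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = models.efficientnet_b0(
            weights=models.EfficientNet_B0_Weights.IMAGENET1K_V1)
        self.features = backbone.features        # convolutional feature extractor
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(1280, 64),                  # 1280 = EfficientNetB0 feature channels
            nn.ReLU(),
            nn.Linear(64, 1),                     # single continuous output: Cobb angle in degrees
        )

    def forward(self, x):
        return self.regressor(self.pool(self.features(x)))

model = CobbAngleRegressor()
criterion = nn.L1Loss()  # mean absolute error, matching the reported MAE metric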

Generating Synthetic T2*-Weighted Gradient Echo Images of the Knee with an Open-source Deep Learning Model.

Vrettos K, Vassalou EE, Vamvakerou G, Karantanas AH, Klontzas ME

PubMed · Jun 1, 2025
Routine knee MRI protocols for 1.5 T and 3 T scanners do not include T2*-weighted gradient echo (T2*W) images, which are useful in several clinical scenarios such as the assessment of cartilage, synovial blooming (deposition of hemosiderin), and chondrocalcinosis, and the evaluation of the physis in pediatric patients. Herein, we aimed to develop an open-source deep learning model that creates synthetic T2*W images of the knee using fat-suppressed intermediate-weighted images. A cycleGAN model was trained with 12,118 sagittal knee MR images and tested on an independent set of 2996 images. Diagnostic interchangeability of the synthetic T2*W images was assessed against a series of findings. Voxel intensity of four tissues was evaluated with Bland-Altman plots. Image quality was assessed using the normalized root mean squared error (NRMSE), structural similarity index (SSIM), and peak signal-to-noise ratio (PSNR). Code, the model, and a standalone executable file are provided on GitHub. The model achieved a median NRMSE, PSNR, and SSIM of 0.5, 17.4, and 0.5, respectively. Images were found interchangeable, with an intraclass correlation coefficient >0.95 for all findings. Mean voxel intensity was equal between synthetic and conventional images. Four types of artifacts were identified: geometric distortion (86/163 cases), object insertion/omission (11/163 cases), a wrap-around-like artifact (26/163 cases), and an incomplete fat-suppression artifact (120/163 cases), with a median impact score of 0 (no impact) on diagnosis. In conclusion, the developed open-source GAN model creates synthetic T2*W images of the knee of high diagnostic value and quality. The identified artifacts had no or minor effect on the diagnostic value of the images.
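
The reported image-quality metrics can be computed per slice pair with scikit-image; a minimal sketch is shown below, assuming the reference and synthetic T2*W slices are available as 2D floating-point arrays (data loading and any intensity normalization are not shown).

import numpy as np
from skimage.metrics import (normalized_root_mse,
                             peak_signal_noise_ratio,
                             structural_similarity)

def image_quality(reference, synthetic):
    # NRMSE, PSNR, and SSIM between one reference slice and its synthetic counterpart.
    ref = np.asarray(reference, dtype=np.float64)
    syn = np.asarray(synthetic, dtype=np.float64)
    data_range = ref.max() - ref.min()
    nrmse = normalized_root_mse(ref, syn)
    psnr = peak_signal_noise_ratio(ref, syn, data_range=data_range)
    ssim = structural_similarity(ref, syn, data_range=data_range)
    return nrmse, psnr, ssim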

Diagnosis of carpal tunnel syndrome using deep learning with comparative guidance.

Sim J, Lee S, Kim S, Jeong SH, Yoon J, Baek S

PubMed · Jun 1, 2025
This study aims to develop a deep learning model for robust diagnosis of Carpal Tunnel Syndrome (CTS) based on comparative classification leveraging ultrasound images of the thenar and hypothenar muscles. We recruited 152 participants, comprising patients with varying severities of CTS and healthy individuals. The enrolled participants underwent ultrasonography, which provided ultrasound image data of the thenar and hypothenar muscles, innervated by the median and ulnar nerves, respectively. These images were used to train a deep learning model. We compared the performance of our model with previous comparative methods using echo intensity ratio or machine learning, and with non-comparative methods based on deep learning. During the training process, comparative guidance based on cosine similarity was used so that the model learns to automatically identify abnormal differences in echotexture between the ultrasound images of the thenar and hypothenar muscles. The proposed deep learning model with comparative guidance showed the highest performance. The comparison of receiver operating characteristic (ROC) curves between models demonstrated that comparative guidance was effective in autonomously identifying complex features within the CTS dataset. The proposed deep learning model with comparative guidance was shown to be effective in automatically identifying important features for CTS diagnosis from ultrasound images. The proposed comparative approach was found to be robust to traditional problems in ultrasound image analysis, such as differing cut-off values and anatomical variation among patients. The proposed deep learning methodology facilitates accurate and efficient diagnosis of CTS from ultrasound images.
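
One way to realize cosine-similarity-based comparative guidance is as an auxiliary loss that pulls the embeddings of paired thenar and hypothenar images together for healthy wrists and pushes them apart for CTS cases; the sketch below is an assumed formulation with an added margin parameter, not the authors' exact objective.

import torch
import torch.nn.functional as F

def comparative_guidance_loss(thenar_emb, hypothenar_emb, labels, margin=0.0):
    # thenar_emb, hypothenar_emb: (batch, dim) feature vectors from the two
    # ultrasound views of the same participant.
    # labels: 1 for CTS (echotexture should differ), 0 for healthy (should match).
    cos = F.cosine_similarity(thenar_emb, hypothenar_emb, dim=1)
    loss_healthy = (1.0 - cos) * (1 - labels).float()                # reward similarity
    loss_cts = torch.clamp(cos - margin, min=0.0) * labels.float()   # penalize similarity
    return (loss_healthy + loss_cts).mean()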

Expanded AI learning: AI as a Tool for Human Learning.

Faghani S, Tiegs-Heiden CA, Moassefi M, Powell GM, Ringler MD, Erickson BJ, Rhodes NG

PubMed · Jun 1, 2025
To demonstrate that a deep learning (DL) model can be employed as a teaching tool to improve radiologists' ability to perform a subsequent imaging task without additional artificial intelligence (AI) assistance at the time of image interpretation. Three human readers were tasked with categorizing 50 frontal knee radiographs by male and female sex before and after reviewing data derived from our DL model. The model's high accuracy in performing this task was revealed to the human subjects, who were also supplied with the DL model's resultant occlusion interpretation maps ("heat maps") to serve as a teaching tool for study before final testing. Two weeks later, the three human readers performed the same task with a new set of 50 radiographs. The average accuracy of the three human readers was initially 0.59 (95% CI: 0.59-0.65), not statistically different from guessing given our sample skew. The DL model categorized sex with 0.96 accuracy. After studying the AI-derived "heat maps" and associated radiographs, the average accuracy of the human readers, without the direct help of AI, on the new set of radiographs increased to 0.80 (95% CI: 0.73-0.86), a significant improvement (p = 0.0270). AI-derived data can be used as a teaching tool to improve radiologists' own ability to perform an imaging task, an idea that we have not previously seen advanced in the radiology literature. AI can be used as a teaching tool to improve the intrinsic accuracy of radiologists, even without the concurrent use of AI.
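
Occlusion interpretation maps of the kind supplied to the readers can be produced by sliding a masking patch across the radiograph and recording how much the model's predicted probability for the target class drops; the sketch below assumes a PyTorch classifier, and the patch size, stride, and fill value are illustrative choices.

import numpy as np
import torch

def occlusion_map(model, image, target_class, patch=16, stride=8, fill=0.0):
    # image: tensor of shape (1, C, H, W); model returns class logits.
    # The map is large where masking a region lowers the target probability.
    model.eval()
    with torch.no_grad():
        base = torch.softmax(model(image), dim=1)[0, target_class].item()
        _, _, h, w = image.shape
        heat = np.zeros(((h - patch) // stride + 1, (w - patch) // stride + 1))
        for i, y in enumerate(range(0, h - patch + 1, stride)):
            for j, x in enumerate(range(0, w - patch + 1, stride)):
                occluded = image.clone()
                occluded[:, :, y:y + patch, x:x + patch] = fill
                prob = torch.softmax(model(occluded), dim=1)[0, target_class].item()
                heat[i, j] = base - prob
    return heat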

Deep learning-based acceleration of high-resolution compressed sense MR imaging of the hip.

Marka AW, Meurer F, Twardy V, Graf M, Ebrahimi Ardjomand S, Weiss K, Makowski MR, Gersing AS, Karampinos DC, Neumann J, Woertler K, Banke IJ, Foreman SC

PubMed · Jun 1, 2025
To evaluate a Compressed Sense Artificial Intelligence framework (CSAI) incorporating parallel imaging, compressed sense (CS), and deep learning for high-resolution MRI of the hip, comparing it with standard-resolution CS imaging. Thirty-two patients with femoroacetabular impingement syndrome underwent 3 T MRI scans. Coronal and sagittal intermediate-weighted TSE sequences with fat saturation were acquired using CS (0.6 × 0.8 mm resolution) and CSAI (0.3 × 0.4 mm resolution) protocols in comparable acquisition times (7:49 vs. 8:07 minutes for both planes). Two readers systematically assessed the depiction of the acetabular and femoral cartilage (in five cartilage zones), labrum, ligamentum capitis femoris, and bone using a five-point Likert scale. Diagnostic confidence and abnormality detection were recorded and analyzed using the Wilcoxon signed-rank test. CSAI significantly improved cartilage depiction across most cartilage zones compared with CS. Overall Likert scores were 4.0 ± 0.2 (CS) vs. 4.2 ± 0.6 (CSAI) for reader 1 and 4.0 ± 0.2 (CS) vs. 4.3 ± 0.6 (CSAI) for reader 2 (p ≤ 0.001). Diagnostic confidence increased from 3.5 ± 0.7 and 3.9 ± 0.6 (CS) to 4.0 ± 0.6 and 4.1 ± 0.7 (CSAI) for readers 1 and 2, respectively (p ≤ 0.001). More cartilage lesions were detected with CSAI, with significant improvements in diagnostic confidence in certain cartilage zones, such as femoral zones C and D, for both readers. Labrum and ligamentum capitis femoris depiction remained similar, while bone depiction was rated lower. No abnormalities detected with CS were missed with CSAI. CSAI provides high-resolution hip MR images with enhanced cartilage depiction without extending acquisition times, potentially enabling more precise hip cartilage assessment.
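
The paired reader-score comparison corresponds to a Wilcoxon signed-rank test on per-patient Likert scores; a minimal sketch with SciPy is shown below, with the example score lists standing in for the real reader data.

from scipy.stats import wilcoxon

def compare_protocols(scores_cs, scores_csai):
    # Paired, two-sided test of Likert scores given by the same reader for the
    # same structures under the CS and CSAI protocols.
    stat, p_value = wilcoxon(scores_cs, scores_csai)
    return stat, p_value

# Illustrative (made-up) scores for eight patients:
# compare_protocols([4, 4, 3, 4, 4, 3, 4, 4], [5, 4, 4, 5, 4, 4, 5, 5])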

Bridging innovation to implementation in artificial intelligence fracture detection: a commentary piece.

Khattak M, Kierkegaard P, McGregor A, Perry DC

PubMed · Jun 1, 2025
The deployment of AI in medical imaging, particularly in areas such as fracture detection, represents a transformative advancement in orthopaedic care. AI-driven systems, leveraging deep-learning algorithms, promise to enhance diagnostic accuracy, reduce variability, and streamline workflows by analyzing radiographs swiftly and accurately. Despite these potential benefits, the integration of AI into clinical settings faces substantial barriers, including slow adoption across health systems, technical challenges, and a major lag between technology development and clinical implementation. This commentary explores the role of AI in healthcare, highlighting its potential to enhance patient outcomes through more accurate and timely diagnoses. It addresses the necessity of bridging the gap between AI innovation and practical application, and it emphasizes the importance of implementation science in effectively integrating AI technologies into healthcare systems, using frameworks such as the Consolidated Framework for Implementation Research and the Knowledge-to-Action Cycle to guide this process. We call for a structured approach to addressing the challenges of deploying AI in clinical settings, ensuring that AI's benefits translate into improved healthcare delivery and patient care.