Page 15 of 3433422 results

Sex classification from hand X-ray images in pediatric patients: How zero-shot Segment Anything Model (SAM) can improve medical image analysis.

Mollineda RA, Becerra K, Mederos B

pubmed | logopapers | Sep 13, 2025
The potential to classify sex from hand data is a valuable tool in both forensic and anthropological sciences. This work presents possibly the most comprehensive study to date of sex classification from hand X-ray images. The research methodology involves a systematic evaluation of the zero-shot Segment Anything Model (SAM) for X-ray image segmentation; a novel hand mask detection algorithm based on geometric criteria that leverages human knowledge (avoiding costly retraining and prompt engineering); a comparison of multiple X-ray image representations, including hand bone structure and hand silhouette; a rigorous application of deep learning models and ensemble strategies; visual explanation of decisions by aggregating attribution maps from multiple models; and the transfer of models trained on hand silhouettes to sex prediction of prehistoric handprints. Training and evaluation of deep learning models were performed using the RSNA Pediatric Bone Age dataset, a collection of hand X-ray images from pediatric patients. Results showed very high effectiveness of zero-shot SAM in segmenting X-ray images, a measurable benefit of segmenting before classifying X-ray images, hand sex classification accuracy above 95% on test data, and predictions from ancient handprints highly consistent with previous hypotheses based on sexually dimorphic features. Attention maps highlighted the carpometacarpal joints in the female class and the radiocarpal joint in the male class as sex-discriminant traits. These findings closely match previous anatomical evidence reported with different databases, classification models, and visualization techniques.
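The abstract describes selecting a hand mask from SAM's candidate masks by geometric criteria rather than by retraining or prompt engineering. As a hedged illustration of that idea only (the paper's actual criteria are not specified here), a minimal sketch that filters boolean NumPy candidate masks by area fraction and bounding-box aspect ratio:

```python
import numpy as np

def mask_geometry(mask: np.ndarray) -> dict:
    """Basic geometric descriptors of a non-empty binary mask."""
    ys, xs = np.nonzero(mask)
    h = ys.max() - ys.min() + 1
    w = xs.max() - xs.min() + 1
    area = mask.sum()
    return {
        "area_frac": area / mask.size,  # fraction of the image covered
        "aspect": h / w,                # bounding-box height/width ratio
        "extent": area / (h * w),       # how full the bounding box is
    }

def pick_hand_mask(masks, min_area=0.05, max_area=0.9):
    """Choose the candidate whose geometry looks hand-like:
    large but not image-filling, and taller than wide (fingers upward).
    Thresholds here are illustrative assumptions, not the paper's values."""
    candidates = []
    for m in masks:
        if not m.any():
            continue
        g = mask_geometry(m)
        if min_area <= g["area_frac"] <= max_area and g["aspect"] >= 1.0:
            candidates.append((g["area_frac"], m))
    if not candidates:
        return None
    # Prefer the largest admissible region
    return max(candidates, key=lambda t: t[0])[1]
```

In practice the candidate list would come from a SAM automatic mask generator; the point of the knowledge-based filter is that no gradient update or prompt tuning is needed to isolate the hand.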

Association of artificial intelligence-screened interstitial lung disease with radiation pneumonitis in locally advanced non-small cell lung cancer.

Bacon H, McNeil N, Patel T, Welch M, Ye XY, Bezjak A, Lok BH, Raman S, Giuliani M, Cho BCJ, Sun A, Lindsay P, Liu G, Kandel S, McIntosh C, Tadic T, Hope A

pubmed | logopapers | Sep 13, 2025
Interstitial lung disease (ILD) has been correlated with an increased risk of radiation pneumonitis (RP) following lung stereotactic body radiotherapy (SBRT), but the degree to which locally advanced NSCLC (LA-NSCLC) patients are affected has yet to be quantified. An algorithm to identify patients at high risk for RP may help clinicians mitigate risk. All LA-NSCLC patients treated with definitive radiotherapy at our institution from 2006 to 2021 were retrospectively assessed. A convolutional neural network was previously developed to identify patients with radiographic ILD using planning computed tomography (CT) images. All screen-positive (AI-ILD+) patients were reviewed by a thoracic radiologist to identify true radiographic ILD (r-ILD). The associations between the algorithm output, clinical and dosimetric variables, and the outcomes of grade ≥ 3 RP and mortality were assessed using univariate (UVA) and multivariable (MVA) logistic regression and Kaplan-Meier survival analysis. A total of 698 patients were included in the analysis. Grade (G) 0-5 RP was reported in 51%, 27%, 17%, 4.4%, 0.14%, and 0.57% of patients, respectively. Overall, 23% of patients were classified as AI-ILD+. On MVA, only AI-ILD status (OR 2.15, p = 0.03) and AI-ILD score (OR 35.27, p < 0.01) were significant predictors of G3+ RP. Median OS was 3.6 years in AI-ILD- patients and 2.3 years in AI-ILD+ patients (NS). Patients with r-ILD had significantly higher rates of severe toxicities, with 25% G3+ RP and 7% G5 RP. r-ILD was associated with an increased risk of G3+ RP on MVA (OR 5.42, p < 0.01). Our AI-ILD algorithm detects patients with a significantly increased risk of G3+ RP.
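The associations above are reported as odds ratios from logistic regression. For a single binary predictor such as AI-ILD status, the same OR (with a Wald confidence interval) can be read off a 2×2 table; a minimal stdlib sketch with illustrative counts, not the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed with outcome,   b = exposed without,
    c = unexposed with outcome, d = unexposed without."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, (lo, hi)

# Hypothetical counts: 10/30 screen-positives vs 5/45 screen-negatives
# develop G3+ RP.
or_, (lo, hi) = odds_ratio_ci(10, 20, 5, 40)
```

A CI excluding 1.0 corresponds to the "significant predictor" statements in the abstract; the multivariable ORs additionally adjust for the clinical and dosimetric covariates.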

Biomechanical assessment of Hoffa fat pad characteristics with ultrasound: a narrative review focusing on diagnostic imaging and image-guided interventions.

Qin N, Zhang B, Zhang X, Tian L

pubmed | logopapers | Sep 13, 2025
The infrapatellar fat pad (IFP), a key intra-articular knee structure, plays a crucial role in biomechanical cushioning and metabolic regulation, with fibrosis and inflammation contributing to osteoarthritis-related pain and dysfunction. This review outlines the anatomy and clinical value of IFP ultrasonography in static and dynamic assessment, as well as in guided interventions. Shear wave elastography (SWE), Doppler imaging, and dynamic ultrasound effectively quantify tissue stiffness, vascular signals, and flexion-extension morphology. Because ultrasound has limited penetration, the IFP cannot be visualized directly through the patella; however, the real-time capability and sensitivity of ultrasound complement the detailed anatomical information provided by MRI, making it an important adjunct to MRI-based IFP assessment. This integrated approach creates a robust diagnostic pathway, from initial assessment and precise treatment guidance to long-term monitoring. Advances in ultrasound-guided precision medicine, protocol standardization, and the integration of artificial intelligence (AI) with multimodal imaging hold significant promise for improving the management of IFP pathologies.

Three-Dimensional Radiomics and Machine Learning for Predicting Postoperative Outcomes in Laminoplasty for Cervical Spondylotic Myelopathy: A Clinical-Radiomics Model.

Zheng B, Zhu Z, Ma K, Liang Y, Liu H

pubmed | logopapers | Sep 12, 2025
This study explores a method based on three-dimensional cervical spinal cord reconstruction, radiomics feature extraction, and machine learning to build a postoperative prognosis prediction model for patients with cervical spondylotic myelopathy (CSM), and evaluates the predictive performance of different cervical spinal cord segmentation strategies and machine learning algorithms. A retrospective analysis was conducted on 126 CSM patients who underwent posterior single-door laminoplasty from January 2017 to December 2022. Three cervical spinal cord segmentation strategies (narrowest segment, surgical segment, and entire cervical cord C1-C7) were applied to preoperative MRI images for radiomics feature extraction. Good clinical prognosis was defined as a postoperative JOA recovery rate ≥ 50%. By comparing the performance of 8 machine learning algorithms, the optimal cervical spinal cord segmentation strategy and classifier were selected. Clinical features (smoking history, diabetes, preoperative JOA score, and cSVA) were then combined with radiomics features to construct a clinical-radiomics prediction model. Among the three segmentation strategies, the SVM model based on the narrowest segment performed best (AUC = 0.885). Among clinical features, smoking history, diabetes, preoperative JOA score, and cSVA were important indicators for prognosis prediction. When clinical features were combined with radiomics features, the fusion model achieved excellent performance on the test set (accuracy = 0.895, AUC = 0.967), significantly outperforming either the clinical model or the radiomics model alone. This study validates the feasibility and superiority of three-dimensional radiomics combined with machine learning for predicting postoperative prognosis in CSM.
The combination of radiomics features based on the narrowest segment and clinical features can yield a highly accurate prognosis prediction model, providing new insights for clinical assessment and individualized treatment decisions. Future studies need to further validate the stability and generalizability of this model in multi-center, large-sample cohorts.
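The JOA recovery rate used above to define good prognosis is conventionally the Hirabayashi formula, in which 17 is the full cervical JOA score; a minimal sketch (assuming that conventional definition, since the abstract does not spell it out):

```python
def joa_recovery_rate(preop: float, postop: float, full_score: float = 17.0) -> float:
    """Hirabayashi recovery rate (%) for cervical JOA scores."""
    return (postop - preop) / (full_score - preop) * 100.0

def good_prognosis(preop: float, postop: float) -> bool:
    """Good clinical prognosis as defined in the study: recovery rate >= 50%."""
    return joa_recovery_rate(preop, postop) >= 50.0
```

For example, improving from a preoperative JOA of 10 to 14 yields a recovery rate of about 57%, which would label the case as good prognosis for classifier training.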

Assessing accuracy and legitimacy of multimodal large language models on Japan Diagnostic Radiology Board Examination.

Hirano Y, Miki S, Yamagishi Y, Hanaoka S, Nakao T, Kikuchi T, Nakamura Y, Nomura Y, Yoshikawa T, Abe O

pubmed | logopapers | Sep 12, 2025
To assess and compare the accuracy and legitimacy of multimodal large language models (LLMs) on the Japan Diagnostic Radiology Board Examination (JDRBE). The dataset comprised questions from JDRBE 2021, 2023, and 2024, with ground-truth answers established through consensus among multiple board-certified diagnostic radiologists. Questions without associated images and those lacking unanimous agreement on answers were excluded. Eight LLMs were evaluated: GPT-4 Turbo, GPT-4o, GPT-4.5, GPT-4.1, o3, o4-mini, Claude 3.7 Sonnet, and Gemini 2.5 Pro. Each model was evaluated under two conditions: with image input (vision) and without (text-only). Performance differences between the conditions were assessed using McNemar's exact test. Two diagnostic radiologists (with 2 and 18 years of experience) independently rated the legitimacy of responses from four models (GPT-4 Turbo, Claude 3.7 Sonnet, o3, and Gemini 2.5 Pro) using a five-point Likert scale, blinded to model identity. Legitimacy scores were analyzed using Friedman's test, followed by pairwise Wilcoxon signed-rank tests with Holm correction. The dataset included 233 questions. Under the vision condition, o3 achieved the highest accuracy at 72%, followed by o4-mini (70%) and Gemini 2.5 Pro (70%). Under the text-only condition, o3 again ranked first with an accuracy of 67%. Adding image input significantly improved the accuracy of two models (Gemini 2.5 Pro and GPT-4.5) but not the others. Both o3 and Gemini 2.5 Pro received significantly higher legitimacy scores than GPT-4 Turbo and Claude 3.7 Sonnet from both raters. Recent multimodal LLMs, particularly o3 and Gemini 2.5 Pro, have demonstrated remarkable progress on JDRBE questions, reflecting their rapid evolution in diagnostic radiology. Eight multimodal large language models were evaluated on the Japan Diagnostic Radiology Board Examination.
OpenAI's o3 and Google DeepMind's Gemini 2.5 Pro achieved high accuracy rates (72% and 70%) and received good legitimacy scores from human raters, demonstrating steady progress.
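McNemar's exact test, used in this study to compare vision vs. text-only accuracy on the same 233 questions, reduces to an exact binomial test on the discordant pairs (questions answered correctly under only one condition); a minimal stdlib sketch:

```python
from math import comb

def mcnemar_exact(b: int, c: int) -> float:
    """Two-sided exact McNemar p-value.
    b = pairs correct only under condition 1,
    c = pairs correct only under condition 2.
    Under H0, the discordant count follows Binomial(b + c, 0.5)."""
    n = b + c
    k = min(b, c)
    p = 2 * sum(comb(n, i) for i in range(k + 1)) / 2**n
    return min(p, 1.0)
```

Concordant pairs (both right or both wrong) drop out entirely, which is why the test suits paired per-question comparisons of two models or conditions.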

MultiASNet: Multimodal Label Noise Robust Framework for the Classification of Aortic Stenosis in Echocardiography.

Wu V, Fung A, Khodabakhshian B, Abdelsamad B, Vaseli H, Ahmadi N, Goco JAD, Tsang MY, Luong C, Abolmaesumi P, Tsang TSM

pubmed | logopapers | Sep 12, 2025
Aortic stenosis (AS), a prevalent and serious heart valve disorder, requires early detection but remains difficult to diagnose in routine practice. Although echocardiography with Doppler imaging is the clinical standard, these assessments are typically limited to trained specialists. Point-of-care ultrasound (POCUS) offers an accessible alternative for AS screening but is restricted to basic 2D B-mode imaging, often lacking the analysis Doppler provides. Our project introduces MultiASNet, a multimodal machine learning framework designed to enhance AS screening with POCUS by combining 2D B-mode videos with structured data from echocardiography reports, including Doppler parameters. Using contrastive learning, MultiASNet aligns video features with report features in tabular form from the same patient to improve interpretive quality. To address misalignment where a single report corresponds to multiple video views, some irrelevant to AS diagnosis, we use cross-attention in a transformer-based video and tabular network to assign less importance to irrelevant report data. The model integrates structured data only during training, enabling independent use with B-mode videos during inference for broader accessibility. MultiASNet also incorporates sample selection to counteract label noise from observer variability, yielding improved accuracy on two datasets. We achieved balanced accuracy scores of 93.0% on a private dataset and 83.9% on the public TMED-2 dataset for AS detection. For severity classification, balanced accuracy scores were 80.4% and 59.4% on the private and public datasets, respectively. This model facilitates reliable AS screening in non-specialist settings, bridging the gap left by Doppler data while reducing noise-related errors. Our code is publicly available at github.com/DeepRCL/MultiASNet.
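The balanced accuracy scores reported above are the unweighted mean of per-class recall, which keeps the metric honest under class imbalance (screening cohorts contain far more AS-negative than AS-positive studies); a minimal sketch:

```python
def balanced_accuracy(y_true, y_pred):
    """Unweighted mean of per-class recall; robust to class imbalance."""
    classes = sorted(set(y_true))
    recalls = []
    for c in classes:
        idx = [i for i, t in enumerate(y_true) if t == c]
        hits = sum(1 for i in idx if y_pred[i] == c)
        recalls.append(hits / len(idx))
    return sum(recalls) / len(recalls)
```

Unlike plain accuracy, a classifier that always predicts the majority class scores only 1/K on balanced accuracy for K classes, so the 93.0% and 83.9% figures cannot be inflated by the prevalence of normal studies.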

Machine Learning for Preoperative Assessment and Postoperative Prediction in Cervical Cancer: Multicenter Retrospective Model Integrating MRI and Clinicopathological Data.

Li S, Guo C, Fang Y, Qiu J, Zhang H, Ling L, Xu J, Peng X, Jiang C, Wang J, Hua K

pubmed | logopapers | Sep 12, 2025
Machine learning (ML) has been increasingly applied to cervical cancer (CC) research. However, few studies have combined both clinical parameters and imaging data. At the same time, there remains an urgent need for more robust and accurate preoperative assessment of parametrial invasion and lymph node metastasis, as well as postoperative prognosis prediction. The objective of this study is to develop an integrated ML model combining clinicopathological variables and magnetic resonance image features for (1) preoperative parametrial invasion and lymph node metastasis detection and (2) postoperative recurrence and survival prediction. Retrospective data from 250 patients with CC (2014-2022; 2 tertiary hospitals) were analyzed. Variables were assessed for their predictive value regarding parametrial invasion, lymph node metastasis, survival, and recurrence using 7 ML models: K-nearest neighbor (KNN), support vector machine (SVM), decision tree (DT), random forest (RF), balanced RF, weighted DT, and weighted KNN. Performance was assessed via 5-fold cross-validation using accuracy, sensitivity, specificity, precision, F1-score, and area under the receiver operating characteristic curve (AUC). The optimal models were deployed in an artificial intelligence-assisted contouring and prognosis prediction system. Among 250 women, there were 11 deaths and 24 recurrences. (1) For preoperative evaluation, the integrated model using balanced RF achieved optimal performance (sensitivity 0.81, specificity 0.85) for parametrial invasion, while weighted KNN achieved the best performance for lymph node metastasis (sensitivity 0.98, AUC 0.72). (2) For postoperative prognosis, weighted KNN also demonstrated high accuracy for recurrence (accuracy 0.94, AUC 0.86) and mortality (accuracy 0.97, AUC 0.77), with sensitivities of 0.80 and 0.33, respectively.
(3) An artificial intelligence-assisted contouring and prognosis prediction system was developed to support preoperative evaluation and postoperative prognosis prediction. The integration of clinical data and magnetic resonance images enhances the preoperative detection of parametrial invasion and lymph node metastasis and the prediction of recurrence and mortality in CC, facilitating personalized, precise treatment strategies.

Predicting molecular subtypes of pediatric medulloblastoma using MRI-based artificial intelligence: A systematic review and meta-analysis.

Liu J, Zou Z, He Y, Guo Z, Yi C, Huang B

pubmed | logopapers | Sep 12, 2025
This meta-analysis assesses the diagnostic performance of artificial intelligence (AI) based on magnetic resonance imaging (MRI) in detecting molecular subtypes of pediatric medulloblastoma (MB). A thorough review of the literature was performed using PubMed, Embase, and Web of Science to locate pertinent studies published before October 2024. Selected studies focused on the diagnostic performance of MRI-based AI in detecting molecular subtypes of pediatric MB. A bivariate random-effects model was used to calculate pooled sensitivity and specificity, both with 95% confidence intervals (CI). Study heterogeneity was assessed using I<sup>2</sup> statistics. Of the 540 studies identified, eight (involving 1195 patients) were included. For the wingless (WNT) subtype, the combined sensitivity, specificity, and area under the receiver operating characteristic curve (AUC) based on MRI were 0.73 (95% CI: 0.61-0.83, I<sup>2</sup> = 19%), 0.94 (95% CI: 0.79-0.99, I<sup>2</sup> = 93%), and 0.80 (95% CI: 0.77-0.83), respectively. For the sonic hedgehog (SHH) subtype, the combined sensitivity, specificity, and AUC were 0.64 (95% CI: 0.51-0.75, I<sup>2</sup> = 69%), 0.84 (95% CI: 0.80-0.88, I<sup>2</sup> = 54%), and 0.85 (95% CI: 0.81-0.88), respectively. For Group 3 (G3), the combined sensitivity, specificity, and AUC were 0.89 (95% CI: 0.52-0.98, I<sup>2</sup> = 82%), 0.70 (95% CI: 0.62-0.77, I<sup>2</sup> = 44%), and 0.88 (95% CI: 0.84-0.90), respectively. For Group 4 (G4), the combined sensitivity, specificity, and AUC were 0.77 (95% CI: 0.64-0.87, I<sup>2</sup> = 54%), 0.91 (95% CI: 0.68-0.98, I<sup>2</sup> = 80%), and 0.86 (95% CI: 0.83-0.89), respectively. MRI-based artificial intelligence shows high diagnostic performance in detecting molecular subtypes of pediatric MB. However, all included studies employed retrospective designs, which may introduce potential biases.
Further research using external validation datasets is needed to confirm these results and assess their clinical applicability.
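The I<sup>2</sup> values quoted throughout are derived from Cochran's Q under inverse-variance weighting: I<sup>2</sup> = max(0, (Q − df)/Q) × 100. A minimal sketch, assuming per-study effect estimates and their variances as inputs:

```python
def i_squared(effects, variances):
    """I^2 heterogeneity (%) from Cochran's Q with inverse-variance weights."""
    w = [1.0 / v for v in variances]
    pooled = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - pooled) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    return max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
```

Values near 0% (e.g., WNT sensitivity, 19%) indicate largely sampling-driven variation, while values above 75% (e.g., WNT specificity, 93%) signal substantial between-study heterogeneity, which is why the authors pooled with a random-effects model.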

Deep learning-powered temperature prediction for optimizing transcranial MR-guided focused ultrasound treatment.

Xiong Y, Yang M, Arkin M, Li Y, Duan C, Bian X, Lu H, Zhang L, Wang S, Ren X, Li X, Zhang M, Zhou X, Pan L, Lou X

pubmed | logopapers | Sep 12, 2025
Precise temperature control is challenging during transcranial MR-guided focused ultrasound (MRgFUS) treatment. The aim of this study was to develop a deep learning model integrating the treatment parameters for each sonication, along with patient-specific clinical information and skull metrics, to predict the MRgFUS therapeutic temperature. This is a retrospective analysis of sonications from patients with essential tremor or Parkinson's disease who underwent unilateral MRgFUS thalamotomy or pallidothalamic tractotomy at a single hospital from January 2019 to June 2023. For model training, a dataset of 600 sonications (72 patients) was used, while a validation dataset comprising 199 sonications (18 patients) was used to assess model performance. Additionally, an external dataset of 146 sonications (20 patients) was used for external validation. The developed deep learning model, called Fust-Net, achieved high predictive accuracy, with normalized mean absolute errors of 1.655 °C on the internal dataset and 2.432 °C on the external dataset, closely matching the actual temperatures. The graded evaluation showed that Fust-Net achieved an effective temperature prediction rate of 82.6%. These results demonstrate the potential of Fust-Net for precise temperature control during MRgFUS treatment, supporting greater precision and safety in clinical applications.

Deep learning for automated segmentation of central cartilage tumors on MRI.

Gitto S, Corti A, van Langevelde K, Navas Cañete A, Cincotta A, Messina C, Albano D, Vignaga C, Ferrari L, Mainardi L, Corino VDA, Sconfienza LM

pubmed | logopapers | Sep 12, 2025
Automated segmentation methods may potentially increase the reliability and applicability of radiomics in skeletal oncology. Our aim was to propose a deep learning-based method for automated segmentation of atypical cartilaginous tumor (ACT) and grade II chondrosarcoma (CS2) of long bones on magnetic resonance imaging (MRI). This institutional review board-approved retrospective study included 164 patients with surgically treated and histology-proven cartilaginous tumors at two tertiary bone tumor centers. The first cohort consisted of 99 MRI scans from center 1 (79 ACT, 20 CS2). The second cohort consisted of 65 MRI scans from center 2 (45 ACT, 20 CS2). The Supervised Edge-Attention Guidance segmentation Network (SEAGNET) architecture was employed for automated image segmentation on T1-weighted images, using manual segmentations drawn by musculoskeletal radiologists as the ground truth. In the first cohort, a total of 1,037 tumor-containing slices from the 99 patients were split into 70% training, 15% validation, and 15% internal test sets and used for model tuning. The second cohort was used for independent external testing. In the first cohort, Dice Score (DS) and Intersection over Union (IoU) per patient were 0.782 ± 0.148 and 0.663 ± 0.175 in the validation set, and 0.748 ± 0.191 and 0.630 ± 0.210 in the internal test set. DS and IoU per slice were 0.742 ± 0.273 and 0.646 ± 0.266 in the validation set, and 0.752 ± 0.256 and 0.656 ± 0.261 in the internal test set. In the independent external test dataset, the model achieved a DS of 0.828 ± 0.175 and an IoU of 0.706 ± 0.180. Deep learning proved excellent for automated segmentation of central cartilage tumors on MRI. A deep learning model based on the SEAGNET architecture achieved excellent performance for automated segmentation of cartilage tumors of long bones on MRI and may be beneficial, given the increasing detection rate of these lesions in clinical practice.
Automated segmentation may potentially increase the reliability and applicability of radiomics-based models. A deep learning architecture was proposed for automated segmentation of appendicular cartilage tumors on MRI. Deep learning proved excellent with a mean Dice Score of 0.828 in the external test cohort.
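The Dice Score and IoU reported above are closely related overlap metrics (IoU = DS / (2 − DS) for binary masks); a minimal sketch over boolean masks:

```python
import numpy as np

def dice_iou(pred: np.ndarray, gt: np.ndarray):
    """Dice Score and IoU for boolean masks of equal shape."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    dice = 2.0 * inter / (pred.sum() + gt.sum())
    iou = inter / union
    return float(dice), float(iou)
```

Per-slice scores average this over individual MRI slices, while per-patient scores pool all of a patient's voxels first, which is why the two sets of numbers in the abstract differ.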