Page 168 of 6486473 results

Qin N, Zhang B, Zhang X, Tian L

pubmed · paper · Sep 13 2025
The infrapatellar fat pad (IFP), a key intra-articular knee structure, plays a crucial role in biomechanical cushioning and metabolic regulation, with fibrosis and inflammation contributing to osteoarthritis-related pain and dysfunction. This review outlines the anatomy and clinical value of IFP ultrasonography in static and dynamic assessment, as well as guided interventions. Shear wave elastography (SWE), Doppler imaging, and dynamic ultrasound effectively quantify tissue stiffness, vascular signals, and flexion-extension morphology. Because ultrasound has limited penetration, the IFP cannot be observed directly through the patella; however, the real-time capability and sensitivity of ultrasound complement the detailed anatomical information provided by MRI, making it an important adjunct to MRI-based IFP assessment. This integrated approach creates a robust diagnostic pathway, from initial assessment and precise treatment guidance to long-term monitoring. Advances in ultrasound-guided precision medicine, protocol standardization, and the integration of Artificial Intelligence (AI) with multimodal imaging hold significant promise for improving the management of IFP pathologies.

Olesinski A, Lederman R, Azraq Y, Sosna J, Joskowicz L

pubmed · paper · Sep 13 2025
Manual detection and measurement of structures in volumetric scans is routine in clinical practice but is time-consuming and subject to observer variability. Automatic deep learning-based solutions are effective but require a large dataset of manual annotations by experts. We present a novel annotation-efficient semi-supervised deep learning method for automatic detection, segmentation, and measurement of the short axis length (SAL) of mediastinal lymph nodes (LNs) in contrast-enhanced CT (ceCT) scans. Our semi-supervised method combines the precision of expert annotations with the quantity advantages of pseudolabeled data. It uses an ensemble of 3D nnU-Net models trained on a few expert-annotated scans to generate pseudolabels on a large dataset of unannotated scans. The pseudolabels are then filtered to remove false positive LNs by excluding LNs outside the mediastinum and LNs overlapping with other anatomical structures. Finally, a single 3D nnU-Net model is trained using the filtered pseudolabels. Our method optimizes the ratio of annotated to unannotated dataset sizes to achieve the desired performance, thus reducing manual annotation effort. Experimental studies on three chest ceCT datasets with a total of 268 annotated scans (1817 LNs), of which 134 scans were used for testing and the remaining for ensemble training in batches of 17, 34, 67, and 134 scans, as well as 710 unannotated scans, show that the semi-supervised models' recall improvements were 11-24% (0.72-0.87) while maintaining comparable precision levels. The best model achieved mean SAL differences of 1.65 ± 0.92 mm for normal LNs and 4.25 ± 4.98 mm for enlarged LNs, both within the observer variability. Our semi-supervised method requires one-fourth to one-eighth as many annotations to achieve performance comparable to supervised models trained on the same dataset for the automatic measurement of mediastinal LNs in chest ceCT.
Using pseudolabels with anatomical filtering may be an effective way to overcome the annotation bottleneck in developing AI-based solutions in radiology.
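As an illustration of the anatomical filtering idea in this abstract — not the authors' implementation — here is a minimal sketch that drops pseudolabeled lymph-node components falling outside a mediastinum mask or overlapping other structures. The function name, interface, and the 0.5 inclusion threshold are hypothetical.

```python
import numpy as np

def filter_pseudolabels(ln_labels, mediastinum_mask, organ_mask, inclusion_thresh=0.5):
    """Keep only pseudolabeled LN components that lie inside the mediastinum
    and do not overlap other anatomical structures.

    ln_labels: 3D integer array, one label per candidate LN (0 = background).
    mediastinum_mask, organ_mask: 3D binary arrays (hypothetical interface).
    """
    kept = np.zeros_like(ln_labels)
    for lbl in np.unique(ln_labels):
        if lbl == 0:
            continue
        comp = ln_labels == lbl
        size = comp.sum()
        frac_inside = (comp & (mediastinum_mask > 0)).sum() / size
        overlaps_organ = (comp & (organ_mask > 0)).any()
        if frac_inside >= inclusion_thresh and not overlaps_organ:
            kept[comp] = lbl
    return kept
```

The filtered label map would then be used as training targets for the final single nnU-Net model described above.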

Yazdanpanah F, Hunt SJ

pubmed · paper · Sep 13 2025
PET-computed tomography (CT) has become essential in sarcoma management, offering precise diagnosis, staging, and response assessment by combining metabolic and anatomic imaging. Its high accuracy in detecting primary, recurrent, and metastatic disease guides personalized treatment strategies and enhances interventional procedures like biopsies and ablations. Advances in novel radiotracers and hybrid imaging modalities further improve diagnostic specificity, especially in complex and pediatric cases. Integrating PET-CT with genomic data and artificial intelligence (AI)-driven tools promises to advance personalized medicine, enabling tailored therapies and better outcomes. As a cornerstone of multidisciplinary sarcoma care, PET-CT continues to transform diagnostic and therapeutic approaches in oncology.

White MS, Horikawa-Strakovsky A, Mayer KP, Noehren BW, Wen Y

pubmed · paper · Sep 13 2025
Ultrasound imaging is a clinically feasible method for assessing muscle size and quality, but manual processing is time-consuming and difficult to scale. Existing artificial intelligence (AI) models measure muscle cross-sectional area, but they do not include assessments of muscle quality or account for the influence of subcutaneous adipose tissue thickness on echo intensity measurements. We developed an open-source AI model to accurately segment the vastus lateralis and subcutaneous adipose tissue in B-mode images for automating measurements of muscle size and quality. The model was trained on 612 ultrasound images from 44 participants who had anterior cruciate ligament reconstruction. Model generalizability was evaluated on a test set of 50 images from 14 unique participants. A U-Net architecture with ResNet50 backbone was used for segmentation. Performance was assessed using the Dice coefficient and Intersection over Union (IoU). Agreement between model predictions and manual measurements was evaluated using intraclass correlation coefficients (ICCs), R² values and standard errors of measurement (SEM). Dice coefficients were 0.9095 and 0.9654 for subcutaneous adipose tissue and vastus lateralis segmentation, respectively. Excellent agreement was observed between model predictions and manual measurements for cross-sectional area (ICC = 0.986), echo intensity (ICC = 0.991) and subcutaneous adipose tissue thickness (ICC = 0.996). The model demonstrated high reliability with low SEM values for clinical measurements (cross-sectional area: 1.15 cm², echo intensity: 1.28-1.78 a.u.). We developed an open-source AI model that accurately segments the vastus lateralis and subcutaneous adipose tissue in B-mode ultrasound images, enabling automated measurements of muscle size and quality.
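The Dice coefficient and Intersection over Union reported in this abstract have standard definitions for binary masks; a minimal sketch (not the authors' code):

```python
import numpy as np

def dice_and_iou(pred, target):
    """Dice coefficient and IoU for two binary segmentation masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    total = pred.sum() + target.sum()
    dice = 2.0 * inter / total if total else 1.0   # empty-vs-empty -> perfect
    iou = inter / union if union else 1.0
    return dice, iou
```

Applied per image, these metrics would reproduce the kind of segmentation scores quoted above (e.g. Dice 0.9654 for the vastus lateralis).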

Mollineda RA, Becerra K, Mederos B

pubmed · paper · Sep 13 2025
The potential to classify sex from hand data is a valuable tool in both forensic and anthropological sciences. This work presents possibly the most comprehensive study to date of sex classification from hand X-ray images. The research methodology involves a systematic evaluation of zero-shot Segment Anything Model (SAM) in X-ray image segmentation, a novel hand mask detection algorithm based on geometric criteria leveraging human knowledge (avoiding costly retraining and prompt engineering), the comparison of multiple X-ray image representations including hand bone structure and hand silhouette, a rigorous application of deep learning models and ensemble strategies, visual explainability of decisions by aggregating attribution maps from multiple models, and the transfer of models trained from hand silhouettes to sex prediction of prehistoric handprints. Training and evaluation of deep learning models were performed using the RSNA Pediatric Bone Age dataset, a collection of hand X-ray images from pediatric patients. Results showed very high effectiveness of zero-shot SAM in segmenting X-ray images, the contribution of segmenting before classifying X-ray images, hand sex classification accuracy above 95% on test data, and predictions from ancient handprints highly consistent with previous hypotheses based on sexually dimorphic features. Attention maps highlighted the carpometacarpal joints in the female class and the radiocarpal joint in the male class as sex discriminant traits. These findings are anatomically very close to previous evidence reported under different databases, classification models and visualization techniques.

Farhan Sadik, Christopher L. Newman, Stuart J. Warden, Rachel K. Surowiec

arxiv · preprint · Sep 13 2025
Rigid-motion artifacts, such as cortical bone streaking and trabecular smearing, hinder in vivo assessment of bone microstructures in high-resolution peripheral quantitative computed tomography (HR-pQCT). Despite various motion grading techniques, no motion correction methods exist due to the lack of standardized degradation models. We optimize a conventional sinogram-based method to simulate motion artifacts in HR-pQCT images, creating paired datasets of motion-corrupted images and their corresponding ground truth, which enables seamless integration into supervised learning frameworks for motion correction. As such, we propose an Edge-enhanced Self-attention Wasserstein Generative Adversarial Network with Gradient Penalty (ESWGAN-GP) to address motion artifacts in both simulated (source) and real-world (target) datasets. The model incorporates edge-enhancing skip connections to preserve trabecular edges and self-attention mechanisms to capture long-range dependencies, facilitating motion correction. A visual geometry group (VGG)-based perceptual loss is used to reconstruct fine micro-structural features. The ESWGAN-GP achieves a mean signal-to-noise ratio (SNR) of 26.78, structural similarity index measure (SSIM) of 0.81, and visual information fidelity (VIF) of 0.76 for the source dataset, while showing improved performance on the target dataset with an SNR of 29.31, SSIM of 0.87, and VIF of 0.81. The proposed methods address a simplified representation of real-world motion that may not fully capture the complexity of in vivo motion artifacts. Nevertheless, because motion artifacts present one of the foremost challenges to more widespread adoption of this modality, these methods represent an important initial step toward implementing deep learning-based motion correction in HR-pQCT.
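The abstract reports SNR, SSIM, and VIF; as a reference point only, here is a sketch of a reference-based SNR computation under its common definition (the exact formulation used in the paper is not stated in the abstract):

```python
import numpy as np

def snr_db(reference, estimate):
    """Reference-based signal-to-noise ratio in decibels:
    10 * log10(signal power / residual power)."""
    residual = reference - estimate
    return 10.0 * np.log10(np.sum(reference ** 2) / np.sum(residual ** 2))
```

Evaluated between a motion-corrected volume and its simulated ground truth, this is the kind of quantity behind the reported values (e.g. SNR 26.78 on the source dataset).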

Jin Yang, Daniel S. Marcus, Aristeidis Sotiras

arxiv · preprint · Sep 13 2025
Medical Vision Foundation Models (Med-VFMs) have superior capabilities for interpreting medical images due to the knowledge learned from self-supervised pre-training on extensive unannotated images. To improve their performance on downstream evaluations, especially segmentation, a few samples from target domains are typically selected at random for fine-tuning. However, little work has explored how to adapt Med-VFMs to achieve optimal performance on target domains efficiently. An efficient fine-tuning strategy is therefore needed that selects informative samples to maximize adaptation performance on target domains. To achieve this, we propose an Active Source-Free Domain Adaptation (ASFDA) method to efficiently adapt Med-VFMs to target domains for volumetric medical image segmentation. ASFDA employs a novel Active Learning (AL) method to select the most informative samples from target domains for fine-tuning Med-VFMs without access to source pre-training samples, thus maximizing performance with a minimal selection budget. In this AL method, we design an Active Test Time Sample Query strategy that selects samples from the target domains via two query metrics: Diversified Knowledge Divergence (DKD) and Anatomical Segmentation Difficulty (ASD). DKD measures the source-target knowledge gap and intra-domain diversity, using the knowledge from pre-training to guide the querying of source-dissimilar and semantically diverse samples from the target domains. ASD evaluates the difficulty of segmenting anatomical structures by adaptively measuring predictive entropy over foreground regions. Additionally, our ASFDA method employs Selective Semi-supervised Fine-tuning to improve the performance and efficiency of fine-tuning by identifying samples with high reliability from among the unqueried ones.
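The ASD metric above is built on predictive entropy; a minimal sketch of voxel-wise entropy over softmax probabilities (the aggregation over foreground regions and the adaptive weighting are the paper's contribution and are not reproduced here):

```python
import numpy as np

def predictive_entropy(probs, eps=1e-12):
    """Voxel-wise predictive entropy of class probabilities.

    probs: array of shape (..., num_classes), summing to 1 along the last axis.
    Higher entropy indicates a less confident (harder) prediction.
    """
    return -np.sum(probs * np.log(probs + eps), axis=-1)
```

A sample-level difficulty score could then be formed by averaging this map over predicted foreground voxels.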

Sajad Amiri, Shahram Taeb, Sara Gharibi, Setareh Dehghanfard, Somayeh Sadat Mehrnia, Mehrdad Oveisi, Ilker Hacihaliloglu, Arman Rahmim, Mohammad R. Salmanpour

arxiv · preprint · Sep 13 2025
Gadolinium-based contrast agents (GBCAs) are central to glioma imaging but raise safety, cost, and accessibility concerns. Predicting contrast enhancement from non-contrast MRI using machine learning (ML) offers a safer alternative, as enhancement reflects tumor aggressiveness and informs treatment planning. Yet scanner and cohort variability hinder robust model selection. We propose a stability-aware framework to identify reproducible ML pipelines for multicenter prediction of glioma MRI contrast enhancement. We analyzed 1,446 glioma cases from four TCIA datasets (UCSF-PDGM, UPENN-GB, BRATS-Africa, BRATS-TCGA-LGG). Non-contrast T1WI served as input, with enhancement labels derived from paired post-contrast T1WI. Using PyRadiomics under IBSI standards, 108 features were extracted and combined with 48 dimensionality reduction methods and 25 classifiers, yielding 1,200 pipelines. In rotational validation, models were trained on three datasets and tested on the fourth. Cross-validation prediction accuracies ranged from 0.91 to 0.96, with external testing achieving 0.87 (UCSF-PDGM), 0.98 (UPENN-GB), and 0.95 (BRATS-Africa), for an average of 0.93. F1, precision, and recall were stable (0.87 to 0.96), while ROC-AUC varied more widely (0.50 to 0.82), reflecting cohort heterogeneity. The pipeline combining MI with ETr consistently ranked highest, balancing accuracy and stability. This framework demonstrates that stability-aware model selection enables reliable prediction of contrast enhancement from non-contrast glioma MRI, reducing reliance on GBCAs and improving generalizability across centers. It provides a scalable template for reproducible ML in neuro-oncology and beyond.
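The rotational validation scheme described above is a leave-one-dataset-out loop; a minimal, framework-agnostic sketch (function names and the `fit`/`evaluate` callables are hypothetical, not from the paper):

```python
def rotational_validation(datasets, fit, evaluate):
    """Leave-one-dataset-out validation across multicenter cohorts.

    datasets: dict mapping cohort name -> data (e.g. (features, labels)).
    fit: callable taking a list of training cohorts, returning a model.
    evaluate: callable taking (model, held-out cohort), returning a score.
    """
    scores = {}
    for held_out in datasets:
        train_cohorts = [d for name, d in datasets.items() if name != held_out]
        model = fit(train_cohorts)
        scores[held_out] = evaluate(model, datasets[held_out])
    return scores
```

With four TCIA cohorts this yields four train-on-three / test-on-one folds, matching the external-testing accuracies quoted above.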

Bacon H, McNeil N, Patel T, Welch M, Ye XY, Bezjak A, Lok BH, Raman S, Giuliani M, Cho BCJ, Sun A, Lindsay P, Liu G, Kandel S, McIntosh C, Tadic T, Hope A

pubmed · paper · Sep 13 2025
Interstitial lung disease (ILD) has been correlated with an increased risk for radiation pneumonitis (RP) following lung SBRT, but the degree to which locally advanced NSCLC (LA-NSCLC) patients are affected has yet to be quantified. An algorithm to identify patients at high risk for RP may help clinicians mitigate risk. All LA-NSCLC patients treated with definitive radiotherapy at our institution from 2006 to 2021 were retrospectively assessed. A convolutional neural network was previously developed to identify patients with radiographic ILD using planning computed tomography (CT) images. All screen-positive (AI-ILD + ) patients were reviewed by a thoracic radiologist to identify true radiographic ILD (r-ILD). The association between the algorithm output, clinical and dosimetric variables, and the outcomes of grade ≥ 3 RP and mortality were assessed using univariate (UVA) and multivariable (MVA) logistic regression, and Kaplan-Meier survival analysis. 698 patients were included in the analysis. Grade (G) 0-5 RP was reported in 51 %, 27 %, 17 %, 4.4 %, 0.14 % and 0.57 % of patients, respectively. Overall, 23 % of patients were classified as AI-ILD + . On MVA, only AI-ILD status (OR 2.15, p = 0.03) and AI-ILD score (OR 35.27, p < 0.01) were significant predictors of G3 + RP. Median OS was 3.6 years in AI-ILD- patients and 2.3 years in AI-ILD + patients (NS). Patients with r-ILD had significantly higher rates of severe toxicities, with G3 + RP 25 % and G5 RP 7 %. R-ILD was associated with an increased risk for G3 + RP on MVA (OR 5.42, p < 0.01). Our AI-ILD algorithm detects patients with significantly increased risk for G3 + RP.
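The odds ratios quoted in this abstract come from logistic regression; as a generic reminder of that relationship (not the authors' analysis code), a coefficient β maps to an odds ratio exp(β), with a Wald confidence interval built from its standard error:

```python
import math

def coef_to_odds_ratio(beta, se, z=1.96):
    """Map a logistic-regression coefficient and standard error to an
    odds ratio with an approximate 95% Wald confidence interval."""
    odds_ratio = math.exp(beta)
    ci = (math.exp(beta - z * se), math.exp(beta + z * se))
    return odds_ratio, ci
```

For example, an OR of 2.15 for AI-ILD status corresponds to a fitted coefficient of ln(2.15) ≈ 0.77.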

Bendella Z, Wichtmann BD, Clauberg R, Keil VC, Lehnen NC, Haase R, Sáez LC, Wiest IC, Kather JN, Endler C, Radbruch A, Paech D, Deike K

pubmed · paper · Sep 13 2025
The aim of this study was to determine whether ChatGPT-4 can correctly suggest MRI protocols and additional MRI sequences based on real-world Radiology Request Forms (RRFs), as well as to investigate its ability to suggest time-saving protocols. Retrospectively, 1,001 RRFs from our Department of Neuroradiology (in-house dataset), 200 RRFs from an independent Department of General Radiology (independent dataset), and 300 RRFs from an external, foreign Department of Neuroradiology (external dataset) were included. Patients' age, sex, and clinical information were extracted from the RRFs and used to prompt ChatGPT-4 to choose an adequate MRI protocol from predefined institutional lists. Four independent raters then assessed its performance. Additionally, ChatGPT-4 was tasked with creating case-specific protocols aimed at saving time. Two and 7 of 1,001 protocol suggestions were rated "unacceptable" in the in-house dataset by readers 1 and 2, respectively. No protocol suggestions were rated "unacceptable" in either the independent or the external dataset. For inter-reader agreement, Cohen's weighted κ ranged from 0.88 to 0.98 (each p < 0.001). ChatGPT-4's freely composed protocols were approved in 766/1,001 (76.5%) and 140/300 (46.7%) cases of the in-house and external datasets, with mean (standard deviation) time savings of 3:51 (±2:40) and 2:59 (±3:42) minutes:seconds per adopted in-house and external MRI protocol, respectively. ChatGPT-4 demonstrated very high agreement with board-certified (neuro-)radiologists in selecting MRI protocols and was able to suggest approved time-saving protocols from the set of available sequences.
