Page 13 of 3423413 results

Magnetization transfer MRI (MT-MRI) detects white matter damage beyond the primary site of compression in degenerative cervical myelopathy using a novel semi-automated analysis.

Muhammad F, Weber II KA, Haynes G, Villeneuve L, Smith L, Baha A, Hameed S, Khan AF, Dhaher Y, Parrish T, Rohan M, Smith ZA

pubmed · Sep 14 2025
Degenerative cervical myelopathy (DCM) is the leading cause of spinal cord disorder in adults, yet conventional MRI cannot detect microstructural damage beyond the compression site. Current applications of the magnetization transfer ratio (MTR), while promising, suffer from limited standardization, operator dependence, and unclear added value over traditional metrics such as cross-sectional area (CSA). To address these limitations, we used our semi-automated analysis pipeline built on the Spinal Cord Toolbox (SCT) platform to automate MTR extraction. Our method integrates deep learning-based convolutional neural networks (CNNs) for spinal cord segmentation, vertebral labeling via the global curve optimization algorithm, and PAM50 template registration to enable automated MTR extraction. Using the Generic Spine Protocol, we acquired 3T T2-weighted and MT-MRI scans from 30 patients with DCM and 15 age-matched healthy controls (HC). We computed MTR and CSA at the level of maximal compression (C5-C6) and at a distant, uncompressed region (C2-C3). We extracted regional and tract-specific MTR using probabilistic maps in template space. Diagnostic accuracy was assessed with ROC analysis, and k-means clustering revealed patient subgroups based on neurological impairments. Correlation analysis assessed associations between MTR measures and DCM deficits. Patients with DCM showed significant MTR reductions in both compressed and uncompressed regions (p < 0.05). At C2-C3, MTR outperformed CSA (AUC 0.74 vs 0.69) in detecting spinal cord pathology. Tract-specific MTR values correlated with dexterity, grip strength, and balance deficits. Our reproducible, computationally robust pipeline links microstructural injury to clinical outcomes in DCM and provides a scalable framework for multi-site quantitative MRI analysis of the spinal cord.
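The AUC comparison between MTR and CSA at C2-C3 can be reproduced in outline with scikit-learn's ROC utilities. A minimal sketch on synthetic data — the cohort sizes match the abstract, but the metric values and group means are invented for illustration:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical cohort: 30 DCM patients (label 1), 15 healthy controls (label 0).
y = np.concatenate([np.ones(30), np.zeros(15)])

# Synthetic C2-C3 metrics: lower MTR / smaller CSA in patients (illustrative only).
mtr = np.concatenate([rng.normal(38, 4, 30), rng.normal(44, 4, 15)])
csa = np.concatenate([rng.normal(68, 8, 30), rng.normal(74, 8, 15)])

# Negate the metric because *lower* values indicate disease here.
auc_mtr = roc_auc_score(y, -mtr)
auc_csa = roc_auc_score(y, -csa)
print(f"AUC  MTR={auc_mtr:.2f}  CSA={auc_csa:.2f}")
```

The same negation trick applies to any biomarker where pathology drives the value down rather than up.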

Multimodal Machine Learning for Diagnosis of Multiple Sclerosis Using Optical Coherence Tomography in Pediatric Cases

Chen, C., Soltanieh, S., Rajapaksa, S., Khalvati, F., Yeh, E. A.

medrxiv preprint · Sep 14 2025
Background and Objectives: Identifying MS in children early and distinguishing it from other neuroinflammatory conditions of childhood is critical, as early therapeutic intervention can improve outcomes. The anterior visual pathway has been demonstrated to be of central importance in diagnostic considerations for MS and has recently been identified as a fifth topography in the McDonald Diagnostic Criteria for MS. Optical coherence tomography (OCT) provides high-resolution retinal imaging and reflects the structural integrity of the retinal nerve fiber and ganglion cell inner plexiform layers. Whether multimodal deep learning models can use OCT alone to diagnose pediatric MS (POMS) is unknown. Methods: We analyzed 3D OCT scans collected prospectively through the Neuroinflammatory Registry of the Hospital for Sick Children (REB#1000005356). Raw macular and optic nerve head images, and 52 automatically segmented features, were included. We evaluated three classification approaches: (1) deep learning models (e.g. ResNet, DenseNet) for representation learning followed by classical ML classifiers, (2) ML models trained on OCT-derived features, and (3) multimodal models combining both via early and late fusion. Results: Scans from individuals with POMS (onset 16.0 ± 3.1 years, 51.0% F; 211 scans) and 29 children with non-inflammatory neurological conditions (13.1 ± 4.0 years, 69.0% F; 52 scans) were included. The early fusion model achieved the highest performance (AUC: 0.87, F1: 0.87, Accuracy: 90%), outperforming both unimodal and late fusion models. The best unimodal feature-based model (SVC) yielded an AUC of 0.84, F1 of 0.85, and an accuracy of 85%, while the best image-based model (ResNet101 with Random Forest) achieved an AUC of 0.87, F1 of 0.79, and accuracy of 84%. Late fusion underperformed, reaching 82% accuracy but failing on the minority class.
Discussion: Multimodal learning with early fusion significantly enhances diagnostic performance by combining spatial retinal information with clinically relevant structural features. This approach captures complementary patterns associated with MS pathology and shows promise as an AI-driven tool to support pediatric neuroinflammatory diagnosis.
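Early fusion as described here concatenates image-derived embeddings with tabular OCT features before a single classifier sees them. A minimal sketch with synthetic stand-ins — the 64-dimensional embedding size and the random-forest choice are assumptions; only the 52-feature count comes from the abstract:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

n = 120
img_emb = rng.normal(size=(n, 64))    # stand-in for ResNet/DenseNet embeddings
oct_feats = rng.normal(size=(n, 52))  # stand-in for the 52 segmented OCT features
y = rng.integers(0, 2, size=n)        # POMS vs. control labels (synthetic)

# Early fusion: scale each modality, then concatenate into one feature vector.
X = np.hstack([StandardScaler().fit_transform(img_emb),
               StandardScaler().fit_transform(oct_feats)])

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print("fused feature dim:", X.shape[1])
```

Late fusion would instead train one model per modality and combine their predicted probabilities, which is why it can fail on a minority class that only one modality captures well.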

AI and Healthcare Disparities: Lessons from a Cautionary Tale in Knee Radiology.

Hull G

pubmed · Sep 14 2025
Enthusiasm about the use of artificial intelligence (AI) in medicine has been tempered by concern that algorithmic systems can be unfairly biased against racially minoritized populations. This article uses work on racial disparities in knee osteoarthritis diagnoses to underline that achieving justice in the use of AI in medical imaging requires attention to the entire sociotechnical system within which it operates, rather than to isolated properties of algorithms. Using AI to make current diagnostic procedures more efficient risks entrenching existing disparities; a recent algorithm points to some of the problems in current procedures while highlighting systemic normative issues that need to be addressed when designing further AI systems. The article thus contributes to a literature arguing that bias and fairness issues in AI should be treated as aspects of structural inequality and injustice, and highlights ways that AI can help make progress on them.

Association of artificial intelligence-screened interstitial lung disease with radiation pneumonitis in locally advanced non-small cell lung cancer.

Bacon H, McNeil N, Patel T, Welch M, Ye XY, Bezjak A, Lok BH, Raman S, Giuliani M, Cho BCJ, Sun A, Lindsay P, Liu G, Kandel S, McIntosh C, Tadic T, Hope A

pubmed · Sep 13 2025
Interstitial lung disease (ILD) has been correlated with an increased risk of radiation pneumonitis (RP) following lung SBRT, but the degree to which locally advanced NSCLC (LA-NSCLC) patients are affected has yet to be quantified. An algorithm to identify patients at high risk for RP may help clinicians mitigate risk. All LA-NSCLC patients treated with definitive radiotherapy at our institution from 2006 to 2021 were retrospectively assessed. A convolutional neural network was previously developed to identify patients with radiographic ILD using planning computed tomography (CT) images. All screen-positive (AI-ILD+) patients were reviewed by a thoracic radiologist to identify true radiographic ILD (r-ILD). The association between the algorithm output, clinical and dosimetric variables, and the outcomes of grade ≥ 3 RP and mortality was assessed using univariate (UVA) and multivariable (MVA) logistic regression and Kaplan-Meier survival analysis. 698 patients were included in the analysis. Grade (G) 0-5 RP was reported in 51%, 27%, 17%, 4.4%, 0.14%, and 0.57% of patients, respectively. Overall, 23% of patients were classified as AI-ILD+. On MVA, only AI-ILD status (OR 2.15, p = 0.03) and AI-ILD score (OR 35.27, p < 0.01) were significant predictors of G3+ RP. Median OS was 3.6 years in AI-ILD- patients and 2.3 years in AI-ILD+ patients (NS). Patients with r-ILD had significantly higher rates of severe toxicities, with G3+ RP in 25% and G5 RP in 7%. r-ILD was associated with an increased risk of G3+ RP on MVA (OR 5.42, p < 0.01). Our AI-ILD algorithm detects patients with a significantly increased risk of G3+ RP.
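The odds ratios reported here come from logistic regression. A hedged sketch of a multivariable fit on synthetic data — the covariate `mean_lung_dose` and all coefficients are invented for illustration, and a near-unpenalized scikit-learn model stands in for a classical MVA fit:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 698  # cohort size from the abstract; everything else is synthetic

# Hypothetical predictors: binary AI-ILD screen result and a dosimetric covariate.
ai_ild = rng.integers(0, 2, size=n)
mean_lung_dose = rng.normal(15, 5, size=n)

# Synthetic outcome with a built-in positive AI-ILD effect.
logit = -3 + 0.8 * ai_ild + 0.05 * mean_lung_dose
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([ai_ild, mean_lung_dose])
# Very large C ≈ no regularization, mimicking classical (unpenalized) MVA.
model = LogisticRegression(C=1e6, max_iter=1000).fit(X, y)
odds_ratios = np.exp(model.coef_[0])  # exponentiated coefficients = odds ratios
print("odds ratios:", odds_ratios.round(2))
```

An OR above 1 for the first column indicates screen-positive patients have higher odds of the outcome, which is how figures like "OR 2.15" are read.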

Adapting Medical Vision Foundation Models for Volumetric Medical Image Segmentation via Active Learning and Selective Semi-supervised Fine-tuning

Jin Yang, Daniel S. Marcus, Aristeidis Sotiras

arxiv preprint · Sep 13 2025
Medical Vision Foundation Models (Med-VFMs) have superior capabilities for interpreting medical images owing to the knowledge learned from self-supervised pre-training on extensive unannotated images. To improve their performance on downstream evaluations, especially segmentation, a few samples from target domains are typically selected at random for fine-tuning. However, little work has explored how to adapt Med-VFMs to achieve optimal performance on target domains efficiently. There is thus a strong need for a fine-tuning scheme that selects informative samples to maximize adaptation performance on target domains. To achieve this, we propose an Active Source-Free Domain Adaptation (ASFDA) method to efficiently adapt Med-VFMs to target domains for volumetric medical image segmentation. ASFDA employs a novel Active Learning (AL) method to select the most informative samples from target domains for fine-tuning Med-VFMs without access to source pre-training samples, thus maximizing performance with a minimal selection budget. In this AL method, we design an Active Test Time Sample Query strategy that selects samples from the target domains via two query metrics: Diversified Knowledge Divergence (DKD) and Anatomical Segmentation Difficulty (ASD). DKD measures the source-target knowledge gap and intra-domain diversity; it utilizes the knowledge of pre-training to guide the querying of source-dissimilar and semantically diverse samples from the target domains. ASD evaluates the difficulty of segmenting anatomical structures by adaptively measuring predictive entropy over foreground regions. Additionally, our ASFDA method employs Selective Semi-supervised Fine-tuning to improve the performance and efficiency of fine-tuning by identifying samples with high reliability among the unqueried ones.
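The ASD metric is described as predictive entropy measured over foreground regions. One plausible reading, sketched on a toy softmax volume — the foreground rule and threshold below are assumptions, not the paper's exact definition:

```python
import numpy as np

def predictive_entropy(probs, eps=1e-8):
    """Voxel-wise entropy of softmax probabilities, probs shaped (C, D, H, W)."""
    return -np.sum(probs * np.log(probs + eps), axis=0)

def segmentation_difficulty(probs, fg_threshold=0.5):
    """Mean entropy over predicted-foreground voxels (stand-in for the ASD metric)."""
    entropy = predictive_entropy(probs)
    foreground = probs[1:].sum(axis=0) > fg_threshold  # any non-background class
    if not foreground.any():
        return 0.0
    return float(entropy[foreground].mean())

# Toy 2-class volume: confident background everywhere, uncertain foreground blob.
probs = np.zeros((2, 4, 4, 4))
probs[0] = 0.95; probs[1] = 0.05
probs[0, :2] = 0.45; probs[1, :2] = 0.55  # uncertain region -> predicted foreground
score = segmentation_difficulty(probs)
print(f"difficulty score: {score:.3f}")
```

Cases whose foreground predictions carry high entropy would be queried first, on the logic that the model is least certain about them.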

Enhancement Without Contrast: Stability-Aware Multicenter Machine Learning for Glioma MRI Imaging

Sajad Amiri, Shahram Taeb, Sara Gharibi, Setareh Dehghanfard, Somayeh Sadat Mehrnia, Mehrdad Oveisi, Ilker Hacihaliloglu, Arman Rahmim, Mohammad R. Salmanpour

arxiv preprint · Sep 13 2025
Gadolinium-based contrast agents (GBCAs) are central to glioma imaging but raise safety, cost, and accessibility concerns. Predicting contrast enhancement from non-contrast MRI using machine learning (ML) offers a safer alternative, as enhancement reflects tumor aggressiveness and informs treatment planning. Yet scanner and cohort variability hinder robust model selection. We propose a stability-aware framework to identify reproducible ML pipelines for multicenter prediction of glioma MRI contrast enhancement. We analyzed 1,446 glioma cases from four TCIA datasets (UCSF-PDGM, UPENN-GB, BRATS-Africa, BRATS-TCGA-LGG). Non-contrast T1WI served as input, with enhancement labels derived from paired post-contrast T1WI. Using PyRadiomics under IBSI standards, 108 features were extracted and combined with 48 dimensionality reduction methods and 25 classifiers, yielding 1,200 pipelines. Rotational validation trained on three datasets and tested on the held-out fourth. Cross-validation prediction accuracies ranged from 0.91 to 0.96, and external testing achieved 0.87 (UCSF-PDGM), 0.98 (UPENN-GB), and 0.95 (BRATS-Africa), averaging 0.93. F1, precision, and recall were stable (0.87 to 0.96), while ROC-AUC varied more widely (0.50 to 0.82), reflecting cohort heterogeneity. The MI + ETr pipeline consistently ranked highest, balancing accuracy and stability. This framework demonstrates that stability-aware model selection enables reliable prediction of contrast enhancement from non-contrast glioma MRI, reducing reliance on GBCAs and improving generalizability across centers. It provides a scalable template for reproducible ML in neuro-oncology and beyond.
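The rotational (leave-one-dataset-out) validation and the top-ranked MI + ETr pipeline can be sketched with scikit-learn. Everything below is a synthetic stand-in except the cohort names and the 108-feature count; interpreting MI as mutual-information feature selection and ETr as extra trees is an assumption:

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.metrics import accuracy_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)

# Synthetic stand-in: four cohorts of 80 cases with 108 radiomic features each.
cohorts = {name: (rng.normal(size=(80, 108)), rng.integers(0, 2, 80))
           for name in ["UCSF-PDGM", "UPENN-GB", "BRATS-Africa", "BRATS-TCGA-LGG"]}

# One pipeline from the search space: mutual information + extra trees ("MI + ETr").
pipe = Pipeline([
    ("scale", StandardScaler()),
    ("reduce", SelectKBest(mutual_info_classif, k=20)),
    ("clf", ExtraTreesClassifier(n_estimators=100, random_state=0)),
])

# Rotational validation: train on three cohorts, test on the held-out fourth.
scores = {}
for held_out in cohorts:
    X_tr = np.vstack([X for n, (X, y) in cohorts.items() if n != held_out])
    y_tr = np.concatenate([y for n, (X, y) in cohorts.items() if n != held_out])
    X_te, y_te = cohorts[held_out]
    pipe.fit(X_tr, y_tr)
    scores[held_out] = accuracy_score(y_te, pipe.predict(X_te))
print(scores)
```

Ranking pipelines by the spread of these held-out scores, rather than by a single split, is what "stability-aware" selection amounts to.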

Sex classification from hand X-ray images in pediatric patients: How zero-shot Segment Anything Model (SAM) can improve medical image analysis.

Mollineda RA, Becerra K, Mederos B

pubmed · Sep 13 2025
The ability to classify sex from hand data is a valuable tool in both forensic and anthropological sciences. This work presents possibly the most comprehensive study to date of sex classification from hand X-ray images. The methodology involves a systematic evaluation of the zero-shot Segment Anything Model (SAM) for X-ray image segmentation; a novel hand mask detection algorithm based on geometric criteria leveraging human knowledge (avoiding costly retraining and prompt engineering); a comparison of multiple X-ray image representations, including hand bone structure and hand silhouette; a rigorous application of deep learning models and ensemble strategies; visual explanation of decisions by aggregating attribution maps from multiple models; and the transfer of models trained on hand silhouettes to sex prediction from prehistoric handprints. Training and evaluation of the deep learning models were performed using the RSNA Pediatric Bone Age dataset, a collection of hand X-ray images from pediatric patients. Results showed very high effectiveness of zero-shot SAM in segmenting X-ray images, a clear benefit of segmenting before classifying X-ray images, hand sex classification accuracy above 95% on test data, and predictions from ancient handprints highly consistent with previous hypotheses based on sexually dimorphic features. Attention maps highlighted the carpometacarpal joints in the female class and the radiocarpal joint in the male class as sex-discriminant traits. These findings are anatomically very close to previous evidence reported with different databases, classification models, and visualization techniques.
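A hand mask detection step based on geometric criteria, as mentioned here, could be sketched as scoring SAM's candidate masks with simple rules. The area bounds and centroid heuristic below are hypothetical, not the paper's algorithm:

```python
import numpy as np

def select_hand_mask(masks, image_shape, min_area_frac=0.1, max_area_frac=0.9):
    """Pick the candidate mask most likely to be the hand using geometric rules:
    plausible area relative to the image, centroid near the image center.
    These criteria are illustrative assumptions."""
    h, w = image_shape
    best, best_score = None, -np.inf
    for m in masks:
        area_frac = m.sum() / (h * w)
        if not (min_area_frac <= area_frac <= max_area_frac):
            continue  # too small (speck) or too large (background) to be the hand
        ys, xs = np.nonzero(m)
        centroid_dist = np.hypot(ys.mean() - h / 2, xs.mean() - w / 2)
        score = -centroid_dist  # prefer centrally located masks
        if score > best_score:
            best, best_score = m, score
    return best

# Toy candidates: a central blob (hand-like) and a corner speck (debris).
hand = np.zeros((100, 100), bool); hand[30:70, 30:70] = True
speck = np.zeros((100, 100), bool); speck[0:3, 0:3] = True
chosen = select_hand_mask([speck, hand], (100, 100))
print("chosen area:", int(chosen.sum()))
```

Rule-based selection like this is what lets the method avoid prompt engineering or retraining SAM on radiographs.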

Annotation-efficient deep learning detection and measurement of mediastinal lymph nodes in CT.

Olesinski A, Lederman R, Azraq Y, Sosna J, Joskowicz L

pubmed · Sep 13 2025
Manual detection and measurement of structures in volumetric scans is routine in clinical practice but is time-consuming and subject to observer variability. Automatic deep learning-based solutions are effective but require a large dataset of manual annotations by experts. We present a novel annotation-efficient semi-supervised deep learning method for automatic detection, segmentation, and measurement of the short-axis length (SAL) of mediastinal lymph nodes (LNs) in contrast-enhanced CT (ceCT) scans. Our semi-supervised method combines the precision of expert annotations with the quantity advantages of pseudolabeled data. It uses an ensemble of 3D nnU-Net models trained on a few expert-annotated scans to generate pseudolabels on a large dataset of unannotated scans. The pseudolabels are then filtered to remove false-positive LNs by excluding LNs outside the mediastinum and LNs overlapping other anatomical structures. Finally, a single 3D nnU-Net model is trained using the filtered pseudolabels. Our method optimizes the ratio of annotated to unannotated dataset sizes to achieve the desired performance, thus reducing manual annotation effort. Experimental studies on three chest ceCT datasets with a total of 268 annotated scans (1817 LNs), of which 134 scans were used for testing and the remainder for ensemble training in batches of 17, 34, 67, and 134 scans, plus 710 unannotated scans, show that the semi-supervised models' recall improvements were 11-24% (0.72-0.87) while maintaining comparable precision. The best model achieved mean SAL differences of 1.65 ± 0.92 mm for normal LNs and 4.25 ± 4.98 mm for enlarged LNs, both within observer variability. Our semi-supervised method requires one-fourth to one-eighth of the annotations to achieve performance comparable to supervised models trained on the same dataset for automatic measurement of mediastinal LNs in chest ceCT.
Using pseudolabels with anatomical filtering may be an effective way to overcome the challenges of developing AI-based solutions in radiology.
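The pseudolabel filtering step described above — dropping candidate nodes outside the mediastinum or overlapping other anatomy — can be sketched as mask arithmetic over candidate components. The thresholds below are illustrative assumptions, not the paper's values:

```python
import numpy as np

def filter_pseudolabels(components, mediastinum, other_structures,
                        min_inside=0.9, max_overlap=0.1):
    """Keep pseudolabeled lymph-node components that lie mostly inside the
    mediastinum mask and barely overlap other anatomical structures."""
    kept = []
    for comp in components:
        size = comp.sum()
        if (comp & mediastinum).sum() / size < min_inside:
            continue  # mostly outside the mediastinum -> likely false positive
        if (comp & other_structures).sum() / size > max_overlap:
            continue  # overlaps vessels/esophagus etc. -> discard
        kept.append(comp)
    return kept

# Toy 2D slices: one candidate node inside the mediastinum, one outside it.
med = np.zeros((20, 20), bool); med[5:15, 5:15] = True
other = np.zeros((20, 20), bool)
inside = np.zeros((20, 20), bool); inside[7:9, 7:9] = True
outside = np.zeros((20, 20), bool); outside[0:2, 0:2] = True
kept = filter_pseudolabels([inside, outside], med, other)
print("components kept:", len(kept))
```

The filtered components would then serve as training labels for the final single-model training round.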

Open-Source AI for Vastus Lateralis and Adipose Tissue Segmentation to Assess Muscle Size and Quality.

White MS, Horikawa-Strakovsky A, Mayer KP, Noehren BW, Wen Y

pubmed · Sep 13 2025
Ultrasound imaging is a clinically feasible method for assessing muscle size and quality, but manual processing is time-consuming and difficult to scale. Existing artificial intelligence (AI) models measure muscle cross-sectional area, but they do not include assessments of muscle quality or account for the influence of subcutaneous adipose tissue thickness on echo intensity measurements. We developed an open-source AI model to accurately segment the vastus lateralis and subcutaneous adipose tissue in B-mode images for automating measurements of muscle size and quality. The model was trained on 612 ultrasound images from 44 participants who had anterior cruciate ligament reconstruction. Model generalizability was evaluated on a test set of 50 images from 14 unique participants. A U-Net architecture with ResNet50 backbone was used for segmentation. Performance was assessed using the Dice coefficient and Intersection over Union (IoU). Agreement between model predictions and manual measurements was evaluated using intraclass correlation coefficients (ICCs), R² values and standard errors of measurement (SEM). Dice coefficients were 0.9095 and 0.9654 for subcutaneous adipose tissue and vastus lateralis segmentation, respectively. Excellent agreement was observed between model predictions and manual measurements for cross-sectional area (ICC = 0.986), echo intensity (ICC = 0.991) and subcutaneous adipose tissue thickness (ICC = 0.996). The model demonstrated high reliability with low SEM values for clinical measurements (cross-sectional area: 1.15 cm², echo intensity: 1.28-1.78 a.u.). We developed an open-source AI model that accurately segments the vastus lateralis and subcutaneous adipose tissue in B-mode ultrasound images, enabling automated measurements of muscle size and quality.
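The Dice coefficient and IoU used to evaluate segmentation here have standard definitions; a minimal sketch on toy masks:

```python
import numpy as np

def dice(pred, gt, eps=1e-8):
    """Dice = 2|A ∩ B| / (|A| + |B|)."""
    inter = np.logical_and(pred, gt).sum()
    return 2 * inter / (pred.sum() + gt.sum() + eps)

def iou(pred, gt, eps=1e-8):
    """IoU = |A ∩ B| / |A ∪ B|."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / (union + eps)

# Toy overlap: predicted mask shifted by 2 pixels relative to ground truth.
gt = np.zeros((50, 50), bool); gt[10:30, 10:30] = True
pred = np.zeros((50, 50), bool); pred[12:32, 10:30] = True
print(f"Dice={dice(pred, gt):.3f}  IoU={iou(pred, gt):.3f}")
```

Dice is always at least as large as IoU for the same pair of masks, which is worth remembering when comparing scores across papers.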

PET-Computed Tomography in the Management of Sarcoma by Interventional Oncology.

Yazdanpanah F, Hunt SJ

pubmed · Sep 13 2025
PET-computed tomography (CT) has become essential in sarcoma management, offering precise diagnosis, staging, and response assessment by combining metabolic and anatomic imaging. Its high accuracy in detecting primary, recurrent, and metastatic disease guides personalized treatment strategies and enhances interventional procedures like biopsies and ablations. Advances in novel radiotracers and hybrid imaging modalities further improve diagnostic specificity, especially in complex and pediatric cases. Integrating PET-CT with genomic data and artificial intelligence (AI)-driven tools promises to advance personalized medicine, enabling tailored therapies and better outcomes. As a cornerstone of multidisciplinary sarcoma care, PET-CT continues to transform diagnostic and therapeutic approaches in oncology.