
Magnetization transfer MRI (MT-MRI) detects white matter damage beyond the primary site of compression in degenerative cervical myelopathy using a novel semi-automated analysis.

Muhammad F, Weber II KA, Haynes G, Villeneuve L, Smith L, Baha A, Hameed S, Khan AF, Dhaher Y, Parrish T, Rohan M, Smith ZA

pubmed · Sep 14 2025
Degenerative cervical myelopathy (DCM) is the leading cause of spinal cord disorder in adults, yet conventional MRI cannot detect microstructural damage beyond the compression site. Current applications of the magnetization transfer ratio (MTR), while promising, suffer from limited standardization, operator dependence, and unclear added value over traditional metrics such as cross-sectional area (CSA). To address these limitations, we used our semi-automated analysis pipeline built on the Spinal Cord Toolbox (SCT) platform to automate MTR extraction. Our method integrates deep learning-based convolutional neural networks (CNNs) for spinal cord segmentation, vertebral labeling via a global curve optimization algorithm, and PAM50 template registration to enable automated MTR extraction. Using the Generic Spine Protocol, we acquired 3T T2-weighted and MT-MRI images from 30 patients with DCM and 15 age-matched healthy controls (HC). We computed MTR and CSA at the level of maximal compression (C5-C6) and at a distant, uncompressed region (C2-C3). We extracted regional and tract-specific MTR values using probabilistic maps in template space. Diagnostic accuracy was assessed with ROC analysis, and k-means clustering revealed patient subgroups based on neurological impairment. Correlation analysis assessed associations between MTR measures and DCM deficits. Patients with DCM showed significant MTR reductions in both compressed and uncompressed regions (p < 0.05). At C2-C3, MTR outperformed CSA (AUC 0.74 vs 0.69) in detecting spinal cord pathology. Tract-specific MTR values correlated with dexterity, grip strength, and balance deficits. Our reproducible, computationally robust pipeline links microstructural injury to clinical outcomes in DCM and provides a scalable framework for multi-site quantitative MRI analysis of the spinal cord.
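The MTR itself is a simple voxel-wise quantity, and tract-specific values are typically probability-weighted averages over atlas maps. A minimal numpy sketch of both steps (variable and function names are illustrative, not taken from the SCT pipeline):

```python
import numpy as np

def magnetization_transfer_ratio(mt_off, mt_on, eps=1e-9):
    """Voxel-wise MTR in percent units: 100 * (S_off - S_on) / S_off.

    mt_off: image acquired without the MT saturation pulse
    mt_on:  image acquired with the MT saturation pulse
    """
    mt_off = np.asarray(mt_off, dtype=float)
    mt_on = np.asarray(mt_on, dtype=float)
    return 100.0 * (mt_off - mt_on) / np.maximum(mt_off, eps)

def tract_mtr(mtr_map, tract_prob):
    """Probability-weighted mean MTR within one PAM50-style atlas tract map."""
    w = np.asarray(tract_prob, dtype=float)
    return float(np.sum(np.asarray(mtr_map, dtype=float) * w) / np.sum(w))
```

For example, a voxel with signal 200 without saturation and 140 with saturation has an MTR of 30 percent units.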

Open-Source AI for Vastus Lateralis and Adipose Tissue Segmentation to Assess Muscle Size and Quality.

White MS, Horikawa-Strakovsky A, Mayer KP, Noehren BW, Wen Y

pubmed · Sep 13 2025
Ultrasound imaging is a clinically feasible method for assessing muscle size and quality, but manual processing is time-consuming and difficult to scale. Existing artificial intelligence (AI) models measure muscle cross-sectional area, but they do not include assessments of muscle quality or account for the influence of subcutaneous adipose tissue thickness on echo intensity measurements. We developed an open-source AI model to accurately segment the vastus lateralis and subcutaneous adipose tissue in B-mode images for automating measurements of muscle size and quality. The model was trained on 612 ultrasound images from 44 participants who had anterior cruciate ligament reconstruction. Model generalizability was evaluated on a test set of 50 images from 14 unique participants. A U-Net architecture with a ResNet50 backbone was used for segmentation. Performance was assessed using the Dice coefficient and Intersection over Union (IoU). Agreement between model predictions and manual measurements was evaluated using intraclass correlation coefficients (ICCs), R² values, and standard errors of measurement (SEM). Dice coefficients were 0.9095 and 0.9654 for subcutaneous adipose tissue and vastus lateralis segmentation, respectively. Excellent agreement was observed between model predictions and manual measurements for cross-sectional area (ICC = 0.986), echo intensity (ICC = 0.991), and subcutaneous adipose tissue thickness (ICC = 0.996). The model demonstrated high reliability with low SEM values for clinical measurements (cross-sectional area: 1.15 cm², echo intensity: 1.28-1.78 a.u.). We developed an open-source AI model that accurately segments the vastus lateralis and subcutaneous adipose tissue in B-mode ultrasound images, enabling automated measurements of muscle size and quality.
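The Dice coefficient and IoU used to evaluate this model are standard overlap metrics for binary masks; a generic numpy sketch (not the authors' code):

```python
import numpy as np

def dice_coefficient(pred, gt):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary segmentation masks."""
    pred, gt = np.asarray(pred, bool), np.asarray(gt, bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom else 1.0

def intersection_over_union(pred, gt):
    """IoU = |A ∩ B| / |A ∪ B| for binary segmentation masks."""
    pred, gt = np.asarray(pred, bool), np.asarray(gt, bool)
    union = np.logical_or(pred, gt).sum()
    return np.logical_and(pred, gt).sum() / union if union else 1.0
```

Note that Dice is always at least as large as IoU for the same pair of masks, which is why the two are reported together.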

Adapting Medical Vision Foundation Models for Volumetric Medical Image Segmentation via Active Learning and Selective Semi-supervised Fine-tuning

Jin Yang, Daniel S. Marcus, Aristeidis Sotiras

arxiv · Sep 13 2025
Medical Vision Foundation Models (Med-VFMs) have superior capabilities for interpreting medical images owing to the knowledge learned from self-supervised pre-training on extensive unannotated images. To improve their performance on downstream evaluations, especially segmentation, a few samples from target domains are typically selected at random for fine-tuning. However, little work has explored how to adapt Med-VFMs to achieve optimal performance on target domains efficiently. An efficient fine-tuning scheme that selects informative samples to maximize adaptation performance on target domains is therefore highly desirable. To achieve this, we propose an Active Source-Free Domain Adaptation (ASFDA) method to efficiently adapt Med-VFMs to target domains for volumetric medical image segmentation. ASFDA employs a novel Active Learning (AL) method to select the most informative samples from target domains for fine-tuning Med-VFMs without access to source pre-training samples, thus maximizing performance with a minimal selection budget. In this AL method, we design an Active Test Time Sample Query strategy that selects samples from the target domains via two query metrics: Diversified Knowledge Divergence (DKD) and Anatomical Segmentation Difficulty (ASD). DKD measures the source-target knowledge gap and intra-domain diversity, using the pre-training knowledge to guide the querying of source-dissimilar and semantically diverse samples from the target domains. ASD evaluates the difficulty of segmenting anatomical structures by adaptively measuring predictive entropy over foreground regions. Additionally, our ASFDA method employs Selective Semi-supervised Fine-tuning, which improves the performance and efficiency of fine-tuning by identifying high-reliability samples among the unqueried ones.
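The abstract does not give the exact ASD formulation; one plausible reading is the mean predictive entropy over predicted-foreground voxels. A hedged numpy sketch under that assumption, taking softmax probabilities of shape (C, ...):

```python
import numpy as np

def predictive_entropy(probs, eps=1e-12):
    """Voxel-wise entropy of class probabilities; probs has shape (C, ...)."""
    p = np.clip(np.asarray(probs, dtype=float), eps, 1.0)
    return -np.sum(p * np.log(p), axis=0)

def foreground_difficulty(probs, threshold=0.5):
    """Mean predictive entropy over predicted-foreground voxels.

    A stand-in for the paper's ASD metric, assuming class 0 is background
    and the foreground probability is thresholded at `threshold`.
    """
    probs = np.asarray(probs, dtype=float)
    fg = probs[1:].sum(axis=0) > threshold
    return float(predictive_entropy(probs)[fg].mean()) if fg.any() else 0.0
```

Higher values flag volumes whose anatomy the current model segments with high uncertainty, making them attractive fine-tuning candidates under a limited query budget.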

Epicardial and Pericardial Adipose Tissue: Anatomy, Physiology, Imaging, Segmentation, and Treatment Effects.

Demmert TT, Klambauer K, Moser LJ, Mergen V, Eberhard M, Alkadhi H

pubmed · Sep 13 2025
Epicardial (EAT) and pericardial adipose tissue (PAT) are increasingly recognized as distinct fat depots with implications for cardiovascular disease. This review discusses their anatomical and physiological characteristics, as well as their pathophysiological roles. EAT, in direct contact with the myocardium, exerts local inflammatory and metabolic effects on the heart, while PAT influences cardiovascular health in a more systemic manner. We discuss the imaging modalities currently used to assess these fat compartments (CT, MRI, and echocardiography), emphasizing their advantages, their limitations, and the urgent need for standardization of both scanning and image reconstruction. Advances in image segmentation, particularly deep learning-based approaches, have improved the accuracy and reproducibility of EAT and PAT quantification. This review also explores the role of EAT and PAT as risk factors for cardiovascular outcomes, summarizing conflicting evidence across studies. Finally, we summarize the effects of medical therapy and lifestyle interventions on reducing EAT volume. Understanding and accurately quantifying EAT and PAT is essential for cardiovascular risk stratification and may open new pathways for therapeutic interventions.

Ex vivo human brain volumetry: Validation of MRI measurements.

Gérin-Lajoie A, Adame-Gonzalez W, Frigon EM, Guerra Sanches L, Nayouf A, Boire D, Dadar M, Maranzano J

pubmed · Sep 12 2025
The volume of in vivo human brains is determined with various MRI measurement tools that have not been assessed against a gold standard. The purpose of this study was to validate the MRI brain volumes by scanning ex vivo, in situ specimens, which allows the extraction of the brain after the scan to compare its volume with the gold-standard water displacement method (WDM). The 3T MRI T2-weighted, T1-weighted, and MP2RAGE images of seven anatomical heads fixed with an alcohol-formaldehyde solution were acquired. The gray and white matter were assessed using two methods: (i) a manual intensity-based threshold segmentation using Display (MINC-ToolKit) and (ii) an automatic deep learning-based segmentation tool (SynthSeg). The brains were extracted and their volumes measured with the WDM after the removal of their meninges and a midsagittal cut. Volumes from all methods were compared with the ground truth (WDM volumes) using a repeated-measures analysis of variance. Mean brain volumes, in cubic centimeters, were 1111.14 ± 121.78 for WDM, 1020.29 ± 70.01 for manual T2-weighted, 1056.29 ± 90.54 for automatic T2-weighted, 1094.69 ± 100.51 for automatic T1-weighted, 1066.56 ± 96.52 for automatic magnetization-prepared 2 rapid gradient-echo first inversion time, and 1156.18 ± 121.87 for automatic magnetization-prepared 2 rapid gradient-echo second inversion time. All volumetry methods were significantly different (F = 17.874; p < 0.001) from the WDM volumes, except the automatic T1-weighted volumes. SynthSeg accurately determined the brain volume in ex vivo, in situ T1-weighted MRI scans. The results suggested that given the contrast similarity between the ex vivo and in vivo sequences, the brain volumes of clinical studies are most probably sufficiently accurate, with some degree of underestimation depending on the sequence used.
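Mask-based volumetry of the kind compared against WDM here reduces to counting segmented voxels and scaling by the voxel size; a minimal generic sketch (not the study's code):

```python
import numpy as np

def mask_volume_cm3(mask, voxel_size_mm=(1.0, 1.0, 1.0)):
    """Volume of a binary segmentation mask in cm^3 (mm^3 / 1000)."""
    voxel_mm3 = float(np.prod(voxel_size_mm))
    return float(np.asarray(mask, bool).sum() * voxel_mm3 / 1000.0)

def percent_error_vs_wdm(mri_cm3, wdm_cm3):
    """Signed percent deviation of an MRI-derived volume from the WDM value."""
    return 100.0 * (mri_cm3 - wdm_cm3) / wdm_cm3
```

Applied to the reported means, the manual T2-weighted estimate (1020.29 cm³) deviates from the WDM mean (1111.14 cm³) by roughly -8.2%, consistent with the underestimation the authors describe.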

Deep learning for automated segmentation of central cartilage tumors on MRI.

Gitto S, Corti A, van Langevelde K, Navas Cañete A, Cincotta A, Messina C, Albano D, Vignaga C, Ferrari L, Mainardi L, Corino VDA, Sconfienza LM

pubmed · Sep 12 2025
Automated segmentation methods may potentially increase the reliability and applicability of radiomics in skeletal oncology. Our aim was to propose a deep learning-based method for automated segmentation of atypical cartilaginous tumor (ACT) and grade II chondrosarcoma (CS2) of long bones on magnetic resonance imaging (MRI). This institutional review board-approved retrospective study included 164 patients with surgically treated and histology-proven cartilaginous tumors at two tertiary bone tumor centers. The first cohort consisted of 99 MRI scans from center 1 (79 ACT, 20 CS2). The second cohort consisted of 65 MRI scans from center 2 (45 ACT, 20 CS2). The Supervised Edge-Attention Guidance segmentation Network (SEAGNET) architecture was employed for automated image segmentation on T1-weighted images, using manual segmentations drawn by musculoskeletal radiologists as the ground truth. In the first cohort, a total of 1,037 tumor-containing slices from the 99 patients were split into 70% training, 15% validation, and 15% internal test sets and used for model tuning. The second cohort was used for independent external testing. In the first cohort, Dice Score (DS) and Intersection over Union (IoU) per patient were 0.782 ± 0.148 and 0.663 ± 0.175 in the validation set, and 0.748 ± 0.191 and 0.630 ± 0.210 in the internal test set, respectively. DS and IoU per slice were 0.742 ± 0.273 and 0.646 ± 0.266 in the validation set, and 0.752 ± 0.256 and 0.656 ± 0.261 in the internal test set, respectively. In the independent external test dataset, the model achieved a DS of 0.828 ± 0.175 and an IoU of 0.706 ± 0.180. Deep learning proved excellent for automated segmentation of central cartilage tumors on MRI. A deep learning model based on the SEAGNET architecture achieved excellent performance for automated segmentation of cartilage tumors of long bones on MRI and may be beneficial, given the increasing detection rate of these lesions in clinical practice.
Automated segmentation may potentially increase the reliability and applicability of radiomics-based models. A deep learning architecture was proposed for automated segmentation of appendicular cartilage tumors on MRI. Deep learning proved excellent with a mean Dice Score of 0.828 in the external test cohort.

The impact of U-Net architecture choices and skip connections on the robustness of segmentation across texture variations.

Kamath A, Willmann J, Andratschke N, Reyes M

pubmed · Sep 12 2025
Since its introduction in 2015, the U-Net architecture has become popular for medical image segmentation. U-Net is known for its "skip connections," which transfer image details directly to its decoder branch at various levels. However, it is unclear how these skip connections affect the model's performance when the texture of input images varies. To explore this, we tested six U-Net-like architectures in three groups: Standard (U-Net and V-Net), No-Skip (U-Net and V-Net without skip connections), and Enhanced (AGU-Net and UNet++, which have extra skip connections). Because convolutional neural networks (CNNs) are known to be sensitive to texture, we defined a novel texture disparity (TD) metric and ran experiments on synthetic images while varying this measure. We then applied these findings to four real medical imaging datasets covering different anatomies (breast, colon, heart, and spleen) and imaging modalities (ultrasound, histology, MRI, and CT). The goal was to understand how the choice of architecture affects the model's ability to handle varying TD between foreground and background. For each dataset, we tested the models on five categories of TD, measuring performance with the Dice Score Coefficient (DSC), Hausdorff distance, surface distance, and surface DSC. Our results on synthetic data with varying textures show performance differences between architectures with and without skip connections, especially when trained under hard textural conditions. Translated to medical data, this indicates that training datasets with a narrow texture range negatively impact the robustness of architectures that include more skip connections. The robustness gap between architectures narrows when models are trained on a larger TD range. In the harder TD categories, models from the No-Skip group performed best in 5/8 cases based on DSC and in 7/8 based on Hausdorff distance.
When measuring robustness with the coefficient of variation of the DSC, the No-Skip group performed best in 7 out of 16 cases, outperforming the Enhanced (6/16) and Standard (3/16) groups. These findings suggest that skip connections offer performance benefits, often at the expense of robustness, depending on the degree of texture disparity between foreground and background and on the range of texture variations present in the training set. Their use in robustness-critical tasks such as medical image segmentation therefore warrants careful evaluation. Combinations with texture-aware architectures should be investigated to achieve better performance-robustness characteristics.
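The coefficient-of-variation robustness measure used here is simply the ratio of the standard deviation to the mean of DSC across texture-disparity categories; a generic one-function sketch:

```python
import numpy as np

def coefficient_of_variation(dsc_scores):
    """CV = population std / mean of a set of DSC values.

    Lower CV means the segmentation quality varies less across
    texture-disparity bins, i.e. the model is more robust.
    """
    s = np.asarray(dsc_scores, dtype=float)
    return float(s.std() / s.mean())
```

For example, a model scoring DSC 0.6 and 1.0 across two TD bins has CV 0.25, whereas a model scoring a constant 0.8 has CV 0 despite the same mean.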

Cardiac Magnetic Resonance Imaging in the German National Cohort (NAKO): Automated Segmentation of Short-Axis Cine Images and Post-Processing Quality Control.

Full PM, Schirrmeister RT, Hein M, Russe MF, Reisert M, Ammann C, Greiser KH, Niendorf T, Pischon T, Schulz-Menger J, Maier-Hein KH, Bamberg F, Rospleszcz S, Schlett CL, Schuppert C

pubmed · Sep 12 2025
The prospective, multicenter German National Cohort (NAKO) provides a unique dataset of cardiac magnetic resonance (CMR) cine images. Effective processing of these images requires a robust segmentation and quality control pipeline. A deep learning model for semantic segmentation, based on the nnU-Net architecture, was applied to full-cycle short-axis cine images from 29,908 baseline participants. The primary objective was to determine structural and functional parameters for both ventricles (LV, RV), including end-diastolic volume (EDV), end-systolic volume (ESV), and LV myocardial mass. Quality control included visual assessment of outliers in morphofunctional parameters, inter- and intra-ventricular phase differences, and time-volume curves (TVC). These were adjudicated on a five-point scale, ranging from five (excellent) to one (non-diagnostic), with ratings of three or lower leading to exclusion. The predictive value of the outlier criteria for inclusion and exclusion was evaluated using receiver operating characteristic analysis. The segmentation model generated complete data for 29,609 participants (incomplete in 1.0%), of whom 5,082 cases (17.0%) underwent visual assessment. Quality assurance yielded a sample of 26,899 (90.8%) participants with excellent or good quality, excluding 1,875 participants due to image quality issues and 835 participants due to segmentation quality issues. TVC was the strongest single discriminator between included and excluded participants (AUC: 0.684). Of the two-category combinations, pairing TVC with phase differences provided the greatest improvement over TVC alone (AUC difference: 0.044; p < 0.001). The best performance was observed when all three categories were combined (AUC: 0.748). Extending the quality-controlled sample to include mid-level 'acceptable' quality ratings allowed a total of 28,413 (96.0%) participants to be included.
The implemented pipeline enabled automated segmentation of an extensive CMR dataset with integrated quality control, ensuring that subsequent quantitative analyses carry a reduced risk of bias.
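The AUCs reported for the outlier criteria can be computed with the rank-based (Mann-Whitney) formulation of ROC analysis; a dependency-free sketch (not the study's code; here a label of 1 marks an excluded participant and the score is the outlier criterion's output):

```python
def roc_auc(scores, labels):
    """AUC via the Mann-Whitney formulation: the probability that a randomly
    chosen positive case scores higher than a randomly chosen negative one,
    with ties counted as half a win."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 0.684, as reported for TVC alone, means a randomly chosen excluded participant out-scores a randomly chosen included one about 68% of the time.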

A Comparison and Evaluation of Fine-tuned Convolutional Neural Networks to Large Language Models for Image Classification and Segmentation of Brain Tumors on MRI

Felicia Liu, Jay J. Yoo, Farzad Khalvati

arxiv · Sep 12 2025
Large Language Models (LLMs) have shown strong performance in text-based healthcare tasks. However, their utility in image-based applications remains unexplored. We investigate the effectiveness of LLMs for medical imaging tasks, specifically glioma classification and segmentation, and compare their performance to that of traditional convolutional neural networks (CNNs). Using the BraTS 2020 dataset of multi-modal brain MRIs, we evaluated a general-purpose vision-language LLM (LLaMA 3.2 Instruct) both before and after fine-tuning, and benchmarked its performance against custom 3D CNNs. For glioma classification (Low-Grade vs. High-Grade), the CNN achieved 80% accuracy with balanced precision and recall. The general LLM reached 76% accuracy but suffered from a specificity of only 18%, often misclassifying Low-Grade tumors. Fine-tuning improved specificity to 55%, but overall performance declined (e.g., accuracy dropped to 72%). For segmentation, three methods (center point, bounding box, and polygon extraction) were implemented. CNNs accurately localized gliomas, though small tumors were sometimes missed. In contrast, LLMs consistently clustered predictions near the image center, regardless of glioma size, location, or placement. Fine-tuning improved output formatting but failed to meaningfully enhance spatial accuracy. The polygon extraction method yielded random, unstructured outputs. Overall, CNNs outperformed LLMs in both tasks. LLMs showed limited spatial understanding and minimal improvement from fine-tuning, indicating that, in their current form, they are not well suited to image-based tasks. More rigorous fine-tuning or alternative training strategies may be needed for LLMs to achieve better performance, robustness, and utility in the medical space.
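Accuracy alone hides the specificity collapse reported for the general LLM (76% accuracy but 18% specificity); both sensitivity and specificity follow directly from the confusion matrix, as in this generic sketch:

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN), specificity = TN/(TN+FP) for 0/1 labels."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    sens = tp / (tp + fn) if tp + fn else float("nan")
    spec = tn / (tn + fp) if tn + fp else float("nan")
    return sens, spec
```

With High-Grade coded as 1, a low specificity like the 18% reported here means most Low-Grade (negative) cases are misclassified as High-Grade, even while overall accuracy stays superficially high.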

Toward Reliable Thalamic Segmentation: a rigorous evaluation of automated methods for structural MRI

Argyropoulos, G. P. D., Butler, C. R., Saranathan, M.

medrxiv · Sep 12 2025
Automated thalamic nuclear segmentation has contributed to a shift in neuroimaging analyses from treating the thalamus as a homogeneous, passive relay to treating it as a set of individual nuclei, embedded within distinct brain-wide circuits. However, many studies continue to rely on FreeSurfer's segmentation of T1-weighted structural MRIs, despite their poor intrathalamic nuclear contrast. Meanwhile, a convolutional neural network tool has been developed for FreeSurfer that uses information from both diffusion and T1-weighted MRIs. Another popular thalamic nuclear segmentation technique is HIPS-THOMAS, a multi-atlas-based method that leverages white-matter-like contrast synthesized from T1-weighted MRIs. However, rigorous comparisons amongst methods remain scant, and the thalamic atlases against which these methods have been assessed have their own limitations. These issues may compromise the quality of cross-species comparisons, structural and functional connectivity studies in health and disease, and the efficacy of neuromodulatory interventions targeting the thalamus. Here, we report, for the first time, comparisons amongst HIPS-THOMAS, the standard FreeSurfer segmentation, and its more recent development, against two thalamic atlases as silver-standard ground truths. We used two cohorts of healthy adults and one cohort of patients in the chronic phase of autoimmune limbic encephalitis. In healthy adults, HIPS-THOMAS surpassed not only the standard FreeSurfer segmentation but also its more recent, diffusion-based update. The improvements of the latter relative to the former were limited to a few nuclei. Finally, the standard FreeSurfer method underperformed relative to the other two in distinguishing between patients and healthy controls based on the affected anteroventral and pulvinar nuclei. In light of these findings, we provide recommendations on the use of automated segmentation methods of the human thalamus with structural brain imaging.