
A new method for placental volume measurements using tracked 2D ultrasound and automatic image segmentation.

Sagberg K, Lie T, F Peterson H, Hillestad V, Eskild A, Bø LE

PubMed · Jun 1 2025
Placental volume measurements can potentially identify high-risk pregnancies. We aimed to develop and validate a new method for placental volume measurements using tracked 2D ultrasound and automatic image segmentation. We included 43 pregnancies at gestational week 27, acquired placental images using a 2D ultrasound probe with position tracking, and trained a convolutional neural network (CNN) for automatic image segmentation. The automatically segmented 2D images were combined with tracking data to calculate placental volume. For 15 of the included pregnancies, placental volume was also estimated based on MRI examinations, 3D ultrasound and manually segmented 2D ultrasound images. The ultrasound methods were compared to MRI (gold standard). The CNN demonstrated good performance in automatic image segmentation (F1-score 0.84). The correlation with MRI-based placental volume was similar for tracked 2D ultrasound using automatically segmented images (absolute agreement intraclass correlation coefficient [ICC] 0.58, 95% CI 0.13-0.84) and manually segmented images (ICC 0.59, 95% CI 0.13-0.84). The 3D ultrasound method showed lower ICC (0.35, 95% CI -0.11 to 0.74) than the methods based on tracked 2D ultrasound. Tracked 2D ultrasound with automatic image segmentation is a promising new method for placental volume measurements and has potential for further improvement.
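The core volume computation described above — combining per-slice segmented areas with tracked probe positions — can be sketched roughly as integrating cross-sectional area along the sweep. This is a simplified illustration with hypothetical function and parameter names; the paper's freehand-3D reconstruction from tracking data is more involved:

```python
import numpy as np

def placental_volume(masks, positions_mm, pixel_area_mm2):
    """Approximate organ volume from tracked, segmented 2D slices.

    masks:          list of binary segmentation masks, one per slice
    positions_mm:   tracked probe position along the sweep for each slice (mm)
    pixel_area_mm2: in-plane area of one pixel (mm^2)
    """
    # Segmented cross-sectional area of each slice, in mm^2
    areas = np.array([float(m.sum()) * pixel_area_mm2 for m in masks])
    # Trapezoidal integration of area along the tracked sweep direction
    segments = (areas[1:] + areas[:-1]) / 2.0 * np.diff(positions_mm)
    return segments.sum() / 1000.0  # mm^3 -> cm^3
```

In practice the tracked positions are neither uniform nor colinear, which is why the trapezoidal spacing is taken from the tracking data rather than assumed constant.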

Information Geometric Approaches for Patient-Specific Test-Time Adaptation of Deep Learning Models for Semantic Segmentation.

Ravishankar H, Paluru N, Sudhakar P, Yalavarthy PK

PubMed · Jun 1 2025
The test-time adaptation (TTA) of deep-learning-based semantic segmentation models, specific to individual patient data, was addressed in this study. Existing TTA methods in medical imaging are often unconstrained, require anatomical prior information, or depend on additional neural networks built during the training phase, making them less practical and prone to performance deterioration. In this study, a novel framework based on information geometric principles was proposed to achieve generic, off-the-shelf, regularized patient-specific adaptation of models at test time. By considering the pre-trained model and the adapted models as part of statistical neuromanifolds, test-time adaptation was treated as constrained functional regularization using information geometric measures, leading to improved generalization and patient optimality. The efficacy of the proposed approach was shown on three challenging problems: 1) improving generalization of state-of-the-art models for segmenting COVID-19 anomalies in computed tomography (CT) images; 2) cross-institutional brain tumor segmentation from magnetic resonance (MR) images; and 3) segmentation of retinal layers in optical coherence tomography (OCT) images. Further, it was demonstrated that robust patient-specific adaptation can be achieved without significant additional computational burden, making it the first such approach based on information geometric principles.
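The abstract's "constrained functional regularization using information geometric measures" can be illustrated with the simplest such measure, the KL divergence. The sketch below is an assumption about the general shape of the objective, not the paper's actual formulation: adapt the model's predictive distribution by minimizing its entropy while a KL term tethers it to the pre-trained model's predictions.

```python
import numpy as np

def tta_objective(p_adapted, p_pretrained, lam=1.0, eps=1e-12):
    """Sketch of a regularized TTA objective per pixel/sample:
    entropy of the adapted predictions plus a KL-divergence penalty
    keeping them close to the pre-trained model (hypothetical form).

    p_adapted, p_pretrained: arrays of class probabilities, last axis = classes
    lam: regularization strength trading adaptation against fidelity
    """
    p_a = np.clip(np.asarray(p_adapted, float), eps, 1.0)
    p_0 = np.clip(np.asarray(p_pretrained, float), eps, 1.0)
    entropy = -np.sum(p_a * np.log(p_a), axis=-1)          # confidence term
    kl = np.sum(p_0 * np.log(p_0 / p_a), axis=-1)          # anchor to pre-trained
    return float(np.mean(entropy + lam * kl))
```

Minimizing this over the adapted model's parameters encourages confident predictions that do not drift arbitrarily far from the pre-trained model — the "regularized" part of the adaptation.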

LiDSCUNet++: A lightweight depth separable convolutional UNet++ for vertebral column segmentation and spondylosis detection.

Agrawal KK, Kumar G

PubMed · May 31 2025
Accurate computer-aided diagnosis systems rely on precise segmentation of the vertebral column to assist physicians in diagnosing various disorders. However, segmenting spinal disks and bones becomes challenging in the presence of abnormalities and complex anatomical structures. While Deep Convolutional Neural Networks (DCNNs) achieve remarkable results in medical image segmentation, their performance is limited by data insufficiency and the high computational complexity of existing solutions. This paper introduces LiDSCUNet++, a lightweight deep learning framework based on depthwise-separable and pointwise convolutions integrated with UNet++ for vertebral column segmentation. The model segments vertebral anomalies from dog radiographs, and the results are further processed by YOLOv8 for automated detection of Spondylosis Deformans. LiDSCUNet++ delivers comparable segmentation performance while significantly reducing trainable parameters, memory usage, energy consumption, and computational time, making it an efficient and practical solution for medical image analysis.
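The parameter savings that make LiDSCUNet++ lightweight come from factoring each standard convolution into a depthwise convolution plus a pointwise (1x1) convolution. The parameter-count arithmetic (ignoring biases) shows why:

```python
def conv_params(c_in, c_out, k):
    """Parameters of a standard k x k convolution (no bias):
    every output channel has a k x k filter over all input channels."""
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    """Depthwise k x k convolution (one filter per input channel)
    followed by a 1x1 pointwise convolution mixing channels (no bias)."""
    return c_in * k * k + c_in * c_out
```

For example, with 64 input channels, 128 output channels and 3x3 kernels, the standard convolution needs 73,728 parameters while the depthwise-separable version needs 8,768 — roughly an 8x reduction, which compounds across the many convolutions in a UNet++-style decoder.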

Relationship between spleen volume and diameter for assessment of response to treatment on CT in patients with hematologic malignancies enrolled in clinical trials.

Hasenstab KA, Lu J, Leong LT, Bossard E, Pylarinou-Sinclair E, Devi K, Cunha GM

PubMed · May 31 2025
To investigate the relationship between spleen diameter (d) and volume (v) in patients with hematologic malignancies (HM) by determining the volumetric thresholds that best correspond to established diameter thresholds for assessing response to treatment, and, as an exploratory aim, to interrogate the impact of volumetric measurements on response categories and as a predictor of response. Secondary analysis of prospectively collected clinical trial data from 382 patients with HM. Spleen diameters were computed following the Lugano criteria and volumes using deep learning segmentation. The d-v relationship was estimated using a power regression model and volumetric thresholds ([Formula: see text]) for treatment response were estimated; a threshold search was performed to determine the percentage change ([Formula: see text]) and minimum volumetric increase ([Formula: see text]) that maximize agreement with the Lugano criteria. The predictive performance of spleen diameter and volume for clinical response was investigated using a random forest model. [Formula: see text] describes the relationship between spleen diameter and volume. The [Formula: see text] for splenomegaly was 546 cm³. The [Formula: see text], [Formula: see text], and [Formula: see text] for assessing response that resulted in the highest agreement with the Lugano criteria were 570 cm³, 73%, and 170 cm³, respectively. Predictive performance for response did not differ significantly between diameter and volume (P = 0.78). This study provides empirical spleen volume thresholds and percentage changes that best correlate with diameter thresholds, i.e., the Lugano criteria, for assessment of response to treatment in patients with HM. In our dataset, the use of spleen volumetric thresholds versus diameter thresholds resulted in similar response assessment categories and did not signal differences in predictive value for response.
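A power regression of the form v = a·d^b (the paper's fitted coefficients are elided in this extraction) is conventionally fit by least squares in log-log space, after which any diameter threshold can be mapped to a volume threshold. A minimal sketch with synthetic data:

```python
import numpy as np

def fit_power_law(d, v):
    """Fit v = a * d**b by linear least squares on log-transformed data."""
    b, log_a = np.polyfit(np.log(d), np.log(v), 1)
    return np.exp(log_a), b

def volume_threshold(a, b, d_threshold):
    """Map an established diameter threshold to its volumetric equivalent."""
    return a * d_threshold ** b
```

Usage: fitting exact synthetic data v = 0.5·d³ recovers a ≈ 0.5 and b ≈ 3, and `volume_threshold` then converts any diameter cutoff to the corresponding volume cutoff on the fitted curve.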

QoQ-Med: Building Multimodal Clinical Foundation Models with Domain-Aware GRPO Training

Wei Dai, Peilin Chen, Chanakya Ekbote, Paul Pu Liang

arXiv preprint · May 31 2025
Clinical decision-making routinely demands reasoning over heterogeneous data, yet existing multimodal language models (MLLMs) remain largely vision-centric and fail to generalize across clinical specialties. To bridge this gap, we introduce QoQ-Med-7B/32B, the first open generalist clinical foundation model that jointly reasons across medical images, time-series signals, and text reports. QoQ-Med is trained with Domain-aware Relative Policy Optimization (DRPO), a novel reinforcement-learning objective that hierarchically scales normalized rewards according to domain rarity and modality difficulty, mitigating performance imbalance caused by skewed clinical data distributions. Trained on 2.61 million instruction tuning pairs spanning 9 clinical domains, we show that DRPO training boosts diagnostic performance by 43% in macro-F1 on average across all visual domains as compared to other critic-free training methods like GRPO. Furthermore, with QoQ-Med trained on intensive segmentation data, it is able to highlight salient regions related to the diagnosis, with an IoU 10x higher than open models while reaching the performance of OpenAI o4-mini. To foster reproducibility and downstream research, we release (i) the full model weights, (ii) the modular training pipeline, and (iii) all intermediate reasoning traces at https://github.com/DDVD233/QoQ_Med.
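DRPO's key idea per the abstract is to hierarchically scale group-normalized rewards by domain rarity and modality difficulty. The exact weighting scheme is not given here, so the sketch below uses a hypothetical choice (inverse domain frequency times a difficulty factor) on top of GRPO-style group normalization:

```python
import numpy as np

def scaled_advantages(rewards, domain_freq, difficulty, eps=1e-8):
    """GRPO-style group-normalized advantages, upweighted for rare
    domains and difficult modalities. The weighting (difficulty divided
    by domain frequency) is a hypothetical stand-in for DRPO's scaling.

    rewards:     rewards for one group of rollouts on the same prompt
    domain_freq: fraction of training data from this clinical domain (0, 1]
    difficulty:  scalar difficulty factor for this modality
    """
    r = np.asarray(rewards, dtype=float)
    adv = (r - r.mean()) / (r.std() + eps)   # normalize within the group
    weight = difficulty / domain_freq        # rarer / harder -> larger weight
    return weight * adv
```

The effect is that gradient signal from under-represented domains is amplified rather than drowned out by the head of a skewed clinical data distribution.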

MSLesSeg: baseline and benchmarking of a new Multiple Sclerosis Lesion Segmentation dataset.

Guarnera F, Rondinella A, Crispino E, Russo G, Di Lorenzo C, Maimone D, Pappalardo F, Battiato S

PubMed · May 31 2025
This paper presents MSLesSeg, a new, publicly accessible MRI dataset designed to advance research in Multiple Sclerosis (MS) lesion segmentation. The dataset comprises 115 scans of 75 patients, including T1, T2 and FLAIR sequences, along with supplementary clinical data collected across different sources. Expert-validated annotations provide high-quality lesion segmentation labels, establishing a reliable human-labeled dataset for benchmarking. Part of the dataset was shared with expert scientists to compare the latest automatic AI-based image segmentation solutions against expert manual segmentation. In addition, an AI-based lesion segmentation baseline for MSLesSeg was developed and technically validated against the latest state-of-the-art methods. The dataset, the detailed analysis of researcher contributions, and the baseline results presented here mark a significant milestone for advancing automated MS lesion segmentation research.

CineMA: A Foundation Model for Cine Cardiac MRI

Yunguan Fu, Weixi Yi, Charlotte Manisty, Anish N Bhuva, Thomas A Treibel, James C Moon, Matthew J Clarkson, Rhodri Huw Davies, Yipeng Hu

arXiv preprint · May 31 2025
Cardiac magnetic resonance (CMR) is a key investigation in clinical cardiovascular medicine and has been used extensively in population research. However, extracting clinically important measurements such as ejection fraction for diagnosing cardiovascular diseases remains time-consuming and subjective. We developed CineMA, a foundation AI model automating these tasks with limited labels. CineMA is a self-supervised autoencoder model trained on 74,916 cine CMR studies to reconstruct images from masked inputs. After fine-tuning, it was evaluated across eight datasets on 23 tasks from four categories: ventricle and myocardium segmentation, left and right ventricle ejection fraction calculation, disease detection and classification, and landmark localisation. CineMA is the first foundation model for cine CMR to match or outperform convolutional neural networks (CNNs). CineMA demonstrated greater label efficiency than CNNs, achieving comparable or better performance with fewer annotations. This reduces the burden of clinician labelling and supports replacing task-specific training with fine-tuning foundation models in future cardiac imaging applications. Models and code for pre-training and fine-tuning are available at https://github.com/mathpluscode/CineMA, democratising access to high-performance models that otherwise require substantial computational resources, promoting reproducibility and accelerating clinical translation.

Subclinical atrial fibrillation prediction based on deep learning and strain analysis using echocardiography.

Huang SH, Lin YC, Chen L, Unankard S, Tseng VS, Tsao HM, Tang GJ

PubMed · May 31 2025
Subclinical atrial fibrillation (SCAF), also known as atrial high-rate episodes (AHREs), refers to asymptomatic heart rate elevations associated with increased risks of atrial fibrillation and cardiovascular events. Although deep learning (DL) models leveraging echocardiographic images from ultrasound are widely used for cardiac function analysis, their application to AHRE prediction remains unexplored. This study introduces a novel DL-based framework for automatic AHRE detection using echocardiograms. The approach encompasses left atrium (LA) segmentation, LA strain feature extraction, and AHRE classification. Data from 117 patients with cardiac implantable electronic devices undergoing echocardiography were analyzed, with 80% allocated to the development set and 20% to the test set. LA segmentation accuracy was quantified using the Dice coefficient, yielding scores of 0.923 for the LA cavity and 0.741 for the LA wall. For AHRE classification, metrics such as area under the curve (AUC), accuracy, sensitivity, and specificity were employed. A transformer-based model integrating patient characteristics demonstrated robust performance, achieving mean AUC of 0.815, accuracy of 0.809, sensitivity of 0.800, and specificity of 0.783 for a 24-h AHRE duration threshold. This framework represents a reliable tool for AHRE assessment and holds significant potential for early SCAF detection, enhancing clinical decision-making and patient outcomes.

A Mixed-attention Network for Automated Interventricular Septum Segmentation in Bright-blood Myocardial T2* MRI Relaxometry in Thalassemia.

Wu X, Wang H, Chen Z, Sun S, Lian Z, Zhang X, Peng P, Feng Y

PubMed · May 30 2025
This study develops a deep-learning method for automatic segmentation of the interventricular septum (IS) in MR images to measure myocardial T2* and estimate cardiac iron deposition in patients with thalassemia. This retrospective study used multiple-gradient-echo cardiac MR scans from 419 thalassemia patients to develop and evaluate the segmentation network. The network was trained on 1.5 T images from Center 1 and evaluated on 3.0 T unseen images from Center 1, all data from Center 2, and the CHMMOTv1 dataset. Model performance was assessed using five metrics, and T2* values were obtained by fitting the network output. Bland-Altman analysis, coefficient of variation (CoV), and regression analysis were used to evaluate the consistency between automatic and manual methods. MA-BBIsegNet achieved a Dice of 0.90 on the internal test set, 0.85 on the external test set, and 0.81 on the CHMMOTv1 dataset. Bland-Altman analysis showed mean differences of 0.08 (95% LoA: -2.79 ∼ 2.63) ms (internal), 0.29 (95% LoA: -4.12 ∼ 3.54) ms (external) and 0.19 (95% LoA: -3.50 ∼ 3.88) ms (CHMMOTv1), with CoV of 8.9%, 6.8%, and 9.3%. Regression analysis yielded r values of 0.98 for the internal and CHMMOTv1 datasets, and 0.99 for the external dataset (p < 0.05). The IS segmentation network based on multiple-gradient-echo bright-blood images yielded T2* values that were in strong agreement with manual measurements, highlighting its potential for the efficient, non-invasive monitoring of myocardial iron deposition in patients with thalassemia.
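The agreement statistics reported above (Bland-Altman mean difference with 95% limits of agreement, and coefficient of variation) can be computed as follows. Note that CoV definitions vary between papers; the one below (SD of the paired differences over the grand mean) is an assumption, not necessarily the paper's:

```python
import numpy as np

def bland_altman(auto, manual):
    """Mean difference and 95% limits of agreement between two methods."""
    auto, manual = np.asarray(auto, float), np.asarray(manual, float)
    diff = auto - manual
    md, sd = diff.mean(), diff.std(ddof=1)
    return md, md - 1.96 * sd, md + 1.96 * sd

def coefficient_of_variation(auto, manual):
    """SD of the paired differences divided by the grand mean of the
    two methods (one common convention among several)."""
    auto, manual = np.asarray(auto, float), np.asarray(manual, float)
    diff = auto - manual
    return diff.std(ddof=1) / np.mean((auto + manual) / 2.0)
```

A mean difference near zero with narrow limits of agreement, as reported for the T2* values here, indicates the automatic segmentation introduces little systematic or random bias relative to manual contouring.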

Fully automated measurement of aortic pulse wave velocity from routine cardiac MRI studies.

Jiang Y, Yao T, Paliwal N, Knight D, Punjabi K, Steeden J, Hughes AD, Muthurangu V, Davies R

PubMed · May 30 2025
Aortic pulse wave velocity (PWV) is a prognostic biomarker for cardiovascular disease, which can be measured by dividing the aortic path length by the pulse transit time. However, current MRI techniques require special sequences and time-consuming manual analysis. We aimed to fully automate the process using deep learning to measure PWV from standard sequences, facilitating PWV measurement in routine clinical and research scans. A deep learning (DL) model was developed to generate high-resolution 3D aortic segmentations from routine 2D trans-axial SSFP localizer images, and the centerlines of the resulting segmentations were used to estimate the aortic path length. A further DL model was built to automatically segment the ascending and descending aorta in phase contrast images, and pulse transit time was estimated from the sampled flow curves. Quantitative comparison with trained observers was performed for path length, aortic flow segmentation and transit time, either using an external clinical dataset with both localizers and paired 3D images acquired or on a sample of UK Biobank subjects. Potential application to clinical research scans was evaluated on 1053 subjects from the UK Biobank. Aortic path length measurement was accurate, with no major difference between the proposed method (125 ± 19 mm) and manual measurement by a trained observer (124 ± 19 mm) (P = 0.88). Automated phase contrast image segmentation was similar to that of a trained observer for both the ascending (Dice vs manual: 0.96) and descending (Dice 0.89) aorta, with no major difference in transit time estimation (proposed method = 21 ± 9 ms, manual = 22 ± 9 ms; P = 0.15). 966 of 1053 (92%) UK Biobank subjects were successfully analyzed, with a median PWV of 6.8 m/s, increasing by 27% per decade of age and by 6.5% per 10 mmHg increase in systolic blood pressure. We describe a fully automated method for measuring PWV from standard cardiac MRI localizers and a single phase contrast imaging plane. The method is robust, can be applied to routine clinical scans, and could unlock the potential of measuring PWV in large-scale clinical and population studies. All models and deployment codes are available online.
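The final PWV computation reduces to path length divided by transit time, with transit time taken as the delay between the "feet" of the ascending and descending aortic flow curves. A minimal sketch — the threshold-crossing foot detection below is a simplifying assumption, not necessarily the paper's method:

```python
import numpy as np

def transit_time(flow_asc, flow_desc, dt_ms, frac=0.2):
    """Delay (ms) between the feet of the ascending and descending
    aortic flow curves, sampled every dt_ms milliseconds. The foot is
    approximated here as the first crossing of `frac` times the peak."""
    def foot(flow):
        flow = np.asarray(flow, float)
        return np.argmax(flow >= frac * flow.max()) * dt_ms
    return foot(flow_desc) - foot(flow_asc)

def pulse_wave_velocity(path_length_mm, transit_time_ms):
    """PWV in m/s; mm/ms is numerically identical to m/s."""
    return path_length_mm / transit_time_ms
```

With the abstract's typical values (path length ~125 mm, transit time ~21 ms), this yields a PWV of about 6 m/s, consistent with the reported median of 6.8 m/s across the UK Biobank sample.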
