SMURF: Scalable method for unsupervised reconstruction of flow in 4D flow MRI

Atharva Hans, Abhishek Singh, Pavlos Vlachos, Ilias Bilionis

arXiv preprint · May 18, 2025
We introduce SMURF, a scalable and unsupervised machine learning method for simultaneously segmenting vascular geometries and reconstructing velocity fields from 4D flow MRI data. SMURF models geometry and velocity fields using multilayer perceptron-based functions incorporating Fourier feature embeddings and random weight factorization to accelerate convergence. A measurement model connects these fields to the observed image magnitude and phase data. Maximum likelihood estimation and subsampling enable SMURF to process high-dimensional datasets efficiently. Evaluations on synthetic, in vitro, and in vivo datasets demonstrate SMURF's performance. On synthetic internal carotid artery aneurysm data derived from CFD, SMURF achieves quarter-voxel segmentation accuracy at noise levels of up to 50%, up to twice the accuracy of the state-of-the-art segmentation method. In an in vitro experiment on Poiseuille flow, SMURF reduces velocity reconstruction RMSE by approximately 34% compared to raw measurements. On in vivo internal carotid artery aneurysm data, SMURF attains nearly half-voxel segmentation accuracy relative to expert annotations and decreases median velocity divergence residuals by about 31%, with a 27% reduction in the interquartile range. These results indicate that SMURF is robust to noise, preserves flow structure, and identifies patient-specific morphological features. SMURF advances 4D flow MRI accuracy, potentially enhancing its diagnostic utility in clinical applications.
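
The paper's code is not reproduced here, but the coordinate-network backbone it describes is standard. A minimal PyTorch sketch of a Fourier-feature MLP mapping space-time coordinates to velocity (all names and hyperparameters are illustrative assumptions; random weight factorization and the measurement model are omitted):

```python
import torch
import torch.nn as nn

class FourierFeatureMLP(nn.Module):
    """Maps (x, y, z, t) coordinates to a 3D velocity via Fourier features."""
    def __init__(self, in_dim=4, out_dim=3, n_features=128, sigma=10.0, width=256):
        super().__init__()
        # Random Gaussian frequencies, fixed after initialization.
        self.register_buffer("B", torch.randn(in_dim, n_features) * sigma)
        self.net = nn.Sequential(
            nn.Linear(2 * n_features, width), nn.GELU(),
            nn.Linear(width, width), nn.GELU(),
            nn.Linear(width, out_dim),
        )

    def forward(self, coords):                 # coords: (N, 4)
        proj = 2 * torch.pi * coords @ self.B  # (N, n_features)
        feats = torch.cat([torch.sin(proj), torch.cos(proj)], dim=-1)
        return self.net(feats)                 # (N, 3) velocity samples

model = FourierFeatureMLP()
velocity = model(torch.rand(1024, 4))  # a random subsample of space-time points
```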

Mutual Evidential Deep Learning for Medical Image Segmentation

Yuanpeng He, Yali Bi, Lijian Li, Chi-Man Pun, Wenpin Jiao, Zhi Jin

arXiv preprint · May 18, 2025
Existing co-learning frameworks for semi-supervised medical segmentation recognize that model performance can be diminished by recognition biases caused by low-quality pseudo-labels. Because their pseudo-label integration strategies simply average predictions, they fail to account for the varying reliability of pseudo-labels from different sources. In this paper, we propose a mutual evidential deep learning (MEDL) framework that offers a potentially viable solution for pseudo-label generation in semi-supervised learning from two perspectives. First, we introduce networks with different architectures to generate complementary evidence for unlabeled samples and adopt an improved class-aware evidential fusion to guide the confident synthesis of evidential predictions sourced from diverse architectural networks. Second, utilizing the uncertainty in the fused evidence, we design an asymptotic Fisher information-based evidential learning strategy. This strategy enables the model to initially focus on unlabeled samples with more reliable pseudo-labels, gradually shifting attention to samples with lower-quality pseudo-labels while avoiding over-penalization of mislabeled classes in samples with high data uncertainty. Additionally, for labeled data, we continue to adopt an uncertainty-driven asymptotic learning strategy, gradually guiding the model to focus on challenging voxels. Extensive experiments on five mainstream datasets demonstrate that MEDL achieves state-of-the-art performance.
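
As a toy illustration of the evidential ingredients involved (not the paper's class-aware fusion rule, which is more elaborate), one can convert two networks' logits to Dirichlet evidence, fuse them, and read off a vacuity-style uncertainty:

```python
import torch
import torch.nn.functional as F

def dirichlet_params(logits):
    """Raw logits -> Dirichlet parameters alpha = evidence + 1."""
    return F.softplus(logits) + 1.0

def fuse(logits_a, logits_b):
    """Sum the evidence from two architecturally different networks."""
    alpha = dirichlet_params(logits_a) + dirichlet_params(logits_b) - 1.0
    strength = alpha.sum(dim=-1, keepdim=True)
    prob = alpha / strength                    # expected class probabilities
    uncertainty = alpha.shape[-1] / strength   # vacuity u = K / S
    return prob, uncertainty

# Per-voxel logits from two segmentation networks, 4 classes:
p, u = fuse(torch.randn(8, 4), torch.randn(8, 4))
```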

The Role of Digital Technologies in Personalized Craniomaxillofacial Surgical Procedures.

Daoud S, Shhadeh A, Zoabi A, Redenski I, Srouji S

PubMed paper · May 17, 2025
Craniomaxillofacial (CMF) surgery addresses complex challenges, balancing aesthetic and functional restoration. Digital technologies, including advanced imaging, virtual surgical planning, computer-aided design, and 3D printing, have revolutionized this field. These tools improve accuracy and optimize processes across all surgical phases, from diagnosis to postoperative evaluation. CMF's unique demands are met through patient-specific solutions that optimize outcomes. Emerging technologies like artificial intelligence, extended reality, robotics, and bioprinting promise to overcome limitations, driving the future of personalized, technology-driven CMF care.

Intracranial hemorrhage segmentation and classification framework in computer tomography images using deep learning techniques.

Ahmed SN, Prakasam P

PubMed paper · May 17, 2025
Automated diagnosis and computed tomography (CT) hemorrhage segmentation can help neurosurgeons devise treatment strategies that improve survival rates. Given the importance of medical image segmentation and the difficulty of performing it manually, a wide variety of automated techniques have been developed, typically focused on particular image modalities. In this paper, a MUNet (Multiclass-UNet) based Intracranial Hemorrhage Segmentation and Classification Framework (IHSNet) is proposed to segment multiple kinds of hemorrhages, with fully connected layers classifying the hemorrhage type. Using the proposed approach, segmentation accuracy for hemorrhages is 98.53% and classification accuracy is 98.71%. The subtypes of intracranial hemorrhage (ICH) are intraventricular hemorrhage (IVH), epidural hemorrhage (EDH), intraparenchymal hemorrhage (IPH), subdural hemorrhage (SDH), and subarachnoid hemorrhage (SAH), with Dice coefficients of 0.77, 0.84, 0.64, 0.80, and 0.92, respectively. The proposed method holds substantial potential for computer-aided diagnosis and could be extended in the future to further medical image segmentation problems.
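
For reference, the per-subtype Dice coefficients quoted above are typically computed along these lines (an illustrative implementation, not the authors' code):

```python
import torch

def dice_per_class(pred, target, n_classes, eps=1e-6):
    """pred, target: integer label maps, e.g. shape (D, H, W)."""
    scores = []
    for c in range(n_classes):
        p, t = (pred == c), (target == c)
        inter = (p & t).sum().float()
        scores.append((2 * inter + eps) / (p.sum() + t.sum() + eps))
    return torch.stack(scores)  # one Dice score per hemorrhage subtype
```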

Fair ultrasound diagnosis via adversarial protected attribute aware perturbations on latent embeddings.

Xu Z, Tang F, Quan Q, Yao Q, Kong Q, Ding J, Ning C, Zhou SK

PubMed paper · May 17, 2025
Deep learning techniques have significantly enhanced the convenience and precision of ultrasound image diagnosis, particularly in the crucial step of lesion segmentation. However, recent studies reveal that both train-from-scratch models and pre-trained models often exhibit performance disparities across sex and age attributes, leading to biased diagnoses for different subgroups. In this paper, we propose APPLE, a novel approach designed to mitigate unfairness without altering the parameters of the base model. APPLE achieves this by learning fair perturbations in the latent space through a generative adversarial network. Extensive experiments on both a publicly available dataset and an in-house ultrasound image dataset demonstrate that our method improves segmentation and diagnostic fairness across all sensitive attributes and various backbone architectures compared to the base models. Through this study, we aim to highlight the critical importance of fairness in medical segmentation and contribute to the development of a more equitable healthcare system.
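
A minimal sketch of the latent-perturbation idea, assuming a frozen base model whose embeddings z are perturbed while an adversary tries to recover the protected attribute (module names, sizes, and the alternating objective are simplifications of APPLE's GAN formulation):

```python
import torch
import torch.nn as nn

latent_dim = 256
perturber = nn.Sequential(nn.Linear(latent_dim, latent_dim), nn.Tanh())
adversary = nn.Linear(latent_dim, 2)  # e.g. predicts sex from the latent code
ce = nn.CrossEntropyLoss()
opt_p = torch.optim.Adam(perturber.parameters(), lr=1e-4)
opt_a = torch.optim.Adam(adversary.parameters(), lr=1e-4)

def fairness_step(z, attr):
    """z: frozen base-model embeddings (B, latent_dim); attr: protected labels (B,)."""
    # 1) Adversary learns to recover the protected attribute from perturbed latents.
    opt_a.zero_grad()
    z_fair = z + perturber(z)
    ce(adversary(z_fair.detach()), attr).backward()
    opt_a.step()
    # 2) Perturber learns to erase it; the base model's weights stay untouched.
    opt_p.zero_grad()
    (-ce(adversary(z + perturber(z)), attr)).backward()
    opt_p.step()
```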

Foundation versus Domain-Specific Models for Left Ventricular Segmentation on Cardiac Ultrasound

Chao, C.-J., Gu, Y., Kumar, W., Xiang, T., Appari, L., Wu, J., Farina, J. M., Wraith, R., Jeong, J., Arsanjani, R., Garvan, K. C., Oh, J. K., Langlotz, C. P., Banerjee, I., Li, F.-F., Adeli, E.

medRxiv preprint · May 17, 2025
The Segment Anything Model (SAM) was fine-tuned on the EchoNet-Dynamic dataset and evaluated on external transthoracic echocardiography (TTE) and Point-of-Care Ultrasound (POCUS) datasets from CAMUS (University Hospital of St Etienne) and Mayo Clinic (99 patients: 58 TTE, 41 POCUS). Fine-tuned SAM was superior or comparable to MedSAM. The fine-tuned SAM also outperformed EchoNet and U-Net models, demonstrating strong generalization, especially on apical 2-chamber (A2C) images (fine-tuned SAM vs. EchoNet: CAMUS-A2C: DSC 0.891 ± 0.040 vs. 0.752 ± 0.196, p<0.0001) and POCUS (DSC 0.857 ± 0.047 vs. 0.667 ± 0.279, p<0.0001). Additionally, the SAM-enhanced workflow reduced annotation time by 50% (11.6 ± 4.5 sec vs. 5.7 ± 1.7 sec, p<0.0001) while maintaining segmentation quality. We demonstrated an effective strategy for fine-tuning a vision foundation model for enhancing clinical workflow efficiency and supporting human-AI collaboration.
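
A hedged sketch of decoder-only fine-tuning with Meta's segment-anything package (the checkpoint path, loss, and training loop are assumptions, not the study's recipe):

```python
import torch
import torch.nn.functional as F
from segment_anything import sam_model_registry

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
for p in sam.image_encoder.parameters():
    p.requires_grad = False  # freeze the heavy ViT image encoder
optimizer = torch.optim.AdamW(sam.mask_decoder.parameters(), lr=1e-4)

def train_step(image, gt_mask):  # image: (1, 3, 1024, 1024); gt_mask: (1, 1, H, W)
    with torch.no_grad():
        embedding = sam.image_encoder(image)
    sparse, dense = sam.prompt_encoder(points=None, boxes=None, masks=None)
    low_res_logits, _ = sam.mask_decoder(
        image_embeddings=embedding,
        image_pe=sam.prompt_encoder.get_dense_pe(),
        sparse_prompt_embeddings=sparse,
        dense_prompt_embeddings=dense,
        multimask_output=False,
    )
    logits = F.interpolate(low_res_logits, size=gt_mask.shape[-2:], mode="bilinear")
    loss = F.binary_cross_entropy_with_logits(logits, gt_mask.float())
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()
```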

AI in motion: the impact of data augmentation strategies on mitigating MRI motion artifacts.

Westfechtel SD, Kußmann K, Aßmann C, Huppertz MS, Siepmann RM, Lemainque T, Winter VR, Barabasch A, Kuhl CK, Truhn D, Nebelung S

PubMed paper · May 17, 2025
Artifacts in clinical MRI can compromise the performance of AI models. This study evaluates how different data augmentation strategies affect an AI model's segmentation performance under variable artifact severity. We used an AI model based on the nnU-Net architecture to automatically quantify lower limb alignment using axial T2-weighted MR images. Three versions of the AI model were trained with different augmentation strategies: (1) no augmentation ("baseline"), (2) standard nnU-Net augmentations ("default"), and (3) "default" plus augmentations that emulate MR artifacts ("MRI-specific"). Model performance was tested on 600 MR image stacks (right and left; hip, knee, and ankle) from 20 healthy participants (mean age, 23 ± 3 years, 17 men), each imaged five times under standardized motion to induce artifacts. Two radiologists graded each stack's artifact severity as none, mild, moderate, or severe, and manually measured torsional angles. Segmentation quality was assessed using the Dice similarity coefficient (DSC), while torsional angles were compared between manual and automatic measurements using mean absolute deviation (MAD), intraclass correlation coefficient (ICC), and Pearson's correlation coefficient (r). Statistical analysis included parametric tests and a linear mixed-effects model. MRI-specific augmentation resulted in slightly (yet not significantly) better performance than the default strategy. Segmentation quality decreased with increasing artifact severity, which was partially mitigated by default and MRI-specific augmentations (e.g., severe artifacts, proximal femur: DSC_baseline = 0.58 ± 0.22; DSC_default = 0.72 ± 0.22; DSC_MRI-specific = 0.79 ± 0.14 [p < 0.001]). These augmentations also maintained precise torsional angle measurements (e.g., severe artifacts, femoral torsion: MAD_baseline = 20.6 ± 23.5°; MAD_default = 7.0 ± 13.0°; MAD_MRI-specific = 5.7 ± 9.5° [p < 0.001]; ICC_baseline = -0.10 [p = 0.63; 95% CI: -0.61 to 0.47]; ICC_default = 0.38 [p = 0.08; -0.17 to 0.76]; ICC_MRI-specific = 0.86 [p < 0.001; 0.62 to 0.95]; r_baseline = 0.58 [p < 0.001; 0.44 to 0.69]; r_default = 0.68 [p < 0.001; 0.56 to 0.77]; r_MRI-specific = 0.86 [p < 0.001; 0.81 to 0.90]). Motion artifacts negatively impact AI models, but general-purpose augmentations enhance robustness effectively. MRI-specific augmentations offer minimal additional benefit.
Question: Motion artifacts negatively impact the performance of diagnostic AI models for MRI, but mitigation methods remain largely unexplored.
Findings: Domain-specific augmentation during training can improve the robustness and performance of a model for quantifying lower limb alignment in the presence of severe artifacts.
Clinical relevance: Excellent robustness and accuracy are crucial for deploying diagnostic AI models in clinical practice. Including domain knowledge in model training can benefit clinical adoption.
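
Libraries such as TorchIO ship k-space-based transforms that emulate MR artifacts of this kind. A sketch of what an "MRI-specific" augmentation pipeline might look like (parameter values and the file name are illustrative, not the study's settings):

```python
import torchio as tio

mri_specific = tio.Compose([
    tio.RandomMotion(degrees=10, translation=10, num_transforms=2, p=0.5),
    tio.RandomGhosting(num_ghosts=(2, 6), p=0.3),
    tio.RandomSpike(num_spikes=1, p=0.2),
    tio.RandomBiasField(coefficients=0.5, p=0.3),
])

subject = tio.Subject(mr=tio.ScalarImage("axial_t2.nii.gz"))  # hypothetical file
augmented = mri_specific(subject)  # applied on the fly during training
```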

A Robust Automated Segmentation Method for White Matter Hyperintensity of Vascular-origin.

He H, Jiang J, Peng S, He C, Sun T, Fan F, Song H, Sun D, Xu Z, Wu S, Lu D, Zhang J

PubMed paper · May 17, 2025
White matter hyperintensity (WMH) is a primary manifestation of small vessel disease (SVD), leading to vascular cognitive impairment and other disorders. Accurate WMH quantification is vital for diagnosis and prognosis, but current automatic segmentation methods often fall short, especially across different datasets. The aims of this study are to develop and validate a robust deep learning segmentation method for WMH of vascular-origin. In this study, we developed a transformer-based method for the automatic segmentation of vascular-origin WMH using both 3D T1 and 3D T2-FLAIR images. Our initial dataset comprised 126 participants with varying WMH burdens due to SVD, each with manually segmented WMH masks used for training and testing. External validation was performed on two independent datasets: the WMH Segmentation Challenge 2017 dataset (170 subjects) and an in-house vascular risk factor dataset (70 subjects), which included scans acquired on eight different MRI systems at field strengths of 1.5T, 3T, and 5T. This approach enabled a comprehensive assessment of the method's generalizability across diverse imaging conditions. We further compared our method against LGA, LPA, BIANCA, UBO-detector and TrUE-Net in optimized settings. Our method consistently outperformed others, achieving a median Dice coefficient of 0.78±0.09 in our primary dataset, 0.72±0.15 in the external dataset 1, and 0.72±0.14 in the external dataset 2. The relative volume errors were 0.15±0.14, 0.50±0.86, and 0.47±1.02, respectively. The true positive rates were 0.81±0.13, 0.92±0.09, and 0.92±0.12, while the false positive rates were 0.20±0.09, 0.40±0.18, and 0.40±0.19. None of the external validation datasets were used for model training; instead, they comprise previously unseen MRI scans acquired from different scanners and protocols. This setup closely reflects real-world clinical scenarios and further demonstrates the robustness and generalizability of our model across diverse MRI systems and acquisition settings. As such, the proposed method provides a reliable solution for WMH segmentation in large-scale cohort studies.
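
For context, the voxel-wise metrics reported above are commonly computed as follows (definitions assumed; the paper may use lesion-wise variants, particularly for the false positive rate):

```python
import numpy as np

def wmh_metrics(pred, gt):
    """pred, gt: binary 3D masks as numpy arrays."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    dice = 2 * tp / (pred.sum() + gt.sum())
    rve = abs(int(pred.sum()) - int(gt.sum())) / gt.sum()  # relative volume error
    tpr = tp / (tp + fn)
    fpr = fp / pred.sum()  # one common convention; definitions vary
    return dice, rve, tpr, fpr
```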

Fully Automated Evaluation of Condylar Remodeling after Orthognathic Surgery in Skeletal Class II Patients Using Deep Learning and Landmarks.

Jia W, Wu H, Mei L, Wu J, Wang M, Cui Z

PubMed paper · May 17, 2025
Condylar remodeling is a key prognostic indicator in maxillofacial surgery for skeletal class II patients. This study aimed to develop and validate a fully automated method leveraging landmark-guided segmentation and registration for efficient assessment of condylar remodeling. A V-Net-based deep learning workflow was developed to automatically segment the mandible and localize anatomical landmarks from CT images. Cutting planes were computed based on the landmarks to segment the condylar and ramus volumes from the mandible mask. The stable ramus served as a reference for registering pre- and post-operative condyles using the Iterative Closest Point (ICP) algorithm. Condylar remodeling was subsequently assessed through mesh registration, heatmap visualization, and quantitative metrics of surface distance and volumetric change. Experts also rated the concordance between automated assessments and clinical diagnoses. In the test set, condylar segmentation achieved a Dice coefficient of 0.98, and landmark prediction yielded a mean absolute error of 0.26 mm. The automated evaluation process was completed in 5.22 seconds, approximately 150 times faster than manual assessments. The method accurately quantified condylar volume changes, ranging from 2.74% to 50.67% across patients. Expert ratings for all test cases averaged 9.62. This study introduced a consistent, accurate, and fully automated approach for condylar remodeling evaluation. The well-defined anatomical landmarks guided precise segmentation and registration, while deep learning supported an end-to-end automated workflow. The test results demonstrated its broad clinical applicability across various degrees of condylar remodeling and high concordance with expert assessments. By integrating anatomical landmarks and deep learning, the proposed method improves efficiency by 150 times without compromising accuracy, thereby facilitating an efficient and accurate assessment of orthognathic prognosis. The personalized 3D condylar remodeling models aid in visualizing sequelae, such as joint pain or skeletal relapse, and guide individualized management of TMJ disorders.
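
The ramus-referenced rigid alignment step can be sketched with Open3D's ICP implementation (file names and the correspondence threshold are placeholders, not the authors' pipeline):

```python
import numpy as np
import open3d as o3d

# Hypothetical inputs: ramus and condyle surfaces extracted from the masks.
pre_ramus = o3d.io.read_point_cloud("pre_op_ramus.ply")
post_ramus = o3d.io.read_point_cloud("post_op_ramus.ply")

result = o3d.pipelines.registration.registration_icp(
    post_ramus, pre_ramus,
    max_correspondence_distance=2.0,  # mm; placeholder threshold
    init=np.eye(4),
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(),
)

# Carry the ramus-derived rigid transform over to the post-op condyle, then
# compare pre- vs. post-op condylar surfaces (distances, volume change).
post_condyle = o3d.io.read_point_cloud("post_op_condyle.ply")
post_condyle.transform(result.transformation)
```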

MedVKAN: Efficient Feature Extraction with Mamba and KAN for Medical Image Segmentation

Hancan Zhu, Jinhao Chen, Guanghua He

arXiv preprint · May 17, 2025
Medical image segmentation relies heavily on convolutional neural networks (CNNs) and Transformer-based models. However, CNNs are constrained by limited receptive fields, while Transformers suffer from scalability challenges due to their quadratic computational complexity. To address these limitations, recent advances have explored alternative architectures. The state-space model Mamba offers near-linear complexity while capturing long-range dependencies, and the Kolmogorov-Arnold Network (KAN) enhances nonlinear expressiveness by replacing fixed activation functions with learnable ones. Building on these strengths, we propose MedVKAN, an efficient feature extraction model integrating Mamba and KAN. Specifically, we introduce the EFC-KAN module, which enhances KAN with convolutional operations to improve local pixel interaction. We further design the VKAN module, integrating Mamba with EFC-KAN as a replacement for Transformer modules, significantly improving feature extraction. Extensive experiments on five public medical image segmentation datasets show that MedVKAN achieves state-of-the-art performance on four datasets and ranks second on the remaining one. These results validate the potential of Mamba and KAN for medical image segmentation while introducing an innovative and computationally efficient feature extraction framework. The code is available at: https://github.com/beginner-cjh/MedVKAN.
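
As a toy illustration of KAN's core idea, replacing a fixed activation with a learnable one (real KAN layers use per-edge B-splines; this simplified per-channel sine basis is only for intuition):

```python
import torch
import torch.nn as nn

class LearnableActivation(nn.Module):
    """Per-channel learnable activation: x -> x + sum_k c_k * sin(f_k * x)."""
    def __init__(self, dim, n_basis=8):
        super().__init__()
        self.freq = nn.Parameter(torch.arange(1, n_basis + 1).float().repeat(dim, 1))
        self.coef = nn.Parameter(torch.randn(dim, n_basis) * 0.1)

    def forward(self, x):                       # x: (..., dim)
        basis = torch.sin(x.unsqueeze(-1) * self.freq)
        return x + (basis * self.coef).sum(-1)  # residual keeps training stable
```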