
Vector Representations of Vessel Trees

James Batten, Michiel Schaap, Matthew Sinclair, Ying Bai, Ben Glocker

arxiv logopreprintJun 11 2025
We introduce a novel framework for learning vector representations of tree-structured geometric data, focusing on 3D vascular networks. Our approach employs two sequentially trained Transformer-based autoencoders. In the first stage, the Vessel Autoencoder captures continuous geometric details of individual vessel segments by learning embeddings from sampled points along each curve. In the second stage, the Vessel Tree Autoencoder encodes the topology of the vascular network as a single vector representation, leveraging the segment-level embeddings from the first model. A recursive decoding process ensures that the reconstructed topology is a valid tree structure. Compared to 3D convolutional models, the proposed approach substantially lowers GPU memory requirements, facilitating large-scale training. Experimental results on a 2D synthetic tree dataset and a 3D coronary artery dataset demonstrate superior reconstruction fidelity, accurate topology preservation, and realistic interpolations in latent space. Our scalable framework, named VeTTA, offers precise, flexible, and topologically consistent modeling of anatomical tree structures in medical imaging.

Evaluation of Semi-Automated versus Fully Automated Technologies for Computed Tomography Scalable Body Composition Analyses in Patients with Severe Acute Respiratory Syndrome Coronavirus-2.

Wozniak A, O'Connor P, Seigal J, Vasilopoulos V, Beg MF, Popuri K, Joyce C, Sheean P

pubmed logopapersJun 11 2025
Fully automated, artificial intelligence (AI)-based software has recently become available for scalable body composition analysis. Prior to broad application in the clinical arena, validation studies are needed. Our goal was to compare the results of a fully automated, AI-based software with those of a semi-automated software in a sample of hospitalized patients. A diverse group of patients with coronavirus disease 2019 (COVID-19) and evaluable computed tomography (CT) images were included in this retrospective cohort, and multiple aspects of body composition were compared between the fully automated and semi-automated software. Bland-Altman analyses and correlation coefficients were used to calculate the average bias and trend of bias for skeletal muscle (SM), visceral adipose tissue (VAT), subcutaneous adipose tissue (SAT), intermuscular adipose tissue (IMAT), and total adipose tissue (TAT, the sum of SAT, VAT, and IMAT). A total of 141 patients (average (standard deviation (SD)) age of 58.2 (18.9) years; 61% male; 31% White Non-Hispanic, 31% Black Non-Hispanic, and 33% Hispanic) contributed to the analysis. Average bias (mean ± SD) was small (relative to the SD) and negative for SM (-3.79 cm<sup>2</sup> ± 7.56 cm<sup>2</sup>) and SAT (-7.06 cm<sup>2</sup> ± 19.77 cm<sup>2</sup>), and small and positive for VAT (2.29 cm<sup>2</sup> ± 15.54 cm<sup>2</sup>). A large negative bias was observed for IMAT (-7.77 cm<sup>2</sup> ± 5.09 cm<sup>2</sup>), where the fully automated software underestimated intermuscular tissue quantity relative to the semi-automated software. The discrepancy in IMAT calculation was not uniform across its range, given a correlation coefficient of -0.625; as average IMAT increased, the bias (underestimation by the fully automated software) grew. When compared to a semi-automated software, a fully automated, AI-based software provides consistent findings for key CT body composition measures (SM, SAT, VAT, TAT).
While our findings support good overall agreement as evidenced by small biases and limited outliers, additional studies are needed in other clinical populations to further support validity and advanced precision, especially in the context of body composition and malnutrition assessment.
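The Bland-Altman quantities the study reports (average bias, and the bias-vs-mean correlation used to show that IMAT disagreement grows with IMAT size) can be sketched in a few lines; this is an illustrative helper, not the study's analysis code.

```python
# Minimal Bland-Altman sketch: bias, 95% limits of agreement, and the
# correlation between per-case means and differences (the "trend of bias").
import numpy as np

def bland_altman(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b                     # e.g. fully automated minus semi-automated
    mean = (a + b) / 2.0
    bias = diff.mean()
    sd = diff.std(ddof=1)
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)   # 95% limits of agreement
    r = np.corrcoef(mean, diff)[0, 1]            # trend of bias across the range
    return bias, loa, r

# Toy areas (cm^2): the automated tool reads systematically lower, and the
# underestimation grows with tissue size, giving a strongly negative r.
auto = [100.0, 120.0, 140.0, 160.0]
semi = [104.0, 126.0, 148.0, 170.0]
bias, loa, r = bland_altman(auto, semi)
```

A negative `r`, as in the reported IMAT coefficient of -0.625, indicates that the disagreement is proportional rather than constant across the measurement range.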

Automated Segmentation of Thoracic Aortic Lumen and Vessel Wall on 3D Bright- and Black-Blood MRI using nnU-Net.

Cesario M, Littlewood SJ, Nadel J, Fletcher TJ, Fotaki A, Castillo-Passi C, Hajhosseiny R, Pouliopoulos J, Jabbour A, Olivero R, Rodríguez-Palomares J, Kooi ME, Prieto C, Botnar RM

pubmed logopapersJun 11 2025
Magnetic resonance angiography (MRA) is an important tool for aortic assessment in several cardiovascular diseases. Assessment of MRA images relies on manual segmentation, a time-intensive process that is subject to operator variability. We aimed to optimize and validate two deep-learning models for automatic segmentation of the aortic lumen and vessel wall in high-resolution, ECG-triggered, free-breathing, respiratory motion-corrected 3D bright- and black-blood MRA images. Manual segmentation, serving as the ground truth, was performed on 25 bright-blood and 15 black-blood 3D MRA image sets acquired with the iT2PrepIR-BOOST sequence (1.5T) in thoracic aortopathy patients. Training was performed with nnU-Net for bright-blood (lumen) and black-blood image sets (lumen and vessel wall), using a 70:20:10 training:validation:testing split. Inference was run on datasets (single vendor) from different centres (UK, Spain, and Australia), sequences (iT2PrepIR-BOOST, T2-prepared CMRA, and TWIST MRA), acquired resolutions (from 0.9 mm<sup>3</sup> to 3 mm<sup>3</sup>), and field strengths (0.55T, 1.5T, and 3T). Predictive measurements comprised the Dice Similarity Coefficient (DSC) and Intersection over Union (IoU). Postprocessing (3D Slicer) included centreline extraction, diameter measurement, and curved planar reformatting (CPR). The optimal configuration was the 3D U-Net. Bright-blood segmentation at 1.5T on iT2PrepIR-BOOST datasets (1.3 and 1.8 mm<sup>3</sup>) and 3D CMRA datasets (0.9 mm<sup>3</sup>) resulted in DSC ≥ 0.96 and IoU ≥ 0.92. For bright-blood segmentation on 3D CMRA at 0.55T, the nnU-Net achieved DSC and IoU scores of 0.93 and 0.88 at 1.5 mm<sup>3</sup>, and 0.68 and 0.52 at 3.0 mm<sup>3</sup>, respectively. DSC and IoU scores of 0.89 and 0.82 were obtained for CMRA image sets (1 mm<sup>3</sup>) at 1.5T (Barcelona dataset). DSC and IoU scores of the BRnnUNet model were 0.90 and 0.82, respectively, for the contrast-enhanced dataset (TWIST MRA).
Lumen segmentation on black-blood 1.5T iT2PrepIR-BOOST image sets achieved DSC ≥ 0.95 and IoU ≥ 0.90, and vessel wall segmentation resulted in DSC ≥ 0.80 and IoU ≥ 0.67. Automated centreline tracking, diameter measurement, and CPR were successfully implemented in all subjects. Automated aortic lumen and wall segmentation on 3D bright- and black-blood image sets demonstrated excellent agreement with the ground truth. This technique enables fast and comprehensive assessment of aortic morphology with great potential for future clinical application in various cardiovascular diseases.
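The two overlap metrics reported throughout this abstract (DSC and IoU) are straightforward to compute on binary masks; the sketch below is illustrative, not the paper's evaluation code.

```python
# Dice Similarity Coefficient and Intersection over Union for binary masks.
import numpy as np

def dice_and_iou(pred, gt):
    pred, gt = np.asarray(pred, bool), np.asarray(gt, bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    dice = 2.0 * inter / (pred.sum() + gt.sum())
    iou = inter / union
    return dice, iou

pred = np.array([[1, 1, 0], [0, 1, 0]])
gt   = np.array([[1, 1, 0], [1, 1, 0]])
dice, iou = dice_and_iou(pred, gt)   # inter=3, union=4 -> DSC 6/7, IoU 3/4
```

Since DSC = 2·IoU/(1+IoU), DSC is always at least as large as IoU, which is consistent with every DSC/IoU pair reported above.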

Implementation of biomedical segmentation for brain tumor utilizing an adapted U-net model.

Alkhalid FF, Salih NZ

pubmed logopapersJun 11 2025
Magnetic resonance imaging (MRI) is a medical procedure that uses radio-frequency signals in a magnetic field to produce images carrying more information than typical scans. Diagnosing brain tumors from MRI is difficult because of the wide range of tumor shapes, locations, and visual features, so a universal, automated system to handle this task is required. Among deep learning methods, the U-Net architecture is the most widely used for diagnostic medical images, and attention-based U-Net variants are highly effective automated models for medical image segmentation across various modalities. The self-attention structures used in the U-Net design allow for fast global context aggregation and better feature visualization. This research aims to study the progress of U-Net designs and show how they improve the performance of brain tumor segmentation. We investigated three U-Net designs (standard U-Net, Attention U-Net, and self-attention U-Net), each trained for five epochs, to obtain the final segmentation. An MRI dataset of 3064 images from the Kaggle website is used to give a more comprehensive overview. We also offer a comparison with several studies based on U-Net structures to illustrate the evolution of this network from an accuracy standpoint. The self-attention U-Net demonstrated superior performance compared to other studies, because self-attention can enhance segmentation quality, particularly for unclear structures, by concentrating on the most significant parts. Four main metrics are reported: a loss of 5.03 %, a validation loss of 4.82 %, a validation accuracy of 98.49 %, and an accuracy of 98.45 %.
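The self-attention operation these U-Net variants build on is scaled dot-product attention over a flattened feature map; the sketch below shows the core computation with toy shapes and randomly initialized weights (assumptions for illustration, not the paper's implementation).

```python
# Scaled dot-product self-attention over a token sequence: every token
# attends to every other, giving the global context that local convolutions
# in a plain U-Net lack.
import numpy as np

def self_attention(x, wq, wk, wv):
    """x: (tokens, d) -- a feature map flattened to a token sequence."""
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])          # pairwise similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over tokens
    return weights @ v                               # globally mixed features

rng = np.random.default_rng(0)
x = rng.standard_normal((6, 4))                      # 6 tokens, 4 channels
wq, wk, wv = (rng.standard_normal((4, 4)) for _ in range(3))
out = self_attention(x, wq, wk, wv)                  # same shape as x
```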

A machine learning approach for personalized breast radiation dosimetry in CT: Integrating radiomics and deep neural networks.

Tzanis E, Stratakis J, Damilakis J

pubmed logopapersJun 11 2025
To develop a machine learning-based workflow for patient-specific breast radiation dosimetry in CT. Two hundred eighty-six chest CT examinations, with corresponding right and left breast contours, were retrospectively collected from the radiotherapy department at our institution to develop and validate breast segmentation U-Nets. Additionally, Monte Carlo simulations were performed for each CT scan to determine radiation doses to the breasts. The derived breast doses, along with predictors such as X-ray tube current and radiomic features, were then used to train deep neural networks (DNNs) for breast dose prediction. The breast segmentation models achieved a mean Dice similarity coefficient of 0.92, with precision and sensitivity scores above 0.90 for both breasts, indicating high segmentation accuracy. The DNNs demonstrated close alignment with ground truth values, with mean predicted doses of 5.05 ± 0.50 mGy for the right breast and 5.06 ± 0.55 mGy for the left breast, compared to ground truth values of 5.03 ± 0.57 mGy and 5.02 ± 0.61 mGy, respectively. The mean absolute percentage errors were 4.01 % (range: 3.90 %-4.12 %) for the right breast and 4.82 % (range: 4.56 %-5.11 %) for the left breast. The mean inference time was 30.2 ± 4.3 s. Statistical analysis showed no significant differences between predicted and actual doses (p ≥ 0.07). This study presents an automated, machine learning-based workflow for breast radiation dosimetry in CT, integrating segmentation and dose prediction models. The models and code are available at: https://github.com/eltzanis/ML-based-Breast-Radiation-Dosimetry-in-CT.
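The 4-5 % errors quoted above are mean absolute percentage errors (MAPE) between predicted and Monte Carlo doses; a minimal sketch of the metric (illustrative, with toy dose values, not the study's code):

```python
# Mean absolute percentage error between predicted and reference doses.
import numpy as np

def mape(pred, truth):
    pred, truth = np.asarray(pred, float), np.asarray(truth, float)
    return 100.0 * np.mean(np.abs(pred - truth) / np.abs(truth))

pred  = [5.1, 4.9, 5.2]    # toy predicted breast doses (mGy)
truth = [5.0, 5.0, 5.0]    # toy Monte Carlo ground truth (mGy)
err = mape(pred, truth)    # per-case errors 2 %, 2 %, 4 % -> mean 8/3 %
```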

A Multi-Resolution Hybrid CNN-Transformer Network With Scale-Guided Attention for Medical Image Segmentation.

Zhu S, Li Y, Dai X, Mao T, Wei L, Yan Y

pubmed logopapersJun 11 2025
Medical image segmentation remains a challenging task due to the intricate nature of anatomical structures and the wide range of target sizes. In this paper, we propose a novel U-shaped segmentation network that integrates CNN and Transformer architectures to address these challenges. Specifically, our network architecture consists of three main components. In the encoder, we integrate an attention-guided multi-scale feature extraction module with a dual-path downsampling block to learn hierarchical features. The decoder employs an advanced feature aggregation and fusion module that effectively models interdependencies across different hierarchical levels. For the bottleneck, we explore multi-scale feature activation and multi-layer context Transformer modules to facilitate high-level semantic feature learning and global context modeling. Additionally, we implement a multi-resolution input-output strategy throughout the network to enrich feature representations and ensure fine-grained segmentation outputs across different scales. The experimental results on diverse multi-modal medical image datasets (ultrasound, gastrointestinal polyp, MR, and CT images) demonstrate that our approach can achieve superior performance over state-of-the-art methods in both quantitative measurements and qualitative assessments. The code is available at https://github.com/zsj0577/MSAGHNet.

DCD: A Semantic Segmentation Model for Fetal Ultrasound Four-Chamber View

Donglian Li, Hui Guo, Minglang Chen, Huizhen Chen, Jialing Chen, Bocheng Liang, Pengchen Liang, Ying Tan

arxiv logopreprintJun 10 2025
Accurate segmentation of anatomical structures in the apical four-chamber (A4C) view of fetal echocardiography is essential for early diagnosis and prenatal evaluation of congenital heart disease (CHD). However, precise segmentation remains challenging due to ultrasound artifacts, speckle noise, anatomical variability, and boundary ambiguity across different gestational stages. To reduce the workload of sonographers and enhance segmentation accuracy, we propose DCD, an advanced deep learning-based model for automatic segmentation of key anatomical structures in the fetal A4C view. Our model incorporates a Dense Atrous Spatial Pyramid Pooling (Dense ASPP) module, enabling superior multi-scale feature extraction, and a Convolutional Block Attention Module (CBAM) to enhance adaptive feature representation. By effectively capturing both local and global contextual information, DCD achieves precise and robust segmentation, contributing to improved prenatal cardiac assessment.
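ASPP-style modules like the Dense ASPP used here stack atrous (dilated) convolutions at several rates to capture multi-scale context; the 1D sketch below illustrates the mechanism only (names and shapes are assumptions, not the DCD implementation).

```python
# A 'same'-padded 1D dilated convolution: taps are `rate` samples apart, so
# the receptive field widens without adding parameters -- the idea ASPP
# exploits by running the same kernel at several rates in parallel.
import numpy as np

def dilated_conv1d(x, kernel, rate):
    k = len(kernel)
    pad = rate * (k - 1) // 2
    xp = np.pad(np.asarray(x, float), pad)
    return np.array([
        sum(kernel[j] * xp[i + j * rate] for j in range(k))
        for i in range(len(x))
    ])

signal = np.arange(8, dtype=float)
fine   = dilated_conv1d(signal, [1.0, 1.0, 1.0], rate=1)  # 3-tap local sum
coarse = dilated_conv1d(signal, [1.0, 1.0, 1.0], rate=3)  # same kernel, wider view
```

Concatenating such outputs across rates gives each position both fine local detail and broad context, which is what makes the module useful for structures of varying size in the A4C view.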

Adapting Vision-Language Foundation Model for Next Generation Medical Ultrasound Image Analysis

Jingguo Qu, Xinyang Han, Tonghuan Xiao, Jia Ai, Juan Wu, Tong Zhao, Jing Qin, Ann Dorothy King, Winnie Chiu-Wing Chu, Jing Cai, Michael Tin-Cheung Ying

arxiv logopreprintJun 10 2025
Medical ultrasonography is an essential imaging technique for examining superficial organs and tissues, including lymph nodes, breast, and thyroid. It employs high-frequency ultrasound waves to generate detailed images of the internal structures of the human body. However, manually contouring regions of interest in these images is a labor-intensive task that demands expertise and often results in inconsistent interpretations among individuals. Vision-language foundation models, which have excelled in various computer vision applications, present new opportunities for enhancing ultrasound image analysis. Yet, their performance is hindered by the significant differences between natural and medical imaging domains. This research seeks to overcome these challenges by developing domain adaptation methods for vision-language foundation models. In this study, we explore a fine-tuning pipeline for vision-language foundation models that uses a large language model as a text refiner, together with specially designed adaptation strategies and task-driven heads. Our approach has been extensively evaluated on six ultrasound datasets and two tasks: segmentation and classification. The experimental results show that our method can effectively improve the performance of vision-language foundation models for ultrasound image analysis and outperforms existing state-of-the-art vision-language and pure foundation models. The source code of this study is available at https://github.com/jinggqu/NextGen-UIA.
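The core mechanism a vision-language model brings to classification is matching an image embedding against per-class text embeddings by cosine similarity; the sketch below uses toy vectors and hypothetical prompt strings (not the paper's model or prompts) to show the idea.

```python
# CLIP-style zero-shot classification: normalize the image and text
# embeddings, score each class prompt by cosine similarity, pick the argmax.
import numpy as np

def zero_shot_classify(image_emb, text_embs):
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    sims = txt @ img                 # cosine similarity per class prompt
    return int(np.argmax(sims)), sims

image = np.array([0.9, 0.1, 0.2])                 # toy image embedding
prompts = np.array([[1.0, 0.0, 0.0],              # e.g. "a benign nodule"
                    [0.0, 1.0, 0.0]])             # e.g. "a malignant nodule"
label, sims = zero_shot_classify(image, prompts)
```

Domain adaptation methods like those in this paper refine the text side (here, via a large language model) and the adaptation layers so that medical images and their descriptions land close together in this shared space.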

SSS: Semi-Supervised SAM-2 with Efficient Prompting for Medical Imaging Segmentation

Hongjie Zhu, Xiwei Liu, Rundong Xue, Zeyu Zhang, Yong Xu, Daji Ergu, Ying Cai, Yang Zhao

arxiv logopreprintJun 10 2025
In the era of information explosion, efficiently leveraging large-scale unlabeled data while minimizing the reliance on high-quality pixel-level annotations remains a critical challenge in the field of medical imaging. Semi-supervised learning (SSL) enhances the utilization of unlabeled data by facilitating knowledge transfer, significantly improving the performance of fully supervised models and emerging as a highly promising research direction in medical image analysis. Inspired by the ability of Vision Foundation Models (e.g., SAM-2) to provide rich prior knowledge, we propose SSS (Semi-Supervised SAM-2), a novel approach that leverages SAM-2's robust feature extraction capabilities to uncover latent knowledge in unlabeled medical images, thus effectively enhancing feature support for fully supervised medical image segmentation. Specifically, building upon the single-stream "weak-to-strong" consistency regularization framework, this paper introduces a Discriminative Feature Enhancement (DFE) mechanism to further explore the feature discrepancies introduced by various data augmentation strategies across multiple views. By leveraging feature similarity and dissimilarity across multi-scale augmentation techniques, the method reconstructs and models the features, thereby effectively optimizing the salient regions. Furthermore, a prompt generator is developed that integrates Physical Constraints with a Sliding Window (PCSW) mechanism to generate input prompts for unlabeled data, fulfilling SAM-2's requirement for additional prompts. Extensive experiments demonstrate the superiority of the proposed method for semi-supervised medical image segmentation on two multi-label datasets, i.e., ACDC and BHSD. Notably, SSS achieves an average Dice score of 53.15 on BHSD, surpassing the previous state-of-the-art method by +3.65 Dice. Code will be available at https://github.com/AIGeeksGroup/SSS.
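The weak-to-strong consistency idea at the base of SSS can be sketched on a single unlabeled example: the weakly augmented view yields a confident pseudo-label that supervises the strongly augmented view (the function below is an illustrative simplification, not the SSS code).

```python
# Weak-to-strong consistency: pseudo-labels from the weak view supervise the
# strong view via cross-entropy, keeping only confident predictions.
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def consistency_loss(weak_logits, strong_logits, threshold=0.9):
    probs = softmax(weak_logits)
    conf = probs.max(axis=-1)
    pseudo = probs.argmax(axis=-1)
    mask = conf >= threshold                     # keep only confident pixels
    if not mask.any():
        return 0.0
    p_strong = softmax(strong_logits)
    ce = -np.log(p_strong[mask, pseudo[mask]] + 1e-12)
    return float(ce.mean())

weak   = np.array([[4.0, 0.0], [0.1, 0.0]])      # 2 "pixels", 2 classes
strong = np.array([[3.0, 0.5], [0.0, 0.1]])      # noisier strong-view logits
loss = consistency_loss(weak, strong)            # second pixel is masked out
```

On top of this regularization, SSS's DFE module contrasts features across augmented views, and the PCSW prompt generator supplies SAM-2 with the prompts it requires for unlabeled images.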
