
Deep learning-based segmentation of T1 and T2 cardiac MRI maps for automated disease detection

Andreea Bianca Popescu, Andreas Seitz, Heiko Mahrholdt, Jens Wetzl, Athira Jacob, Lucian Mihai Itu, Constantin Suciu, Teodora Chitiboi

arXiv preprint · Jul 1, 2025
Objectives: Parametric tissue mapping enables quantitative cardiac tissue characterization but is limited by inter-observer variability during manual delineation. Traditional approaches relying on average relaxation values and single cutoffs may oversimplify myocardial complexity. This study evaluates whether deep learning (DL) can achieve segmentation accuracy comparable to inter-observer variability, explores the utility of statistical features beyond mean T1/T2 values, and assesses whether machine learning (ML) combining multiple features enhances disease detection.

Materials & Methods: T1 and T2 maps were manually segmented. The test subset was independently annotated by two observers, and inter-observer variability was assessed. A DL model was trained to segment the left ventricle blood pool and myocardium. The average (A), lower quartile (LQ), median (M), and upper quartile (UQ) were computed over the myocardial pixels and employed in classification, either by applying cutoffs or as ML inputs. The Dice similarity coefficient (DICE) and mean absolute percentage error evaluated segmentation performance. Bland-Altman plots assessed inter-user and model-observer agreement. Receiver operating characteristic analysis determined optimal cutoffs. Pearson correlation compared features from model and manual segmentations. F1-score, precision, and recall evaluated classification performance. The Wilcoxon test assessed differences between classification methods, with p < 0.05 considered statistically significant.

Results: 144 subjects were split into training (100), validation (15), and evaluation (29) subsets. The segmentation model achieved a DICE of 85.4%, surpassing inter-observer agreement. A random forest applied to all features increased the F1-score (92.7%, p < 0.001).

Conclusion: DL facilitates segmentation of T1/T2 maps. Combining multiple features with ML improves disease detection.
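To make the feature-and-cutoff pipeline concrete, here is a minimal sketch of the quartile-feature extraction and random-forest classification step in Python. All names and the synthetic data are hypothetical; this is not the authors' code.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def myocardial_features(t1_map, t2_map, myo_mask):
    """Average (A), lower quartile (LQ), median (M), and upper quartile (UQ)
    of T1/T2 values over the segmented myocardial pixels."""
    feats = []
    for pmap in (t1_map, t2_map):
        px = pmap[myo_mask > 0]
        feats += [px.mean(), np.percentile(px, 25),
                  np.median(px), np.percentile(px, 75)]
    return np.array(feats)

# Toy example: 20 synthetic cases with binary disease labels.
rng = np.random.default_rng(0)
X = np.stack([
    myocardial_features(rng.normal(1000, 50, (64, 64)),  # fake T1 map (ms)
                        rng.normal(45, 5, (64, 64)),     # fake T2 map (ms)
                        rng.integers(0, 2, (64, 64)))    # fake myocardium mask
    for _ in range(20)])
y = rng.integers(0, 2, 20)

# A single-cutoff classifier would threshold X[:, 0] (mean T1) alone;
# the random forest instead combines all eight features.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
```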

DMCIE: Diffusion Model with Concatenation of Inputs and Errors to Improve the Accuracy of the Segmentation of Brain Tumors in MRI Images

Sara Yavari, Rahul Nitin Pandya, Jacob Furst

arXiv preprint · Jul 1, 2025
Accurate segmentation of brain tumors in MRI scans is essential for reliable clinical diagnosis and effective treatment planning. Recently, diffusion models have demonstrated remarkable effectiveness in image generation and segmentation tasks. This paper introduces DMCIE (Diffusion Model with Concatenation of Inputs and Errors), a novel corrective-segmentation framework for accurate brain tumor segmentation in multi-modal MRI scans. We employ a 3D U-Net to generate an initial segmentation mask, from which an error map is derived by identifying the differences between the prediction and the ground truth. The error map, concatenated with the original MRI images, is used to guide a diffusion model. Using multimodal MRI inputs (T1, T1ce, T2, FLAIR), DMCIE effectively enhances segmentation accuracy by focusing on misclassified regions, guided by the original inputs. Evaluated on the BraTS2020 dataset, DMCIE outperforms several state-of-the-art diffusion-based segmentation methods, achieving a Dice score of 93.46 and an HD95 of 5.94 mm. These results highlight the effectiveness of error-guided diffusion in producing precise and reliable brain tumor segmentations.
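The core of the method, as described, is conditioning the diffusion model on the concatenation of the original inputs and an error map. A schematic PyTorch sketch of how that conditioning tensor might be assembled (`unet_3d` is a placeholder for the pretrained initial-segmentation network; the paper's exact conditioning details may differ):

```python
import torch

def build_dmcie_condition(mri, ground_truth, unet_3d):
    """mri: (B, 4, D, H, W) stack of T1, T1ce, T2, FLAIR volumes;
    ground_truth: (B, 1, D, H, W) binary tumor mask."""
    with torch.no_grad():
        initial_mask = (torch.sigmoid(unet_3d(mri)) > 0.5).float()
    # Error map: voxels where the initial prediction disagrees with GT.
    error_map = (initial_mask != ground_truth).float()
    # Concatenating inputs and errors along the channel axis lets the
    # diffusion model focus its corrections on misclassified regions.
    return torch.cat([mri, error_map], dim=1)  # (B, 5, D, H, W)
```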

Development and validation of AI-based automatic segmentation and measurement of thymus on chest CT scans.

Guo Y, Gong B, Jiang G, Du W, Dai S, Wan Q, Zhu D, Liu C, Li Y, Sun Q, Fan Q, Liang B, Yang L, Zheng C

PubMed paper · Jul 1, 2025
Due to the complex anatomical structure and dynamic involution process of the thymus, segmentation and evaluation of the thymus in medical imaging present significant challenges. The aim of this study is to develop a deep-learning tool, "Thy-uNET", for automatic segmentation and measurement of the thymus or thymic region on chest CT imaging, and to validate its performance with multicenter data. Using the segmentation and measurement results from two experts, Thy-uNET was trained on a training cohort (n = 500). The segmented regions include the thymus or thymic region, and seven features of the thymic region were measured. Automatic segmentation performance was assessed using Dice and Intersection over Union (IoU) on CT data from three test cohorts (n = 286). Spearman correlation analysis and the intraclass correlation coefficient (ICC) were used to evaluate the correlation and reliability of the automatic measurement results. Six radiologists with varying levels of experience participated in a reader study assessing the measurement performance of Thy-uNET and its ability to assist doctors. Thy-uNET demonstrated consistent segmentation performance across subgroups, with Dice = 0.83 on the internal test set and Dice = 0.82 on the external test sets. For automatic measurement of thymic features, Thy-uNET achieved high correlation coefficients and ICCs for key measurements (R = 0.829 and ICC = 0.841 for CT attenuation). Its performance was comparable to that of radiology residents and junior radiologists, with significantly shorter measurement time. Providing Thy-uNET measurements to readers reduced their measurement time and improved residents' performance on some thymic feature measurements. Thy-uNET can provide reliable automatic segmentation and measurement of the thymus or thymic region on routine CT, reducing time costs and improving the consistency of evaluations.
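For reference, the two overlap metrics used throughout this study can be computed from binary masks as follows (a generic sketch, not the study's implementation):

```python
import numpy as np

def dice_and_iou(pred, gt):
    """Dice coefficient and Intersection over Union for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    # Guard against empty masks to avoid division by zero.
    dice = 2 * inter / max(pred.sum() + gt.sum(), 1)
    iou = inter / max(union, 1)
    return dice, iou
```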

Patient-specific deep learning tracking for real-time 2D pancreas localisation in kV-guided radiotherapy.

Ahmed AM, Madden L, Stewart M, Chow BVY, Mylonas A, Brown R, Metz G, Shepherd M, Coronel C, Ambrose L, Turk A, Crispin M, Kneebone A, Hruby G, Keall P, Booth JT

PubMed paper · Jul 1, 2025
In pancreatic stereotactic body radiotherapy (SBRT), accurate motion management is crucial for the safe delivery of high doses per fraction. Intra-fraction tracking with magnetic resonance imaging guidance for gated SBRT has shown potential for improved local control. Visualisation of the pancreas (and surrounding organs) remains challenging in intra-fraction kilovoltage (kV) imaging, requiring implanted fiducials. In this study, we investigate patient-specific deep learning approaches to track the gross tumour volume (GTV), pancreas head, and whole pancreas in intra-fraction kV images. Conditional generative adversarial networks were trained and tested on data from 25 patients enrolled in an ethics-approved pancreatic SBRT trial for contour prediction on intra-fraction 2D kV images. Labelled digitally reconstructed radiographs (DRRs) were generated from contoured planning CTs (CT-DRRs) and cone-beam CTs (CBCT-DRRs). A population model was trained using CT-DRRs of 19 patients. Two patient-specific model types were created for six additional patients by fine-tuning the population model using CBCT-DRRs (CBCT-models) or CT-DRRs (CT-models) acquired in exhale breath-hold. Model predictions on unseen triggered kV images from the corresponding six patients were evaluated against projected contours using the Dice similarity coefficient (DSC), centroid error (CE), average Hausdorff distance (AHD), and Hausdorff distance at the 95th percentile (HD95). The mean ± 1 SD (standard deviation) DSCs were 0.86 ± 0.09 (CBCT-models) and 0.78 ± 0.12 (CT-models). For AHD and CE, the CBCT-models predicted contours within 2.0 mm ≥90.3% of the time, while HD95 was within 5.0 mm ≥90.0% of the time, with a prediction time of 29.2 ± 3.7 ms per contour. The patient-specific CBCT-models outperformed the CT-models and predicted the three contours with a 90th-percentile error ≤2.0 mm, indicating potential for clinical real-time application.
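The patient-specific adaptation described above amounts to warm-starting from the population model and continuing training on one patient's labelled DRRs. A simplified PyTorch sketch (supervised term only; the adversarial loss of the cGAN and all hyperparameters are omitted or hypothetical):

```python
import copy
import torch

def fine_tune_patient_model(population_model, patient_drr_loader,
                            epochs=50, lr=1e-4):
    """Adapt a population contour-prediction model to a single patient
    using that patient's labelled DRRs (CBCT-DRRs or CT-DRRs)."""
    model = copy.deepcopy(population_model)  # keep population weights intact
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        for drr, contour in patient_drr_loader:  # (image, binary contour mask)
            opt.zero_grad()
            loss = loss_fn(model(drr), contour)
            loss.backward()
            opt.step()
    return model
```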

Automated 3D segmentation of the hyoid bone in CBCT using nnU-Net v2: a retrospective study on model performance and potential clinical utility.

Gümüssoy I, Haylaz E, Duman SB, Kalabalik F, Say S, Celik O, Bayrakdar IS

PubMed paper · Jul 1, 2025
This study aimed to identify the hyoid bone (HB) in cone-beam computed tomography (CBCT) images using an nnU-Net-based artificial intelligence (AI) model and to assess the model's success in automatic segmentation. CBCT images of 190 patients were randomly selected. The raw data were converted to DICOM format and transferred to the 3D Slicer imaging software (version 4.10.2; MIT, Cambridge, MA, USA), where the HB was labeled manually. The dataset was divided into training, validation, and test sets in a ratio of 8:1:1. The nnU-Net v2 architecture was used to process the training and test datasets, generating the algorithm weight factors. To assess the model's accuracy and performance, a confusion matrix was employed, and the F1-score, Dice coefficient (DC), 95% Hausdorff distance (95% HD), and Intersection over Union (IoU) metrics were calculated. The model's performance metrics were as follows: DC = 0.9434, IoU = 0.8941, F1-score = 0.9446, and 95% HD = 1.9998. The receiver operating characteristic (ROC) curve was generated, yielding an AUC of 0.98. The results indicate that the nnU-Net v2 model achieved high precision and accuracy in HB segmentation on CBCT images. Automatic segmentation of the HB can enhance clinicians' speed and accuracy in diagnosing and treating various clinical conditions.
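Among the reported metrics, the 95% Hausdorff distance is the least standard to reproduce; a minimal SciPy-based sketch is shown below (not the study's implementation, and it assumes binary masks with known voxel spacing):

```python
import numpy as np
from scipy import ndimage
from scipy.spatial import cKDTree

def hd95(pred, gt, spacing=(1.0, 1.0, 1.0)):
    """95th-percentile symmetric Hausdorff distance between binary masks."""
    def surface_points(mask):
        mask = mask.astype(bool)
        border = mask & ~ndimage.binary_erosion(mask)   # surface voxels
        return np.argwhere(border) * np.asarray(spacing)
    p, g = surface_points(pred), surface_points(gt)
    d_pg = cKDTree(g).query(p)[0]  # pred surface -> nearest GT surface
    d_gp = cKDTree(p).query(g)[0]  # GT surface -> nearest pred surface
    return np.percentile(np.concatenate([d_pg, d_gp]), 95)
```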

Deep learning-based segmentation of the trigeminal nerve and surrounding vasculature in trigeminal neuralgia.

Halbert-Elliott KM, Xie ME, Dong B, Das O, Wang X, Jackson CM, Lim M, Huang J, Yedavalli VS, Bettegowda C, Xu R

PubMed paper · Jul 1, 2025
Preoperative workup of trigeminal neuralgia (TN) consists of identification of neurovascular features on MRI. In this study, the authors apply and evaluate the performance of deep learning models for segmentation of the trigeminal nerve and surrounding vasculature to quantify anatomical features of the nerve and vessels. Six U-Net-based neural networks, each with a different encoder backbone, were trained to label constructive interference in steady-state MRI voxels as nerve, vasculature, or background. A retrospective dataset of 50 TN patients at the authors' institution who underwent preoperative high-resolution MRI in 2022 was utilized to train and test the models. Performance was measured by the Dice coefficient and intersection over union (IoU) metrics. Anatomical characteristics, such as surface area of neurovascular contact and distance to the contact point, were computed and compared between the predicted and ground truth segmentations. Of the evaluated models, the best performing was U-Net with an SE-ResNet50 backbone (Dice score = 0.775 ± 0.015, IoU score = 0.681 ± 0.015). When the SE-ResNet50 backbone was used, the average surface area of neurovascular contact in the testing dataset was 6.90 mm², which was not significantly different from the surface area calculated from manual segmentation (p = 0.83). The average calculated distance from the brainstem to the contact point was 4.34 mm, which was also not significantly different from manual segmentation (p = 0.29). U-Net-based neural networks perform well for segmenting trigeminal nerve and vessels from preoperative MRI volumes. This technology enables the development of quantitative and objective metrics for radiographic evaluation of TN.
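One plausible way to derive such contact metrics from the predicted labels is sketched below. The definitions are assumptions for illustration (the authors' exact computation may differ), and the area estimate is a crude voxel-count approximation:

```python
import numpy as np
from scipy import ndimage

def contact_metrics(nerve_mask, vessel_mask, voxel_mm=1.0):
    """Approximate neurovascular contact area and contact centroid
    from binary nerve/vessel segmentations (isotropic voxels assumed)."""
    nerve = nerve_mask.astype(bool)
    vessel = vessel_mask.astype(bool)
    # Nerve voxels adjacent to vasculature approximate the contact region.
    contact = nerve & ndimage.binary_dilation(vessel)
    area_mm2 = contact.sum() * voxel_mm ** 2
    centroid = ndimage.center_of_mass(contact) if contact.any() else None
    return area_mm2, centroid
```

The distance from the brainstem to the contact point could then be taken as the Euclidean distance between this centroid and a brainstem landmark.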

Cerebrovascular morphology: Insights into normal variations, aging effects and disease implications.

Deshpande A, Zhang LQ, Balu R, Yahyavi-Firouz-Abadi N, Badjatia N, Laksari K, Tahsili-Fahadan P

PubMed paper · Jul 1, 2025
Cerebrovascular morphology plays a critical role in brain health, influencing cerebral blood flow (CBF) and contributing to the pathogenesis of various neurological diseases. This review examines the anatomical structure of the cerebrovascular network and its variations in healthy and diseased populations and highlights age-related changes and their implications in various neurological conditions. Normal variations, including the completeness and anatomical anomalies of the Circle of Willis and collateral circulation, are discussed in relation to their impact on CBF and susceptibility to ischemic events. Age-related changes in the cerebrovascular system, such as alterations in vessel geometry and density, are explored for their contributions to age-related neurological disorders, including Alzheimer's disease and vascular dementia. Advances in medical imaging and computational methods have enabled automatic quantitative assessment of cerebrovascular structures, facilitating the identification of pathological changes in both acute and chronic cerebrovascular disorders. Emerging technologies, including machine learning and computational fluid dynamics, offer new tools for predicting disease risk and patient outcomes based on vascular morphology. This review underscores the importance of understanding cerebrovascular remodeling for early diagnosis and the development of novel therapeutic approaches in brain diseases.

Breast cancer detection based on histological images using fusion of diffusion model outputs.

Akbari Y, Abdullakutty F, Al Maadeed S, Bouridane A, Hamoudi R

PubMed paper · Jul 1, 2025
The precise detection of breast cancer in histopathological images remains a critical challenge in computational pathology, where accurate tissue segmentation significantly enhances diagnostic accuracy. This study introduces a novel approach leveraging a Conditional Denoising Diffusion Probabilistic Model (DDPM) to improve breast cancer detection through advanced segmentation and feature fusion. The method employs a conditional channel within the DDPM framework, first trained on a breast cancer histopathology dataset and extended to additional datasets, to achieve regional-level segmentation of tumor areas and other tissue regions. These segmented regions, combined with predicted noise from the diffusion model and the original images, are processed through an EfficientNet-B0 network to extract enhanced features. A transformer decoder then fuses these features to generate final detection results. Extensive experiments optimizing the network architecture and fusion strategies were conducted, and the proposed method was evaluated across four distinct datasets, achieving a peak accuracy of 92.86% on the BRACS dataset, 100% on the BreCaHAD dataset, and 96.66% on the ICIAR2018 dataset. This approach represents a significant advancement in computational pathology, offering a robust tool for breast cancer detection with potential applications in broader medical imaging contexts.
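A high-level sketch of the fusion stage as described (segmented regions, predicted noise, and original image passed through EfficientNet-B0, then fused by a transformer decoder). The architecture details here are simplified placeholders, not the authors' configuration:

```python
import torch
import torch.nn as nn
from torchvision.models import efficientnet_b0

class FusionClassifier(nn.Module):
    """Fuse features from the original image, the DDPM segmentation,
    and the predicted noise with a small transformer decoder head."""
    def __init__(self, n_classes, d_model=1280):  # 1280 = EfficientNet-B0 feature channels
        super().__init__()
        self.backbone = efficientnet_b0(weights=None).features  # shared for brevity
        layer = nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=2)
        self.query = nn.Parameter(torch.randn(1, 1, d_model))
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, image, seg_map, noise_map):
        # Each input is assumed 3-channel (replicate single-channel maps).
        feats = []
        for x in (image, seg_map, noise_map):
            f = self.backbone(x)                        # (B, 1280, h, w)
            feats.append(f.flatten(2).transpose(1, 2))  # (B, h*w, 1280)
        memory = torch.cat(feats, dim=1)                # fused token sequence
        q = self.query.expand(image.size(0), -1, -1)    # one class query
        return self.head(self.decoder(q, memory)).squeeze(1)  # (B, n_classes)
```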

Zero-shot segmentation of spinal vertebrae with metastatic lesions: an analysis of Meta's Segment Anything Model 2 and factors affecting learning-free segmentation.

Khazanchi R, Govind S, Jain R, Du R, Dahdaleh NS, Ahuja CS, El Tecle N

PubMed paper · Jul 1, 2025
Accurate vertebral segmentation is an important step in imaging analysis pipelines for the diagnosis and subsequent treatment of spinal metastases. Segmenting these metastases is especially challenging given their radiological heterogeneity. Conventional approaches for segmenting vertebrae have included manual review or deep learning; however, manual review is time-intensive and suffers from interrater reliability issues, while deep learning requires large datasets to build. The rise of generative AI, notably tools such as Meta's Segment Anything Model 2 (SAM 2), holds promise in its ability to rapidly generate segmentations of any image without pretraining (zero-shot). The authors of this study aimed to assess the ability of SAM 2 to segment vertebrae with metastases. A publicly available set of spinal CT scans from The Cancer Imaging Archive was used, which included patient sex, BMI, vertebral locations, type of metastatic lesion (lytic, blastic, or mixed), and primary cancer type. Ground-truth segmentations for each vertebra, derived by neuroradiologists, were further extracted from the dataset. SAM 2 then produced segmentations for each vertebral slice without any training data, all of which were compared to the gold-standard segmentations using the Dice similarity coefficient (DSC). Relative performance differences were assessed across clinical subgroups using standard statistical techniques. Imaging data were extracted for 55 patients and 779 unique thoracolumbar vertebrae, 167 of which had metastatic tumor involvement. Across these vertebrae, SAM 2 had a mean volumetric DSC of 0.833 ± 0.053. SAM 2 performed significantly worse on thoracic vertebrae relative to lumbar vertebrae, on female patients relative to male patients, and on obese patients relative to non-obese patients. These results demonstrate that general-purpose segmentation models like SAM 2 can provide reasonable vertebral segmentation accuracy with no pretraining, with efficacy comparable to previously published trained models. Future research should include optimizing spine segmentation models for vertebral location and patient body habitus, as well as for variations in imaging quality.
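For readers who want to reproduce the zero-shot setup, a minimal sketch using Meta's `sam2` package is shown below. The prompting protocol (here, a single foreground point per vertebra) is an assumption; the abstract does not specify how SAM 2 was prompted.

```python
import numpy as np
from sam2.sam2_image_predictor import SAM2ImagePredictor

predictor = SAM2ImagePredictor.from_pretrained("facebook/sam2-hiera-large")

def zero_shot_dice(ct_slice_rgb, prompt_xy, gt_mask):
    """Prompt SAM 2 with one point inside the vertebra and compare the
    highest-scoring mask to the expert segmentation via Dice."""
    predictor.set_image(ct_slice_rgb)             # H x W x 3 uint8 slice
    masks, scores, _ = predictor.predict(
        point_coords=np.array([prompt_xy]),       # e.g. vertebral centroid
        point_labels=np.array([1]))               # 1 = foreground point
    pred = masks[np.argmax(scores)].astype(bool)
    gt = gt_mask.astype(bool)
    return 2 * (pred & gt).sum() / max(pred.sum() + gt.sum(), 1)
```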

Generative AI for weakly supervised segmentation and downstream classification of brain tumors on MR images.

Yoo JJ, Namdar K, Wagner MW, Yeom KW, Nobre LF, Tabori U, Hawkins C, Ertl-Wagner BB, Khalvati F

PubMed paper · Jul 1, 2025
Segmenting abnormalities is a leading problem in medical imaging. Using machine learning for segmentation generally requires manually annotated segmentations, demanding extensive time and resources from radiologists. We propose a weakly supervised approach that uses binary image-level labels, which are much simpler to acquire than manual annotations, to segment brain tumors on magnetic resonance images. The proposed method generates healthy variants of cancerous images for use as priors when training the segmentation model. However, using weakly supervised segmentations for downstream tasks such as classification can be challenging due to occasional unreliable segmentations. To address this, we propose using the generated non-cancerous variants to identify the most effective segmentations without requiring ground truths. Our method generates segmentations that achieve Dice coefficients of 79.27% on the Multimodal Brain Tumor Segmentation (BraTS) 2020 dataset and 73.58% on an internal dataset of pediatric low-grade glioma (pLGG), which increase to 88.69% and 80.29%, respectively, when suboptimal segmentations identified by the proposed method are removed. Using the segmentations for tumor classification yields areas under the receiver operating characteristic curve (AUC) of 93.54% and 83.74% on the BraTS and pLGG datasets, respectively. These are comparable to using manual annotations, which achieve AUCs of 95.80% and 83.03% on the BraTS and pLGG datasets, respectively.
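A toy illustration of the idea: the generated healthy variant serves both as a segmentation prior (its difference from the cancerous image highlights the tumor) and as a reliability check. The thresholds and names here are hypothetical simplifications of the paper's method:

```python
import numpy as np

def weak_segmentation(image, healthy_variant, rel_thresh=0.1):
    """Large differences between a cancerous image and its generated
    healthy variant provide a weak tumor-segmentation prior."""
    diff = np.abs(image - healthy_variant)
    return diff > rel_thresh * diff.max()

def reliability_score(image, healthy_variant, seg):
    """Outside the predicted tumor, the healthy variant should match the
    original; a large residual flags an unreliable segmentation."""
    outside = ~seg
    return np.abs(image - healthy_variant)[outside].mean()
```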