Prediction of tissue and clinical thrombectomy outcome in acute ischaemic stroke using deep learning.

von Braun MS, Starke K, Peter L, Kürsten D, Welle F, Schneider HR, Wawrzyniak M, Kaiser DPO, Prasse G, Richter C, Kellner E, Reisert M, Klingbeil J, Stockert A, Hoffmann KT, Scheuermann G, Gillmann C, Saur D

PubMed · Jul 7, 2025
The advent of endovascular thrombectomy has significantly improved outcomes for stroke patients with intracranial large vessel occlusion, yet individual benefits can vary widely. As demand for thrombectomy rises and geographical disparities in stroke care access persist, there is a growing need for predictive models that quantify individual benefits. However, current imaging methods for estimating outcomes may not fully capture the dynamic nature of cerebral ischaemia and lack a patient-specific assessment of thrombectomy benefits. Our study introduces a deep learning approach to predict individual responses to thrombectomy in acute ischaemic stroke patients. The proposed models provide predictions for both tissue and clinical outcomes under two scenarios: one assuming successful reperfusion and another assuming unsuccessful reperfusion. The resulting simulations of penumbral salvage and difference in National Institutes of Health Stroke Scale (NIHSS) at discharge quantify the potential individual benefits of the intervention. Our models were developed on an extensive dataset from routine stroke care, which included 405 ischaemic stroke patients who underwent thrombectomy. We used acute data for training (n = 304), including multimodal CT imaging and clinical characteristics, along with post hoc markers such as thrombectomy success, final infarct localization and NIHSS at discharge. We benchmarked our tissue outcome predictions under the observed reperfusion scenario against a thresholding-based clinical method and a generalized linear model. Our deep learning model showed significant superiority, with a mean Dice score of 0.48 on internal test data (n = 50) and 0.52 on external test data (n = 51), versus 0.26/0.36 and 0.34/0.35 for the baselines, respectively. The NIHSS sum score prediction achieved median absolute errors of 1.5 NIHSS points on the internal test dataset and 3.0 NIHSS points on the external test dataset, outperforming other machine learning models. By predicting the patient-specific response to thrombectomy for both tissue and clinical outcomes, our approach offers an innovative biomarker that captures the dynamics of cerebral ischaemia. We believe this method holds significant potential to enhance personalized therapeutic strategies and to facilitate efficient resource allocation in acute stroke care.
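The paper's core output is a pair of counterfactual predictions (infarct with vs. without successful reperfusion), whose difference quantifies the individual benefit. Below is a minimal NumPy sketch of that difference computation; the mask names, shapes, and voxel volume are illustrative assumptions, not the authors' code:

```python
import numpy as np

def penumbral_salvage_ml(infarct_if_failed: np.ndarray,
                         infarct_if_reperfused: np.ndarray,
                         voxel_volume_ml: float) -> float:
    """Volume predicted to infarct without reperfusion but spared with it."""
    salvaged = np.logical_and(infarct_if_failed.astype(bool),
                              ~infarct_if_reperfused.astype(bool))
    return float(salvaged.sum()) * voxel_volume_ml

# Toy masks standing in for the model's two scenario outputs.
rng = np.random.default_rng(42)
mask_failed = rng.random((64, 64, 32)) > 0.8
mask_reperfused = np.logical_and(mask_failed, rng.random((64, 64, 32)) > 0.5)
print(f"Predicted salvage: {penumbral_salvage_ml(mask_failed, mask_reperfused, 0.008):.1f} mL")
```

The same subtraction logic applies to the clinical endpoint: predicted NIHSS under failed minus successful reperfusion gives the expected point benefit of the intervention.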

Efficacy of Image Similarity as a Metric for Augmenting Small Dataset Retinal Image Segmentation

Thomas Wallace, Ik Siong Heng, Senad Subasic, Chris Messenger

arXiv preprint · Jul 7, 2025
Synthetic images are an option for augmenting limited medical imaging datasets to improve the performance of various machine learning models. A common metric for evaluating synthetic image quality is the Fréchet Inception Distance (FID), which measures the similarity of two image datasets. In this study we evaluate the relationship between this metric and the improvement which synthetic images, generated by a Progressively Growing Generative Adversarial Network (PGGAN), grant when augmenting Diabetes-related Macular Edema (DME) intraretinal fluid segmentation performed by a U-Net model with limited amounts of training data. We find that the behaviour of augmenting with standard and synthetic images agrees with previously conducted experiments. Additionally, we show that dissimilar (high FID) datasets do not improve segmentation significantly. As FID between the training and augmenting datasets decreases, the augmentation datasets are shown to contribute to significant and robust improvements in image segmentation. Finally, we find significant evidence to suggest that synthetic and standard augmentations follow separate log-normal trends between FID and improvements in model performance, with synthetic data proving more effective than standard augmentation techniques. Our findings show that more similar datasets (lower FID) will be more effective at improving U-Net performance; however, the results also suggest that this improvement may only occur when images are sufficiently dissimilar.
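FID itself is compact enough to sketch. Under its standard definition it is the Fréchet distance between two Gaussians fitted to Inception feature sets; the toy features below are random stand-ins, since real FID uses Inception-v3 pool activations:

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(feats_a: np.ndarray, feats_b: np.ndarray) -> float:
    """FID between two feature sets of shape (n_samples, n_dims)."""
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    covmean = sqrtm(cov_a @ cov_b)
    if np.iscomplexobj(covmean):  # numerical noise can leave tiny imaginary parts
        covmean = covmean.real
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a + cov_b - 2.0 * covmean))

# Toy example: two random "feature" clouds with shifted means.
rng = np.random.default_rng(0)
print(frechet_distance(rng.normal(0.0, 1.0, (500, 64)),
                       rng.normal(0.5, 1.0, (500, 64))))
```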

Gender difference in cross-sectional area and fat infiltration of thigh muscles in the elderly population on MRI: an AI-based analysis.

Bizzozero S, Bassani T, Sconfienza LM, Messina C, Bonato M, Inzaghi C, Marmondi F, Cinque P, Banfi G, Borghi S

PubMed · Jul 7, 2025
Aging alters musculoskeletal structure and function, affecting muscle mass, composition, and strength, and increasing the risk of falls and loss of independence in older adults. This study assessed cross-sectional area (CSA) and fat infiltration (FI) of six thigh muscles through a validated deep learning model. Gender differences and correlations between fat, muscle parameters, and age were also analyzed. We retrospectively analyzed 141 participants (67 females, 74 males) aged 52-82 years. Participants underwent magnetic resonance imaging (MRI) scans of the right thigh and dual-energy x-ray absorptiometry to determine appendicular skeletal muscle mass index (ASMMI) and body fat percentage (FAT%). A deep learning-based application was developed to automate the segmentation of six thigh muscle groups. Model accuracy was evaluated using the intersection over union (IoU) metric, with average IoU values across muscle groups ranging from 0.84 to 0.99. Mean CSA was 10,766.9 mm² (females 8,892.6 mm², males 12,463.9 mm², p < 0.001). The mean FI value was 14.92% (females 17.42%, males 12.62%, p < 0.001). Males showed larger CSA and lower FI in all thigh muscles compared to females. Positive correlations were identified in females between the FI of posterior thigh muscle groups (biceps femoris, semimembranosus, and semitendinosus) and age (r or ρ = 0.35-0.48; p ≤ 0.004), while no significant correlations were observed between CSA, ASMMI, or FAT% and age. Deep learning accurately quantifies muscle CSA and FI, reducing analysis time and human error. Aging impacts muscle composition and distribution, and gender-specific assessment in older adults is needed. Efficient deep learning-based MRI segmentation to assess the composition of six thigh muscle groups in individuals over 50 years of age revealed gender differences in thigh muscle CSA and FI. These findings have potential clinical applications in assessing muscle quality, decline, and frailty. The deep learning model enhanced MRI segmentation, providing high assessment accuracy. Significant gender differences in cross-sectional area and fat infiltration were observed across all thigh muscles. In females, fat infiltration of the posterior thigh muscles was positively correlated with age.
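The IoU metric used to validate the segmentation model is straightforward to compute per muscle group from a multi-class label map. A sketch; the label-to-muscle assignment is hypothetical (the abstract names only the three posterior groups):

```python
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Intersection over union between two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (inter + eps) / (union + eps)

# Hypothetical label assignment for six thigh muscle groups (0 = background).
MUSCLE_LABELS = {1: "quadriceps", 2: "biceps femoris", 3: "semimembranosus",
                 4: "semitendinosus", 5: "sartorius", 6: "gracilis"}
rng = np.random.default_rng(1)
pred_map = rng.integers(0, 7, (256, 256))  # toy predicted label map
gt_map = rng.integers(0, 7, (256, 256))    # toy ground-truth label map
for label, name in MUSCLE_LABELS.items():
    print(f"{name}: IoU = {iou(pred_map == label, gt_map == label):.3f}")
```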

Towards Reliable Healthcare Imaging: A Multifaceted Approach in Class Imbalance Handling for Medical Image Segmentation.

Cui L, Xu M, Liu C, Liu T, Yan X, Zhang Y, Yang X

PubMed · Jul 7, 2025
Class imbalance is a dominant challenge in medical image segmentation when dealing with MRI images from highly imbalanced datasets. This study introduces a comprehensive, multifaceted approach to enhance the accuracy and reliability of segmentation models under such conditions. Our model integrates advanced data augmentation, innovative algorithmic adjustments, and novel architectural features to address class label distribution effectively. To cover multiple aspects of the training process, we customized the data augmentation technique for medical imaging with multi-dimensional angles; this multi-dimensional augmentation helps reduce the bias towards majority classes. We implemented novel attention mechanisms, i.e., an Enhanced Attention Module (EAM) and spatial attention, which sharpen the model's focus on the most relevant features. Further, our architecture incorporates a dual decoder system and a Pooling Integration Layer (PIL) to capture accurate foreground and background details. We also introduce a hybrid loss function designed to handle the class imbalance by guiding the training process. For experiments, we used multiple datasets, such as the Digital Database Thyroid Image (DDTI), the Breast Ultrasound Images Dataset (BUSI), and LiTS MICCAI 2017, to demonstrate the performance of the proposed network using key evaluation metrics, i.e., IoU, Dice coefficient, precision, and recall.
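The abstract does not spell out the form of the hybrid loss; one common instantiation that matches its stated purpose (guiding training under class imbalance) is a weighted sum of soft Dice loss and cross-entropy. A PyTorch sketch under that assumption — `alpha` and the composition are illustrative, not the authors' formulation:

```python
import torch
import torch.nn.functional as F

def hybrid_loss(logits: torch.Tensor, target: torch.Tensor,
                alpha: float = 0.5, eps: float = 1e-6) -> torch.Tensor:
    """Weighted sum of soft Dice loss and cross-entropy (assumed composition).

    logits: (N, C, H, W) raw scores; target: (N, H, W) integer labels.
    """
    num_classes = logits.shape[1]
    probs = torch.softmax(logits, dim=1)
    one_hot = F.one_hot(target, num_classes).permute(0, 3, 1, 2).float()
    # Soft Dice averaged over classes: minority classes weigh equally with majority ones.
    dims = (0, 2, 3)
    intersection = (probs * one_hot).sum(dims)
    cardinality = probs.sum(dims) + one_hot.sum(dims)
    dice_loss = 1.0 - ((2.0 * intersection + eps) / (cardinality + eps)).mean()
    ce_loss = F.cross_entropy(logits, target)
    return alpha * dice_loss + (1.0 - alpha) * ce_loss

# Toy batch: 2 images, 3 classes, 64x64.
logits = torch.randn(2, 3, 64, 64)
target = torch.randint(0, 3, (2, 64, 64))
print(hybrid_loss(logits, target))
```

The Dice term is what counteracts imbalance: it scores each class by overlap rather than pixel count, so a rare foreground class contributes as much to the gradient as the dominant background.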

AG-MS3D-CNN multiscale attention guided 3D convolutional neural network for robust brain tumor segmentation across MRI protocols.

Lilhore UK, Sunder R, Simaiya S, Alsafyani M, Monish Khan MD, Alroobaea R, Alsufyani H, Baqasah AM

PubMed · Jul 7, 2025
Accurate segmentation of brain tumors from multimodal Magnetic Resonance Imaging (MRI) plays a critical role in diagnosis, treatment planning, and disease monitoring in neuro-oncology. Traditional methods of tumor segmentation, often manual and labour-intensive, are prone to inconsistencies and inter-observer variability. Recently, deep learning models, particularly Convolutional Neural Networks (CNNs), have shown great promise in automating this process. However, these models face challenges in terms of generalization across diverse datasets, accurate tumor boundary delineation, and uncertainty estimation. To address these challenges, we propose AG-MS3D-CNN, an attention-guided multiscale 3D convolutional neural network for brain tumor segmentation. Our model integrates local and global contextual information through multiscale feature extraction and leverages spatial attention mechanisms to enhance boundary delineation, particularly in complex tumor regions. We also introduce Monte Carlo dropout for uncertainty estimation, providing clinicians with confidence scores for each segmentation, which is crucial for informed decision-making. Furthermore, we adopt a multitask learning framework, which enables the simultaneous segmentation, classification, and volume estimation of tumors. To ensure robustness and generalizability across diverse MRI acquisition protocols and scanners, we integrate a domain adaptation module into the network. Extensive evaluations on the BraTS 2021 dataset and additional external datasets, such as OASIS, ADNI, and IXI, demonstrate the superior performance of AG-MS3D-CNN compared to existing state-of-the-art methods. Our model achieves high Dice scores and shows excellent robustness, making it a valuable tool for clinical decision support in neuro-oncology.
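Monte Carlo dropout is the one component concrete enough to sketch independently of the full architecture: keep dropout layers sampling at inference and treat the variance across stochastic forward passes as per-voxel uncertainty. The stand-in network below is a placeholder, not AG-MS3D-CNN:

```python
import torch
import torch.nn as nn

def mc_dropout_predict(model: nn.Module, x: torch.Tensor, n_samples: int = 20):
    """T stochastic forward passes with dropout active; returns mean and variance."""
    model.eval()
    for m in model.modules():  # keep dropout layers sampling at test time
        if isinstance(m, (nn.Dropout, nn.Dropout3d)):
            m.train()
    with torch.no_grad():
        samples = torch.stack([torch.softmax(model(x), dim=1)
                               for _ in range(n_samples)])
    return samples.mean(dim=0), samples.var(dim=0)  # per-voxel confidence + uncertainty

# Tiny stand-in for a 3D segmentation network; 4 input channels = 4 MRI modalities.
model = nn.Sequential(nn.Conv3d(4, 8, 3, padding=1), nn.ReLU(),
                      nn.Dropout3d(0.5), nn.Conv3d(8, 3, 1))
x = torch.randn(1, 4, 16, 32, 32)
mean_prob, uncertainty = mc_dropout_predict(model, x)
print(mean_prob.shape, uncertainty.shape)
```

The variance map is what becomes the clinician-facing confidence score: high variance flags voxels where the segmentation should not be trusted.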

Automated Deep Learning-Based 3D-to-2D Segmentation of Geographic Atrophy in Optical Coherence Tomography Data

Al-khersan, H., Oakley, J. D., Russakoff, D. B., Cao, J. A., Saju, S. M., Zhou, A., Sodhi, S. K., Pattathil, N., Choudhry, N., Boyer, D. S., Wykoff, C. C.

medRxiv preprint · Jul 7, 2025
Purpose: We report on a deep learning-based approach to the segmentation of geographic atrophy (GA) in patients with advanced age-related macular degeneration (AMD). Methods: Three-dimensional (3D) optical coherence tomography (OCT) data was collected from two instruments at two different retina practices. This totaled 367 and 348 volumes, respectively, of routinely collected clinical data. For all data, the accuracy of a 3D-to-2D segmentation model was assessed relative to ground-truth manual labeling. Results: Dice Similarity Scores (DSC) averaged 0.824 and 0.826 for the two datasets. Correlations (r²) between manual and automated areas were 0.883 and 0.906, respectively. The inclusion of near-infrared imagery as an additional information channel to the algorithm did not notably improve performance. Conclusion: Accurate assessment of GA in real-world clinical OCT data can be achieved using deep learning. With the advent of therapeutics to slow the rate of GA progression, reliable, automated assessment is a clinical objective, and this work validates one such method.
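The reported agreement statistics are area-based. A minimal sketch of how en-face GA area and the manual-vs-automated r² could be computed; the pixel spacing and toy area values are assumptions, not the study's data:

```python
import numpy as np

def ga_area_mm2(mask: np.ndarray, dx_mm: float, dy_mm: float) -> float:
    """Area of a 2D en-face GA mask given per-pixel spacing (values assumed)."""
    return float(mask.astype(bool).sum()) * dx_mm * dy_mm

rng = np.random.default_rng(3)
mask = rng.random((512, 512)) > 0.9  # toy en-face segmentation mask
print(f"GA area: {ga_area_mm2(mask, 0.012, 0.012):.2f} mm^2")

# Agreement between manual and automated areas, expressed as r^2.
manual = rng.uniform(1.0, 15.0, 50)            # hypothetical manual areas (mm^2)
automated = manual + rng.normal(0.0, 1.0, 50)  # hypothetical automated areas
r = np.corrcoef(manual, automated)[0, 1]
print(f"r^2 = {r**2:.3f}")
```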

Artifact-robust Deep Learning-based Segmentation of 3D Phase-contrast MR Angiography: A Novel Data Augmentation Approach.

Tamada D, Oechtering TH, Heidenreich JF, Starekova J, Takai E, Reeder SB

PubMed · Jul 5, 2025
This study presents a novel data augmentation approach to improve deep learning (DL)-based segmentation for 3D phase-contrast magnetic resonance angiography (PC-MRA) images affected by pulsation artifacts. Augmentation was achieved by simulating pulsation artifacts through the addition of periodic errors in k-space magnitude. The approach was evaluated on PC-MRA datasets from 16 volunteers, comparing DL segmentation with and without pulsation artifact augmentation to a level-set algorithm. Results demonstrate that DL methods significantly outperform the level-set approach and that pulsation artifact augmentation further improves segmentation accuracy, especially for images with lower velocity encoding. Quantitative analysis using Dice-Sørensen coefficient, Intersection over Union, and Average Symmetric Surface Distance metrics confirms the effectiveness of the proposed method. This technique shows promise for enhancing vascular segmentation in various anatomical regions affected by pulsation artifacts, potentially improving clinical applications of PC-MRA.
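The augmentation idea is concrete: corrupt clean training images by adding periodic errors to the k-space magnitude, which produces ghosting along the phase-encoding direction similar to pulsation artifacts. A 2D NumPy sketch of that simulation; the modulation amplitude and period are illustrative, not the paper's parameters:

```python
import numpy as np

def add_pulsation_artifact(image: np.ndarray, amplitude: float = 0.3,
                           period_lines: int = 8) -> np.ndarray:
    """Simulate ghosting by periodically modulating k-space magnitude.

    Periodic errors along the phase-encoding axis create replica ghosts
    resembling pulsation artifacts. Parameter values here are illustrative.
    """
    kspace = np.fft.fftshift(np.fft.fft2(image))
    ny = kspace.shape[0]
    modulation = 1.0 + amplitude * np.cos(2 * np.pi * np.arange(ny) / period_lines)
    kspace *= modulation[:, None]  # modulate magnitude line by line
    return np.abs(np.fft.ifft2(np.fft.ifftshift(kspace)))

# Toy phantom: a bright square standing in for a vessel cross-section.
phantom = np.zeros((128, 128))
phantom[48:80, 48:80] = 1.0
corrupted = add_pulsation_artifact(phantom)
print(corrupted.shape, f"max intensity: {corrupted.max():.2f}")
```

Training on such corrupted copies (with the clean segmentation as the label) is what teaches the network to ignore the ghosts.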

A novel recursive transformer-based U-Net architecture for enhanced multi-scale medical image segmentation.

Li S, Liu X, Fu M, Khelifi F

PubMed · Jul 5, 2025
Automatic medical image segmentation techniques are vital for assisting clinicians in making accurate diagnoses and treatment plans. Although the U-shaped network (U-Net) has been widely adopted in medical image analysis, it still faces challenges in capturing long-range dependencies, particularly in complex and textured medical images where anatomical structures often blend into the surrounding background. To address these limitations, a novel network architecture, called recursive transformer-based U-Net (ReT-UNet), which integrates recursive feature learning and transformer technology, is proposed. One of the key innovations of ReT-UNet is the multi-scale global feature fusion (Multi-GF) module, inspired by transformer models and multi-scale pooling mechanisms. This module captures long-range dependencies, enhancing the abstraction and contextual understanding of multi-level features. Additionally, a recursive feature accumulation block is introduced to iteratively update features across layers, improving the network's ability to model spatial correlations and represent deep features in medical images. To improve sensitivity to local details, a lightweight atrous spatial pyramid pooling (ASPP) module is appended after the Multi-GF module. Furthermore, the segmentation head is redesigned to emphasize feature aggregation and fusion. During the encoding phase, a hybrid pooling layer is employed to ensure comprehensive feature sampling, thereby enabling a broader range of feature representations and improving the learning of fine details. The proposed method was evaluated through ablation experiments, demonstrating generally consistent performance across multiple trials. When applied to cardiac, pulmonary nodule, and polyp segmentation datasets, it showed a reduction in mis-segmented regions. The experimental results suggest that the approach improves segmentation accuracy and stability compared with competing state-of-the-art methods, highlighting the superiority of ReT-UNet over related approaches and its potential for medical image segmentation applications.
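Of the components described, the lightweight ASPP module has a well-established generic form: parallel dilated convolutions at several rates, fused by a 1x1 convolution. A PyTorch sketch of that generic form; the dilation rates and channel counts are assumptions, not ReT-UNet's exact configuration:

```python
import torch
import torch.nn as nn

class ASPP(nn.Module):
    """Atrous spatial pyramid pooling: parallel dilated 3x3 convolutions
    (padding = dilation keeps spatial size) fused by a 1x1 convolution."""
    def __init__(self, in_ch: int, out_ch: int, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r) for r in rates
        ])
        self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

aspp = ASPP(64, 64)
print(aspp(torch.randn(1, 64, 32, 32)).shape)  # torch.Size([1, 64, 32, 32])
```

Each dilation rate samples the feature map at a different effective receptive field, which is how a single module stays sensitive to both fine local detail and broader context.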

Att-BrainNet: Attention-based BrainNet for lung cancer segmentation network.

Xiao X, Wang Z, Yao J, Wei J, Zhang B, Chen W, Geng Z, Song E

PubMed · Jul 5, 2025
Most current medical image segmentation models employ a unified feature modeling strategy for all target regions. However, they overlook the significant heterogeneity in lesion structure, boundary characteristics, and semantic texture, which frequently restricts their ability to accurately segment morphologically diverse lesions in complex imaging contexts, thereby reducing segmentation accuracy and robustness. To address this issue, we propose a brain-inspired segmentation framework named BrainNet, which adopts a tri-level backbone encoder-Brain Network-decoder architecture. Such an architecture enables globally guided, locally differentiated feature modeling. We further instantiate the framework with an attention-enhanced segmentation model, termed Att-BrainNet. In this model, a Thalamus Gating Module (TGM) dynamically selects and activates structurally identical but functionally diverse Encephalic Region Networks (ERNs) to collaboratively extract lesion-specific features. In addition, an S-F image enhancement module is incorporated to improve sensitivity to boundaries and fine structures. Meanwhile, multi-head self-attention is embedded in the encoder to strengthen global semantic modeling and regional coordination. Experiments conducted on two lung cancer CT segmentation datasets and the Synapse multi-organ dataset demonstrate that Att-BrainNet outperforms existing mainstream segmentation models in terms of both accuracy and generalization. Further ablation studies and mechanism visualizations confirm the effectiveness of the BrainNet architecture and the dynamic scheduling strategy. This research provides a novel structural paradigm for medical image segmentation and holds promise for extension to other complex segmentation scenarios.
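The Thalamus Gating Module's dynamic selection among structurally identical expert branches resembles a mixture-of-experts gate. A PyTorch sketch of that general pattern, not the authors' implementation; the expert blocks are placeholders standing in for the ERNs:

```python
import torch
import torch.nn as nn

class GatedExperts(nn.Module):
    """Sketch of the gating idea: a small gate network weights several
    structurally identical expert branches (placeholders for the ERNs)."""
    def __init__(self, channels: int, n_experts: int = 4):
        super().__init__()
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU())
            for _ in range(n_experts)
        ])
        self.gate = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(channels, n_experts))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weights = torch.softmax(self.gate(x), dim=1)              # (N, E)
        outs = torch.stack([e(x) for e in self.experts], dim=1)   # (N, E, C, H, W)
        return (weights[:, :, None, None, None] * outs).sum(dim=1)

block = GatedExperts(32)
print(block(torch.randn(2, 32, 24, 24)).shape)  # torch.Size([2, 32, 24, 24])
```

The gate is input-conditioned, so different lesion types activate different expert mixtures — the "globally guided, locally differentiated" behavior the framework targets.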

A Multimodal Ultrasound-Driven Approach for Automated Tumor Assessment with B-Mode and Multi-Frequency Harmonic Motion Images.

Hu S, Liu Y, Wang R, Li X, Konofagou EE

PubMed · Jul 4, 2025
Harmonic Motion Imaging (HMI) is an ultrasound elasticity imaging method that measures the mechanical properties of tissue using amplitude-modulated acoustic radiation force (AM-ARF). Multi-frequency HMI (MF-HMI) excites tissue at various AM frequencies simultaneously, allowing for image optimization without prior knowledge of inclusion size and stiffness. However, challenges remain in size estimation, as inconsistent boundary effects result in different perceived sizes across AM frequencies. Herein, we developed an automated assessment method for tumors and focused ultrasound surgery (FUS)-induced lesions using a transformer-based multi-modality neural network, HMINet, and further automated neoadjuvant chemotherapy (NACT) response prediction. HMINet was trained on 380 pairs of MF-HMI and B-mode images of phantoms and in vivo orthotopic breast cancer mice (4T1). Test datasets included phantoms (n = 32), in vivo 4T1 mice (n = 24), breast cancer patients (n = 20), and FUS-induced lesions in ex vivo animal tissue and in vivo clinical settings with real-time inference, with average segmentation accuracies (Dice) of 0.91, 0.83, 0.80, and 0.81, respectively. HMINet outperformed state-of-the-art models; we also demonstrated the enhanced robustness of the multi-modality strategy over B-mode-only input, both quantitatively through Dice scores and in terms of interpretability using saliency analysis. Ranking AM frequencies by their number of salient pixels showed that the most influential frequencies across clinical cases are 800 and 200 Hz. Overall, we developed an automated, multimodality ultrasound-based tumor and FUS-lesion assessment method, which facilitates the clinical translation of stiffness-based breast cancer treatment response prediction and real-time image-guided FUS therapy.
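The saliency-based frequency ranking can be sketched generically: compute input gradients and count salient pixels per input channel, one channel per AM frequency. The stand-in network and the five-channel layout below are assumptions, not HMINet:

```python
import torch
import torch.nn as nn

def salient_pixels_per_channel(model: nn.Module, x: torch.Tensor,
                               threshold: float = 0.5) -> torch.Tensor:
    """Count pixels whose normalized |input gradient| exceeds a threshold,
    per input channel (e.g., one channel per AM frequency plus B-mode)."""
    x = x.clone().requires_grad_(True)
    model(x).sum().backward()
    sal = x.grad.abs()
    sal = sal / (sal.amax(dim=(2, 3), keepdim=True) + 1e-8)  # normalize per channel
    return (sal > threshold).sum(dim=(2, 3)).squeeze(0)      # counts per channel

# Stand-in network and input: 5 channels = B-mode + four AM-frequency HMI maps.
model = nn.Sequential(nn.Conv2d(5, 8, 3, padding=1), nn.ReLU(), nn.Conv2d(8, 1, 1))
counts = salient_pixels_per_channel(model, torch.randn(1, 5, 64, 64))
print(counts)  # higher count = channel contributing more to the prediction
```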