
Predicting infarct outcomes after extended time window thrombectomy in large vessel occlusion using knowledge guided deep learning.

Dai L, Yuan L, Zhang H, Sun Z, Jiang J, Li Z, Li Y, Zha Y

pubmed logopapers · Jun 6 2025
Predicting the final infarct after extended time window mechanical thrombectomy (MT) is beneficial for treatment planning in acute ischemic stroke (AIS). By introducing guidance from prior knowledge, this study aims to improve the accuracy of deep learning models for post-MT infarct prediction using pre-MT brain perfusion data. This retrospective study collected admission CT perfusion data for AIS patients receiving MT more than 6 hours after symptom onset, from January 2020 to December 2024, across three centers. Infarct on post-MT diffusion-weighted imaging served as ground truth. Five Swin transformer-based models were developed for post-MT infarct segmentation using pre-MT CT perfusion parameter maps: BaselineNet served as the basic model for comparative analysis, CollateralFlowNet included a collateral circulation evaluation score, InfarctProbabilityNet incorporated infarct probability mapping, ArterialTerritoryNet was guided by artery territory mapping, and UnifiedNet combined all prior knowledge sources. Model performance was evaluated using the Dice coefficient and intersection over union (IoU). A total of 221 patients with AIS were included (65.2% women), with a median age of 73 years. The baseline ischemic core estimated from a CT perfusion threshold achieved a Dice coefficient of 0.50 and an IoU of 0.33. BaselineNet improved this to a Dice coefficient of 0.69 and an IoU of 0.53. Compared with BaselineNet, models incorporating medical knowledge demonstrated higher performance: CollateralFlowNet (Dice coefficient 0.72, IoU 0.56), InfarctProbabilityNet (Dice coefficient 0.74, IoU 0.58), ArterialTerritoryNet (Dice coefficient 0.75, IoU 0.60), and UnifiedNet (Dice coefficient 0.82, IoU 0.71) (all P<0.05). In this study, integrating medical knowledge into deep learning models enhanced the accuracy of infarct prediction in AIS patients undergoing extended time window MT.
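For readers who want to reproduce the reported overlap metrics, a minimal sketch of how the Dice coefficient and IoU are typically computed for binary infarct masks is shown below; array shapes and names are illustrative, not taken from the study.

```python
import numpy as np

def dice_and_iou(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8):
    """Dice coefficient and IoU for binary segmentation masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    dice = 2.0 * intersection / (pred.sum() + truth.sum() + eps)
    iou = intersection / (np.logical_or(pred, truth).sum() + eps)
    return dice, iou

# Toy 3D masks (illustrative shapes, not the study's image dimensions)
pred = np.zeros((8, 64, 64), dtype=bool); pred[:, 20:40, 20:40] = True
truth = np.zeros((8, 64, 64), dtype=bool); truth[:, 25:45, 25:45] = True
print(dice_and_iou(pred, truth))
```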

DWI and Clinical Characteristics Correlations in Acute Ischemic Stroke After Thrombolysis

Li, J., Huang, C., Liu, Y., Li, Y., Zhang, J., Xiao, M., Yan, Z., Zhao, H., Zeng, X., Mu, J.

medrxiv logopreprint · Jun 5 2025
Objective: Magnetic resonance diffusion-weighted imaging (DWI) is a crucial tool for diagnosing acute ischemic stroke, yet some patients present as DWI-negative. This study aims to analyze the imaging differences and associated clinical characteristics in acute ischemic stroke patients receiving intravenous thrombolysis, in order to enhance understanding of DWI-negative strokes. Methods: We retrospectively collected clinical data from acute ischemic stroke patients receiving intravenous thrombolysis at the Stroke Center of the First Affiliated Hospital of Chongqing Medical University from January 2017 to June 2023, categorized into DWI-positive and DWI-negative groups. Descriptive statistics, univariate analysis, binary logistic regression, and machine learning models were used to assess the predictive value of clinical features. Additionally, telephone follow-up was conducted for DWI-negative patients to record medication compliance, stroke recurrence, and mortality, with a Fine-Gray competing risk model used to analyze recurrence risk factors. Results: The incidence of DWI-negative ischemic stroke was 22.74%. Factors positively associated with DWI-positive cases included onset-to-needle time (ONT), onset-to-first-MRI time (OMT), NIHSS score at 1 week of hospitalization (NIHSS-1w), hyperlipidemia (HLP), and atrial fibrillation (AF) (p<0.05, OR>1). Conversely, recurrent ischemic stroke (RIS) and platelet count (PLT) were negatively correlated with DWI-positive cases (p<0.05, OR<1). Trial of Org 10172 in Acute Stroke Treatment (TOAST) classification significantly influenced DWI presentation (p=0.01), but the specific impact of etiological subtypes remains unclear. Machine learning models suggest that the features with the highest predictive value, in descending order, are AF, HLP, OMT, ONT, the NIHSS difference within 24 hours post-thrombolysis (NIHSS-d(0-24h)PT), and RIS. Conclusions: NIHSS-1w, OMT, ONT, HLP, and AF can predict DWI-positive findings, while platelet count and RIS are associated with DWI-negative cases. AF and HLP demonstrate the highest predictive value. DWI-negative patients have a higher short-term risk of stroke recurrence than of mortality, with a potential correlation between TOAST classification and recurrence risk.
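As an illustration of the kind of binary logistic regression analysis described above, here is a hedged scikit-learn sketch; the column names echo the abstract's abbreviations, but all values are synthetic and the modelling choices are assumptions, not the authors' exact pipeline.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic data frame; columns mirror the abstract's abbreviations, values are random.
rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "ONT": rng.normal(150, 40, n),       # onset-to-needle time (min)
    "OMT": rng.normal(300, 90, n),       # onset-to-first-MRI time (min)
    "NIHSS_1w": rng.integers(0, 20, n),  # NIHSS at one week
    "HLP": rng.integers(0, 2, n),        # hyperlipidemia (0/1)
    "AF": rng.integers(0, 2, n),         # atrial fibrillation (0/1)
    "PLT": rng.normal(220, 50, n),       # platelet count
    "RIS": rng.integers(0, 2, n),        # recurrent ischemic stroke (0/1)
})
y = rng.integers(0, 2, n)                # 1 = DWI-positive, 0 = DWI-negative

X_train, X_test, y_train, y_test = train_test_split(df, y, test_size=0.3, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("odds ratios:", dict(zip(df.columns, np.exp(model.coef_[0]).round(2))))
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```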

StrokeNeXt: an automated stroke classification model using computed tomography and magnetic resonance images.

Ekingen E, Yildirim F, Bayar O, Akbal E, Sercek I, Hafeez-Baig A, Dogan S, Tuncer T

pubmed logopapers · Jun 5 2025
Stroke ranks among the leading causes of disability and death worldwide. Timely detection can reduce its impact. Machine learning delivers powerful tools for image-based diagnosis. This study introduces StrokeNeXt, a lightweight convolutional neural network (CNN) for computed tomography (CT) and magnetic resonance (MR) scans, and couples it with deep feature engineering (DFE) to improve accuracy and facilitate clinical deployment. We assembled a multimodal dataset of CT and MR images, each labeled as stroke or control. StrokeNeXt employs a ConvNeXt-inspired block and a squeeze-and-excitation (SE) unit across four stages: stem, StrokeNeXt block, downsampling, and output. In the DFE pipeline, StrokeNeXt extracts features from fixed-size patches, iterative neighborhood component analysis (INCA) selects the top features, and a t-algorithm-based k-nearest neighbors (tkNN) classifier performs the final classification. StrokeNeXt achieved 93.67% test accuracy on the assembled dataset. Integrating DFE raised accuracy to 97.06%. This combined approach outperformed StrokeNeXt alone and reduced classification time. StrokeNeXt paired with DFE offers an effective solution for stroke detection on CT and MR images. Its high accuracy and small number of learnable parameters make it lightweight and suitable for integration into clinical workflows. This research lays a foundation for real-time decision support in emergency and radiology settings.
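The DFE stage described above (patch features, INCA selection, then a tkNN classifier) can be approximated with off-the-shelf components. The sketch below substitutes a generic univariate selector and a standard kNN for the paper's custom INCA and tkNN, so it is an assumption-laden stand-in rather than a reproduction of the method.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic "deep features" standing in for StrokeNeXt patch embeddings.
X, y = make_classification(n_samples=400, n_features=512, n_informative=40, random_state=0)

# Select-then-classify pipeline: keep the 64 most discriminative features, then kNN.
pipeline = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=64),
    KNeighborsClassifier(n_neighbors=5),
)
scores = cross_val_score(pipeline, X, y, cv=5)
print("mean CV accuracy:", round(scores.mean(), 3))
```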

Noise-induced self-supervised hybrid UNet transformer for ischemic stroke segmentation with limited data annotations.

Soh WK, Rajapakse JC

pubmed logopapers · Jun 5 2025
We extend the Hybrid UNet Transformer (HUT) foundation model, which combines the advantages of CNN and Transformer architectures, with a noisy self-supervised approach, and demonstrate it on an ischemic stroke lesion segmentation task. We introduce a self-supervised approach using a noise anchor and show that it can perform better than a supervised approach under a limited amount of annotated data. We supplement our pre-training process with an additional unannotated CT perfusion dataset to validate our approach. Compared to the supervised version, the noisy self-supervised HUT (HUT-NSS) outperforms its counterpart by a margin of 2.4% in dice score. On average, HUT-NSS gained a further margin of 7.2% in dice score and 28.1% in Hausdorff distance over the state-of-the-art network USSLNet on the CT perfusion scans of the Ischemic Stroke Lesion Segmentation (ISLES2018) dataset. With limited annotated data, HUT-NSS gained 7.87% in dice score over USSLNet when 50% of the annotated data was used for training, 7.47% when 10% was used, and 5.34% when 1% was used. The code is available at https://github.com/vicsohntu/HUTNSS_CT.

Automated Brain Tumor Classification and Grading Using Multi-scale Graph Neural Network with Spatio-Temporal Transformer Attention Through MRI Scans.

Srivastava S, Jain P, Pandey SK, Dubey G, Das NN

pubmed logopapers · Jun 5 2025
The medical field uses magnetic resonance imaging (MRI) as an essential diagnostic tool that provides doctors with non-invasive images of brain structures and pathological conditions. Brain tumor detection is a vital application that needs specific and effective approaches for both medical diagnosis and treatment procedures. The challenges of manual examination of MRI scans stem from inconsistent tumor features, including heterogeneity and irregular dimensions, which result in inaccurate assessments of tumor size. To address these challenges, this paper proposes an Automated Classification and Grading Diagnosis Model (ACGDM) using MRI images. Unlike conventional methods, ACGDM introduces a Multi-Scale Graph Neural Network (MSGNN), which dynamically captures hierarchical and multi-scale dependencies in MRI data, enabling more accurate feature representation and contextual analysis. Additionally, the Spatio-Temporal Transformer Attention Mechanism (STTAM) effectively models both spatial MRI patterns and temporal evolution by incorporating cross-frame dependencies, enhancing the model's sensitivity to subtle disease progression. By analyzing multi-modal MRI sequences, ACGDM dynamically adjusts its focus across spatial and temporal dimensions, enabling precise identification of salient features. Simulations are conducted using Python and standard libraries to evaluate the model on the BRATS 2018, 2019, and 2020 datasets and the Br235H dataset, encompassing diverse MRI scans with expert annotations. Extensive experimentation demonstrates 99.8% accuracy in detecting various tumor types, showcasing the model's potential to revolutionize diagnostic practices and improve patient outcomes.
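The transformer attention component referred to above ultimately rests on scaled dot-product attention. The NumPy sketch below shows that generic operation only; it is not an implementation of the paper's MSGNN or STTAM, and the token shapes are invented for illustration.

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """Generic attention: softmax(Q K^T / sqrt(d)) V over the last two axes."""
    d = q.shape[-1]
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

# Toy "spatio-temporal" tokens: 2 scans, 16 patch tokens each, 32-dim embeddings.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(2, 16, 32))
out = scaled_dot_product_attention(tokens, tokens, tokens)  # self-attention
print(out.shape)  # (2, 16, 32)
```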

Prediction of impulse control disorders in Parkinson's disease: a longitudinal machine learning study

Vamvakas, A., van Balkom, T., van Wingen, G., Booij, J., Weintraub, D., Berendse, H. W., van den Heuvel, O. A., Vriend, C.

medrxiv logopreprint · Jun 5 2025
Background: Impulse control disorders (ICD) in Parkinson's disease (PD) patients mainly occur as adverse effects of dopamine replacement therapy. Despite several known risk factors associated with ICD development, ICD cannot yet be accurately predicted at PD diagnosis. Objectives: We aimed to investigate the predictability of incident ICD from baseline measures of demographic, clinical, dopamine transporter single photon emission computed tomography (DAT-SPECT), and genetic variables. Methods: We used demographic and clinical data of medication-free PD patients from two longitudinal datasets: the Parkinson's Progression Markers Initiative (PPMI) (n=311) and Amsterdam UMC (n=72). We extracted radiomic and latent features from DAT-SPECT. We used single nucleotide polymorphisms (SNPs) from PPMI's NeuroX and exome sequencing data. Four machine learning classifiers were trained on combinations of the input feature sets to predict incident ICD at any follow-up assessment. Classification performance was measured with 10x5-fold cross-validation. Results: ICD prevalence at any follow-up was 0.32. The highest performance in predicting incident ICD (AUC=0.66) was achieved by the models trained on clinical features only. Anxiety severity and age at PD onset were identified as the most important features. Performance did not improve when features from DAT-SPECT or SNPs were added. We observed significantly higher performance (AUC=0.74) when classifying patients who developed ICD within four years of diagnosis compared with those who remained ICD-negative for seven or more years. Conclusions: Prediction accuracy for later ICD development, at the time of PD diagnosis, is limited; however, it increases for shorter time-to-event predictions. Neither DAT-SPECT nor genetic data improve the predictability obtained using demographic and clinical variables alone.
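The 10x5-fold cross-validation protocol mentioned above corresponds to 5 folds repeated 10 times. A minimal scikit-learn sketch is given below, with a random forest standing in for the unnamed classifiers and fully synthetic features; only the class imbalance loosely mirrors the reported ICD prevalence of 0.32.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

# Synthetic stand-in for baseline clinical features (e.g., anxiety severity, age at onset).
X, y = make_classification(n_samples=311, n_features=10, weights=[0.68, 0.32], random_state=0)

# "10x5-fold" cross-validation: 5 stratified folds repeated 10 times, scored by AUC.
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=10, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
aucs = cross_val_score(clf, X, y, cv=cv, scoring="roc_auc")
print(f"mean AUC over {len(aucs)} folds: {aucs.mean():.2f}")
```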

Subgrouping autism and ADHD based on structural MRI population modelling centiles.

Pecci-Terroba C, Lai MC, Lombardo MV, Chakrabarti B, Ruigrok ANV, Suckling J, Anagnostou E, Lerch JP, Taylor MJ, Nicolson R, Georgiades S, Crosbie J, Schachar R, Kelley E, Jones J, Arnold PD, Seidlitz J, Alexander-Bloch AF, Bullmore ET, Baron-Cohen S, Bedford SA, Bethlehem RAI

pubmed logopapers · Jun 4 2025
Autism and attention deficit hyperactivity disorder (ADHD) are two highly heterogeneous neurodevelopmental conditions with variable underlying neurobiology. Imaging studies have yielded varied results, and it is now clear that there is unlikely to be one characteristic neuroanatomical profile of either condition. Parsing this heterogeneity could allow us to identify more homogeneous subgroups, either within or across conditions, which may be more clinically informative. This has been a pivotal goal for neurodevelopmental research using both clinical and neuroanatomical features, though results thus far have again been inconsistent with regard to the number and characteristics of subgroups. Here, we use population modelling to cluster a multi-site dataset based on global and regional centile scores of cortical thickness, surface area and grey matter volume. We use HYDRA, a novel semi-supervised machine learning algorithm which clusters based on differences from controls, and compare its performance to a traditional clustering approach. We identified distinct subgroups within autism and ADHD, as well as across diagnoses, often with opposite neuroanatomical alterations relative to controls. These subgroups were characterised by different combinations of increased or decreased patterns of morphometrics. We did not find significant clinical differences across subgroups. Crucially, however, the number of subgroups and their membership differed vastly depending on the chosen features and the algorithm used, highlighting the impact and importance of careful method selection. We highlight the importance of examining heterogeneity in autism and ADHD and demonstrate that population modelling is a useful tool to study subgrouping in these conditions. We identified subgroups with distinct patterns of alterations relative to controls but note that these results rely heavily on the algorithm used, and we encourage detailed reporting of methods and features used in future studies.
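As a sketch of the "traditional clustering approach" used as a comparator above, the snippet below clusters synthetic centile scores with k-means and picks the number of clusters by silhouette score; HYDRA itself is a dedicated semi-supervised tool and is not reproduced here, and the data dimensions are invented.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

# Synthetic centile scores (rows: participants, columns: regional morphometric centiles).
rng = np.random.default_rng(0)
centiles = rng.uniform(0, 1, size=(500, 68))
X = StandardScaler().fit_transform(centiles)

# Traditional comparator: k-means with the number of clusters chosen by silhouette score.
for k in range(2, 6):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    print(k, round(silhouette_score(X, labels), 3))
```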

Vascular segmentation of functional ultrasound images using deep learning.

Sebia H, Guyet T, Pereira M, Valdebenito M, Berry H, Vidal B

pubmed logopapers · Jun 4 2025
Segmentation of medical images is a fundamental task with numerous applications. While MRI, CT, and PET modalities have significantly benefited from deep learning segmentation techniques, more recent modalities, like functional ultrasound (fUS), have seen limited progress. fUS is a non-invasive imaging method that measures changes in cerebral blood volume (CBV) with high spatio-temporal resolution. However, distinguishing arterioles from venules in fUS is challenging due to opposing blood flow directions within the same pixel. Ultrasound localization microscopy (ULM) can enhance resolution by tracking microbubble contrast agents, but it is invasive and lacks dynamic CBV quantification. In this paper, we introduce the first deep learning-based application for fUS image segmentation, capable of differentiating signals based on vertical flow direction (upward vs. downward), using ULM-based automatic annotation, and enabling dynamic CBV quantification. In the cortical vasculature, this distinction in flow direction provides a proxy for differentiating arteries from veins. We evaluate various UNet architectures on fUS images of rat brains, achieving competitive segmentation performance, with 90% accuracy, a 71% F1 score, and an IoU of 0.59, using only 100 temporal frames from a fUS stack. These results are comparable to those from tubular structure segmentation in other imaging modalities. Additionally, models trained on resting-state data generalize well to images captured during visual stimulation, highlighting robustness. Although it does not reach the full granularity of ULM, the proposed method provides a practical, non-invasive and cost-effective solution for inferring flow direction, which is particularly valuable in scenarios where ULM is not available or feasible. Our pipeline shows high linear correlation coefficients between signals from predicted and actual compartments, showcasing its ability to accurately capture blood flow dynamics.
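The reported pixel-wise metrics (accuracy, F1 score, IoU) can be computed directly from flattened masks; a toy sketch with synthetic labels follows, where the frame size and corruption rate are illustrative only.

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, jaccard_score

# Toy flattened per-pixel labels for one frame: 1 = upward flow, 0 = downward/background.
rng = np.random.default_rng(0)
truth = rng.integers(0, 2, size=128 * 128)
pred = truth.copy()
flip = rng.choice(truth.size, size=2000, replace=False)  # corrupt some predictions
pred[flip] = 1 - pred[flip]

print("accuracy:", round(accuracy_score(truth, pred), 3))
print("F1 score:", round(f1_score(truth, pred), 3))
print("IoU     :", round(jaccard_score(truth, pred), 3))
```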

A review on learning-based algorithms for tractography and human brain white matter tracts recognition.

Barati Shoorche A, Farnia P, Makkiabadi B, Leemans A

pubmed logopapers · Jun 4 2025
Human brain fiber tractography using diffusion magnetic resonance imaging is a crucial stage in mapping brain white matter structures, pre-surgical planning, and extracting connectivity patterns. Accurate and reliable tractography, by providing detailed geometric information about the position of neural pathways, minimizes the risk of damage during neurosurgical procedures. Both tractography itself and its post-processing steps, such as bundle segmentation, are usually used in these contexts. Many approaches have been put forward in the past decades, and recently multiple data-driven tractography algorithms and automatic segmentation pipelines have been proposed to address the limitations of traditional methods. Several of these recent methods are based on learning algorithms that have demonstrated promising results. In this study, in addition to introducing diffusion MRI datasets, we review learning-based algorithms such as conventional machine learning, deep learning, reinforcement learning and dictionary learning methods that have been used for white matter tract, nerve and pathway recognition, as well as whole-brain streamline or whole-brain tractogram creation. The contribution is to discuss both tractography and tract recognition methods, to extend previous related reviews with the most recent methods, covering architectures as well as network details, to assess the efficiency of learning-based methods through a comprehensive comparison in this field, and finally to demonstrate the important role of learning-based methods in tractography.

3D Quantification of Viral Transduction Efficiency in Living Human Retinal Organoids

Rogler, T. S., Salbaum, K. A., Brinkop, A. T., Sonntag, S. M., James, R., Shelton, E. R., Thielen, A., Rose, R., Babutzka, S., Klopstock, T., Michalakis, S., Serwane, F.

biorxiv logopreprint · Jun 4 2025
The development of therapeutics builds on testing their efficiency in vitro. To optimize gene therapies, for example, fluorescent reporters expressed by treated cells are typically utilized as readouts. Traditionally, their global fluorescence signal has been used as an estimate of transduction efficiency. However, analysis of individual cells within a living 3D tissue remains a challenge. Readout on a single-cell level can be realized via fluorescence-based flow cytometry, at the cost of tissue dissociation and loss of spatial information. Complementarily, spatial information is accessible via immunofluorescence of fixed samples. Both approaches impede time-dependent studies on the delivery of the vector to the cells. Here, quantitative 3D characterization of viral transduction efficiencies in living retinal organoids is introduced. The approach quantifies gene delivery efficiency in space and time, leveraging human retinal organoids, engineered adeno-associated virus (AAV) vectors, confocal live imaging, and deep learning-based image segmentation. The integration of these tools in an organoid imaging and analysis pipeline allows quantitative testing of future treatments and other gene delivery methods. It has the potential to guide the development of therapies in biomedical applications.