Inconsistency of AI in intracranial aneurysm detection with varying dose and image reconstruction.

Goelz L, Laudani A, Genske U, Scheel M, Bohner G, Bauknecht HC, Mutze S, Hamm B, Jahnke P

pubmed · papers · Jun 6 2025
Scanner-related changes in data quality are common in medical imaging, yet monitoring their impact on diagnostic AI performance remains challenging. In this study, we performed standardized consistency testing of an FDA-cleared and CE-marked AI for triage and notification of intracranial aneurysms across changes in image data quality caused by dose and image reconstruction. Our assessment was based on repeated examinations of a head CT phantom designed for AI evaluation, replicating a patient with three intracranial aneurysms in the anterior, middle and posterior circulation. We show that the AI maintains stable performance within the medium dose range but produces inconsistent results at reduced dose and, unexpectedly, at higher dose when filtered back projection is used. Data quality standards required for AI are stricter than those for neuroradiologists, who report higher aneurysm visibility rates and experience performance degradation only at substantially lower doses, with no decline at higher doses.

Predicting infarct outcomes after extended time window thrombectomy in large vessel occlusion using knowledge guided deep learning.

Dai L, Yuan L, Zhang H, Sun Z, Jiang J, Li Z, Li Y, Zha Y

pubmed · papers · Jun 6 2025
Predicting the final infarct after extended-time-window mechanical thrombectomy (MT) is beneficial for treatment planning in acute ischemic stroke (AIS). By introducing guidance from prior knowledge, this study aims to improve the accuracy of deep learning models for post-MT infarct prediction using pre-MT brain perfusion data. This retrospective study collected admission CT perfusion data for AIS patients receiving MT more than 6 hours after symptom onset, from January 2020 to December 2024, across three centers. Infarct on post-MT diffusion-weighted imaging served as ground truth. Five Swin-transformer-based models were developed for post-MT infarct segmentation from pre-MT CT perfusion parameter maps: BaselineNet served as the basic model for comparative analysis, CollateralFlowNet added a collateral circulation evaluation score, InfarctProbabilityNet incorporated infarct probability mapping, ArterialTerritoryNet was guided by arterial territory mapping, and UnifiedNet combined all prior knowledge sources. Model performance was evaluated using the Dice coefficient and intersection over union (IoU). A total of 221 patients with AIS were included (65.2% women; median age 73 years). The baseline ischemic core defined by a CT perfusion threshold achieved a Dice coefficient of 0.50 and IoU of 0.33; BaselineNet improved these to 0.69 and 0.53. Compared with BaselineNet, the models incorporating medical knowledge performed better: CollateralFlowNet (Dice 0.72, IoU 0.56), InfarctProbabilityNet (Dice 0.74, IoU 0.58), ArterialTerritoryNet (Dice 0.75, IoU 0.60), and UnifiedNet (Dice 0.82, IoU 0.71) (all P<0.05). Integrating medical knowledge into deep learning models thus enhanced the accuracy of infarct prediction in AIS patients undergoing extended-time-window MT.
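
Editor's note: the Dice coefficient and IoU used to score these models are standard overlap measures between a predicted and a reference segmentation mask. A minimal sketch, assuming binary NumPy masks of equal shape:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

def iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """IoU = |A ∩ B| / |A ∪ B| for binary masks."""
    intersection = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return intersection / union if union > 0 else 1.0
```

The two metrics are monotonically related (Dice = 2·IoU / (1 + IoU)), which is why they rank the five models above in the same order.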

Reliable Evaluation of MRI Motion Correction: Dataset and Insights

Kun Wang, Tobit Klug, Stefan Ruschke, Jan S. Kirschke, Reinhard Heckel

arxiv · preprint · Jun 6 2025
Correcting motion artifacts in MRI is important, as they can hinder accurate diagnosis. However, evaluating deep-learning-based and classical motion correction methods remains fundamentally difficult due to the lack of accessible ground-truth target data. To address this challenge, we study three evaluation approaches, each with its merits and shortcomings: real-world evaluation based on reference scans, simulated motion, and reference-free evaluation. To enable evaluation with real-world motion artifacts, we release PMoC3D, a dataset of unprocessed Paired Motion-Corrupted 3D brain MRI data. To advance evaluation quality, we introduce MoMRISim, a feature-space metric trained for evaluating motion reconstructions. Assessing each approach, we find real-world evaluation together with MoMRISim, while not perfect, to be the most reliable. Evaluation based on simulated motion systematically exaggerates algorithm performance, and reference-free evaluation overrates oversmoothed deep learning outputs.
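
Editor's note: MoMRISim is a trained metric and is not reproduced here, but the general shape of a feature-space metric can be sketched: embed both the reconstruction and a reference with a feature extractor and compare embeddings. In this hypothetical sketch the encoder is a placeholder for any trained network:

```python
import torch

def feature_space_score(encoder: torch.nn.Module,
                        recon: torch.Tensor,
                        reference: torch.Tensor) -> float:
    """Generic feature-space metric: lower means the motion-corrected
    image is closer to the reference in the encoder's feature space.
    (Illustrative only; MoMRISim's training objective is not shown.)"""
    encoder.eval()
    with torch.no_grad():
        f_recon = encoder(recon)
        f_ref = encoder(reference)
    return torch.linalg.vector_norm(f_recon - f_ref, dim=-1).mean().item()

# Example with a dummy encoder standing in for a trained feature extractor:
dummy = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(64, 16))
a, b = torch.randn(4, 8, 8), torch.randn(4, 8, 8)
print(feature_space_score(dummy, a, b))
```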

TissUnet: Improved Extracranial Tissue and Cranium Segmentation for Children through Adulthood

Markian Mandzak, Elvira Yang, Anna Zapaishchykova, Yu-Hui Chen, Lucas Heilbroner, John Zielke, Divyanshu Tak, Reza Mojahed-Yazdi, Francesca Romana Mussa, Zezhong Ye, Sridhar Vajapeyam, Viviana Benitez, Ralph Salloum, Susan N. Chi, Houman Sotoudeh, Jakob Seidlitz, Sabine Mueller, Hugo J. W. L. Aerts, Tina Y. Poussaint, Benjamin H. Kann

arxiv · preprint · Jun 6 2025
Extracranial tissues visible on brain magnetic resonance imaging (MRI) may hold significant value for characterizing health conditions and clinical decision-making, yet they are rarely quantified. Current tools have not been widely validated, particularly in settings of developing brains or underlying pathology. We present TissUnet, a deep learning model that segments skull bone, subcutaneous fat, and muscle from routine three-dimensional T1-weighted MRI, with or without contrast enhancement. The model was trained on 155 paired MRI-computed tomography (CT) scans and validated across nine datasets covering a wide age range and including individuals with brain tumors. In comparison to AI-CT-derived labels from 37 MRI-CT pairs, TissUnet achieved a median Dice coefficient of 0.79 [IQR: 0.77-0.81] in a healthy adult cohort. In a second validation using expert manual annotations, median Dice was 0.83 [IQR: 0.83-0.84] in healthy individuals and 0.81 [IQR: 0.78-0.83] in tumor cases, outperforming the previous state-of-the-art method. Acceptability testing resulted in an 89% acceptance rate after adjudication by a tie-breaker (N=108 MRIs), and TissUnet demonstrated excellent performance in the blinded comparative review (N=45 MRIs), including both healthy and tumor cases in pediatric populations. TissUnet enables fast, accurate, and reproducible segmentation of extracranial tissues, supporting large-scale studies on craniofacial morphology, treatment effects, and cardiometabolic risk using standard brain T1w MRI.
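
Editor's note: the "median Dice [IQR]" reporting used here is the usual convention for skewed per-scan score distributions. Computing it from per-scan Dice values is straightforward with NumPy (the scores below are made-up placeholders):

```python
import numpy as np

per_scan_dice = np.array([0.81, 0.79, 0.83, 0.77, 0.80])  # placeholder values
median = np.median(per_scan_dice)
q1, q3 = np.percentile(per_scan_dice, [25, 75])
print(f"Dice {median:.2f} [IQR: {q1:.2f}-{q3:.2f}]")
```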

Magnetic resonance imaging and the evaluation of vestibular schwannomas: a systematic review

Lee, K. S., Wijetilake, N., Connor, S., Vercauteren, T., Shapey, J.

medrxiv · preprint · Jun 6 2025
Introduction: The assessment of vestibular schwannoma (VS) requires a standardized measurement approach, as growth is a key element in defining the treatment strategy for VS. Volumetric measurements offer higher sensitivity and precision, but existing segmentation methods are labour-intensive, lack standardisation, and are prone to variability and subjectivity. A core set of measurement indicators, reported consistently, would support clinical decision-making and facilitate evidence synthesis. This systematic review aimed to identify the indicators used in (1) magnetic resonance imaging (MRI) acquisition, (2) measurement, and (3) growth assessment of VS. This work is expected to inform a Delphi consensus.

Methods: Systematic searches of Medline, Embase and Cochrane Central were undertaken on 4th October 2024. Studies that assessed the evaluation of VS with MRI between 2014 and 2024 were included.

Results: The final dataset consisted of 102 studies and 19,001 patients. Eighty-six (84.3%) studies employed post-contrast T1 as the MRI acquisition of choice for evaluating VS. Nine (8.8%) studies additionally employed heavily T2-weighted sequences such as constructive interference in steady state (CISS) and FIESTA-C. Only 45 (44.1%) studies reported the slice thickness, with the majority, 38 (84.4%), choosing <3 mm. Fifty-eight (56.8%) studies measured volume, whilst 49 (48.0%) measured the largest linear dimension; 14 (13.7%) studies used both measurements. Four studies employed semi-automated or automated segmentation to measure VS volumes. Of 68 studies investigating growth, 54 (79.4%) provided a threshold. Significant variation in volumetric growth criteria was observed, but the threshold for significant percentage change reported by most studies (n = 18) was 20%.

Conclusion: Substantial variation exists across the literature in MRI acquisition and in methods for evaluating VS measurement and growth. This lack of standardization is likely attributable to resource constraints and the labour-intensive nature of currently available volumetric segmentation methods. Having identified the indicators employed in the literature, this study aims to inform a Delphi consensus for the standardized measurement of VS and to promote uptake of data-driven, artificial-intelligence-based measurement tools.
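
Editor's note: as a concrete illustration of the growth criterion most studies in this review reported, a 20% relative volume change threshold can be applied as follows (variable names and example volumes are illustrative):

```python
def significant_growth(baseline_vol_mm3: float,
                       followup_vol_mm3: float,
                       threshold: float = 0.20) -> bool:
    """Flag VS growth when the relative volume change exceeds the
    threshold (20% was the value most studies in the review used)."""
    change = (followup_vol_mm3 - baseline_vol_mm3) / baseline_vol_mm3
    return change > threshold

# Example: 1200 mm^3 -> 1500 mm^3 is a 25% increase, i.e. significant growth.
print(significant_growth(1200.0, 1500.0))  # True
```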

Detecting neurodegenerative changes in glaucoma using deep mean kurtosis-curve-corrected tractometry

Kasa, L. W., Schierding, W., Kwon, E., Holdsworth, S., Danesh-Meyer, H. V.

medrxiv · preprint · Jun 6 2025
Glaucoma is increasingly recognized as a neurodegenerative condition involving both retinal and central nervous system structures. Here, we present an integrated framework that combines MK-Curve-corrected diffusion kurtosis imaging (DKI), tractometry, and deep autoencoder-based normative modeling to detect localized white matter abnormalities associated with glaucoma. Using UK Biobank diffusion MRI data, we show that the MK-Curve approach corrects anatomically implausible values and improves the reliability of DKI metrics, particularly mean (MK), radial (RK), and axial kurtosis (AK), in regions of complex fiber architecture. Tractometry revealed reduced MK in glaucoma patients along the optic radiation, inferior longitudinal fasciculus, and inferior fronto-occipital fasciculus, but not in a non-visual control tract, supporting disease specificity. These abnormalities were spatially localized, with significant changes observed at multiple points along the tracts. MK demonstrated greater sensitivity than mean diffusivity (MD) and exhibited altered distributional features, reflecting microstructural heterogeneity not captured by standard metrics. Node-wise MK values in the right optic radiation showed weak but significant correlations with retinal OCT measures (ganglion cell layer and retinal nerve fiber layer thickness), reinforcing the biological relevance of these findings. Deep autoencoder-based modeling further enabled subject-level anomaly detection that aligned spatially with group-level changes and outperformed traditional approaches. Together, our results highlight the potential of advanced diffusion modeling and deep learning for sensitive, individualized detection of glaucomatous neurodegeneration and support their integration into future multimodal imaging pipelines in neuro-ophthalmology.
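
Editor's note: autoencoder-based normative modeling of the kind described here typically trains on healthy-control tract profiles only and scores new subjects by reconstruction error. A minimal PyTorch sketch of that pattern, with illustrative dimensions rather than the paper's configuration:

```python
import torch
import torch.nn as nn

class TractProfileAE(nn.Module):
    """Toy autoencoder over node-wise kurtosis values along a tract
    (100 nodes and the layer sizes are illustrative assumptions)."""
    def __init__(self, n_nodes: int = 100, latent: int = 8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_nodes, 32), nn.ReLU(),
                                     nn.Linear(32, latent))
        self.decoder = nn.Sequential(nn.Linear(latent, 32), nn.ReLU(),
                                     nn.Linear(32, n_nodes))

    def forward(self, x):
        return self.decoder(self.encoder(x))

def anomaly_score(model: TractProfileAE, profile: torch.Tensor) -> float:
    """Reconstruction error against the healthy norm: high values flag
    tract profiles the control-trained model cannot reproduce."""
    model.eval()
    with torch.no_grad():
        recon = model(profile)
    return torch.mean((recon - profile) ** 2).item()
```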

StrokeNeXt: an automated stroke classification model using computed tomography and magnetic resonance images.

Ekingen E, Yildirim F, Bayar O, Akbal E, Sercek I, Hafeez-Baig A, Dogan S, Tuncer T

pubmed · papers · Jun 5 2025
Stroke ranks among the leading causes of disability and death worldwide, and timely detection can reduce its impact. Machine learning delivers powerful tools for image-based diagnosis. This study introduces StrokeNeXt, a lightweight convolutional neural network (CNN) for computed tomography (CT) and magnetic resonance (MR) scans, and couples it with deep feature engineering (DFE) to improve accuracy and facilitate clinical deployment. We assembled a multimodal dataset of CT and MR images, each labeled as stroke or control. StrokeNeXt employs a ConvNeXt-inspired block and a squeeze-and-excitation (SE) unit across four stages: stem, StrokeNeXt block, downsampling, and output. In the DFE pipeline, StrokeNeXt extracts features from fixed-size patches, iterative neighborhood component analysis (INCA) selects the top features, and a t-algorithm-based k-nearest neighbors (tkNN) classifier performs the final classification. StrokeNeXt alone achieved 93.67% test accuracy on the assembled dataset; integrating DFE raised accuracy to 97.06% while also reducing classification time. StrokeNeXt paired with DFE therefore offers an effective solution for stroke detection on CT and MR images. Its high accuracy and small number of learnable parameters make it lightweight and suitable for integration into clinical workflows. This research lays a foundation for real-time decision support in emergency and radiology settings.
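
Editor's note: the DFE pipeline follows a common pattern: extract deep features, select an informative subset, then classify with a nearest-neighbor rule. The sketch below uses scikit-learn stand-ins, univariate ranking in place of INCA and plain kNN in place of tkNN, with random placeholder features; it illustrates the pipeline shape, not the paper's exact method:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# X: deep features extracted from fixed-size patches (random placeholders
# here); y: stroke vs control labels.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(200, 512)), rng.integers(0, 2, size=200)

# Rank features and keep the top subset, then classify with kNN.
clf = make_pipeline(SelectKBest(f_classif, k=64),
                    KNeighborsClassifier(n_neighbors=5))
clf.fit(X[:150], y[:150])
print("held-out accuracy:", clf.score(X[150:], y[150:]))
```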

Noise-induced self-supervised hybrid UNet transformer for ischemic stroke segmentation with limited data annotations.

Soh WK, Rajapakse JC

pubmed · papers · Jun 5 2025
We extend the Hybrid Unet Transformer (HUT) foundation model, which combines the advantages of CNN and Transformer architectures, with a noisy self-supervised approach, and demonstrate it on an ischemic stroke lesion segmentation task. We introduce a self-supervised approach using a noise anchor and show that it can outperform a supervised approach when only a limited amount of annotated data is available. We supplement our pre-training with an additional unannotated CT perfusion dataset to validate our approach. The noisy self-supervised HUT (HUT-NSS) outperforms its supervised counterpart by a margin of 2.4% in Dice score. On the CT perfusion scans of the Ischemic Stroke Lesion Segmentation (ISLES2018) dataset, HUT-NSS on average gained a further 7.2% in Dice score and 28.1% in Hausdorff distance over the state-of-the-art network USSLNet. With limited annotations, HUT-NSS gained 7.87% in Dice score over USSLNet when 50% of the annotated data was used for training, 7.47% when 10% was used, and 5.34% when 1% was used. The code is available at https://github.com/vicsohntu/HUTNSS_CT.
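
Editor's note: the Hausdorff distance reported here alongside Dice measures worst-case boundary disagreement between two segmentations. A minimal sketch using SciPy, computed on the foreground coordinates of binary masks:

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff_distance(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Symmetric Hausdorff distance between two binary masks: the larger
    of the two directed distances over their foreground coordinates."""
    pts_a = np.argwhere(mask_a)  # (N, ndim) foreground coordinates
    pts_b = np.argwhere(mask_b)
    return max(directed_hausdorff(pts_a, pts_b)[0],
               directed_hausdorff(pts_b, pts_a)[0])
```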

Automated Brain Tumor Classification and Grading Using Multi-scale Graph Neural Network with Spatio-Temporal Transformer Attention Through MRI Scans.

Srivastava S, Jain P, Pandey SK, Dubey G, Das NN

pubmed · papers · Jun 5 2025
Magnetic resonance imaging (MRI) is an essential diagnostic tool that provides clinicians with non-invasive images of brain structures and pathological conditions. Brain tumor detection is a vital application that needs specific and effective approaches for both diagnosis and treatment planning. Manual examination of MRI scans is challenged by inconsistent tumor features, including heterogeneity and irregular dimensions, which lead to inaccurate assessment of tumor size. To address these challenges, this paper proposes an Automated Classification and Grading Diagnosis Model (ACGDM) using MRI images. Unlike conventional methods, ACGDM introduces a Multi-Scale Graph Neural Network (MSGNN), which dynamically captures hierarchical and multi-scale dependencies in MRI data, enabling more accurate feature representation and contextual analysis. Additionally, a Spatio-Temporal Transformer Attention Mechanism (STTAM) models both spatial MRI patterns and their temporal evolution by incorporating cross-frame dependencies, enhancing the model's sensitivity to subtle disease progression. By analyzing multi-modal MRI sequences, ACGDM dynamically adjusts its focus across spatial and temporal dimensions, enabling precise identification of salient features. The model is evaluated, using Python and standard libraries, on the BRATS 2018, 2019 and 2020 datasets and the Br235H dataset, which encompass diverse MRI scans with expert annotations. Extensive experimentation demonstrates 99.8% accuracy in detecting various tumor types, showcasing its potential to revolutionize diagnostic practices and improve patient outcomes.
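
Editor's note: the spatio-temporal attention described here builds on standard scaled dot-product attention. A minimal sketch of that core operation, independent of the paper's specific MSGNN/STTAM design and with illustrative shapes:

```python
import torch
import torch.nn.functional as F

def scaled_dot_product_attention(q, k, v):
    """Core operation behind transformer-style attention mechanisms:
    softmax(Q K^T / sqrt(d)) V."""
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d ** 0.5
    return F.softmax(scores, dim=-1) @ v

# Tokens could be spatial patches of one scan or the same region across
# sequential scans (the temporal axis); shapes here are illustrative.
q = k = v = torch.randn(2, 16, 64)   # (batch, tokens, dim)
out = scaled_dot_product_attention(q, k, v)
print(out.shape)  # torch.Size([2, 16, 64])
```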