
Cardiovascular risk in childhood and young adulthood is associated with the hemodynamic response function in midlife: The Bogalusa Heart Study.

Chuang KC, Naseri M, Ramakrishnapillai S, Madden K, Amant JS, McKlveen K, Gwizdala K, Dhullipudi R, Bazzano L, Carmichael O

PubMed · Jun 18, 2025
In functional MRI, a hemodynamic response function (HRF) describes how neural events are translated into a blood oxygenation response detected through imaging. The HRF has the potential to quantify neurovascular mechanisms by which cardiovascular risks modify brain health, but relationships among HRF characteristics, brain health, and cardiovascular modifiers of brain health have not been well studied to date. One hundred and thirty-seven middle-aged participants (mean age: 53.6 ± 4.7 years; 62% female; 78% White American and 22% African American) in this exploratory analysis from the Bogalusa Heart Study completed clinical evaluations from childhood to midlife and an adaptive Stroop task during fMRI in midlife. The HRFs of each participant within seventeen brain regions of interest (ROIs) previously identified as activated by this task were calculated using a convolutional neural network approach. Faster and more efficient neurovascular functioning was characterized in terms of five HRF characteristics: faster time to peak (TTP), shorter full width at half maximum (FWHM), smaller peak magnitude (PM), smaller trough magnitude (TM), and smaller area under the HRF curve (AUHRF). Composite HRF summary characteristics over all ROIs were calculated for multivariable and simple linear regression analyses. In multivariable models, a faster and more efficient HRF characteristic was found in non-smokers compared to smokers (AUHRF, p = 0.029). Faster and more efficient HRF characteristics were associated with lower systolic and diastolic blood pressures (FWHM, TM, and AUHRF, p = 0.030, 0.042, and 0.032) and lower cerebral amyloid burden (FWHM, p = 0.027) in midlife, as well as a greater response rate on the Stroop task (FWHM, p = 0.042) in midlife.
In simple linear regression models, faster and more efficient HRF characteristics were found in women compared to men (TM, p = 0.019); in White American participants compared to African American participants (AUHRF, p = 0.044); and in non-smokers compared to smokers (TTP and AUHRF, p = 0.019 and 0.010). Faster and more efficient HRF characteristics were associated with lower systolic and diastolic blood pressures (FWHM and TM, p = 0.019 and 0.029) and lower BMI (FWHM and AUHRF, p = 0.025 and 0.017) in childhood and adolescence; with lower BMI (TTP, p = 0.049), cerebral amyloid burden (FWHM, p = 0.002), and white matter hyperintensity burden (FWHM, p = 0.046) in midlife; and with greater accuracy on the Stroop task (AUHRF, p = 0.037) in midlife. In a diverse middle-aged community sample, HRF-based indicators of faster and more efficient neurovascular functioning were associated with better brain health and cognitive function, as well as better lifespan cardiovascular health.
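The HRF summary characteristics named above are simple descriptors of a sampled response curve. A minimal sketch of three of them (TTP, FWHM, and AUHRF) on a toy curve, assuming standard discrete definitions rather than the study's actual pipeline:

```python
def hrf_characteristics(t, h):
    """Return (time_to_peak, fwhm, area) for an HRF h sampled at times t."""
    peak_idx = max(range(len(h)), key=lambda i: h[i])
    ttp = t[peak_idx]                       # time to peak (TTP)
    half = h[peak_idx] / 2.0
    above = [t[i] for i in range(len(h)) if h[i] >= half]
    fwhm = above[-1] - above[0] if above else 0.0   # full width at half maximum
    # trapezoidal area under the HRF curve (AUHRF)
    area = sum((h[i] + h[i + 1]) / 2.0 * (t[i + 1] - t[i])
               for i in range(len(t) - 1))
    return ttp, fwhm, area

# Toy symmetric "HRF": a triangle peaking at t = 4
t = [0, 1, 2, 3, 4, 5, 6, 7, 8]
h = [0, 1, 2, 3, 4, 3, 2, 1, 0]
ttp, fwhm, area = hrf_characteristics(t, h)
```

On this toy curve the peak is at t = 4, the curve stays above half-maximum from t = 2 to t = 6 (FWHM = 4), and the triangular area is 16.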

Hierarchical refinement with adaptive deformation cascaded for multi-scale medical image registration.

Hussain N, Yan Z, Cao W, Anwar M

PubMed · Jun 18, 2025
Deformable image registration is a fundamental task in medical image analysis, crucial in enabling early detection and accurate disease diagnosis. Although transformer-based architectures have demonstrated strong potential through attention mechanisms, challenges remain in effective feature extraction and spatial alignment, particularly within hierarchical attention frameworks. To address these limitations, we propose a novel registration framework that integrates hierarchical feature encoding in the encoder and an adaptive cascaded refinement strategy in the decoder. The model employs hierarchical cross-attention between fixed and moving images at multiple scales, enabling more precise alignment and improved registration accuracy. The decoder incorporates the Adaptive Cascaded Module (ACM), facilitating progressive deformation field refinement across multiple resolution levels. This approach captures coarse global transformations as well as fine local variations, resulting in smooth and anatomically consistent alignment. Moreover, rather than relying solely on the final decoder output, our framework leverages intermediate representations at each stage of the network, enhancing the robustness and precision of the registration process. Our method achieves superior accuracy and adaptability by integrating deformations across all scales. Comprehensive experiments on two widely used 3D brain MRI datasets, OASIS and LPBA40, demonstrate that the proposed framework consistently outperforms existing state-of-the-art approaches across multiple evaluation metrics regarding accuracy, robustness, and generalizability.
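Cascaded refinement of this kind amounts to composing progressively finer deformation fields. The one-dimensional, linear-interpolation composition below is a generic illustration of that operation, not the ACM itself:

```python
def lerp(field, x):
    """Linearly interpolate a 1-D displacement field at position x (clamped)."""
    x = min(max(x, 0.0), len(field) - 1.0)
    i = int(x)
    if i == len(field) - 1:
        return field[i]
    frac = x - i
    return field[i] * (1 - frac) + field[i + 1] * frac

def compose(coarse, fine):
    """Compose displacements: total(x) = coarse(x) + fine(x + coarse(x)).

    The fine field is sampled at the positions already warped by the
    coarse field, which is how cascaded registration stages chain up.
    """
    return [coarse[i] + lerp(fine, i + coarse[i]) for i in range(len(coarse))]

coarse = [1.0] * 5   # coarse stage: shift everything right by 1
fine = [0.5] * 5     # refinement stage: shift by a further 0.5
total = compose(coarse, fine)
```

For these constant fields the composition is simply additive, giving a uniform displacement of 1.5; with spatially varying fields the resampling step is what makes the composition non-trivial.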

Multimodal deep learning for predicting unsuccessful recanalization in refractory large vessel occlusion.

González JD, Canals P, Rodrigo-Gisbert M, Mayol J, García-Tornel A, Ribó M

PubMed · Jun 18, 2025
This study explores a multi-modal deep learning approach that integrates pre-intervention neuroimaging and clinical data to predict endovascular therapy (EVT) outcomes in acute ischemic stroke patients. To this end, consecutive stroke patients undergoing EVT were included in the study, including patients with suspected intracranial atherosclerosis-related large vessel occlusion (ICAD-LVO) and other refractory occlusions. A retrospective, single-center cohort of patients with anterior circulation LVO who underwent EVT between 2017-2023 was analyzed. The refractory LVO (rLVO) class comprised patients who presented any of the following: final angiographic stenosis >50%, unsuccessful recanalization (eTICI 0-2a), or required rescue treatments (angioplasty ± stenting). Neuroimaging data included non-contrast CT and CTA volumes, automated vascular segmentation, and CT perfusion parameters. Clinical data included demographics, comorbidities, and stroke severity. Imaging features were encoded using convolutional neural networks and fused with clinical data using a DAFT module. Data were split 80% for training (with four-fold cross-validation) and 20% for testing. Explainability methods were used to analyze the contribution of clinical variables and regions of interest in the images. The final sample comprised 599 patients: 481 for training the model (77, 16.0% rLVO) and 118 for testing (16, 13.6% rLVO). The best model predicting rLVO using imaging alone achieved an AUC of 0.53 ± 0.02 and an F1 of 0.19 ± 0.05, while the proposed multimodal model achieved an AUC of 0.70 ± 0.02 and an F1 of 0.39 ± 0.02 in testing. Combining vascular segmentation, clinical variables, and imaging data improved prediction performance over single-source models. This approach offers an early alert to procedural complexity, potentially guiding more tailored, timely intervention strategies in the EVT workflow.
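A DAFT-style module conditions image features on tabular clinical data through a learned affine (scale-and-shift) transform. The minimal FiLM-style sketch below uses hypothetical weights and toy vectors; it illustrates the fusion idea only, not the paper's implementation:

```python
def film_fuse(image_feats, clinical, w_scale, w_shift):
    """Affine-modulate image features with a clinical vector:
    each feature f becomes gamma * f + beta, where gamma and beta
    are linear functions of the clinical data."""
    gamma = 1.0 + sum(w * c for w, c in zip(w_scale, clinical))
    beta = sum(w * c for w, c in zip(w_shift, clinical))
    return [gamma * f + beta for f in image_feats]

feats = [0.2, -0.5, 1.0]          # toy CNN feature vector
clinical = [1.0, 0.0]             # hypothetical binary clinical flags
fused = film_fuse(feats, clinical, w_scale=[0.5, 0.0], w_shift=[0.1, 0.0])
```

Here gamma = 1.5 and beta = 0.1, so the clinical flags rescale and shift every imaging feature; in a trained network the weight vectors would be learned jointly with the CNN.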

Federated Learning for MRI-based BrainAGE: a multicenter study on post-stroke functional outcome prediction

Vincent Roca, Marc Tommasi, Paul Andrey, Aurélien Bellet, Markus D. Schirmer, Hilde Henon, Laurent Puy, Julien Ramon, Grégory Kuchcinski, Martin Bretzner, Renaud Lopes

arXiv preprint · Jun 18, 2025
$\textbf{Objective:}$ Brain-predicted age difference (BrainAGE) is a neuroimaging biomarker reflecting brain health. However, training robust BrainAGE models requires large datasets, often restricted by privacy concerns. This study evaluates the performance of federated learning (FL) for BrainAGE estimation in ischemic stroke patients treated with mechanical thrombectomy, and investigates its association with clinical phenotypes and functional outcomes. $\textbf{Methods:}$ We used FLAIR brain images from 1674 stroke patients across 16 hospital centers. We implemented standard machine learning and deep learning models for BrainAGE estimates under three data management strategies: centralized learning (pooled data), FL (local training at each site), and single-site learning. We reported prediction errors and examined associations between BrainAGE and vascular risk factors (e.g., diabetes mellitus, hypertension, smoking), as well as functional outcomes at three months post-stroke. Logistic regression evaluated BrainAGE's predictive value for these outcomes, adjusting for age, sex, vascular risk factors, stroke severity, time between MRI and arterial puncture, prior intravenous thrombolysis, and recanalisation outcome. $\textbf{Results:}$ While centralized learning yielded the most accurate predictions, FL consistently outperformed single-site models. BrainAGE was significantly higher in patients with diabetes mellitus across all models. Comparisons between patients with good and poor functional outcomes, and multivariate predictions of these outcomes showed the significance of the association between BrainAGE and post-stroke recovery. $\textbf{Conclusion:}$ FL enables accurate age predictions without data centralization. The strong association between BrainAGE, vascular risk factors, and post-stroke recovery highlights its potential for prognostic modeling in stroke care.
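Federated learning of the kind described typically aggregates site models by sample-size-weighted parameter averaging (FedAvg). A minimal sketch with two hypothetical sites and two-parameter models:

```python
def fedavg(site_weights, site_sizes):
    """Sample-size-weighted average of per-site parameter vectors (FedAvg)."""
    total = sum(site_sizes)
    dim = len(site_weights[0])
    return [
        sum(w[j] * n for w, n in zip(site_weights, site_sizes)) / total
        for j in range(dim)
    ]

# Two hypothetical sites holding 300 and 100 patients, respectively:
# the larger site's local parameters dominate the global model.
avg = fedavg([[1.0, 2.0], [5.0, 6.0]], [300, 100])
```

One such aggregation closes a communication round; repeating local training and aggregation is what lets the 16 centers cooperate without pooling their FLAIR images.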

Pediatric Pancreas Segmentation from MRI Scans with Deep Learning

Elif Keles, Merve Yazol, Gorkem Durak, Ziliang Hong, Halil Ertugrul Aktas, Zheyuan Zhang, Linkai Peng, Onkar Susladkar, Necati Guzelyel, Oznur Leman Boyunaga, Cemal Yazici, Mark Lowe, Aliye Uc, Ulas Bagci

arXiv preprint · Jun 18, 2025
Objective: Our study aimed to evaluate and validate PanSegNet, a deep learning (DL) algorithm for pediatric pancreas segmentation on MRI in children with acute pancreatitis (AP), chronic pancreatitis (CP), and healthy controls. Methods: With IRB approval, we retrospectively collected 84 MRI scans (1.5T/3T Siemens Aera/Verio) from children aged 2-19 years at Gazi University (2015-2024). The dataset includes healthy children as well as patients diagnosed with AP or CP based on clinical criteria. Pediatric and general radiologists manually segmented the pancreas, with segmentations then confirmed by a senior pediatric radiologist. PanSegNet-generated segmentations were assessed using the Dice Similarity Coefficient (DSC) and the 95th percentile Hausdorff distance (HD95). Cohen's kappa measured observer agreement. Results: Pancreas MRI T2W scans were obtained from 42 children with AP/CP (mean age: 11.73 ± 3.9 years) and 42 healthy children (mean age: 11.19 ± 4.88 years). PanSegNet achieved DSC scores of 88% (controls), 81% (AP), and 80% (CP), with HD95 values of 3.98 mm (controls), 9.85 mm (AP), and 15.67 mm (CP). Inter-observer kappa was 0.86 (controls) and 0.82 (pancreatitis), and intra-observer agreement reached 0.88 and 0.81. Strong agreement was observed between automated and manual volumes (R^2 = 0.85 in controls, 0.77 in diseased), demonstrating clinical reliability. Conclusion: PanSegNet represents the first validated deep learning solution for pancreatic MRI segmentation, achieving expert-level performance across healthy and diseased states. This tool and algorithm, along with our annotated dataset, are freely available on GitHub and OSF, advancing accessible, radiation-free pediatric pancreatic imaging and fostering collaborative research in this underserved domain.
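The Dice Similarity Coefficient used for evaluation has a standard definition: twice the overlap divided by the total foreground of both masks. A minimal sketch on toy binary masks (flattened for simplicity):

```python
def dice(a, b):
    """Dice similarity coefficient between two binary masks (flat lists)."""
    inter = sum(x and y for x, y in zip(a, b))
    total = sum(a) + sum(b)
    return 2.0 * inter / total if total else 1.0

pred  = [1, 1, 1, 0, 0, 0]   # toy predicted segmentation
truth = [1, 1, 0, 0, 1, 0]   # toy manual segmentation
score = dice(pred, truth)    # 2 * 2 / (3 + 3) = 2/3
```

Real pipelines compute this over 3-D label volumes, and HD95 additionally measures boundary distance, so the two metrics capture complementary error modes (overlap vs. outliers).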

Implicit neural representations for accurate estimation of the standard model of white matter

Tom Hendriks, Gerrit Arends, Edwin Versteeg, Anna Vilanova, Maxime Chamberland, Chantal M. W. Tax

arXiv preprint · Jun 18, 2025
Diffusion magnetic resonance imaging (dMRI) enables non-invasive investigation of tissue microstructure. The Standard Model (SM) of white matter aims to disentangle dMRI signal contributions from intra- and extra-axonal water compartments. However, due to the model's high-dimensional nature, extensive acquisition protocols with multiple b-values and diffusion tensor shapes are typically required to mitigate parameter degeneracies. Even then, accurate estimation remains challenging due to noise. This work introduces a novel estimation framework based on implicit neural representations (INRs), which incorporate spatial regularization through the sinusoidal encoding of the input coordinates. The INR method is evaluated on both synthetic and in vivo datasets and compared to parameter estimates using cubic polynomials, supervised neural networks, and nonlinear least squares. Results demonstrate superior accuracy of the INR method in estimating SM parameters, particularly in low signal-to-noise conditions. Additionally, spatial upsampling of the INR can represent the underlying data in an anatomically plausible, continuous way, which is unattainable with linear or cubic interpolation. The INR is fully unsupervised, eliminating the need for labeled training data. It achieves fast inference ($\sim$6 minutes), is robust to both Gaussian and Rician noise, supports joint estimation of SM kernel parameters and the fiber orientation distribution function with spherical harmonics orders up to at least 8 and non-negativity constraints, and accommodates spatially varying acquisition protocols caused by magnetic gradient non-uniformities. The combination of these properties, along with the possibility to easily adapt the framework to other dMRI models, positions INRs as a potentially important tool for analyzing and interpreting diffusion MRI data.
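The sinusoidal encoding of input coordinates mentioned above is commonly implemented as sin/cos features at octave frequencies; the paper's exact encoding may differ, so the sketch below is illustrative:

```python
import math

def sinusoidal_encoding(coord, num_freqs=4):
    """Encode a scalar coordinate with sin/cos pairs at octave frequencies.

    Mapping low-dimensional coordinates to this higher-dimensional basis is
    what lets a coordinate network (INR) fit fine spatial detail; truncating
    the frequencies acts as an implicit smoothness/spatial regularizer.
    """
    feats = []
    for k in range(num_freqs):
        f = (2.0 ** k) * math.pi
        feats.append(math.sin(f * coord))
        feats.append(math.cos(f * coord))
    return feats

enc = sinusoidal_encoding(0.0)   # 2 * num_freqs features per coordinate
```

For a 3-D voxel coordinate the same encoding is applied per axis and concatenated before entering the network.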

Diffusion-based Counterfactual Augmentation: Towards Robust and Interpretable Knee Osteoarthritis Grading

Zhe Wang, Yuhua Ru, Aladine Chetouani, Tina Shiang, Fang Chen, Fabian Bauer, Liping Zhang, Didier Hans, Rachid Jennane, William Ewing Palmer, Mohamed Jarraya, Yung Hsin Chen

arXiv preprint · Jun 18, 2025
Automated grading of Knee Osteoarthritis (KOA) from radiographs is challenged by significant inter-observer variability and the limited robustness of deep learning models, particularly near critical decision boundaries. To address these limitations, this paper proposes a novel framework, Diffusion-based Counterfactual Augmentation (DCA), which enhances model robustness and interpretability by generating targeted counterfactual examples. The method navigates the latent space of a diffusion model using a Stochastic Differential Equation (SDE), governed by a balance between a classifier-informed boundary drive and a manifold constraint. The resulting counterfactuals are then used within a self-corrective learning strategy to improve the classifier by focusing on its specific areas of uncertainty. Extensive experiments on the public Osteoarthritis Initiative (OAI) and Multicenter Osteoarthritis Study (MOST) datasets demonstrate that this approach significantly improves classification accuracy across multiple model architectures. Furthermore, the method provides interpretability by visualizing minimal pathological changes and revealing that the learned latent space topology aligns with clinical knowledge of KOA progression. The DCA framework effectively converts model uncertainty into a robust training signal, offering a promising pathway to developing more accurate and trustworthy automated diagnostic systems. Our code is available at https://github.com/ZWang78/DCA.
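An SDE-guided latent traversal like the one described can be approximated with Euler-Maruyama steps whose drift balances a boundary term against a manifold term. The toy 1-D gradients below are illustrative stand-ins, not the paper's actual components:

```python
import math
import random

def counterfactual_step(z, grad_boundary, grad_manifold,
                        dt=0.01, sigma=0.1, lam=0.5, rng=None):
    """One Euler-Maruyama step in latent space: drift toward the classifier's
    decision boundary, regularized by a manifold (prior) term, plus diffusion
    noise scaled by sqrt(dt)."""
    rng = rng or random.Random(0)
    drift = grad_boundary(z) + lam * grad_manifold(z)
    noise = sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
    return z + drift * dt + noise

# Toy 1-D example: boundary at z = 1, data manifold centered at z = 0.
z_next = counterfactual_step(
    0.0,
    grad_boundary=lambda z: 1.0 - z,   # pulls toward the decision boundary
    grad_manifold=lambda z: -z,        # pulls back toward the manifold center
    sigma=0.0,                         # noise disabled for a deterministic demo
)
```

Iterating such steps moves a latent code just across the boundary while the manifold term keeps the decoded image realistic, which is the intuition behind boundary-targeted counterfactuals.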

D2Diff: A Dual Domain Diffusion Model for Accurate Multi-Contrast MRI Synthesis

Sanuwani Dayarathna, Himashi Peiris, Kh Tohidul Islam, Tien-Tsin Wong, Zhaolin Chen

arXiv preprint · Jun 18, 2025
Multi-contrast MRI synthesis is inherently challenging due to the complex and nonlinear relationships among different contrasts. Each MRI contrast highlights unique tissue properties, but their complementary information is difficult to exploit due to variations in intensity distributions and contrast-specific textures. Existing methods for multi-contrast MRI synthesis primarily utilize spatial domain features, which capture localized anatomical structures but struggle to model global intensity variations and distributed patterns. Conversely, frequency domain features provide structured inter-contrast correlations but lack spatial precision, limiting their ability to retain finer details. To address this, we propose a dual domain learning framework that integrates spatial and frequency domain information across multiple MRI contrasts for enhanced synthesis. Our method employs two mutually trained denoising networks, one conditioned on spatial domain and the other on frequency domain contrast features, through a shared critic network. Additionally, an uncertainty-driven mask loss directs the model's focus toward more critical regions, further improving synthesis accuracy. Extensive experiments show that our method outperforms SOTA baselines, and the downstream segmentation performance highlights the diagnostic value of the synthetic results.
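Frequency-domain features of the kind contrasted with spatial ones above come from a discrete Fourier transform of image intensities. A naive 1-D DFT sketch (real pipelines would use an FFT over 2-D or 3-D volumes):

```python
import cmath

def dft_magnitudes(signal):
    """Naive DFT: global frequency-domain magnitudes of a 1-D profile.

    Each bin k aggregates the whole signal, which is why frequency features
    capture global intensity structure but lose spatial localization.
    """
    n = len(signal)
    return [
        abs(sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)))
        for k in range(n)
    ]

# A constant intensity profile puts all its energy in the DC (k = 0) bin.
mags = dft_magnitudes([1.0, 1.0, 1.0, 1.0])
```

The spatial/frequency trade-off the abstract describes is visible even here: the DC bin summarizes overall brightness perfectly while saying nothing about where any intensity sits.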

Deep learning model using CT images for longitudinal prediction of benign and malignant ground-glass nodules.

Yang X, Wang J, Wang P, Li Y, Wen Z, Shang J, Chen K, Tang C, Liang S, Meng W

PubMed · Jun 18, 2025
To develop and validate a CT image-based multiple time-series deep learning model for the longitudinal prediction of benign and malignant pulmonary ground-glass nodules (GGNs). A total of 486 GGNs from an equal number of patients were included in this research, which took place at two medical centers. Each nodule underwent surgical removal and was confirmed pathologically. The patients were randomly assigned to a training set, validation set, and test set, following a distribution ratio of 7:2:1. We established a transformer-based deep learning framework that leverages multi-temporal CT images for the longitudinal prediction of GGNs, focusing on distinguishing between benign and malignant types. Additionally, we utilized 13 different machine learning algorithms to formulate clinical models, delta-radiomics models, and combined models that merge deep learning with CT semantic features. The predictive capabilities of the models were assessed using the receiver operating characteristic (ROC) curve and the area under the curve (AUC). The multiple time-series deep learning model based on CT images surpassed both the clinical model and the delta-radiomics model, showcasing strong predictive capabilities for GGNs across the training, validation, and test sets, with AUCs of 0.911 (95% CI, 0.879-0.939), 0.809 (95% CI, 0.715-0.908), and 0.817 (95% CI, 0.680-0.937), respectively. Furthermore, the models that integrated deep learning with CT semantic features achieved the highest performance, resulting in AUCs of 0.960 (95% CI, 0.912-0.977), 0.878 (95% CI, 0.801-0.942), and 0.890 (95% CI, 0.790-0.968). The multiple time-series deep learning model utilizing CT images was effective in predicting benign and malignant GGNs.
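The AUC used throughout this comparison equals the probability that a randomly chosen malignant (positive) case is scored above a randomly chosen benign (negative) one. A minimal rank-based sketch with toy scores:

```python
def auc(scores_pos, scores_neg):
    """AUC as the probability a positive outranks a negative (ties count half)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Toy model scores: three positives, two negatives; 5 of 6 pairs are ordered
# correctly, so AUC = 5/6.
a = auc([0.9, 0.8, 0.4], [0.7, 0.3])
```

This pairwise view is equivalent to the area under the ROC curve and makes clear why AUC is insensitive to any particular decision threshold.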

MDEANet: A multi-scale deep enhanced attention net for popliteal fossa segmentation in ultrasound images.

Chen F, Fang W, Wu Q, Zhou M, Guo W, Lin L, Chen Z, Zou Z

PubMed · Jun 18, 2025
Popliteal sciatic nerve block is a widely used technique for lower limb anesthesia. However, despite ultrasound guidance, the complex anatomical structures of the popliteal fossa can present challenges, potentially leading to complications. To accurately identify the bifurcation of the sciatic nerve for nerve blockade, we propose MDEANet, a deep learning-based segmentation network designed for the precise localization of nerves, muscles, and arteries in ultrasound images of the popliteal region. MDEANet incorporates Cascaded Multi-scale Atrous Convolutions (CMAC) to enhance multi-scale feature extraction, Enhanced Spatial Attention Mechanism (ESAM) to focus on key anatomical regions, and Cross-level Feature Fusion (CLFF) to improve contextual representation. This integration markedly improves segmentation of nerves, muscles, and arteries. Experimental results demonstrate that MDEANet achieves an average Intersection over Union (IoU) of 88.60% and a Dice coefficient of 93.95% across all target structures, outperforming state-of-the-art models by 1.68% in IoU and 1.66% in Dice coefficient. Specifically, for nerve segmentation, the Dice coefficient reaches 93.31%, underscoring the effectiveness of our approach. MDEANet has the potential to provide decision-support assistance for anesthesiologists, thereby enhancing the accuracy and efficiency of ultrasound-guided nerve blockade procedures.
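The two overlap metrics reported, IoU and Dice, are deterministically related (Dice = 2·IoU / (1 + IoU)), which is why papers often quote improvements in both. A minimal sketch on toy binary masks:

```python
def iou(a, b):
    """Intersection over Union for two binary masks (flat lists)."""
    inter = sum(x and y for x, y in zip(a, b))
    union = sum(x or y for x, y in zip(a, b))
    return inter / union if union else 1.0

pred  = [1, 1, 0, 0]
truth = [1, 0, 1, 0]
j = iou(pred, truth)       # intersection 1, union 3 -> 1/3
d = 2 * j / (1 + j)        # the equivalent Dice coefficient -> 0.5
```

Because Dice is a monotone function of IoU, a model that improves one on a given mask pair necessarily improves the other, though averages over a dataset can diverge slightly.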