
Automated MRI Tumor Segmentation using hybrid U-Net with Transformer and Efficient Attention

Syed Haider Ali, Asrar Ahmad, Muhammad Ali, Asifullah Khan, Muhammad Shahban, Nadeem Shaukat

arXiv preprint · Jun 18 2025
Cancer is an abnormal growth with potential to invade locally and metastasize to distant organs. Accurate auto-segmentation of the tumor and surrounding normal tissues is required for radiotherapy treatment plan optimization. Recent AI-based segmentation models are generally trained on large public datasets, which lack the heterogeneity of local patient populations. While these studies advance AI-based medical image segmentation, research on local datasets is necessary to develop and integrate AI tumor segmentation models directly into hospital software for efficient and accurate oncology treatment planning and execution. This study enhances tumor segmentation using computationally efficient hybrid UNet-Transformer models on magnetic resonance imaging (MRI) datasets acquired from a local hospital under strict privacy protection. We developed a robust data pipeline for seamless DICOM extraction and preprocessing, followed by extensive image augmentation to ensure model generalization across diverse clinical settings, resulting in a total dataset of 6080 images for training. Our novel architecture integrates UNet-based convolutional neural networks with a transformer bottleneck and complementary attention modules, including efficient attention, Squeeze-and-Excitation (SE) blocks, Convolutional Block Attention Module (CBAM), and ResNeXt blocks. To accelerate convergence and reduce computational demands, we used a maximum batch size of 8 and initialized the encoder with pretrained ImageNet weights, training the model on dual NVIDIA T4 GPUs via checkpointing to overcome Kaggle's runtime limits. Quantitative evaluation on the local MRI dataset yielded a Dice similarity coefficient of 0.764 and an Intersection over Union (IoU) of 0.736, demonstrating competitive performance despite limited data and underscoring the importance of site-specific model development for clinical deployment.
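
The attention modules named in this abstract are standard, self-contained building blocks. As a minimal PyTorch sketch of one of them, the Squeeze-and-Excitation (SE) block (channel count, reduction ratio, and shapes below are illustrative, not taken from the paper):

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation: reweight channels using globally pooled statistics."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze: global spatial average
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),  # per-channel gate in [0, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # excite: rescale each feature map

# Toy usage: gate a 64-channel feature map from an encoder stage
feats = torch.randn(8, 64, 128, 128)
print(SEBlock(64)(feats).shape)  # torch.Size([8, 64, 128, 128])
```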

Diffusion-based Counterfactual Augmentation: Towards Robust and Interpretable Knee Osteoarthritis Grading

Zhe Wang, Yuhua Ru, Aladine Chetouani, Tina Shiang, Fang Chen, Fabian Bauer, Liping Zhang, Didier Hans, Rachid Jennane, William Ewing Palmer, Mohamed Jarraya, Yung Hsin Chen

arXiv preprint · Jun 18 2025
Automated grading of Knee Osteoarthritis (KOA) from radiographs is challenged by significant inter-observer variability and the limited robustness of deep learning models, particularly near critical decision boundaries. To address these limitations, this paper proposes a novel framework, Diffusion-based Counterfactual Augmentation (DCA), which enhances model robustness and interpretability by generating targeted counterfactual examples. The method navigates the latent space of a diffusion model using a Stochastic Differential Equation (SDE), governed by balancing a classifier-informed boundary drive with a manifold constraint. The resulting counterfactuals are then used within a self-corrective learning strategy to improve the classifier by focusing on its specific areas of uncertainty. Extensive experiments on the public Osteoarthritis Initiative (OAI) and Multicenter Osteoarthritis Study (MOST) datasets demonstrate that this approach significantly improves classification accuracy across multiple model architectures. Furthermore, the method provides interpretability by visualizing minimal pathological changes and revealing that the learned latent space topology aligns with clinical knowledge of KOA progression. The DCA framework effectively converts model uncertainty into a robust training signal, offering a promising pathway to developing more accurate and trustworthy automated diagnostic systems. Our code is available at https://github.com/ZWang78/DCA.
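
The abstract describes steering a diffusion model's latent code with an SDE whose drift balances a classifier-informed boundary drive against a manifold constraint. A schematic Euler-Maruyama step under that reading (function names, the mixing weight `lam`, the step size, and the toy classifier are assumptions, not the authors' code, which is available at the linked repository):

```python
import torch
import torch.nn as nn

def boundary_drive(z, classifier, target_grade):
    """Gradient of the target-grade logit w.r.t. the latent code: the
    classifier-informed drive that pushes z toward the decision boundary."""
    z = z.detach().requires_grad_(True)
    logit = classifier(z)[:, target_grade].sum()
    (grad,) = torch.autograd.grad(logit, z)
    return grad

def sde_step(z, drive, manifold_score, lam=0.5, dt=1e-2):
    """One Euler-Maruyama step of the schematic latent SDE
    dz = [lam * drive + (1 - lam) * manifold_score] dt + sqrt(dt) dW."""
    drift = lam * drive + (1.0 - lam) * manifold_score
    return z + drift * dt + torch.randn_like(z) * dt ** 0.5

# Toy stand-ins: a linear KL-grade classifier and a zero manifold score
classifier = nn.Linear(128, 5)  # 5 Kellgren-Lawrence grades, latent dim 128
z = torch.randn(4, 128)
drive = boundary_drive(z, classifier, target_grade=3)
z_next = sde_step(z, drive, manifold_score=torch.zeros_like(z))
```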

Federated Learning for MRI-based BrainAGE: a multicenter study on post-stroke functional outcome prediction

Vincent Roca, Marc Tommasi, Paul Andrey, Aurélien Bellet, Markus D. Schirmer, Hilde Henon, Laurent Puy, Julien Ramon, Grégory Kuchcinski, Martin Bretzner, Renaud Lopes

arXiv preprint · Jun 18 2025
Objective: Brain-predicted age difference (BrainAGE) is a neuroimaging biomarker reflecting brain health. However, training robust BrainAGE models requires large datasets, often restricted by privacy concerns. This study evaluates the performance of federated learning (FL) for BrainAGE estimation in ischemic stroke patients treated with mechanical thrombectomy, and investigates its association with clinical phenotypes and functional outcomes. Methods: We used FLAIR brain images from 1674 stroke patients across 16 hospital centers. We implemented standard machine learning and deep learning models for BrainAGE estimation under three data management strategies: centralized learning (pooled data), FL (local training at each site), and single-site learning. We reported prediction errors and examined associations between BrainAGE and vascular risk factors (e.g., diabetes mellitus, hypertension, smoking), as well as functional outcomes at three months post-stroke. Logistic regression evaluated BrainAGE's predictive value for these outcomes, adjusting for age, sex, vascular risk factors, stroke severity, time between MRI and arterial puncture, prior intravenous thrombolysis, and recanalisation outcome. Results: While centralized learning yielded the most accurate predictions, FL consistently outperformed single-site models. BrainAGE was significantly higher in patients with diabetes mellitus across all models. Comparisons between patients with good and poor functional outcomes, together with multivariate prediction of these outcomes, confirmed a significant association between BrainAGE and post-stroke recovery. Conclusion: FL enables accurate age predictions without data centralization. The strong association between BrainAGE, vascular risk factors, and post-stroke recovery highlights its potential for prognostic modeling in stroke care.
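
The FL strategy described (local training at each site, no data pooling) is typically realized with federated averaging. A minimal FedAvg-style sketch, assuming per-site data loaders and a weight-averaging server (all names, the toy MLP, and hyperparameters are illustrative, not the study's configuration):

```python
import copy
import torch

def fedavg(global_model, site_loaders, rounds=10, local_epochs=1, lr=1e-3):
    """Schematic FedAvg: each site trains a copy locally; the server averages
    the weights, weighting each site by its sample count. No images leave the sites."""
    for _ in range(rounds):
        states, sizes = [], []
        for loader in site_loaders:  # one loader per hospital center
            local = copy.deepcopy(global_model)
            opt = torch.optim.Adam(local.parameters(), lr=lr)
            for _ in range(local_epochs):
                for x, age in loader:  # image features, chronological age
                    opt.zero_grad()
                    loss = torch.nn.functional.l1_loss(local(x).squeeze(), age)
                    loss.backward()
                    opt.step()
            states.append(local.state_dict())
            sizes.append(len(loader.dataset))
        total = sum(sizes)
        avg = {k: sum(s[k].float() * (n / total) for s, n in zip(states, sizes))
               for k in states[0]}
        global_model.load_state_dict(avg)
    return global_model

# Toy usage: two "sites" with synthetic (feature, age) pairs and a small MLP
from torch.utils.data import DataLoader, TensorDataset
sites = [DataLoader(TensorDataset(torch.randn(64, 32), 60 + 10 * torch.randn(64)),
                    batch_size=16) for _ in range(2)]
model = torch.nn.Sequential(torch.nn.Linear(32, 16), torch.nn.ReLU(), torch.nn.Linear(16, 1))
model = fedavg(model, sites, rounds=2)
```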

Implicit neural representations for accurate estimation of the standard model of white matter

Tom Hendriks, Gerrit Arends, Edwin Versteeg, Anna Vilanova, Maxime Chamberland, Chantal M. W. Tax

arXiv preprint · Jun 18 2025
Diffusion magnetic resonance imaging (dMRI) enables non-invasive investigation of tissue microstructure. The Standard Model (SM) of white matter aims to disentangle dMRI signal contributions from intra- and extra-axonal water compartments. However, due to the model's high-dimensional nature, extensive acquisition protocols with multiple b-values and diffusion tensor shapes are typically required to mitigate parameter degeneracies. Even then, accurate estimation remains challenging due to noise. This work introduces a novel estimation framework based on implicit neural representations (INRs), which incorporate spatial regularization through the sinusoidal encoding of the input coordinates. The INR method is evaluated on both synthetic and in vivo datasets and compared to parameter estimates using cubic polynomials, supervised neural networks, and nonlinear least squares. Results demonstrate superior accuracy of the INR method in estimating SM parameters, particularly in low signal-to-noise conditions. Additionally, spatial upsampling of the INR can represent the underlying dataset anatomically plausibly in a continuous way, which is unattainable with linear or cubic interpolation. The INR is fully unsupervised, eliminating the need for labeled training data. It achieves fast inference (~6 minutes), is robust to both Gaussian and Rician noise, supports joint estimation of SM kernel parameters and the fiber orientation distribution function with spherical harmonics orders up to at least 8 and non-negativity constraints, and accommodates spatially varying acquisition protocols caused by magnetic gradient non-uniformities. The combination of these properties, along with the possibility to easily adapt the framework to other dMRI models, positions INRs as a potentially important tool for analyzing and interpreting diffusion MRI data.
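
An INR in this setting is a coordinate network: it maps a sinusoidally encoded voxel position to the local model parameters, so spatial smoothness comes from the encoding rather than from an explicit regularizer. A minimal PyTorch sketch (layer sizes, frequency count, and output dimension are illustrative, not the paper's configuration):

```python
import torch
import torch.nn as nn

class SinusoidalINR(nn.Module):
    """Coordinate network: voxel position -> local model parameters. Spatial
    regularization comes from the sinusoidal encoding of the input coordinates."""
    def __init__(self, in_dim=3, n_freqs=8, hidden=256, out_dim=8):
        super().__init__()
        self.register_buffer("freqs", 2.0 ** torch.arange(n_freqs))  # octave-spaced
        self.mlp = nn.Sequential(
            nn.Linear(in_dim * 2 * n_freqs, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),  # e.g. SM kernel parameters (+ ODF coefficients)
        )

    def encode(self, xyz: torch.Tensor) -> torch.Tensor:
        ang = xyz[..., None] * self.freqs  # (..., in_dim, n_freqs)
        return torch.cat([ang.sin(), ang.cos()], dim=-1).flatten(-2)

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:
        return self.mlp(self.encode(xyz))

# The network can be queried at arbitrary coordinates, not just voxel centers,
# which is what makes continuous spatial upsampling possible.
params = SinusoidalINR()(torch.rand(1024, 3))  # (1024, 8)
```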

Multimodal MRI Marker of Cognition Explains the Association Between Cognition and Mental Health in UK Biobank

Buianova, I., Silvestrin, M., Deng, J., Pat, N.

medRxiv preprint · Jun 18 2025
Background: Cognitive dysfunction often co-occurs with psychopathology. Advances in neuroimaging and machine learning have led to neural indicators that predict individual differences in cognition with reasonable performance. We examined whether these neural indicators explain the relationship between cognition and mental health in the UK Biobank cohort (n > 14,000). Methods: Using machine learning, we quantified the covariation between general cognition and 133 mental health indices and derived neural indicators of cognition from 72 neuroimaging phenotypes across diffusion-weighted MRI (dwMRI), resting-state functional MRI (rsMRI), and structural MRI (sMRI). With commonality analyses, we investigated how much of the cognition-mental health covariation is captured by each neural indicator, and by neural indicators combined within and across MRI modalities. Results: The out-of-sample predictive association between mental health and cognition was r = 0.3. Individual neuroimaging phenotypes captured 2.1% to 25.8% of the cognition-mental health covariation. The highest proportion of variance explained by dwMRI was attributed to the number of streamlines connecting cortical regions (19.3%); by rsMRI, to functional connectivity between 55 large-scale networks (25.8%); and by sMRI, to the volumetric characteristics of subcortical structures (21.8%). Combining neuroimaging phenotypes within modalities improved the explanation to 25.5% for dwMRI, 29.8% for rsMRI, and 31.6% for sMRI; combining them across all MRI modalities raised it to 48%. Conclusions: We present an integrated approach to derive multimodal MRI markers of cognition that can be transdiagnostically linked to psychopathology. This demonstrates that the predictive ability of neural indicators extends beyond the prediction of cognition itself, enabling us to capture the cognition-mental health covariation.
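
Commonality analysis partitions explained variance into components unique to, and shared between, predictor sets. A toy two-predictor sketch of that partition (fully synthetic data; the study's actual predictor sets are far larger):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def r2(X, y):
    return LinearRegression().fit(X, y).score(X, y)

def commonality(y, a, b):
    """Two-predictor commonality analysis: split R^2 for y into variance unique
    to a, unique to b, and common to both -- the quantity used to ask how much
    of the cognition-mental health link a neural indicator captures."""
    r_a, r_b, r_ab = r2(a, y), r2(b, y), r2(np.hstack([a, b]), y)
    return {"unique_a": r_ab - r_b, "unique_b": r_ab - r_a,
            "common": r_a + r_b - r_ab}

rng = np.random.default_rng(0)
g = rng.normal(size=(500, 1))                   # shared latent factor
mental = g + rng.normal(size=(500, 1))          # mental-health composite (toy)
neural = g + rng.normal(size=(500, 1))          # neural indicator of cognition (toy)
cognition = (g + rng.normal(size=(500, 1))).ravel()
print(commonality(cognition, mental, neural))
```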

Artificial Intelligence in Breast US Diagnosis and Report Generation.

Wang J, Tian H, Yang X, Wu H, Zhu X, Chen R, Chang A, Chen Y, Dou H, Huang R, Cheng J, Zhou Y, Gao R, Yang K, Li G, Chen J, Ni D, Dong F, Xu J, Gu N

PubMed paper · Jun 18 2025
<i>"Just Accepted" papers have undergone full peer review and have been accepted for publication in <i>Radiology: Artificial Intelligence</i>. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content.</i> Purpose To develop and evaluate an artificial intelligence (AI) system for generating breast ultrasound (BUS) reports. Materials and Methods This retrospective study included 104,364 cases from three hospitals (January 2020-December 2022). The AI system was trained on 82,896 cases, validated on 10,385 cases, and tested on an internal set (10,383 cases) and two external sets (300 and 400 cases). Under blind review, three senior radiologists (> 10 years of experience) evaluated AI-generated reports and those written by one midlevel radiologist (7 years of experience), as well as reports from three junior radiologists (2-3 years of experience) with and without AI assistance. The primary outcomes included the acceptance rates of Breast Imaging Reporting and Data System (BI-RADS) categories and lesion characteristics. Statistical analysis included one-sided and two-sided McNemar tests for non-inferiority and significance testing. Results In external test set 1 (300 cases), the midlevel radiologist and AI system achieved BI-RADS acceptance rates of 95.00% [285/300] versus 92.33% [277/300] (<i>P</i> < .001; non-inferiority test with a prespecified margin of 10%). In external test set 2 (400 cases), three junior radiologists had BI-RADS acceptance rates of 87.00% [348/400] versus 90.75% [363/400] (<i>P</i> = .06), 86.50% [346/400] versus 92.00% [368/400] ( <i>P</i> = .007), and 84.75% [339/400] versus 90.25% [361/400] (<i>P</i> = .02) with and without AI assistance, respectively. Conclusion The AI system performed comparably to a midlevel radiologist and aided junior radiologists in BI-RADS classification. ©RSNA, 2025.

Multimodal deep learning for predicting unsuccessful recanalization in refractory large vessel occlusion.

González JD, Canals P, Rodrigo-Gisbert M, Mayol J, García-Tornel A, Ribó M

PubMed paper · Jun 18 2025
This study explores a multimodal deep learning approach that integrates pre-intervention neuroimaging and clinical data to predict endovascular therapy (EVT) outcomes in acute ischemic stroke patients. Consecutive stroke patients undergoing EVT were included, among them patients with suspected intracranial atherosclerosis-related large vessel occlusion (ICAD-LVO) and other refractory occlusions. A retrospective, single-center cohort of patients with anterior circulation LVO who underwent EVT between 2017-2023 was analyzed. The refractory LVO (rLVO) class comprised patients who presented any of the following: final angiographic stenosis > 50%, unsuccessful recanalization (eTICI 0-2a), or need for rescue treatments (angioplasty +/- stenting). Neuroimaging data included non-contrast CT and CTA volumes, automated vascular segmentation, and CT perfusion parameters. Clinical data included demographics, comorbidities, and stroke severity. Imaging features were encoded using convolutional neural networks and fused with clinical data using a DAFT module. Data were split 80% for training (with four-fold cross-validation) and 20% for testing. Explainability methods were used to analyze the contribution of clinical variables and regions of interest in the images. The final sample comprised 599 patients: 481 for training the model (77, 16.0% rLVO) and 118 for testing (16, 13.6% rLVO). The best model predicting rLVO from imaging alone achieved an AUC of 0.53 ± 0.02 and an F1 of 0.19 ± 0.05, while the proposed multimodal model achieved an AUC of 0.70 ± 0.02 and an F1 of 0.39 ± 0.02 in testing. Combining vascular segmentation, clinical variables, and imaging data improved prediction performance over single-source models. This approach offers an early alert to procedural complexity, potentially guiding more tailored, timely intervention strategies in the EVT workflow.
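
The DAFT (Dynamic Affine Feature Map Transform) module referenced here conditions convolutional feature maps on tabular data via a learned per-channel affine transform. A DAFT-style sketch (channel count, clinical feature dimension, and MLP width are illustrative, not the study's values):

```python
import torch
import torch.nn as nn

class DAFTBlock(nn.Module):
    """DAFT-style fusion: a small MLP maps tabular clinical variables to a
    per-channel scale and shift that modulate the imaging feature maps."""
    def __init__(self, channels: int, n_clinical: int, hidden: int = 32):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(n_clinical, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * channels),  # -> [scale | shift]
        )

    def forward(self, feats: torch.Tensor, clinical: torch.Tensor) -> torch.Tensor:
        scale, shift = self.mlp(clinical).chunk(2, dim=-1)
        # broadcast over the spatial dimensions of a 3D feature map
        scale = scale[..., None, None, None]
        shift = shift[..., None, None, None]
        return feats * (1.0 + scale) + shift

feats = torch.randn(4, 64, 8, 8, 8)  # CT/CTA encoder features (toy shapes)
clin = torch.randn(4, 12)            # demographics, comorbidities, severity
print(DAFTBlock(64, 12)(feats, clin).shape)  # torch.Size([4, 64, 8, 8, 8])
```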

Deep Learning-Based Adrenal Gland Volumetry for the Prediction of Diabetes.

Ku EJ, Yoon SH, Park SS, Yoon JW, Kim JH

PubMed paper · Jun 18 2025
The long-term association between adrenal gland volume (AGV) and type 2 diabetes (T2D) remains unclear. We aimed to determine the association between deep learning-based AGV and current glycemic status and incident T2D. In this observational study, adults who underwent abdominopelvic computed tomography (CT) for health checkups (2011-2012), but had no adrenal nodules, were included. AGV was measured from CT images using a three-dimensional nnU-Net deep learning algorithm. We assessed the association between AGV and T2D using cross-sectional and longitudinal designs. We used 500 CT scans (median age, 52.3 years; 253 men) for model development and the Multi-Atlas Labeling Beyond the Cranial Vault dataset for external testing. The clinical cohort included a total of 9708 adults (median age, 52.0 years; 5769 men). The deep learning model demonstrated a Dice coefficient of 0.71 ± 0.11 for adrenal segmentation and a mean volume difference of 0.6 ± 0.9 mL in the external dataset. Participants with T2D at baseline had a larger AGV than those without (7.3 cm3 vs. 6.7 cm3 in men and 6.3 cm3 vs. 5.5 cm3 in women; all P < 0.05). The optimal AGV cutoff values for predicting T2D were 7.2 cm3 in men and 5.5 cm3 in women. Over a median 7.0-year follow-up, T2D developed in 938 participants. Cumulative T2D risk was higher with high AGV than with low AGV (adjusted hazard ratio, 1.27; 95% confidence interval, 1.11 to 1.46). AGV, measured using a deep learning algorithm, is associated with current glycemic status and can significantly predict the development of T2D.
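
Both reported segmentation metrics are simple functions of binary masks: the Dice coefficient compares predicted and reference masks, and volume in mL is voxel count times voxel volume. A sketch (toy masks and spacing, not the study's data):

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

def volume_ml(mask: np.ndarray, spacing_mm=(1.0, 1.0, 1.0)) -> float:
    """Volume in mL: voxel count x voxel volume (mm^3 -> mL)."""
    voxel_mm3 = float(np.prod(spacing_mm))
    return mask.sum() * voxel_mm3 / 1000.0

# Toy masks standing in for nnU-Net output and an expert annotation
pred = np.zeros((64, 64, 64), bool); pred[20:40, 20:40, 20:40] = True
truth = np.zeros((64, 64, 64), bool); truth[22:42, 20:40, 20:40] = True
print(f"Dice={dice(pred, truth):.2f}, volume={volume_ml(pred, (0.7, 0.7, 3.0)):.1f} mL")
```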

Identification, characterisation and outcomes of pre-atrial fibrillation in heart failure with reduced ejection fraction.

Helbitz A, Nadarajah R, Mu L, Larvin H, Ismail H, Wahab A, Thompson P, Harrison P, Harris M, Joseph T, Plein S, Petrie M, Metra M, Wu J, Swoboda P, Gale CP

PubMed paper · Jun 18 2025
Atrial fibrillation (AF) in heart failure with reduced ejection fraction (HFrEF) has prognostic implications. Using a machine learning algorithm (FIND-AF), we aimed to explore clinical events and the cardiac magnetic resonance (CMR) characteristics of the pre-AF phenotype in HFrEF. We studied a cohort of individuals aged ≥18 years with HFrEF without AF from the MATCH 1 and MATCH 2 studies (2018-2024), stratified by FIND-AF score. All underwent CMR, analysed with Cvi42 software for volumetric and T1/T2 measurements. The primary outcome was time to a composite of major adverse cardiovascular events (MACE), comprising heart failure hospitalisation, myocardial infarction, stroke and all-cause mortality. Secondary outcomes included the association between CMR findings and FIND-AF score. Of 385 patients [mean age 61.7 (12.6) years, 39.0% women] with a median 2.5 years of follow-up, the primary outcome occurred in 58 (30.2%) patients in the high FIND-AF risk group and 23 (11.9%) in the low FIND-AF risk group (hazard ratio 3.25, 95% CI 2.00-5.28, P < 0.001). A higher FIND-AF score was associated with higher indexed left ventricular mass (β = 4.7, 95% CI 0.5-8.9), indexed left atrial volume (β = 5.9, 95% CI 2.2-9.6), indexed left ventricular end-diastolic volume (β = 9.55, 95% CI 1.37-17.74, P = 0.022), native T1 signal (β = 18.0, 95% CI 7.0-29.1) and extracellular volume (β = 1.6, 95% CI 0.6-2.5). A pre-AF HFrEF subgroup with distinct CMR characteristics and poor prognosis can be identified, potentially guiding interventions to reduce clinical events.
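
The reported hazard ratio comes from time-to-event modeling. A Cox proportional-hazards sketch with lifelines on synthetic data shaped like this design (all numbers below are simulated, not the study's):

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Toy survival data mimicking the design: time to MACE by FIND-AF risk group
rng = np.random.default_rng(1)
n = 385
high_risk = rng.integers(0, 2, n)                     # FIND-AF risk group (1 = high)
time = rng.exponential(3.0 / (1 + 2 * high_risk), n)  # shorter times in high-risk group
event = rng.random(n) < 0.4                           # observed MACE vs. censored
df = pd.DataFrame({"years": time, "mace": event.astype(int),
                   "find_af_high": high_risk})

cph = CoxPHFitter()
cph.fit(df, duration_col="years", event_col="mace")
cph.print_summary()  # exp(coef) for find_af_high is the hazard ratio
```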

Quality control system for patient positioning and filling in meta-information for chest X-ray examinations.

Borisov AA, Semenov SS, Kirpichev YS, Arzamasov KM, Omelyanskaya OV, Vladzymyrskyy AV, Vasilev YA

PubMed paper · Jun 18 2025
During radiography, irregularities occur that decrease the diagnostic value of the images obtained. The purpose of this work was to develop a system for automated quality assurance of patient positioning in chest radiographs, with detection of suboptimal contrast, brightness, and metadata errors. The quality assurance system was trained and tested using more than 69,000 X-rays of the chest and other anatomical areas from the Unified Radiological Information Service (URIS) and several open datasets. Our dataset included studies regardless of patient gender and race; the sole exclusion criterion was age below 18 years. A training dataset of radiographs labeled by expert radiologists was used to train an ensemble of modified deep convolutional neural network architectures, ResNet152V2 and VGG19, to identify various quality deficiencies. Model performance was assessed using area under the receiver operating characteristic curve (ROC-AUC), precision, recall, F1-score, and accuracy. Seven neural network models were trained to classify radiographs by the following quality deficiencies: failure to capture the target anatomic region, chest rotation, suboptimal brightness, incorrect anatomical area, projection errors, and improper photometric interpretation. All metrics for each model exceed 95%, indicating high predictive value. All models were combined into a unified system for evaluating radiograph quality. The processing time per image is approximately 3 s. The system supports multiple use cases: integration into automated radiographic workstations, external quality assurance for radiology departments, acquisition quality audits for municipal health systems, and routing of studies to diagnostic AI models.
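
An ensemble of ResNet152V2 and VGG19 binary classifiers, one per deficiency, can be assembled in Keras roughly as follows (input size, pooling, head, and probability averaging are assumptions; the paper states the architectures were modified, so this is only a sketch):

```python
import tensorflow as tf

def quality_classifier(backbone_fn, name: str) -> tf.keras.Model:
    """Binary quality-deficiency classifier on a pretrained ImageNet backbone."""
    base = backbone_fn(include_top=False, weights="imagenet",
                       input_shape=(224, 224, 3), pooling="avg")
    out = tf.keras.layers.Dense(1, activation="sigmoid")(base.output)
    return tf.keras.Model(base.input, out, name=name)

resnet = quality_classifier(tf.keras.applications.ResNet152V2, "resnet152v2")
vgg = quality_classifier(tf.keras.applications.VGG19, "vgg19")

# Ensemble: average the two models' predicted probabilities
inp = tf.keras.Input((224, 224, 3))
ensemble = tf.keras.Model(inp, tf.keras.layers.Average()([resnet(inp), vgg(inp)]))
ensemble.compile(optimizer="adam", loss="binary_crossentropy", metrics=["AUC"])
# One such ensemble would be trained per deficiency (rotation, brightness, ...)
```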
