Postmortem Validation of Quantitative MRI for White Matter Hyperintensities in Alzheimer's Disease

Mojtabai, M., Kumar, R., Honnorat, N., Li, K., Wang, D., Li, J., Lee, R. F., Richardson, T. E., Cavazos, J. E., Bouhrara, M., Toledo, J. B., Heckbert, S., Flanagan, M. E., Bieniek, K. F., Walker, J. M., Seshadri, S., Habes, M.

medRxiv preprint · Aug 8, 2025
White matter hyperintensities (WMH) are frequently observed on MRI in aging and Alzheimer's disease (AD), yet their microstructural pathology remains poorly characterized. Conventional MRI sequences provide limited information about the tissue abnormalities underlying WMH, while histopathology, the gold standard, can only be applied postmortem. Quantitative MRI (qMRI) offers promising non-invasive alternatives to postmortem histopathology, but its metrics lack histological validation in AD. In this study, we examined the relationship between MRI metrics and histopathology in postmortem brain scans from eight donors with AD from the South Texas Alzheimer's Disease Research Center. Regions of interest were delineated by aligning MRI-identified WMH in the brain donor scans with postmortem histological sections. Histopathological features, including myelin integrity, tissue vacuolation, and gliosis, were quantified within these regions using machine learning. We report the correlations between these histopathological measures and two qMRI metrics, T2 and absolute myelin water signal (aMWS) maps, as well as conventional T1w/T2w MRI. The results derived from aMWS and T2 mapping indicate a strong association between WMH, myelin loss, and increased tissue vacuolation. Bland-Altman analyses indicated that T2 mapping showed more consistent agreement with histopathology, whereas the derived aMWS showed signs of systematic bias. T1w/T2w values exhibited weaker associations with histological alterations. Additionally, we observed distinct patterns of gliosis in periventricular and subcortical WMH. Our study presents one of the first histopathological validations of qMRI in AD, confirming that aMWS and T2 mapping are robust, non-invasive biomarkers that offer promising ways to monitor white matter pathology in neurodegenerative disorders.
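
Bland-Altman analysis like that used here is easy to reproduce. Below is a minimal NumPy sketch; the paired per-region qMRI and histology values are hypothetical and the function is generic, not the authors' pipeline.

```python
import numpy as np

def bland_altman(qmri, histo):
    """Bland-Altman statistics for paired measurements: the mean
    difference (bias) and 95% limits of agreement. A nonzero bias
    indicates a systematic offset, as reported for the derived aMWS."""
    diff = np.asarray(qmri) - np.asarray(histo)   # per-region differences
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, bias - half_width, bias + half_width

# Hypothetical paired per-region values, for illustration only
bias, lo, hi = bland_altman([0.52, 0.61, 0.48, 0.55], [0.50, 0.58, 0.49, 0.57])
```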

MAISI-v2: Accelerated 3D High-Resolution Medical Image Synthesis with Rectified Flow and Region-specific Contrastive Loss

Can Zhao, Pengfei Guo, Dong Yang, Yucheng Tang, Yufan He, Benjamin Simon, Mason Belue, Stephanie Harmon, Baris Turkbey, Daguang Xu

arXiv preprint · Aug 7, 2025
Medical image synthesis is an important topic for both clinical and research applications. Recently, diffusion models have become a leading approach in this area. Despite their strengths, many existing methods struggle with (1) limited generalizability, working only for specific body regions or voxel spacings; (2) slow inference, a common issue for diffusion models; and (3) weak alignment with input conditions, a critical issue for medical imaging. MAISI, a previously proposed framework, addresses the generalizability issue but still suffers from slow inference and limited condition consistency. In this work, we present MAISI-v2, the first accelerated 3D medical image synthesis framework that integrates rectified flow to enable fast, high-quality generation. To further enhance condition fidelity, we introduce a novel region-specific contrastive loss that sharpens sensitivity to regions of interest. Our experiments show that MAISI-v2 achieves state-of-the-art image quality with a $33 \times$ acceleration over the latent diffusion model. We also conducted a downstream segmentation experiment to show that the synthetic images can be used for data augmentation. We release our code, training details, model weights, and a GUI demo to facilitate reproducibility and promote further development within the community.
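
Rectified flow, the accelerant named in the abstract, trains a network to match the constant velocity of a straight-line path from noise to data, which is what permits few-step sampling. A minimal sketch of the generic velocity-matching loss follows; the model argument is a placeholder, and this is the textbook formulation rather than the MAISI-v2 implementation.

```python
import torch

def rectified_flow_loss(model, x1):
    """Generic rectified-flow velocity-matching loss (a sketch, not
    the MAISI-v2 code). x1 is a batch of data (e.g. 3D latents of
    shape B x C x D x H x W); model(x_t, t) predicts a velocity.
    The straight path x_t = (1 - t) * x0 + t * x1 from noise x0 to
    data x1 has constant velocity x1 - x0, which a few-step sampler
    can integrate almost exactly."""
    x0 = torch.randn_like(x1)                              # noise endpoint
    t = torch.rand(x1.shape[0], *([1] * (x1.dim() - 1)),   # one t per sample,
                   device=x1.device)                       # broadcastable shape
    xt = (1 - t) * x0 + t * x1                             # point on the straight path
    v_pred = model(xt, t.flatten())                        # predicted velocity
    return torch.mean((v_pred - (x1 - x0)) ** 2)           # match the constant velocity
```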

CT-based Radiomics Signature of Visceral Adipose Tissue for Prediction of Early Recurrence in Patients With NMIBC: a Multicentre Cohort Study.

Yu N, Li J, Cao D, Chen X, Yang D, Jiang N, Wu J, Zhao C, Zheng Y, Chen Y, Jin X

PubMed paper · Aug 7, 2025
The objective of this study was to investigate whether abdominal fat features derived from computed tomography (CT) can predict early recurrence within a year after the initial transurethral resection of bladder tumor (TURBT) in patients with non-muscle-invasive bladder cancer (NMIBC), and to construct a predictive model combining these features with clinical factors to aid in assessing the risk of early recurrence after initial TURBT. This retrospective study enrolled 325 NMIBC patients from three centers. Machine-learning-based visceral adipose tissue (VAT) radiomics models (VAT-RM) and subcutaneous adipose tissue (SAT) radiomics models (SAT-RM) were constructed to identify patients with early recurrence. A combined model integrating the VAT-RM and clinical factors was established. The predictive performance of each variable and model was analyzed using the area under the receiver operating characteristic curve (AUC). The net benefit of each variable and model was assessed through decision curve analysis (DCA). Calibration was evaluated using the Hosmer-Lemeshow test. The VAT-RM demonstrated satisfactory performance in the training cohort (AUC = 0.853, 95% CI 0.768-0.937), test cohort 1 (AUC = 0.823, 95% CI 0.730-0.916), and test cohort 2 (AUC = 0.808, 95% CI 0.681-0.935). Across all cohorts, the AUC values of the VAT-RM were higher than those of the SAT-RM (P < 0.001). The DCA curves further confirmed that the clinical net benefit of the VAT-RM was superior to that of the SAT-RM. In multivariate logistic regression analysis, the VAT-RM emerged as the most significant independent predictor (odds ratio [OR] = 0.295, 95% CI 0.141-0.508, P < 0.001). The fusion model exhibited excellent AUC values of 0.938, 0.909, and 0.905 across the three cohorts and surpassed traditional risk assessment frameworks in both predictive efficacy and clinical net benefit. VAT is a crucial factor in early postoperative recurrence in NMIBC patients. The VAT-RM can accurately identify patients at high risk of early postoperative recurrence, offering significant advantages over the SAT-RM, and the predictive model integrating the VAT-RM with clinical factors exhibits excellent predictive performance, clinical net benefit, and calibration accuracy.
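
The decision curve analysis referenced above compares models by net benefit across threshold probabilities. A minimal sketch of the standard net-benefit formula follows, with hypothetical labels and predicted probabilities; this is the generic DCA computation, not the authors' code.

```python
import numpy as np

def net_benefit(y_true, y_prob, threshold):
    """Standard DCA net benefit at threshold probability p_t:
    TP/n - (FP/n) * p_t / (1 - p_t)."""
    y_true = np.asarray(y_true)
    pred = np.asarray(y_prob) >= threshold
    n = len(y_true)
    tp = np.sum(pred & (y_true == 1))
    fp = np.sum(pred & (y_true == 0))
    return tp / n - (fp / n) * threshold / (1 - threshold)

# Hypothetical labels and predicted probabilities, swept over thresholds
y = np.array([1, 0, 1, 0, 1, 0])
p = np.array([0.9, 0.4, 0.7, 0.2, 0.6, 0.1])
curve = [net_benefit(y, p, t) for t in np.linspace(0.05, 0.50, 10)]
```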

Gastrointestinal bleeding detection on digital subtraction angiography using convolutional neural networks with and without temporal information.

Smetanick D, Naidu S, Wallace A, Knuttinen MG, Patel I, Alzubaidi S

PubMed paper · Aug 7, 2025
Digital subtraction angiography (DSA) offers a real-time approach to locating lower gastrointestinal (GI) bleeding. However, many sources of bleeding are not easily visible on angiograms. This investigation aims to develop a machine learning tool that can locate GI bleeding on DSA prior to transarterial embolization. All mesenteric artery angiograms and arterial embolization DSA images obtained in the interventional radiology department between January 1, 2007, and December 31, 2021, were analyzed. These images were acquired using fluoroscopy imaging systems (Siemens Healthineers, USA). Thirty-nine unique series of bleeding images were augmented to train two-dimensional (2D) and three-dimensional (3D) residual neural networks (ResUNet++) for image segmentation. The 2D ResUNet++ network was trained on 3,548 images and tested on 394 images, whereas the 3D ResUNet++ network was trained on 316 3D objects and tested on 35 objects. For each case, both manually cropped images focused on the GI bleed and uncropped images were evaluated, with a superimposition post-processing (SIPP) technique applied to both image types. In both quantitative and qualitative analyses, the 2D ResUNet++ network significantly outperformed the 3D ResUNet++ model. In the qualitative evaluation, the 2D ResUNet++ model achieved the highest accuracy at both 128 × 128 and 256 × 256 input resolutions when enhanced with the SIPP technique, reaching accuracy rates between 95% and 97%. However, despite the improved detection consistency provided by SIPP, Dice similarity coefficients were lower than those of models without post-processing: the 2D ResUNet++ model combined with SIPP achieved a Dice coefficient of only 80%. This decline is primarily attributable to false-positive predictions introduced by the temporal propagation of segmentation masks across frames. Both 2D and 3D ResUNet++ networks can be trained to locate GI bleeding on DSA images prior to transarterial embolization, but further research and refinement are needed before this technology can be deployed for real-time prediction. Automated detection of GI bleeding in DSA may reduce time to embolization, thereby improving patient outcomes.
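
The reported trade-off, better detection consistency but lower Dice after SIPP, follows naturally if masks are accumulated across frames. The sketch below shows the Dice coefficient and one plausible reading of the superimposition step; the exact SIPP rule is an assumption, not taken from the paper.

```python
import numpy as np

def dice(pred, gt, eps=1e-7):
    """Dice similarity coefficient between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    return 2 * np.sum(pred & gt) / (np.sum(pred) + np.sum(gt) + eps)

def superimpose(frame_masks):
    """One plausible superimposition rule (an assumption, not the
    paper's definition): accumulate per-frame predictions with a
    running logical OR, so a bleed detected in any earlier frame
    persists in later frames. This improves detection consistency
    but lets early false positives propagate, lowering Dice."""
    out, acc = [], np.zeros_like(frame_masks[0], dtype=bool)
    for m in frame_masks:
        acc = acc | m.astype(bool)
        out.append(acc.copy())
    return out
```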

Unsupervised learning for inverse problems in computed tomography

Laura Hellwege, Johann Christopher Engster, Moritz Schaar, Thorsten M. Buzug, Maik Stille

arXiv preprint · Aug 7, 2025
This study presents an unsupervised deep learning approach for computed tomography (CT) image reconstruction, leveraging the inherent similarities between deep neural network training and conventional iterative reconstruction methods. By incorporating forward and backward projection layers within the deep learning framework, we demonstrate the feasibility of reconstructing images from projection data without relying on ground-truth images. Our method is evaluated on the two-dimensional 2DeteCT dataset, showcasing superior performance in terms of mean squared error (MSE) and structural similarity index (SSIM) compared to traditional filtered backprojection (FBP) and maximum likelihood (ML) reconstruction techniques. Additionally, our approach significantly reduces reconstruction time, making it a promising alternative for real-time medical imaging applications. Future work will focus on extending this methodology to three-dimensional reconstructions and enhancing the adaptability of the projection geometry.
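
The central idea, supervising reconstruction with the measured projections rather than ground-truth images, fits in a few lines. A minimal PyTorch-style sketch follows, assuming a differentiable forward_project operator; the name is a placeholder (operator libraries such as ODL or torch-radon provide such projectors), and this is not the authors' implementation.

```python
import torch

def unsupervised_recon_step(net, sinogram, forward_project, optimizer):
    """One training step with no ground-truth image: the network maps
    the measured sinogram to a candidate image, and the loss compares
    the re-projection of that image against the measurement itself.
    forward_project is a placeholder for a differentiable projector."""
    optimizer.zero_grad()
    image = net(sinogram)                                   # candidate reconstruction
    loss = torch.mean((forward_project(image) - sinogram) ** 2)
    loss.backward()                                         # gradients flow through the projector
    optimizer.step()
    return loss.item()
```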

UltimateSynth: MRI Physics for Pan-Contrast AI

Adams, R., Huynh, K. M., Zhao, W., Hu, S., Lyu, W., Ahmad, S., Ma, D., Yap, P.-T.

bioRxiv preprint · Aug 7, 2025
Magnetic resonance imaging (MRI) is commonly used in healthcare for its ability to generate diverse tissue contrasts without ionizing radiation. However, this flexibility complicates downstream analysis, as computational tools are often tailored to specific types of MRI and lack generalizability across the full spectrum of scans used in healthcare. Here, we introduce a versatile framework for the development and validation of AI models that can robustly process and analyze the full spectrum of scans achievable with MRI, enabling model deployment across scanner models, scan sequences, and age groups. Core to our framework is UltimateSynth, a technology that combines tissue physiology and MR physics in synthesizing realistic images across a comprehensive range of meaningful contrasts. This pan-contrast capability bolsters the AI development life cycle through efficient data labeling, generalizable model training, and thorough performance benchmarking. We showcase the effectiveness of UltimateSynth by training an off-the-shelf U-Net to generalize anatomical segmentation across any MR contrast. The U-Net yields highly robust tissue volume estimates, with variability under 4% across 150,000 unique-contrast images, 3.8% across 2,000+ low-field 0.3T scans, and 3.5% across 8,000+ images spanning the human lifespan from ages 0 to 100.
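
Synthesizing contrasts from tissue parameter maps rests on closed-form MR signal equations. As one standard illustration (the spoiled gradient-echo equation; the abstract does not specify which sequence models UltimateSynth uses), sweeping acquisition parameters over fixed quantitative maps turns a single labeled anatomy into arbitrarily many contrasts.

```python
import numpy as np

def spgr_signal(pd, t1, t2s, tr, te, flip_deg):
    """Spoiled gradient-echo steady-state signal:
    S = PD * sin(a) * (1 - E1) / (1 - cos(a) * E1) * exp(-TE / T2*),
    with E1 = exp(-TR / T1). Sweeping (TR, TE, flip) over fixed
    PD/T1/T2* maps generates a continuum of realistic contrasts."""
    a = np.deg2rad(flip_deg)
    e1 = np.exp(-tr / t1)
    return pd * np.sin(a) * (1 - e1) / (1 - np.cos(a) * e1) * np.exp(-te / t2s)

# Hypothetical gray-matter-like voxel at 3T: PD 0.8, T1 1300 ms, T2* 66 ms
s_t1w = spgr_signal(0.8, 1300.0, 66.0, tr=30.0, te=5.0, flip_deg=30.0)      # T1-weighted
s_pdw = spgr_signal(0.8, 1300.0, 66.0, tr=3000.0, te=10.0, flip_deg=90.0)   # PD-weighted
```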

Artificial Intelligence for the Detection of Fetal Ultrasound Findings Concerning for Major Congenital Heart Defects.

Zelop CM, Lam-Rachlin J, Arunamata A, Punn R, Behera SK, Lachaud M, David N, DeVore GR, Rebarber A, Fox NS, Gayanilo M, Garmel S, Boukobza P, Uzan P, Joly H, Girardot R, Cohen L, Stos B, De Boisredon M, Askinazi E, Thorey V, Gardella C, Levy M, Geiger M

PubMed paper · Aug 7, 2025
To evaluate the performance of artificial intelligence (AI)-based software in identifying second-trimester fetal ultrasound examinations suspicious for congenital heart defects. The software analyzes all grayscale two-dimensional ultrasound cine clips of an examination to evaluate eight morphologic findings associated with severe congenital heart defects. A data set of 877 examinations was retrospectively collected from 11 centers. The presence of suspicious findings was determined by a panel of expert pediatric cardiologists, who found that 311 examinations had at least one of the eight suspicious findings. The AI software processed each examination, labeling each finding as present, absent, or inconclusive. Of the 280 examinations with known severe congenital heart defects, 278 had at least one of the eight suspicious findings present as determined by the fetal cardiologists (sensitivity 0.993, 95% CI 0.974-0.998), highlighting the relevance of these eight findings. Among these 280 examinations, the AI software identified at least one finding as present in 271, reported all eight findings as absent in five, and was inconclusive in four, yielding a sensitivity of 0.968 (95% CI 0.940-0.983) for severe congenital heart defects. When comparing the AI with the fetal cardiologists' determination of findings, AI detection of any finding had a sensitivity of 0.987 (95% CI 0.967-0.995) and a specificity of 0.977 (95% CI 0.961-0.986) after exclusion of inconclusive examinations. The AI rendered a decision for any finding (either present or absent) in 98.7% of examinations. The AI-based software demonstrated high accuracy in identifying suspicious findings associated with severe congenital heart defects, yielding a high sensitivity for detecting severe congenital heart defects. These results show that AI has the potential to improve antenatal congenital heart defect detection.
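
The headline sensitivity is straightforward to check. A minimal sketch using the Wilson score interval (one common choice; the paper does not state which interval method it used) reproduces the reported 0.974-0.998 interval for 278 of 280.

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score 95% confidence interval for a proportion."""
    p = successes / n
    denom = 1 + z ** 2 / n
    center = (p + z ** 2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return center - half, center + half

# 278 of 280 severe-CHD examinations had at least one suspicious finding
lo, hi = wilson_ci(278, 280)
print(round(lo, 3), round(hi, 3))   # ~0.974 0.998, matching the reported CI
```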

MLAgg-UNet: Advancing Medical Image Segmentation with Efficient Transformer and Mamba-Inspired Multi-Scale Sequence.

Jiang J, Lei S, Li H, Sun Y

PubMed paper · Aug 7, 2025
Transformers and state space sequence models (SSMs) have attracted interest in biomedical image segmentation for their ability to capture long-range dependencies. However, traditional visual state space (VSS) methods suffer from the incompatibility of image tokens with the autoregressive assumption. Although Transformer attention does not require this assumption, its high computational cost limits effective channel-wise information utilization. To overcome these limitations, we propose the Mamba-Like Aggregated UNet (MLAgg-UNet), which introduces a Mamba-inspired mechanism to enrich Transformer channel representations and exploit the implicit autoregressive characteristic of the U-shaped architecture. To establish dependencies among image tokens at a single scale, the Mamba-Like Aggregated Attention (MLAgg) block is designed to balance representational ability and computational efficiency. Inspired by the human foveal vision system, the Mamba macro-structure, and differential attention, the MLAgg block can slide its focus over each image token, suppress irrelevant tokens, and simultaneously strengthen channel-wise information utilization. Moreover, leveraging causal relationships between consecutive low-level and high-level features in the U-shaped architecture, we propose the Multi-Scale Mamba Module with Implicit Causality (MSMM) to optimize complementary information across scales. Embedded within skip connections, this module enhances semantic consistency between encoder and decoder features. Extensive experiments on four benchmark datasets (AbdomenMRI, ACDC, BTCV, and EndoVis17), covering MRI, CT, and endoscopy modalities, demonstrate that the proposed MLAgg-UNet consistently outperforms state-of-the-art CNN-based, Transformer-based, and Mamba-based methods, with improvements of at least 1.24%, 0.20%, 0.33%, and 0.39% in DSC on these datasets, respectively. These results highlight the model's ability to effectively capture feature correlations and integrate complementary multi-scale information, providing a robust solution for medical image segmentation. The implementation is publicly available at https://github.com/aticejiang/MLAgg-UNet.
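
Of the MLAgg block's stated ingredients, differential attention is the most self-contained: subtracting a second softmax attention map cancels common-mode scores and suppresses attention to irrelevant tokens. A generic sketch follows; this is the published differential-attention formulation, not the MLAgg block itself, and the fixed lam weight is a simplification of the learned weighting.

```python
import torch
import torch.nn.functional as F

def differential_attention(x, wq1, wk1, wq2, wk2, wv, lam=0.5):
    """Generic differential attention, one ingredient the MLAgg block
    cites (not the MLAgg block itself; the fixed lam stands in for a
    learned weight). The difference of two attention maps cancels
    common-mode scores, sharpening focus on relevant tokens.
    x: (B, N, d); each w*: a (d, d) projection tensor."""
    d = x.shape[-1]
    a1 = F.softmax((x @ wq1) @ (x @ wk1).transpose(-2, -1) / d ** 0.5, dim=-1)
    a2 = F.softmax((x @ wq2) @ (x @ wk2).transpose(-2, -1) / d ** 0.5, dim=-1)
    return (a1 - lam * a2) @ (x @ wv)       # differential map applied to values
```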

Enhancing Domain Generalization in Medical Image Segmentation With Global and Local Prompts.

Zhao C, Li X

PubMed paper · Aug 7, 2025
Enhancing domain generalization (DG) is a crucial and compelling research pursuit in medical image segmentation, owing to the inherent heterogeneity of medical images. The recent success of large-scale pre-trained vision models (PVMs), such as the Vision Transformer (ViT), inspires us to explore their application in this area. While a straightforward strategy is to fine-tune the PVM using supervised signals from the source domains, this approach overlooks the domain shift issue and neglects the rich knowledge inherent in the instances themselves. To overcome these limitations, we introduce a novel framework enhanced by global and local prompts (GLP). Specifically, to adapt the PVM to the medical DG scenario, we explicitly separate domain-shared and domain-specific knowledge in the form of global and local prompts. Furthermore, we develop an individualized domain adapter to intricately model the relationship between each target domain sample and the source domains. To harness the inherent knowledge within instances, we devise two innovative regularization terms, from the consistency and anatomy perspectives, encouraging the model to preserve instance discriminability and organ position invariance. Extensive experiments and in-depth discussions in both vanilla and semi-supervised DG scenarios across five diverse medical datasets consistently demonstrate the superior segmentation performance achieved by GLP. Our code and datasets are publicly available at https://github.com/xmed-lab/GLP.
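
The prompt mechanism GLP builds on is ordinary visual prompt tuning: learnable tokens are prepended to the frozen PVM's token sequence, so only the prompts adapt to the target task. A minimal sketch follows; collapsing global and local prompts into a single learnable set is a simplification of the paper's design.

```python
import torch
import torch.nn as nn

class PromptedTokens(nn.Module):
    """Generic visual prompt tuning, the mechanism GLP builds on
    (one learnable prompt set here simplifies the paper's separate
    global/local prompts). Prompt tokens are prepended to the frozen
    PVM's patch tokens, so only the prompts are updated in training."""
    def __init__(self, n_prompts, dim):
        super().__init__()
        self.prompts = nn.Parameter(torch.zeros(1, n_prompts, dim))
        nn.init.trunc_normal_(self.prompts, std=0.02)

    def forward(self, patch_tokens):                 # (B, N, dim) from a frozen encoder
        b = patch_tokens.shape[0]
        return torch.cat([self.prompts.expand(b, -1, -1), patch_tokens], dim=1)
```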

Best Machine Learning Model for Predicting Axial Symptoms After Unilateral Laminoplasty: Based on C2 Spinous Process Muscle Radiomics Features and Sagittal Parameters.

Zheng B, Zhu Z, Liang Y, Liu H

PubMed paper · Aug 7, 2025
Study Design: Retrospective study. Objective: To develop a machine learning model for predicting axial symptoms (AS) after unilateral laminoplasty by integrating C2 spinous process muscle radiomics features and cervical sagittal parameters. Methods: In this retrospective study of 96 cervical myelopathy patients (30 with AS, 66 without) who underwent unilateral laminoplasty between 2018 and 2022, we extracted radiomics features from preoperative MRI of the C2 spinous muscles using PyRadiomics. Clinical data, including the C2-C7 Cobb angle, cervical sagittal vertical axis (cSVA), T1 slope (T1S), and C2 muscle fat infiltration, were collected for clinical model construction. After LASSO regression feature selection, we constructed six machine learning models (SVM, KNN, Random Forest, ExtraTrees, XGBoost, and LightGBM) and evaluated their performance using ROC curves and AUC. Results: The AS group demonstrated significantly lower preoperative C2-C7 Cobb angles (12.80° ± 7.49° vs 18.02° ± 8.59°, P = .006), higher cSVA (3.01 ± 0.87 cm vs 2.46 ± 1.19 cm, P = .026), higher T1S (26.68° ± 5.12° vs 23.66° ± 7.58°, P = .025), and higher C2 muscle fat infiltration (23.73 ± 7.78 vs 20.62 ± 6.93, P = .026). Key radiomics features included local binary pattern texture features and wavelet transform characteristics. The combined model integrating radiomics and clinical parameters achieved the best performance, with a test AUC of 0.881, sensitivity of 0.833, and specificity of 0.786. Conclusion: The machine learning model based on C2 spinous process muscle radiomics features and clinical parameters (C2-C7 Cobb angle, cSVA, T1S, and C2 muscle fat infiltration) effectively predicts AS occurrence after unilateral laminoplasty, providing clinicians with a valuable tool for preoperative risk assessment and personalized treatment planning.
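
The selection-then-classification pipeline described in the Methods can be sketched with scikit-learn. Everything below, including the synthetic feature matrix and labels, is illustrative rather than the authors' code; the real features come from PyRadiomics, and Random Forest is just one of the six compared models.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the radiomics matrix: 96 patients, 200 features.
rng = np.random.default_rng(0)
X = rng.normal(size=(96, 200))
y = (X[:, 0] + 0.5 * rng.normal(size=96) > 0).astype(int)   # illustrative AS labels

# LASSO regression feature selection: keep features with nonzero coefficients
keep = np.flatnonzero(LassoCV(cv=5).fit(X, y).coef_)

# One of the six compared classifiers, evaluated by ROC AUC on held-out data
X_tr, X_te, y_tr, y_te = train_test_split(
    X[:, keep], y, test_size=0.3, random_state=0, stratify=y)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
```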