From Low Field to High Value: Robust Cortical Mapping from Low-Field MRI

Karthik Gopinath, Annabel Sorby-Adams, Jonathan W. Ramirez, Dina Zemlyanker, Jennifer Guo, David Hunt, Christine L. Mac Donald, C. Dirk Keene, Timothy Coalson, Matthew F. Glasser, David Van Essen, Matthew S. Rosen, Oula Puonti, W. Taylor Kimberly, Juan Eugenio Iglesias

arXiv preprint, May 18, 2025
Three-dimensional reconstruction of cortical surfaces from MRI for morphometric analysis is fundamental for understanding brain structure. While high-field MRI (HF-MRI) is standard in research and clinical settings, its limited availability hinders widespread use. Low-field MRI (LF-MRI), particularly portable systems, offers a cost-effective and accessible alternative. However, existing cortical surface analysis tools are optimized for high-resolution HF-MRI and struggle with the lower signal-to-noise ratio and resolution of LF-MRI. In this work, we present a machine learning method for 3D reconstruction and analysis of portable LF-MRI across a range of contrasts and resolutions. Our method works "out of the box" without retraining. It uses a 3D U-Net trained on synthetic LF-MRI to predict signed distance functions of cortical surfaces, followed by geometric processing to ensure topological accuracy. We evaluate our method using paired HF/LF-MRI scans of the same subjects, showing that LF-MRI surface reconstruction accuracy depends on acquisition parameters, including contrast type (T1 vs T2), orientation (axial vs isotropic), and resolution. A 3 mm isotropic T2-weighted scan acquired in under 4 minutes yields strong agreement with HF-derived surfaces: surface area correlates at r=0.96, cortical parcellations reach Dice=0.98, and gray matter volume achieves r=0.93. Cortical thickness remains more challenging, with correlations up to r=0.70, reflecting the difficulty of sub-mm precision with 3 mm voxels. We further validate our method on challenging postmortem LF-MRI, demonstrating its robustness. Our method represents a step toward enabling cortical surface analysis on portable LF-MRI. Code is available at https://surfer.nmr.mgh.harvard.edu/fswiki/ReconAny
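
The core geometric step, going from a predicted signed distance function (SDF) to an explicit surface, can be illustrated with a zero level-set extraction. The sketch below is a minimal stand-in for that step only, not the authors' pipeline: a synthetic spherical SDF plays the role of the U-Net prediction, and marching cubes recovers a mesh whose area could feed morphometric measures.

```python
# Minimal sketch: extracting a surface from a predicted SDF volume.
# The synthetic sphere below stands in for a network output.
import numpy as np
from skimage import measure

# Synthetic SDF on a 64^3 grid: negative inside a sphere of radius 20 voxels.
zz, yy, xx = np.mgrid[0:64, 0:64, 0:64]
sdf = np.sqrt((xx - 32.0) ** 2 + (yy - 32.0) ** 2 + (zz - 32.0) ** 2) - 20.0

# Marching cubes at level 0 recovers the implicit surface; spacing would be
# the voxel size in mm for a real scan (e.g. 3 mm isotropic LF-MRI).
verts, faces, normals, _ = measure.marching_cubes(sdf, level=0.0, spacing=(1.0, 1.0, 1.0))

# Surface area of the triangle mesh (mm^2 when spacing is given in mm).
area = measure.mesh_surface_area(verts, faces)
print(f"{len(verts)} vertices, {len(faces)} faces, area ~ {area:.1f}")
```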

Deep learning feature-based model for predicting lymphovascular invasion in urothelial carcinoma of bladder using CT images.

Xiao B, Lv Y, Peng C, Wei Z, Xv Q, Lv F, Jiang Q, Liu H, Li F, Xv Y, He Q, Xiao M

PubMed, May 18, 2025
Lymphovascular invasion significantly impacts the prognosis of urothelial carcinoma of the bladder. Traditional lymphovascular invasion detection methods are time-consuming and costly. This study aims to develop a deep learning-based model to preoperatively predict lymphovascular invasion status in urothelial carcinoma of the bladder using CT images. Data and CT images of 577 patients across four medical centers were retrospectively collected. The largest tumor slices from the transverse, coronal, and sagittal planes were selected and used to train CNN models (InceptionV3, DenseNet121, ResNet18, ResNet34, ResNet50, and VGG11). Deep learning features were extracted and visualized using Grad-CAM. Principal Component Analysis reduced the features to 64. Using the extracted features, Decision Tree, XGBoost, and LightGBM models were trained with 5-fold cross-validation and ensembled in a stacking model. Clinical risk factors were identified through logistic regression analyses and combined with deep learning scores to enhance lymphovascular invasion prediction accuracy. The ResNet50-based model achieved an AUC of 0.818 in the validation set and 0.708 in the testing set. The combined model showed an AUC of 0.794 in the validation set and 0.767 in the testing set, demonstrating robust performance across diverse data. We developed a robust radiomics model based on deep learning features from CT images to preoperatively predict lymphovascular invasion status in urothelial carcinoma of the bladder. This model offers a non-invasive, cost-effective tool to assist clinicians in personalized treatment planning. In summary, we developed a deep learning feature-based stacking model to predict lymphovascular invasion in patients with urothelial carcinoma of the bladder using CT. The largest cross-sections from the three orthogonal planes of the CT image are used to train the CNN models, and comparisons were made across six CNN networks, including ResNet50.
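
A hedged sketch of the dimensionality-reduction and stacking stage described above, assuming the CNN features have already been extracted into a feature matrix. Scikit-learn's DecisionTree and GradientBoosting classifiers stand in for the paper's XGBoost/LightGBM learners, and all data are random placeholders.

```python
# Minimal sketch: PCA to 64 features, then a stacked ensemble with 5-fold CV.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import GradientBoostingClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 512))      # placeholder deep-learning features
y = rng.integers(0, 2, size=200)     # placeholder LVI labels

stack = StackingClassifier(
    estimators=[
        ("tree", DecisionTreeClassifier(max_depth=4)),
        ("gbdt", GradientBoostingClassifier()),  # stand-in for XGBoost/LightGBM
    ],
    final_estimator=LogisticRegression(),
    cv=5,                            # 5-fold CV for the meta-learner
)
model = make_pipeline(PCA(n_components=64), stack)  # reduce DL features to 64
print(cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean())
```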

SMURF: Scalable method for unsupervised reconstruction of flow in 4D flow MRI

Atharva Hans, Abhishek Singh, Pavlos Vlachos, Ilias Bilionis

arXiv preprint, May 18, 2025
We introduce SMURF, a scalable and unsupervised machine learning method for simultaneously segmenting vascular geometries and reconstructing velocity fields from 4D flow MRI data. SMURF models geometry and velocity fields using multilayer perceptron-based functions incorporating Fourier feature embeddings and random weight factorization to accelerate convergence. A measurement model connects these fields to the observed image magnitude and phase data. Maximum likelihood estimation and subsampling enable SMURF to process high-dimensional datasets efficiently. Evaluations on synthetic, in vitro, and in vivo datasets demonstrate SMURF's performance. On synthetic internal carotid artery aneurysm data derived from computational fluid dynamics (CFD), SMURF achieves a quarter-voxel segmentation accuracy across noise levels of up to 50%, up to twice the accuracy of the state-of-the-art segmentation method. In an in vitro experiment on Poiseuille flow, SMURF reduces velocity reconstruction RMSE by approximately 34% compared to raw measurements. In in vivo internal carotid artery aneurysm data, SMURF attains nearly half-voxel segmentation accuracy relative to expert annotations and decreases median velocity divergence residuals by about 31%, with a 27% reduction in the interquartile range. These results indicate that SMURF is robust to noise, preserves flow structure, and identifies patient-specific morphological features. SMURF advances 4D flow MRI accuracy, potentially enhancing the diagnostic utility of 4D flow MRI in clinical applications.
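
A minimal sketch of the kind of coordinate network SMURF builds on: an MLP over Fourier feature embeddings that maps space-time coordinates to a velocity field. The random weight factorization, measurement model, and maximum-likelihood training are omitted, and every dimension and hyperparameter here is an assumption rather than the authors' setting.

```python
# Minimal sketch: Fourier-feature MLP representing a continuous velocity field.
import torch
import torch.nn as nn

class FourierFeatureMLP(nn.Module):
    def __init__(self, in_dim=4, n_freqs=128, hidden=256, out_dim=3, scale=10.0):
        super().__init__()
        # Fixed random projection B maps (x, y, z, t) to sinusoidal features.
        self.register_buffer("B", torch.randn(in_dim, n_freqs) * scale)
        self.net = nn.Sequential(
            nn.Linear(2 * n_freqs, hidden), nn.GELU(),
            nn.Linear(hidden, hidden), nn.GELU(),
            nn.Linear(hidden, out_dim),        # e.g. 3 velocity components
        )

    def forward(self, coords):
        proj = 2 * torch.pi * coords @ self.B
        feats = torch.cat([torch.sin(proj), torch.cos(proj)], dim=-1)
        return self.net(feats)

# Query the field at a batch of space-time coordinates.
model = FourierFeatureMLP()
velocity = model(torch.rand(1024, 4))
print(velocity.shape)  # torch.Size([1024, 3])
```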

Harnessing Artificial Intelligence for Accurate Diagnosis and Radiomics Analysis of Combined Pulmonary Fibrosis and Emphysema: Insights from a Multicenter Cohort Study

Zhang, S., Wang, H., Tang, H., Li, X., Wu, N.-W., Lang, Q., Li, B., Zhu, H., Chen, X., Chen, K., Xie, B., Zhou, A., Mo, C.

medRxiv preprint, May 18, 2025
Combined Pulmonary Fibrosis and Emphysema (CPFE), formally recognized as a distinct pulmonary syndrome in 2022, is characterized by unique clinical features and pathogenesis that may lead to respiratory failure and death. However, the diagnosis of CPFE presents significant challenges that hinder effective treatment. Here, we assembled three-dimensional (3D) reconstructions of chest high-resolution computed tomography (HRCT) scans from patients at multiple hospitals across different provinces in China, including Xiangya Hospital, West China Hospital, and Fujian Provincial Hospital. Using this dataset, we developed CPFENet, a deep learning-based diagnostic model for CPFE. It accurately differentiates CPFE from chronic obstructive pulmonary disease (COPD), with performance comparable to that of professional radiologists. Additionally, we developed a CPFE score based on radiomic analysis of 3D CT images to quantify disease characteristics. Notably, female patients demonstrated significantly higher CPFE scores than males, suggesting potential sex-specific differences in CPFE. Overall, our study establishes the first diagnostic framework for CPFE, providing a diagnostic model and clinical indicators that enable accurate classification and characterization of the syndrome.

FreqSelect: Frequency-Aware fMRI-to-Image Reconstruction

Junliang Ye, Lei Wang, Md Zakir Hossain

arXiv preprint, May 18, 2025
Reconstructing natural images from functional magnetic resonance imaging (fMRI) data remains a core challenge in neural decoding due to the mismatch between the richness of visual stimuli and the noisy, low-resolution nature of fMRI signals. While recent two-stage models, combining deep variational autoencoders (VAEs) with diffusion models, have advanced this task, they treat all spatial-frequency components of the input equally. This uniform treatment forces the model to extract meaningful features and suppress irrelevant noise simultaneously, limiting its effectiveness. We introduce FreqSelect, a lightweight, adaptive module that selectively filters spatial-frequency bands before encoding. By dynamically emphasizing frequencies that are most predictive of brain activity and suppressing those that are uninformative, FreqSelect acts as a content-aware gate between image features and neural data. It integrates seamlessly into standard very deep VAE-diffusion pipelines and requires no additional supervision. Evaluated on the Natural Scenes Dataset, FreqSelect consistently improves reconstruction quality across both low- and high-level metrics. Beyond performance gains, the learned frequency-selection patterns offer interpretable insights into how different visual frequencies are represented in the brain. Our method generalizes across subjects and scenes, and holds promise for extension to other neuroimaging modalities, offering a principled approach to enhancing both decoding accuracy and neuroscientific interpretability.
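
The central mechanism, reweighting spatial-frequency bands before encoding, can be sketched with a learned gate over radial bands of the 2D spectrum. This is an illustrative assumption of how such a gate might look, not FreqSelect's actual architecture; the band edges and per-band scalar gates are invented for the example.

```python
# Minimal sketch: split an image into radial frequency bands via the 2D FFT
# and reweight each band with a learned gate.
import torch
import torch.nn as nn

class FrequencyGate(nn.Module):
    def __init__(self, n_bands=4):
        super().__init__()
        # One learnable gate per radial frequency band (sigmoid ~0.5 at init).
        self.gates = nn.Parameter(torch.zeros(n_bands))

    def forward(self, img):                        # img: (B, C, H, W)
        _, _, H, W = img.shape
        spec = torch.fft.fft2(img)
        fy = torch.fft.fftfreq(H, device=img.device)
        fx = torch.fft.fftfreq(W, device=img.device)
        radius = torch.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)   # (H, W)
        edges = torch.linspace(0.0, float(radius.max()),
                               len(self.gates) + 1, device=img.device)
        out = torch.zeros_like(spec)
        for i, g in enumerate(torch.sigmoid(self.gates)):
            band = (radius >= edges[i]) & (radius <= edges[i + 1])
            out = out + spec * (band.float() * g)  # reweight this band
        return torch.fft.ifft2(out).real

filtered = FrequencyGate()(torch.rand(2, 3, 64, 64))
print(filtered.shape)  # torch.Size([2, 3, 64, 64])
```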

MedAgentBoard: Benchmarking Multi-Agent Collaboration with Conventional Methods for Diverse Medical Tasks

Yinghao Zhu, Ziyi He, Haoran Hu, Xiaochen Zheng, Xichen Zhang, Zixiang Wang, Junyi Gao, Liantao Ma, Lequan Yu

arXiv preprint, May 18, 2025
The rapid advancement of Large Language Models (LLMs) has stimulated interest in multi-agent collaboration for addressing complex medical tasks. However, the practical advantages of multi-agent collaboration approaches remain insufficiently understood. Existing evaluations often lack generalizability, failing to cover diverse tasks reflective of real-world clinical practice, and frequently omit rigorous comparisons against both single-LLM-based and established conventional methods. To address this critical gap, we introduce MedAgentBoard, a comprehensive benchmark for the systematic evaluation of multi-agent collaboration, single-LLM, and conventional approaches. MedAgentBoard encompasses four diverse medical task categories: (1) medical (visual) question answering, (2) lay summary generation, (3) structured Electronic Health Record (EHR) predictive modeling, and (4) clinical workflow automation, across text, medical images, and structured EHR data. Our extensive experiments reveal a nuanced landscape: while multi-agent collaboration demonstrates benefits in specific scenarios, such as enhancing task completeness in clinical workflow automation, it does not consistently outperform advanced single LLMs (e.g., in textual medical QA) or, critically, specialized conventional methods that generally maintain better performance in tasks like medical VQA and EHR-based prediction. MedAgentBoard offers a vital resource and actionable insights, emphasizing the necessity of a task-specific, evidence-based approach to selecting and developing AI solutions in medicine. It underscores that the inherent complexity and overhead of multi-agent collaboration must be carefully weighed against tangible performance gains. All code, datasets, detailed prompts, and experimental results are open-sourced at https://medagentboard.netlify.app/.

SMFusion: Semantic-Preserving Fusion of Multimodal Medical Images for Enhanced Clinical Diagnosis

Haozhe Xiang, Han Zhang, Yu Cheng, Xiongwen Quan, Wanwan Huang

arXiv preprint, May 18, 2025
Multimodal medical image fusion plays a crucial role in medical diagnosis by integrating complementary information from different modalities to enhance image readability and clinical applicability. However, existing methods mainly follow computer vision standards for feature extraction and fusion strategy formulation, overlooking the rich semantic information inherent in medical images. To address this limitation, we propose a novel semantic-guided medical image fusion approach that, for the first time, incorporates medical prior knowledge into the fusion process. Specifically, we construct a publicly available multimodal medical image-text dataset, upon which text descriptions generated by BiomedGPT are encoded and semantically aligned with image features in a high-dimensional space via a semantic interaction alignment module. During this process, a cross-attention-based linear transformation automatically maps the relationship between textual and visual features to facilitate comprehensive learning. The aligned features are then embedded into a text-injection module for further feature-level fusion. Unlike traditional methods, we further generate diagnostic reports from the fused images to assess the preservation of medical information. Additionally, we design a medical semantic loss function to enhance the retention of textual cues from the source images. Experimental results on test datasets demonstrate that the proposed method achieves superior performance in both qualitative and quantitative evaluations while preserving more critical medical information.
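
A hedged sketch of the semantic interaction/alignment idea: image-patch features attend to text-token embeddings (e.g., from a BiomedGPT-generated description) via cross-attention, and the attended text signal is injected back into the visual stream. Dimensions and the simple additive injection are illustrative assumptions, not the paper's modules.

```python
# Minimal sketch: cross-attention alignment of image features with text embeddings.
import torch
import torch.nn as nn

class SemanticAlignment(nn.Module):
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.inject = nn.Linear(dim, dim)

    def forward(self, image_feats, text_feats):
        # image_feats: (B, N_patches, dim); text_feats: (B, N_tokens, dim)
        aligned, _ = self.cross_attn(query=image_feats, key=text_feats, value=text_feats)
        return image_feats + self.inject(aligned)  # text-injected visual features

fused = SemanticAlignment()(torch.rand(2, 196, 256), torch.rand(2, 32, 256))
print(fused.shape)  # torch.Size([2, 196, 256])
```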

CTLformer: A Hybrid Denoising Model Combining Convolutional Layers and Self-Attention for Enhanced CT Image Reconstruction

Zhiting Zheng, Shuqi Wu, Wen Ding

arXiv preprint, May 18, 2025
Low-dose CT (LDCT) images are often accompanied by significant noise, which negatively impacts image quality and subsequent diagnostic accuracy. To address the challenges of multi-scale feature fusion and diverse noise distribution patterns in LDCT denoising, this paper introduces an innovative model, CTLformer, which combines convolutional structures with transformer architecture. Two key innovations are proposed: a multi-scale attention mechanism and a dynamic attention control mechanism. The multi-scale attention mechanism, implemented through the Token2Token mechanism and self-attention interaction modules, effectively captures both fine details and global structures at different scales, enhancing relevant features and suppressing noise. The dynamic attention control mechanism adapts the attention distribution based on the noise characteristics of the input image, focusing on high-noise regions while preserving details in low-noise areas, thereby enhancing robustness and improving denoising performance. Furthermore, CTLformer integrates convolutional layers for efficient feature extraction and uses overlapping inference to mitigate boundary artifacts, further strengthening its denoising capability. Experimental results on the 2016 National Institutes of Health AAPM Mayo Clinic LDCT Challenge dataset demonstrate that CTLformer significantly outperforms existing methods in both denoising performance and model efficiency, greatly improving the quality of LDCT images. The proposed CTLformer not only provides an efficient solution for LDCT denoising but also shows broad potential in medical image analysis, especially for clinical applications dealing with complex noise patterns.
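
The overlapping-inference trick mentioned above is generic and easy to illustrate: run the denoiser on overlapping tiles and average the overlaps so that no tile boundary appears in only one prediction. The sketch below uses an identity function as a placeholder for the trained network; tile and stride sizes are arbitrary assumptions.

```python
# Minimal sketch: overlapped sliding-window inference with averaging.
import numpy as np

def denoise(tile):
    return tile  # placeholder for the trained denoising network

def overlapped_inference(img, tile=64, stride=32):
    H, W = img.shape
    out = np.zeros_like(img, dtype=np.float64)
    weight = np.zeros_like(img, dtype=np.float64)
    for y in range(0, max(H - tile, 0) + 1, stride):
        for x in range(0, max(W - tile, 0) + 1, stride):
            patch = img[y:y + tile, x:x + tile]
            out[y:y + tile, x:x + tile] += denoise(patch)
            weight[y:y + tile, x:x + tile] += 1.0
    return out / np.maximum(weight, 1.0)  # average overlapping predictions

print(overlapped_inference(np.random.rand(256, 256)).shape)
```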

Machine Learning-Based Dose Prediction in [¹⁷⁷Lu]Lu-PSMA-617 Therapy by Integrating Biomarkers and Radiomic Features from [⁶⁸Ga]Ga-PSMA-11 PET/CT.

Yazdani E, Sadeghi M, Karamzade-Ziarati N, Jabari P, Amini P, Vosoughi H, Akbari MS, Asadi M, Kheradpisheh SR, Geramifar P

PubMed, May 18, 2025
The study aimed to develop machine learning (ML) models for pretherapy prediction of absorbed doses (ADs) in kidneys and tumoral lesions for metastatic castration-resistant prostate cancer (mCRPC) patients undergoing [¹⁷⁷Lu]Lu-PSMA-617 (Lu-PSMA) radioligand therapy (RLT). By leveraging radiomic features (RFs) from [⁶⁸Ga]Ga-PSMA-11 (Ga-PSMA) PET/CT scans and clinical biomarkers (CBs), the approach has the potential to improve patient selection and tailor dosimetry-guided therapy. Twenty patients with mCRPC underwent Ga-PSMA PET/CT scans prior to the administration of an initial 6.8±0.4 GBq dose of the first Lu-PSMA RLT cycle. Post-therapy dosimetry involved sequential scintigraphy imaging at approximately 4, 48, and 72 h, along with a SPECT/CT image at around 48 h, to calculate time-integrated activity (TIA) coefficients. Monte Carlo (MC) simulations, leveraging the Geant4 Application for Tomographic Emission (GATE) toolkit, were employed to derive ADs. The ML models were trained using pretherapy RFs from Ga-PSMA PET/CT and CBs as input, while the ADs in kidneys and lesions (n=130), determined using MC simulations from scintigraphy and SPECT imaging, served as the ground truth. Model performance was assessed through leave-one-out cross-validation (LOOCV), with evaluation metrics including R² and root mean squared error (RMSE). The mean delivered ADs were 0.88 ± 0.34 Gy/GBq for kidneys and 2.36 ± 2.10 Gy/GBq for lesions. Combining CBs with the best RFs produced optimal results: the extra trees regressor (ETR) was the best ML model for predicting kidney ADs, achieving an RMSE of 0.11 Gy/GBq and an R² of 0.87. For lesion ADs, the gradient boosting regressor (GBR) performed best, with an RMSE of 1.04 Gy/GBq and an R² of 0.77. Integrating pretherapy Ga-PSMA PET/CT RFs with CBs shows potential in predicting ADs in RLT. To personalize treatment planning and enhance patient stratification, it is crucial to validate these preliminary findings with a larger sample size and an independent cohort.
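
A minimal sketch of the evaluation protocol described above: leave-one-out cross-validation of an extra-trees regressor mapping pretherapy features (radiomics plus clinical biomarkers) to absorbed dose, scored with RMSE and R². Features, doses, and hyperparameters are placeholders, not the study's data or settings.

```python
# Minimal sketch: LOOCV of an extra-trees regressor for dose prediction.
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 30))     # placeholder PET radiomic features + biomarkers
y = rng.gamma(2.0, 0.4, size=20)  # placeholder kidney absorbed doses (Gy/GBq)

model = ExtraTreesRegressor(n_estimators=200, random_state=0)
pred = cross_val_predict(model, X, y, cv=LeaveOneOut())

rmse = mean_squared_error(y, pred) ** 0.5
print(f"LOOCV RMSE = {rmse:.2f} Gy/GBq, R^2 = {r2_score(y, pred):.2f}")
```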

Attention-Enhanced U-Net for Accurate Segmentation of COVID-19 Infected Lung Regions in CT Scans

Amal Lahchim, Lazar Davic

arXiv preprint, May 18, 2025
In this study, we propose a robust methodology for automatic segmentation of infected lung regions in COVID-19 CT scans using convolutional neural networks. The approach is based on a modified U-Net architecture enhanced with attention mechanisms, data augmentation, and postprocessing techniques. It achieved a Dice coefficient of 0.8658 and mean IoU of 0.8316, outperforming other methods. The dataset was sourced from public repositories and augmented for diversity. Results demonstrate superior segmentation performance. Future work includes expanding the dataset, exploring 3D segmentation, and preparing the model for clinical deployment.
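
The two reported metrics are standard overlap measures; a minimal sketch of how the Dice coefficient and IoU are computed on binary masks follows, with random masks standing in for the predicted and ground-truth segmentations.

```python
# Minimal sketch: Dice coefficient and IoU for binary segmentation masks.
import numpy as np

def dice(pred, target, eps=1e-7):
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def iou(pred, target, eps=1e-7):
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (inter + eps) / (union + eps)

rng = np.random.default_rng(0)
pred = rng.random((512, 512)) > 0.5   # placeholder predicted mask
gt = rng.random((512, 512)) > 0.5     # placeholder ground-truth mask
print(f"Dice = {dice(pred, gt):.4f}, IoU = {iou(pred, gt):.4f}")
```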