Page 3 of 53522 results

CUAMT: An MRI semi-supervised medical image segmentation framework based on contextual information and mixed uncertainty.

Xiao H, Wang Y, Xiong S, Ren Y, Zhang H

PubMed · Jul 1 2025
Semi-supervised medical image segmentation trains and applies segmentation models using both labeled and unlabeled medical images, which can substantially reduce the data-labeling workload. However, existing consistency-based semi-supervised segmentation models focus mainly on devising ever more complex consistency strategies and make poor use of volumetric contextual information, leaving the model with a vague or uncertain understanding of the boundary between object and background and yielding ambiguous or even erroneous boundary segmentations. To address this, this study proposes CUAMT, a hybrid-uncertainty semi-supervised segmentation framework based on contextual information. The model introduces a contextual information extraction (CIE) module, which learns relationships between image contexts by extracting semantic features at different scales and guides the model to make better use of contextual information, and a hybrid uncertainty module (HUM), which combines the global and local uncertainty information of two different networks to focus the model on segmentation-boundary information and improve boundary performance. Validation experiments were conducted on left atrial segmentation and brain tumor segmentation datasets. Our model achieves 89.84%, 79.89%, and 8.73 on the Dice, Jaccard, and 95HD metrics, respectively, significantly outperforming several current SOTA semi-supervised methods and confirming that the CIE and HUM strategies are effective.
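As a rough illustration of the kind of uncertainty guidance described above (the abstract does not give CUAMT's exact formulation), a consistency loss between two networks can be masked by the predictive entropy of a teacher's softmax output; the function names and threshold below are hypothetical:

```python
import numpy as np

def predictive_entropy(probs, eps=1e-8):
    """Per-voxel entropy of softmax probabilities; probs has shape (C, ...)."""
    return -np.sum(probs * np.log(probs + eps), axis=0)

def uncertainty_weighted_consistency(p_student, p_teacher, threshold):
    """Mean squared consistency loss, restricted to low-uncertainty voxels."""
    unc = predictive_entropy(p_teacher)
    mask = (unc < threshold).astype(float)           # trust confident teacher voxels
    sq_err = np.sum((p_student - p_teacher) ** 2, axis=0)
    return np.sum(mask * sq_err) / (np.sum(mask) + 1e-8)

# Toy 2-class, 4-voxel example: voxels 1 and 3 are too uncertain and get masked out.
p_t = np.array([[0.9, 0.5, 0.8, 0.6],
                [0.1, 0.5, 0.2, 0.4]])
p_s = np.array([[0.8, 0.4, 0.7, 0.5],
                [0.2, 0.6, 0.3, 0.5]])
loss = uncertainty_weighted_consistency(p_s, p_t, threshold=0.6)
```

Masking the loss this way keeps ambiguous boundary voxels from dominating training until the teacher becomes confident about them.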

"Recon-all-clinical": Cortical surface reconstruction and analysis of heterogeneous clinical brain MRI.

Gopinath K, Greve DN, Magdamo C, Arnold S, Das S, Puonti O, Iglesias JE

PubMed · Jul 1 2025
Surface-based analysis of the cerebral cortex is ubiquitous in human neuroimaging with MRI. It is crucial for tasks like cortical registration, parcellation, and thickness estimation. Traditionally, such analyses require high-resolution, isotropic scans with good gray-white matter contrast, typically a T1-weighted scan with 1 mm resolution. This requirement precludes application of these techniques to most MRI scans acquired for clinical purposes, since they are often anisotropic and lack the required T1-weighted contrast. To overcome this limitation and enable large-scale neuroimaging studies using vast amounts of existing clinical data, we introduce recon-all-clinical, a novel methodology for cortical reconstruction, registration, parcellation, and thickness estimation for clinical brain MRI scans of any resolution and contrast. Our approach employs a hybrid analysis method that combines a convolutional neural network (CNN), trained with domain randomization to predict signed distance functions (SDFs), with classical geometry processing for accurate surface placement under topological and geometric constraints. The method does not require retraining for different acquisitions, which simplifies the analysis of heterogeneous clinical datasets. We evaluated recon-all-clinical on multiple public datasets, including ADNI, HCP, AIBL, and OASIS, as well as a large clinical dataset of over 9,500 scans. The results indicate that our method produces geometrically precise cortical reconstructions across different MRI contrasts and resolutions, consistently achieving high parcellation accuracy. Cortical thickness estimates are precise enough to capture aging effects independently of MRI contrast, although accuracy varies with slice thickness. Our method is publicly available at https://surfer.nmr.mgh.harvard.edu/fswiki/recon-all-clinical, enabling researchers to perform detailed cortical analysis on the vast amounts of already existing clinical MRI scans. This advancement may be particularly valuable for studying rare diseases and underrepresented populations where research-grade MRI data is scarce.
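The core idea of predicting signed distance functions and then placing a surface with geometry processing can be shown with a toy example: the SDF is negative inside the object, positive outside, and its zero level set marks the surface. The analytic circle SDF below is only a stand-in for the CNN's predicted SDF:

```python
import numpy as np

# Grid of points; analytic SDF of a circle of radius 1 centred at the origin.
xs = np.linspace(-2, 2, 81)
X, Y = np.meshgrid(xs, xs, indexing="ij")
sdf = np.sqrt(X**2 + Y**2) - 1.0       # negative inside, positive outside

# Surface location: sign changes between neighbouring grid points bracket
# the zero level set, which geometry processing would then mesh precisely.
crossings = sdf[:-1, :] * sdf[1:, :] < 0
```

In the actual pipeline a marching-cubes-style step plus topology-preserving deformation would turn such crossings into a watertight cortical mesh.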

ConnectomeAE: Multimodal brain connectome-based dual-branch autoencoder and its application in the diagnosis of brain diseases.

Zheng Q, Nan P, Cui Y, Li L

PubMed · Jul 1 2025
Exploring the dependencies between multimodal brain networks and integrating node features to enhance brain disease diagnosis remains a significant challenge. Previous work has often examined only brain connectivity changes in patients, ignoring important radiomics features such as the shape and texture of individual brain regions in structural images. To this end, this study proposed a novel deep learning approach that integrates multimodal brain connectome information and regional radiomics features for brain disease diagnosis. A dual-branch autoencoder (ConnectomeAE) based on multimodal brain connectomes was proposed. Specifically, a matrix of radiomics features extracted from structural magnetic resonance imaging (MRI) was used as input to the Rad_AE branch for learning important brain-region features, while functional brain networks built from functional MRI were used as inputs to the Cycle_AE branch for capturing brain disease-related connections. By learning node features and connection features separately from multimodal brain networks, the method demonstrates strong adaptability in diagnosing different brain diseases. ConnectomeAE was validated on two publicly available datasets. The experimental results show that ConnectomeAE achieved excellent diagnostic performance, with an accuracy of 70.7% for autism spectrum disorder and 90.5% for Alzheimer's disease. A comparison of training time with other methods indicated that ConnectomeAE offers the simplicity and efficiency required for clinical applications. Furthermore, the interpretability analysis of the model aligned with previous studies, further supporting the biological basis of ConnectomeAE. ConnectomeAE effectively leverages the complementary information between multimodal brain connectomes for brain disease diagnosis, and by separately learning radiomic node features and connectivity features it adapts well to different brain disease classification tasks.
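As a sketch of one assumed input to the Cycle_AE branch, a functional brain network is commonly built as the Pearson correlation matrix of regional fMRI time series; the abstract does not specify the exact construction, so the details below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n_rois, n_timepoints = 5, 100
ts = rng.standard_normal((n_rois, n_timepoints))   # simulated ROI time series

# Functional connectivity: pairwise Pearson correlation between ROI signals.
fc = np.corrcoef(ts)          # shape (n_rois, n_rois), symmetric, values in [-1, 1]
np.fill_diagonal(fc, 0.0)     # common convention: zero out self-connections
```

Such a matrix plays the role of the "connection features", while the radiomics matrix in the Rad_AE branch supplies per-region "node features".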

Cycle-conditional diffusion model for noise correction of diffusion-weighted images using unpaired data.

Zhu P, Liu C, Fu Y, Chen N, Qiu A

PubMed · Jul 1 2025
Diffusion-weighted imaging (DWI) is a key modality for studying brain microstructure, but its signals are highly susceptible to noise due to the thermal motion of water molecules and interactions with tissue microarchitecture, leading to significant signal attenuation and a low signal-to-noise ratio (SNR). In this paper, we propose a novel approach, a Cycle-Conditional Diffusion Model (Cycle-CDM) trained on unpaired data, aimed at improving DWI quality and reliability through noise correction. Cycle-CDM leverages a cycle-consistent translation architecture to bridge the domain gap between noise-contaminated and noise-free DWIs, enabling the restoration of high-quality images without requiring paired datasets. By utilizing two conditional diffusion models, Cycle-CDM establishes data interrelationships between the two types of DWIs, while incorporating synthesized anatomical priors from the cycle translation process to guide noise removal. In addition, we introduce specific constraints to preserve anatomical fidelity, allowing Cycle-CDM to effectively learn the underlying noise distribution and achieve accurate denoising. Our experiments were conducted on simulated datasets as well as on datasets of children and adolescents with strong clinical relevance. The results demonstrate that Cycle-CDM outperforms comparative methods such as U-Net, CycleGAN, Pix2Pix, MUNIT, and MPPCA in noise-correction performance. We also showed that Cycle-CDM generalizes to DWIs with head motion acquired on different MRI scanners. Importantly, the denoised DWI data produced by Cycle-CDM accurately preserve the underlying tissue microstructure, substantially improving their medical applicability.
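For context, noise in magnitude MR images such as DWI is commonly modeled as Rician, arising from independent Gaussian noise on the two quadrature channels. A minimal sketch of simulating such noise-contaminated data (for building test cases, not the paper's pipeline) is:

```python
import numpy as np

def add_rician_noise(signal, sigma, rng):
    """Magnitude-MRI noise model: Gaussian noise on both quadrature channels,
    followed by taking the magnitude."""
    real = signal + rng.normal(0.0, sigma, signal.shape)
    imag = rng.normal(0.0, sigma, signal.shape)
    return np.sqrt(real**2 + imag**2)

rng = np.random.default_rng(42)
clean = np.full((64, 64), 100.0)        # toy uniform DWI slice
noisy = add_rician_noise(clean, sigma=10.0, rng=rng)
snr = clean.mean() / noisy.std()        # roughly signal / noise level
```

Note the Rician magnitude is always non-negative and slightly biased upward at low SNR, which is part of why naive Gaussian denoisers struggle on DWI.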

Tailored self-supervised pretraining improves brain MRI diagnostic models.

Huang X, Wang Z, Zhou W, Yang K, Wen K, Liu H, Huang S, Lyu M

PubMed · Jul 1 2025
Self-supervised learning has shown potential in enhancing deep learning methods, yet its application in brain magnetic resonance imaging (MRI) analysis remains underexplored. This study seeks to leverage large-scale, unlabeled public brain MRI datasets to improve the performance of deep learning models in various downstream tasks for the development of clinical decision support systems. To enhance training efficiency, data filtering methods based on image entropy and slice positions were developed, condensing a combined dataset of approximately 2 million images from fastMRI-brain, OASIS-3, IXI, and BraTS21 into a more focused set of 250K images enriched with brain features. The Momentum Contrast (MoCo) v3 algorithm was then employed to learn these image features, resulting in robustly pretrained models specifically tailored to brain MRI. The pretrained models were subsequently evaluated in tumor classification, lesion detection, hippocampal segmentation, and image reconstruction tasks. The results demonstrate that our brain MRI-oriented pretraining outperformed both ImageNet pretraining and pretraining on larger multi-organ, multi-modality medical datasets, achieving a ∼2.8% increase in 4-class tumor classification accuracy, a ∼0.9% improvement in tumor detection mean average precision, a ∼3.6% gain in adult hippocampal segmentation Dice score, and a ∼0.1 PSNR improvement in reconstruction at 2-fold acceleration. This study underscores the potential of self-supervised learning for brain MRI using large-scale, tailored datasets derived from public sources.
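The entropy-based slice filtering mentioned above can be sketched as follows; the bin count and threshold are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def image_entropy(img, bins=32):
    """Shannon entropy (bits) of the intensity histogram of an image in [0, 1]."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]                       # drop empty bins before taking logs
    return float(-np.sum(p * np.log2(p)))

rng = np.random.default_rng(1)
blank_slice = np.zeros((64, 64))       # background-only slice: near-zero entropy
brain_slice = rng.random((64, 64))     # texture-rich slice (stand-in for anatomy)

slices = [blank_slice, brain_slice]
kept = [s for s in slices if image_entropy(s) > 1.0]   # keep informative slices
```

Low-entropy slices are dominated by background and contribute little signal to contrastive pretraining, so discarding them concentrates compute on brain-rich content.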

CASCADE-FSL: Few-shot learning for collateral evaluation in ischemic stroke.

Aktar M, Tampieri D, Xiao Y, Rivaz H, Kersten-Oertel M

PubMed · Jul 1 2025
Assessing collateral circulation is essential in determining the best treatment for ischemic stroke patients: good collaterals open up treatment options such as thrombectomy, whereas poor collaterals can adversely affect treatment by leading to excess bleeding and, eventually, death. To reduce inter- and intra-rater variability and save time in radiologist assessments, computer-aided methods, mainly using deep neural networks, have gained popularity. The current literature demonstrates effectiveness when balanced and extensive datasets are available for deep learning; however, such datasets are scarce for stroke, and the number of samples for poor collateral cases is often limited compared to those for good collaterals. We propose a novel approach called CASCADE-FSL to distinguish poor collaterals effectively. Using a small, unbalanced dataset, we employ a few-shot learning approach with a 2D ResNet-50 backbone, designating good and intermediate cases as two normal classes and identifying poor collaterals as anomalies relative to them. Our approach achieves an overall accuracy, sensitivity, and specificity of 0.88, 0.88, and 0.89, respectively, demonstrating its effectiveness in addressing the imbalanced-dataset challenge and accurately identifying poor collateral circulation cases.
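A minimal sketch of prototype-style anomaly scoring in the spirit described above, where the two normal classes define prototypes and poor collaterals score as outliers; the embeddings, cluster locations, and scoring rule are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)
# Toy 4-D embeddings (stand-ins for ResNet-50 features): two normal clusters.
good = rng.normal(loc=0.0, scale=0.1, size=(10, 4))
intermediate = rng.normal(loc=1.0, scale=0.1, size=(10, 4))
prototypes = np.stack([good.mean(axis=0), intermediate.mean(axis=0)])

def anomaly_score(x):
    """Distance to the nearest normal-class prototype; large means anomalous."""
    return float(np.min(np.linalg.norm(prototypes - x, axis=1)))

normal_query = rng.normal(0.0, 0.1, size=4)   # resembles the 'good' class
poor_query = np.full(4, 5.0)                   # far from both normal classes
```

Thresholding this score separates poor-collateral cases without ever needing many labeled examples of them, which is the point of treating them as anomalies.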

Quantitative Ischemic Lesions of Portable Low-Field Strength MRI Using Deep Learning-Based Super-Resolution.

Bian Y, Wang L, Li J, Yang X, Wang E, Li Y, Liu Y, Xiang L, Yang Q

PubMed · Jul 1 2025
Deep learning-based synthetic super-resolution magnetic resonance imaging (SynthMRI) may improve the quantitative lesion performance of portable low-field strength magnetic resonance imaging (LF-MRI). The aim of this study is to evaluate whether SynthMRI improves the diagnostic performance of LF-MRI in assessing ischemic lesions. We retrospectively included 178 stroke patients and 104 healthy controls with both LF-MRI and high-field strength magnetic resonance imaging (HF-MRI) examinations. Using HF-MRI as the ground truth, the deep learning-based super-resolution framework (SCUNet [Swin-Conv-UNet]) was pretrained using large-scale open-source data sets to generate SynthMRI images from LF-MRI images. Participants were split into a training set (64.2%) to fine-tune the pretrained SCUNet, and a testing set (35.8%) to evaluate the performance of SynthMRI. Sensitivity and specificity of LF-MRI and SynthMRI were assessed. Agreement with HF-MRI for Alberta Stroke Program Early CT Score in the anterior and posterior circulation (diffusion-weighted imaging-Alberta Stroke Program Early CT Score and diffusion-weighted imaging-posterior circulation Alberta Stroke Program Early CT Score) was evaluated using intraclass correlation coefficients (ICCs). Agreement with HF-MRI for lesion volume and mean apparent diffusion coefficient (ADC) within lesions was assessed using both ICCs and Pearson correlation coefficients. SynthMRI demonstrated significantly higher sensitivity and specificity than LF-MRI (89.0% [83.3%-94.6%] versus 77.1% [69.5%-84.7%]; <i>P</i><0.001 and 91.3% [84.7%-98.0%] versus 71.0% [60.3%-81.7%]; <i>P</i><0.001, respectively). The ICCs of diffusion-weighted imaging-Alberta Stroke Program Early CT Score between SynthMRI and HF-MRI were also better than that between LF-MRI and HF-MRI (0.952 [0.920-0.972] versus 0.797 [0.678-0.876], <i>P</i><0.001). 
For lesion volume and mean apparent diffusion coefficient within lesions, SynthMRI showed significantly higher agreement (<i>P</i><0.001) with HF-MRI (ICC>0.85, <i>r</i>>0.78) than LF-MRI (ICC>0.45, <i>r</i>>0.35). Furthermore, for lesions during various poststroke phases, SynthMRI exhibited significantly higher agreement with HF-MRI than LF-MRI during the early hyperacute and subacute phases. SynthMRI demonstrates high agreement with HF-MRI in detecting and quantifying ischemic lesions and is better than LF-MRI, particularly for lesions during the early hyperacute and subacute phases.
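The sensitivity and specificity figures reported above follow the standard confusion-matrix definitions, which can be computed from per-case reads as:

```python
import numpy as np

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    return tp / (tp + fn), tn / (tn + fp)

# Toy reads against the HF-MRI ground truth: 1 = lesion present.
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 1]
sens, spec = sensitivity_specificity(y_true, y_pred)
```

Here one missed lesion and one false alarm out of eight cases give sensitivity and specificity of 0.75 each; the paper's figures are computed the same way over its test set.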

Integrated brain connectivity analysis with fMRI, DTI, and sMRI powered by interpretable graph neural networks.

Qu G, Zhou Z, Calhoun VD, Zhang A, Wang YP

PubMed · Jul 1 2025
Multimodal neuroimaging data modeling has become a widely used approach but confronts considerable challenges due to its heterogeneity: variability in data types, scales, and formats across modalities. This variability necessitates the deployment of advanced computational methods to integrate and interpret diverse datasets within a cohesive analytical framework. In our research, we combine functional magnetic resonance imaging (fMRI), diffusion tensor imaging (DTI), and structural MRI (sMRI) for joint analysis. This integration capitalizes on the unique strengths of each modality and their inherent interconnections, aiming for a comprehensive understanding of the brain's connectivity and anatomical characteristics. Utilizing the Glasser atlas for parcellation, we integrate imaging-derived features from multiple modalities - functional connectivity from fMRI, structural connectivity from DTI, and anatomical features from sMRI - within consistent regions. Our approach incorporates a masking strategy to differentially weight neural connections, thereby facilitating an amalgamation of multimodal imaging data. This technique enhances interpretability at the connectivity level, transcending traditional analyses centered on singular regional attributes. The model is applied to the Human Connectome Project's Development study to elucidate the associations between multimodal imaging and cognitive functions throughout youth. The analysis demonstrates improved prediction accuracy and uncovers crucial anatomical features and neural connections, deepening our understanding of brain structure and function. This study not only advances multimodal neuroimaging analytics by offering a novel method for integrative analysis of diverse imaging modalities but also improves the understanding of the intricate relationships between the brain's structural and functional networks and cognitive development.

MDAL: Modality-difference-based active learning for multimodal medical image analysis via contrastive learning and pointwise mutual information.

Wang H, Jin Q, Du X, Wang L, Guo Q, Li H, Wang M, Song Z

PubMed · Jul 1 2025
Multimodal medical images reveal different characteristics of the same anatomy or lesion, offering significant clinical value. Deep learning has achieved widespread success in medical image analysis with large-scale labeled datasets. However, annotating medical images is expensive and labor-intensive for doctors, and the variations between different modalities further increase the annotation cost for multimodal images. This study aims to minimize the annotation cost for multimodal medical image analysis. We propose a novel active learning framework, MDAL, based on modality differences for multimodal medical images. MDAL quantifies sample-wise modality differences through pointwise mutual information estimated by multimodal contrastive learning. We hypothesize that samples with larger modality differences are more informative for annotation and further propose two sampling strategies based on these differences: MaxMD and DiverseMD. Moreover, MDAL can select informative samples in one shot without initial labeled data. We evaluated MDAL on public brain glioma and meningioma segmentation datasets and an in-house ovarian cancer classification dataset. MDAL outperforms other advanced active learning competitors. Moreover, when using only 20%, 20%, and 15% of the labeled samples in these datasets, MDAL reaches 99.6%, 99.9%, and 99.3% of the performance of supervised training with the fully labeled dataset, respectively. The results show that our proposed MDAL can significantly reduce the annotation cost for multimodal medical image analysis. We expect MDAL can be further extended to other multimodal medical data for lower annotation costs.
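Pointwise mutual information, which MDAL uses to quantify modality differences, is defined as log p(x,y) / (p(x) p(y)). A toy sketch follows; the probability values and the argmin selection rule (picking the sample whose modalities agree least, in the spirit of MaxMD) are illustrative assumptions, not the paper's estimator:

```python
import numpy as np

def pmi(p_xy, p_x, p_y, eps=1e-12):
    """Pointwise mutual information: log p(x,y) / (p(x) p(y))."""
    return np.log(p_xy / (p_x * p_y + eps) + eps)

# Toy per-sample joint and marginal probabilities for paired modality features.
p_joint = np.array([0.10, 0.25, 0.05])
p_a     = np.array([0.20, 0.30, 0.25])
p_b     = np.array([0.50, 0.80, 0.40])

scores = pmi(p_joint, p_a, p_b)
# Lowest PMI = modalities co-occur less than chance = largest modality difference.
query_idx = int(np.argmin(scores))
```

In practice the probabilities would come from a contrastive model's learned similarities rather than be given, but the sample ranking works the same way.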