Page 104 of 3993982 results

3D isotropic high-resolution fetal brain MRI reconstruction from motion corrupted thick data based on physical-informed unsupervised learning.

Wu J, Chen L, Li Z, Li X, Sun T, Wang L, Wang R, Wei H, Zhang Y

PubMed · Jul 15 2025
High-quality 3D fetal brain MRI reconstruction from motion-corrupted 2D slices is crucial for precise clinical diagnosis and for advancing our understanding of fetal brain development. This necessitates reliable slice-to-volume registration (SVR) for motion correction and super-resolution reconstruction (SRR) techniques. Traditional approaches have their limitations, but deep learning (DL) offers the potential to enhance SVR and SRR. However, most DL methods require large-scale external 3D high-resolution (HR) training datasets, which are difficult to obtain in clinical fetal MRI. To address this issue, we propose an unsupervised iterative joint SVR and SRR DL framework for 3D isotropic HR volume reconstruction. Specifically, our method conceptualizes SVR as a function that maps a 2D slice and a 3D target volume to a rigid transformation matrix, aligning the slice to its underlying location within the target volume. This function is parameterized by a convolutional neural network, which is trained by minimizing the difference between the volume sliced at the predicted position and the actual input slice. For SRR, a decoding network embedded within a deep image prior framework, coupled with a comprehensive image degradation model, is used to produce the HR volume. The deep image prior framework offers a local consistency prior to guide the reconstruction of HR volumes. By applying the forward degradation model, the HR volume is optimized by minimizing the loss between the predicted slices and the acquired slices. Experiments on both large-magnitude motion-corrupted simulation data and clinical data have shown that our proposed method outperforms current state-of-the-art fetal brain reconstruction methods. The source code is available at https://github.com/DeepBMI/SUFFICIENT.
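The self-supervised SVR objective described above (slice the current volume at the predicted position and compare against the acquired slice) can be illustrated with a deliberately simplified, translation-only toy. The real method predicts a full rigid transform with a CNN; `slice_at`, `svr_loss`, and the grid search below are illustrative stand-ins, not the paper's implementation:

```python
import numpy as np

def slice_at(volume, z):
    """Extract the axial slice at continuous position z via linear interpolation."""
    z0 = int(np.floor(z))
    z1 = min(z0 + 1, volume.shape[0] - 1)
    w = z - z0
    return (1 - w) * volume[z0] + w * volume[z1]

def svr_loss(volume, observed_slice, z_pred):
    """Self-supervised SVR objective: mean squared difference between the
    volume sliced at the predicted position and the acquired slice."""
    return float(np.mean((slice_at(volume, z_pred) - observed_slice) ** 2))

# Toy example: recover the true slice position by minimizing the loss
rng = np.random.default_rng(0)
vol = rng.normal(size=(16, 8, 8))
obs = slice_at(vol, 5.0)                      # "acquired" slice at z = 5.0
zs = np.arange(0, 15, 0.5)
best_z = zs[np.argmin([svr_loss(vol, obs, z) for z in zs])]
print(best_z)  # 5.0
```

In the paper this grid search is replaced by gradient-based training of the registration network, and the predicted quantity is a full 6-DOF rigid transform rather than a single offset.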

Restore-RWKV: Efficient and Effective Medical Image Restoration with RWKV.

Yang Z, Li J, Zhang H, Zhao D, Wei B, Xu Y

PubMed · Jul 15 2025
Transformers have revolutionized medical image restoration, but their quadratic complexity still limits their application to high-resolution medical images. The recent advent of the Receptance Weighted Key Value (RWKV) model in natural language processing has attracted much attention due to its ability to process long sequences efficiently. To leverage its advanced design, we propose Restore-RWKV, the first RWKV-based model for medical image restoration. Since the original RWKV model is designed for 1D sequences, we make two necessary modifications for modeling spatial relations in 2D medical images. First, we present a recurrent WKV (Re-WKV) attention mechanism that captures global dependencies with linear computational complexity. Re-WKV incorporates bidirectional attention as the basis for a global receptive field and recurrent attention to effectively model 2D dependencies from various scan directions. Second, we develop an omnidirectional token shift (Omni-Shift) layer that enhances local dependencies by shifting tokens from all directions and across a wide context range. These adaptations make the proposed Restore-RWKV an efficient and effective model for medical image restoration. Even a lightweight variant of Restore-RWKV, with only 1.16 million parameters, achieves comparable or even superior results compared to existing state-of-the-art (SOTA) methods. Extensive experiments demonstrate that the resulting Restore-RWKV achieves SOTA performance across a range of medical image restoration tasks, including PET image synthesis, CT image denoising, MRI super-resolution, and all-in-one medical image restoration. Code is available at: https://github.com/Yaziwel/Restore-RWKV.
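A minimal fixed-weight numpy sketch of the token-shift idea behind Omni-Shift: mix each token with features shifted in from the four cardinal directions. The actual Omni-Shift layer learns its shift weights and spans a wider context range than the single-pixel neighborhood assumed here:

```python
import numpy as np

def omni_shift(x):
    """Toy omnidirectional token shift for an (H, W, C) feature map:
    average the features shifted in from up, down, left, and right,
    with zero padding at the borders."""
    up    = np.pad(x, ((1, 0), (0, 0), (0, 0)))[:-1]      # neighbor above
    down  = np.pad(x, ((0, 1), (0, 0), (0, 0)))[1:]       # neighbor below
    left  = np.pad(x, ((0, 0), (1, 0), (0, 0)))[:, :-1]   # neighbor to the left
    right = np.pad(x, ((0, 0), (0, 1), (0, 0)))[:, 1:]    # neighbor to the right
    return (up + down + left + right) / 4.0

x = np.ones((3, 3, 1))
shifted = omni_shift(x)
# Interior tokens keep value 1.0 under uniform input; border tokens
# are attenuated because zero padding enters the average.
```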

LADDA: Latent Diffusion-based Domain-adaptive Feature Disentangling for Unsupervised Multi-modal Medical Image Registration.

Yuan P, Dong J, Zhao W, Lyu F, Xue C, Zhang Y, Yang C, Wu Z, Gao Z, Lyu T, Coatrieux JL, Chen Y

PubMed · Jul 15 2025
Deformable image registration (DIR) is critical for accurate clinical diagnosis and effective treatment planning. However, patient movement, significant intensity differences, and large breathing deformations hinder accurate anatomical alignment in multi-modal image registration. These factors exacerbate the entanglement of anatomical and modality-specific style information, thereby severely limiting the performance of multi-modal registration. To address this, we propose a novel LAtent Diffusion-based Domain-Adaptive feature disentangling (LADDA) framework for unsupervised multi-modal medical image registration, which explicitly addresses representation disentanglement. First, LADDA extracts reliable anatomical priors from the Latent Diffusion Model (LDM), facilitating downstream content-style disentangled learning. A Domain-Adaptive Feature Disentangling (DAFD) module is proposed to further promote anatomical structure alignment. This module disentangles image features into content and style information, encouraging the network to focus on cross-modal content information. Next, a Neighborhood-Preserving Hashing (NPH) module is constructed to further perceive and integrate hierarchical content information through local neighborhood encoding, thereby maintaining cross-modal structural consistency. Furthermore, a Unilateral-Query-Frozen Attention (UQFA) module is proposed to enhance the coupling between upstream priors and downstream content information. The feature interaction within intra-domain consistent structures improves the fine recovery of detailed textures. The proposed framework is extensively evaluated on large-scale multi-center datasets, demonstrating superior performance across diverse clinical scenarios and strong generalization on out-of-distribution (OOD) data.

Placenta segmentation redefined: review of deep learning integration of magnetic resonance imaging and ultrasound imaging.

Jittou A, Fazazy KE, Riffi J

PubMed · Jul 15 2025
Placental segmentation is critical for the quantitative analysis of prenatal imaging applications. However, segmenting the placenta in magnetic resonance imaging (MRI) and ultrasound is challenging because of variations in fetal position, dynamic placental development, and image quality. Most segmentation methods define regions of interest with different shapes and intensities, encompassing the entire placenta or specific structures. Recently, deep learning has emerged as a key approach that offers high segmentation performance across diverse datasets. This review focuses on recent advances in deep learning techniques for placental segmentation in medical imaging, specifically the MRI and ultrasound modalities, covering studies from 2019 to 2024. It synthesizes recent research, expands knowledge in this innovative area, and highlights the potential of deep learning approaches to significantly enhance prenatal diagnostics. These findings emphasize the importance of selecting appropriate imaging modalities and model architectures tailored to specific clinical scenarios. In addition, integrating both MRI and ultrasound can enhance segmentation performance by leveraging complementary information. This review also discusses the challenges associated with the high costs and limited availability of advanced imaging technologies. It provides insights into the current state of placental segmentation techniques and their implications for improving maternal and fetal health outcomes, underscoring the transformative impact of deep learning on prenatal diagnostics.

Vision transformer and complex network analysis for autism spectrum disorder classification in T1 structural MRI.

Gao X, Xu Y

PubMed · Jul 15 2025
Autism spectrum disorder (ASD) affects social interaction, communication, and behavior. Early diagnosis is important because it enables timely intervention that can significantly improve long-term outcomes, but current diagnostic methods, which rely heavily on behavioral observations and clinical interviews, are often subjective and time-consuming. This study introduces an AI-based approach that uses T1-weighted structural MRI (sMRI) scans, network analysis, and vision transformers to automatically diagnose ASD. sMRI data from 79 ASD patients and 105 healthy controls were obtained from the Autism Brain Imaging Data Exchange (ABIDE) database. Complex network analysis (CNA) features and Vision Transformer (ViT) features were derived for predicting ASD. Five models were developed for each type of feature: logistic regression, support vector machine (SVM), gradient boosting (GB), K-nearest neighbors (KNN), and neural network (NN). Twenty-five further models were developed by federating the two sets of five models. Model performance was evaluated using accuracy, area under the receiver operating characteristic curve (AUC-ROC), sensitivity, and specificity via fivefold cross-validation. The federated model CNA(KNN)-ViT(NN) achieved the highest performance, with accuracy 0.951 ± 0.067, AUC-ROC 0.980 ± 0.020, sensitivity 0.963 ± 0.050, and specificity 0.943 ± 0.047. The ViT-based models exceeded the complex network-based models on 80% of the performance metrics, and federating with the CNA models further improved the ViT models' performance. This study demonstrates the feasibility of using CNA and ViT models for the automated diagnosis of ASD. The proposed CNA(KNN)-ViT(NN) model achieved better accuracy in ASD classification based solely on T1 sMRI images. The method's reliance on widely available T1 sMRI scans highlights its potential for integration into routine clinical examinations, facilitating more efficient and accessible ASD screening.
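The abstract does not specify how a CNA model and a ViT model are "federated" into one classifier; averaging their predicted probabilities is one common late-fusion scheme, shown here purely as an assumption:

```python
import numpy as np

def fuse_probs(p_cna, p_vit):
    """Late fusion of two base models: average their predicted positive-class
    probabilities, then threshold at 0.5 to get the fused class labels."""
    p = (np.asarray(p_cna) + np.asarray(p_vit)) / 2.0
    return (p >= 0.5).astype(int)

p_cna = np.array([0.2, 0.7, 0.9])   # hypothetical CNA(KNN) probabilities
p_vit = np.array([0.4, 0.8, 0.6])   # hypothetical ViT(NN) probabilities
print(fuse_probs(p_cna, p_vit))  # [0 1 1]
```

Other fusion rules (weighted averaging, stacking a meta-classifier on both probability vectors) are equally consistent with the description above.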

Assessment of local recurrence risk in extremity high-grade osteosarcoma through multimodality radiomics integration.

Luo Z, Liu R, Li J, Ye Q, Zhou Z, Shen X

PubMed · Jul 15 2025
Background: A timely assessment of local recurrence (LoR) risk in extremity high-grade osteosarcoma is crucial for optimizing treatment strategies and improving patient outcomes. Purpose: To explore the potential of machine-learning algorithms in predicting LoR in patients with osteosarcoma. Material and Methods: Data from patients with high-grade osteosarcoma who underwent preoperative radiography and multiparametric magnetic resonance imaging (MRI) were collected. Machine-learning models were developed and trained on this dataset to predict LoR. The study involved selecting relevant features, training the models, and evaluating their performance using the receiver operating characteristic (ROC) curve and the area under the ROC curve (AUC). DeLong's test was used to compare AUCs. Results: The performance (AUC, sensitivity, specificity, and accuracy) of four classifiers (random forest [RF], support vector machine, logistic regression, and extreme gradient boosting) using radiograph-MRI image inputs was stable (all Hosmer-Lemeshow index >0.05), with fair to good prognostic efficacy. The RF classifier using radiograph-MRI features as training inputs exhibited better performance (AUC = 0.806 and 0.868) than the classifier using MRI only (AUC = 0.774 and 0.771) or radiograph only (AUC = 0.613 and 0.627) in the training and testing sets (P < 0.05), while the other three classifiers showed no difference between the MRI-only and radiograph-MRI models. Conclusion: This study provides valuable insights into the use of machine learning for predicting LoR in osteosarcoma patients. These findings emphasize the potential of integrating radiomics data with machine-learning algorithms to improve prognostic assessments.
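The AUC values compared above can be computed directly from the Mann-Whitney U statistic, which is also the quantity DeLong's test places a variance estimate on when comparing two AUCs; a minimal sketch:

```python
import numpy as np

def auc(y_true, scores):
    """AUC via the Mann-Whitney U statistic: the probability that a randomly
    chosen positive case is scored above a randomly chosen negative case,
    counting ties as one half."""
    y = np.asarray(y_true)
    s = np.asarray(scores)
    pos, neg = s[y == 1], s[y == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

y_true = np.array([0, 0, 1, 1])
print(auc(y_true, np.array([0.1, 0.4, 0.35, 0.8])))  # 0.75
```

DeLong's test itself additionally estimates the covariance of paired AUC estimates from the per-case placement values; that step is omitted here.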

Exploring the robustness of TractOracle methods in RL-based tractography

Jeremi Levesque, Antoine Théberge, Maxime Descoteaux, Pierre-Marc Jodoin

arXiv preprint · Jul 15 2025
Tractography algorithms leverage diffusion MRI to reconstruct the fibrous architecture of the brain's white matter. Among machine learning approaches, reinforcement learning (RL) has emerged as a promising framework for tractography, outperforming traditional methods in several key aspects. TractOracle-RL, a recent RL-based approach, reduces false positives by incorporating anatomical priors into the training process via a reward-based mechanism. In this paper, we investigate four extensions of the original TractOracle-RL framework by integrating recent advances in RL, and we evaluate their performance across five diverse diffusion MRI datasets. Results demonstrate that combining an oracle with the RL framework consistently leads to robust and reliable tractography, regardless of the specific method or dataset used. We also introduce a novel RL training scheme called Iterative Reward Training (IRT), inspired by the Reinforcement Learning from Human Feedback (RLHF) paradigm. Instead of relying on human input, IRT leverages bundle filtering methods to iteratively refine the oracle's guidance throughout training. Experimental results show that RL methods trained with oracle feedback significantly outperform widely used tractography techniques in terms of accuracy and anatomical validity.
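A toy sketch of the IRT alternation as described: train against the current oracle, relabel generated streamlines with an automatic bundle filter, and refit the oracle on those labels. The threshold "oracle" and the feature-based filter below are hypothetical stand-ins for the streamline classifier and filtering methods in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def bundle_filter(features):
    """Hypothetical automatic filter: a streamline is anatomically
    plausible iff its (scalar) feature exceeds 0.5."""
    return (features > 0.5).astype(int)

class ThresholdOracle:
    """Stand-in oracle: a single learned decision threshold."""
    def __init__(self):
        self.t = 0.0                      # initially poorly calibrated
    def fit(self, feats, labels):
        # Place the threshold midway between the two label groups
        self.t = (feats[labels == 1].min() + feats[labels == 0].max()) / 2
    def score(self, feats):
        return (feats > self.t).astype(int)

oracle = ThresholdOracle()
for _ in range(3):                        # IRT rounds
    feats = rng.uniform(size=100)         # features of freshly generated streamlines
    oracle.fit(feats, bundle_filter(feats))
agreement = (oracle.score(feats) == bundle_filter(feats)).mean()
print(agreement)  # 1.0
```

The point of the sketch is only the control flow: the oracle's guidance is refined each round from filter-derived labels rather than human feedback, mirroring the RLHF analogy above.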

Automated Whole-Liver Fat Quantification with Magnetic Resonance Imaging-Derived Proton Density Fat Fraction Map: A Prospective Study in Taiwan.

Wu CH, Yen KC, Wang LY, Hsieh PL, Wu WK, Lee PL, Liu CJ

PubMed · Jul 15 2025
Magnetic resonance imaging (MRI) with a proton density fat fraction (PDFF) sequence is the most accurate, noninvasive method for assessing hepatic steatosis. However, manual measurement on the PDFF map is time-consuming. This study aimed to validate automated whole-liver fat quantification for assessing hepatic steatosis with MRI-PDFF. In this prospective study, 80 patients were enrolled from August 2020 to January 2023. Baseline MRI-PDFF and magnetic resonance spectroscopy (MRS) data were collected. The analysis of MRI-PDFF included values from automated whole-liver segmentation (autoPDFF) and the average value of measurements taken from eight segments (avePDFF). Twenty patients with autoPDFF values ≥10% who received 24 weeks of exercise training were also included for chronologic evaluation. Correlation and concordance coefficients (r and ρ) among the values and their differences were calculated. There were strong correlations between autoPDFF and avePDFF, autoPDFF and MRS, and avePDFF and MRS (r=0.963, r=0.955, and r=0.977, all p<0.001). The autoPDFF values were also highly concordant with the avePDFF and MRS values (ρ=0.941 and ρ=0.942). The autoPDFF, avePDFF, and MRS values consistently decreased after 24 weeks of exercise. The change in autoPDFF was also highly correlated with the changes in avePDFF and MRS (r=0.961 and r=0.870, all p<0.001). Automated whole-liver fat quantification may be feasible for clinical trials and practice, yielding values with high correlation and concordance with time-consuming manual measurements from the PDFF map and with the highly complex processing of MRS (ClinicalTrials.gov identifier: NCT04463667).
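The r and ρ reported above are Pearson's correlation and, presumably, Lin's concordance correlation coefficient (the abstract does not name the concordance measure); a minimal numpy sketch of both under that assumption:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation: strength of the linear relationship."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return float(np.corrcoef(x, y)[0, 1])

def lin_ccc(x, y):
    """Lin's concordance correlation coefficient: agreement between two
    measurement methods, penalizing both scatter and systematic bias."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return float(2 * cov / (vx + vy + (mx - my) ** 2))

fats = np.array([5.0, 10.0, 20.0])        # hypothetical PDFF values (%)
print(pearson_r(fats, fats), lin_ccc(fats, fats))
```

A constant offset between two methods leaves Pearson's r at 1 but lowers the CCC, which is why concordance is reported separately above.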

A literature review of radio-genomics in breast cancer: Lessons and insights for low and middle-income countries.

Mooghal M, Shaikh K, Shaikh H, Khan W, Siddiqui MS, Jamil S, Vohra LM

PubMed · Jul 15 2025
To improve precision medicine in breast cancer (BC) decision-making, radio-genomics is an emerging branch of artificial intelligence (AI) that links cancer characteristics assessed radiologically with the histopathology and genomic properties of the tumour. By examining MRIs, mammograms, and ultrasounds to uncover distinctive radiomics traits that potentially predict genomic abnormalities, this review attempts to find literature that links AI-based models with the genetic mutations discovered in BC patients. The review's findings can be used to create AI-based population models for low and middle-income countries (LMIC) and to evaluate how well they predict outcomes for our cohort. Magnetic resonance imaging (MRI) appears to be the modality most frequently employed to research radio-genomics in BC patients in our systematic analysis. According to the papers we analysed, AI can identify genetic markers and mutations linked to imaging traits, such as tumour size, shape, and enhancement patterns, as well as clinical outcomes of treatment response, disease progression, and survival. The use of radio-genomics can help LMICs overcome some of the barriers that keep the general population from having access to high-quality cancer care, thereby improving health outcomes for BC patients in these regions. It is imperative to ensure that emerging technologies are used responsibly, in a way that is accessible to and affordable for all patients, regardless of their socio-economic condition.

Non-invasive liver fibrosis screening on CT images using radiomics.

Yoo JJ, Namdar K, Carey S, Fischer SE, McIntosh C, Khalvati F, Rogalla P

PubMed · Jul 15 2025
To develop a radiomics machine learning model for detecting liver fibrosis on CT images of the liver. With Ethics Board approval, 169 patients (68 women, 101 men; mean age, 51.2 years ± 14.7 [SD]) underwent an ultrasound-guided liver biopsy with simultaneous CT acquisitions without and following intravenous contrast material administration. Radiomic features were extracted from two regions of interest (ROIs) on the CT images, one placed at the biopsy site and another distant from the biopsy site. A development cohort, split further into training and validation cohorts across 100 trials, was used to determine the optimal combinations of contrast, normalization, machine learning model, and radiomic features for liver fibrosis detection based on their area under the receiver operating characteristic curve (AUC) on the validation cohort. The optimal combinations were then used to develop one final liver fibrosis model, which was evaluated on a test cohort. When averaging the AUC across all combinations, non-contrast-enhanced (NC) CT (AUC, 0.6100; 95% CI: 0.5897, 0.6303) outperformed contrast-enhanced CT (AUC, 0.5680; 95% CI: 0.5471, 0.5890). The most effective model was a logistic regression model with input features of maximum, energy, kurtosis, skewness, and small area high gray level emphasis extracted from NC CT normalized using gamma correction with γ = 1.5 (AUC, 0.7833; 95% CI: 0.7821, 0.7845). The presented radiomics-based logistic regression model holds promise as a non-invasive detection tool for subclinical, asymptomatic liver fibrosis. The model may serve as an opportunistic liver fibrosis screening tool when operated in the background during routine CT examinations covering the liver parenchyma. The final liver fibrosis detection model is made publicly available at: https://github.com/IMICSLab/RadiomicsLiverFibrosisDetection .
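A sketch of the gamma-correction normalization the final model relies on, assuming it is applied after min-max scaling to [0, 1] (the exact preprocessing pipeline is not given in the abstract):

```python
import numpy as np

def gamma_correct(image, gamma=1.5):
    """Gamma-correct an image after min-max scaling to [0, 1].
    gamma=1.5 matches the normalization the final model used; with
    gamma > 1, mid-range intensities are darkened, stretching contrast
    in the brighter part of the range."""
    img = np.asarray(image, float)
    lo, hi = img.min(), img.max()
    scaled = (img - lo) / (hi - lo) if hi > lo else np.zeros_like(img)
    return scaled ** gamma

ct_roi = np.array([0.0, 1.0, 2.0])        # hypothetical ROI intensities
print(gamma_correct(ct_roi, gamma=1.5))
```

Radiomic features (maximum, energy, kurtosis, skewness, and so on) would then be computed from the normalized ROI.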