Generalizable, sequence-invariant deep learning image reconstruction for subspace-constrained quantitative MRI.

Hu Z, Chen Z, Cao T, Lee HL, Xie Y, Li D, Christodoulou AG

pubmed paper · Jul 1 2025
To develop a deep subspace learning network that can function across different pulse sequences. A contrast-invariant component-by-component (CBC) network structure was developed and compared against a previously reported spatiotemporal multicomponent (MC) structure for reconstructing MR Multitasking images. A total of 130, 167, and 16 subjects were imaged using T<sub>1</sub>, T<sub>1</sub>-T<sub>2</sub>, and T<sub>1</sub>-T<sub>2</sub>-T<sub>2</sub>*-fat fraction (FF) mapping sequences, respectively. We compared CBC and MC networks in matched-sequence experiments (same sequence for training and testing), then examined their cross-sequence performance and generalizability in unmatched-sequence experiments (different sequences for training and testing). A "universal" CBC network was also evaluated using mixed-sequence training (combining data from all three sequences). Evaluation metrics included image normalized root mean squared error and Bland-Altman analyses of end-diastolic maps, both versus iteratively reconstructed references. The proposed CBC showed significantly better normalized root mean squared error than MC in both matched-sequence and unmatched-sequence experiments (p < 0.001), fewer structural details in quantitative error maps, and tighter limits of agreement. CBC was more generalizable than MC (smaller performance loss from matched-sequence to unmatched-sequence testing; p = 0.006 in T<sub>1</sub> and p < 0.001 in T<sub>1</sub>-T<sub>2</sub>) and additionally allowed training of a single universal network to reconstruct images from any of the three pulse sequences. The mixed-sequence CBC network performed similarly to matched-sequence CBC in T<sub>1</sub> (p = 0.178) and T<sub>1</sub>-T<sub>2</sub> (p = 0.121), where training data were plentiful, and performed better in T<sub>1</sub>-T<sub>2</sub>-T<sub>2</sub>*-FF (p < 0.001), where training data were scarce. Contrast-invariant learning of spatial features rather than spatiotemporal features improves performance and generalizability, addresses data scarcity, and offers a pathway to universal supervised deep subspace learning.
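
The reconstructions above are scored against iteratively reconstructed references using normalized root mean squared error. The snippet below is a minimal sketch of that metric, not the authors' code, and it assumes normalization by the root mean square of the reference (one common convention that the abstract does not confirm):

```python
import numpy as np

def nrmse(recon, reference):
    """Normalized RMSE between a reconstruction and its reference.

    Works for real or complex arrays of the same shape; the error is
    normalized by the root mean square of the reference image.
    """
    recon, reference = np.asarray(recon), np.asarray(reference)
    err = np.sqrt(np.mean(np.abs(recon - reference) ** 2))
    return err / np.sqrt(np.mean(np.abs(reference) ** 2))

# Toy example: a reference image and a slightly perturbed "reconstruction"
rng = np.random.default_rng(0)
reference = rng.standard_normal((128, 128))
recon = reference + 0.05 * rng.standard_normal((128, 128))
print(f"NRMSE: {nrmse(recon, reference):.4f}")
```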

Multi-site, multi-vendor development and validation of a deep learning model for liver stiffness prediction using abdominal biparametric MRI.

Ali R, Li H, Zhang H, Pan W, Reeder SB, Harris D, Masch W, Aslam A, Shanbhogue K, Bernieh A, Ranganathan S, Parikh N, Dillman JR, He L

pubmed paper · Jul 1 2025
Chronic liver disease (CLD) is a substantial cause of morbidity and mortality worldwide. Liver stiffness, as measured by MR elastography (MRE), is well-accepted as a surrogate marker of liver fibrosis. To develop and validate deep learning (DL) models for predicting MRE-derived liver stiffness using routine clinical non-contrast abdominal T1-weighted (T1w) and T2-weighted (T2w) data from multiple institutions/system manufacturers in pediatric and adult patients. We identified pediatric and adult patients with known or suspected CLD from four institutions, who underwent clinical MRI with MRE from 2011 to 2022. We used T1w and T2w data to train DL models for liver stiffness classification. Patients were categorized into two groups for binary classification using liver stiffness thresholds (≥ 2.5 kPa, ≥ 3.0 kPa, ≥ 3.5 kPa, ≥ 4 kPa, or ≥ 5 kPa), reflecting various degrees of liver stiffening. We identified 4695 MRI examinations from 4295 patients (mean ± SD age, 47.6 ± 18.7 years; 428 (10.0%) pediatric; 2159 males [50.2%]). With a primary liver stiffness threshold of 3.0 kPa, our model correctly classified patients into no/minimal (< 3.0 kPa) vs moderate/severe (≥ 3.0 kPa) liver stiffness with AUROCs of 0.83 (95% CI: 0.82, 0.84) in our internal multi-site cross-validation (CV) experiment, 0.82 (95% CI: 0.80, 0.84) in our temporal hold-out validation experiment, and 0.79 (95% CI: 0.75, 0.81) in our external leave-one-site-out CV experiment. The developed model is publicly available ( https://github.com/almahdir1/Multi-channel-DeepLiverNet2.0.git ). Our DL models exhibited reasonable diagnostic performance for categorical classification of liver stiffness on a large diverse dataset using T1w and T2w MRI data. Question Can DL models accurately predict liver stiffness using routine clinical biparametric MRI in pediatric and adult patients with CLD? Findings DeepLiverNet2.0 used biparametric MRI data to classify liver stiffness, achieving AUROCs of 0.83, 0.82, and 0.79 for multi-site CV, hold-out validation, and external CV. Clinical relevance Our DeepLiverNet2.0 AI model can categorically classify the severity of liver stiffening using anatomic biparametric MR images in children and young adults. Model refinements and incorporation of clinical features may decrease the need for MRE.
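
For readers unfamiliar with the evaluation setup, the sketch below shows how continuous MRE stiffness values can be binarized at the thresholds listed above and how a model's scores are summarized with AUROC. The stiffness values and scores are invented for illustration; this is not the DeepLiverNet2.0 pipeline.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical MRE-derived stiffness values (kPa) and model output scores
stiffness_kpa = np.array([2.1, 2.4, 2.8, 2.9, 3.1, 3.4, 3.7, 4.2, 5.6, 6.0])
model_scores = np.array([0.10, 0.20, 0.35, 0.40, 0.55, 0.62, 0.70, 0.80, 0.95, 0.90])

for threshold in (2.5, 3.0, 3.5, 4.0, 5.0):
    labels = (stiffness_kpa >= threshold).astype(int)  # 1 = moderate/severe stiffening
    if 0 < labels.sum() < labels.size:                 # AUROC needs both classes present
        print(f">= {threshold} kPa: AUROC = {roc_auc_score(labels, model_scores):.2f}")
```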

Risk prediction for elderly cognitive impairment by radiomic and morphological quantification analysis based on a cerebral MRA imaging cohort.

Xu X, Zhou Y, Sun S, Cui L, Chen Z, Guo Y, Jiang J, Wang X, Sun T, Yang Q, Wang Y, Yuan Y, Fan L, Yang G, Cao F

pubmed paper · Jul 1 2025
To establish morphological and radiomic models for early prediction of cognitive impairment associated with cerebrovascular disease (CI-CVD) in an elderly cohort based on cerebral magnetic resonance angiography (MRA). One hundred four patients with CI-CVD and 107 control subjects were retrospectively recruited from the 14-year elderly MRA cohort, and 63 subjects were enrolled for external validation. Automated quantitative analysis was applied to the morphological features of the cerebral arteries, including the stenosis score, length, relative length, twisted angle, and maximum deviation. Clinical and morphological risk factors were screened using univariate logistic regression. Radiomic features were selected via least absolute shrinkage and selection operator (LASSO) regression. The predictive models of CI-CVD were established in the training set and verified in the external testing set. A history of stroke was demonstrated to be a clinical risk factor (OR 2.796, 1.359-5.751). Stenosis ≥ 50% in the right middle cerebral artery (RMCA) and left posterior cerebral artery (LPCA), maximum deviation of the left internal carotid artery (LICA), and twisted angles of the right internal carotid artery (RICA) and LICA were identified as morphological risk factors, with ORs of 4.522 (1.237-16.523), 2.851 (1.438-5.652), 1.373 (1.136-1.661), 0.981 (0.966-0.997), and 0.976 (0.958-0.994), respectively. Overall, 33 radiomic features were screened as risk factors. The clinical-morphological-radiomic model demonstrated optimal performance, with an AUC of 0.883 (0.838-0.928) in the training set and 0.843 (0.743-0.943) in the external testing set. Radiomic features combined with morphological indicators of the cerebral arteries were effective indicators of early CI-CVD in elderly individuals. Question The relationship between morphological features of cerebral arteries and cognitive impairment associated with cerebrovascular disease (CI-CVD) deserves to be explored. Findings The multipredictor model combining stroke history, vascular morphological indicators, and radiomic features of cerebral arteries demonstrated optimal performance for early warning of CI-CVD. Clinical relevance Stenosis percentage and tortuosity score of the cerebral arteries are important risk factors for cognitive impairment. Radiomic features combined with morphological quantification analysis based on cerebral MRA provide higher predictive performance for CI-CVD.
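
The radiomic screening step relies on L1 (LASSO) penalization, which drives uninformative coefficients to exactly zero so that only a small feature subset enters the final model. Below is a minimal sketch of that idea on synthetic data; the feature matrix, labels, and penalty strength are all illustrative, not the study's.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)

# Synthetic stand-in: 211 subjects x 400 candidate radiomic features,
# binary CI-CVD label driven by the first two features only.
X = rng.standard_normal((211, 400))
y = (X[:, 0] + 0.8 * X[:, 1] + rng.standard_normal(211) > 0).astype(int)

# L1-penalized (LASSO-style) logistic regression shrinks most coefficients
# to zero; the surviving features form the screened radiomic set.
model = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="l1", solver="liblinear", C=0.1, max_iter=5000),
)
model.fit(X, y)

coefs = model.named_steps["logisticregression"].coef_.ravel()
print(f"{np.count_nonzero(coefs)} of {X.shape[1]} features retained")
```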

Feasibility/clinical utility of half-Fourier single-shot turbo spin echo imaging combined with deep learning reconstruction in gynecologic magnetic resonance imaging.

Kirita M, Himoto Y, Kurata Y, Kido A, Fujimoto K, Abe H, Matsumoto Y, Harada K, Morita S, Yamaguchi K, Nickel D, Mandai M, Nakamoto Y

pubmed paper · Jul 1 2025
When antispasmodics are unavailable, periodically rotated overlapping parallel lines with enhanced reconstruction (PROPELLER; called BLADE by Siemens Healthineers) or half-Fourier single-shot turbo spin echo (HASTE) is used clinically in gynecologic MRI. However, their image quality is limited compared with turbo spin echo (TSE) with antispasmodics. Even with antispasmodics, TSE can be affected by artifacts, necessitating a rapid backup sequence. This study aimed to investigate the utility of HASTE with deep learning reconstruction and variable flip angle evolution (iHASTE) compared with conventional sequences with and without antispasmodics. This retrospective study included MRI scans without antispasmodics for 79 patients who underwent iHASTE, HASTE, and BLADE, and MRI scans with antispasmodics for 79 case-control-matched patients who underwent TSE. Three radiologists qualitatively evaluated image quality, robustness to artifacts, tissue contrast, and uterine lesion margins. Tissue contrast was also quantitatively evaluated. Quantitative evaluations revealed that iHASTE exhibited significantly superior tissue contrast to HASTE and BLADE. Qualitative evaluations indicated that iHASTE outperformed HASTE in overall quality. Two of three radiologists judged iHASTE to be significantly superior to BLADE, while two of three judged TSE to be significantly superior to iHASTE. iHASTE demonstrated greater robustness to artifacts than both BLADE and TSE. Lesion margins in iHASTE scored lower than in BLADE and TSE. iHASTE is a viable clinical option for patients undergoing gynecologic MRI without antispasmodics. iHASTE may also be considered a useful add-on sequence in patients undergoing MRI with antispasmodics.
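
The quantitative tissue-contrast comparison can be thought of as a relative signal difference between two regions of interest. The exact definition used in the study is not stated in the abstract; the sketch below uses one common convention, (Sa - Sb) / (Sa + Sb), with made-up ROI values.

```python
import numpy as np

def relative_contrast(roi_a, roi_b):
    """Relative contrast between two tissue ROIs: (Sa - Sb) / (Sa + Sb)."""
    sa, sb = np.mean(roi_a), np.mean(roi_b)
    return (sa - sb) / (sa + sb)

# Hypothetical mean signal intensities, e.g. endometrium vs. myometrium
roi_a = np.array([310.0, 295.0, 320.0, 305.0])
roi_b = np.array([180.0, 175.0, 190.0, 185.0])
print(f"Relative contrast: {relative_contrast(roi_a, roi_b):.3f}")
```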

Predicting progression-free survival in sarcoma using MRI-based automatic segmentation models and radiomics nomograms: a preliminary multicenter study.

Zhu N, Niu F, Fan S, Meng X, Hu Y, Han J, Wang Z

pubmed paper · Jul 1 2025
Some sarcomas are highly malignant and associated with high recurrence rates despite treatment. This multicenter study aimed to develop and validate a radiomics signature to estimate sarcoma progression-free survival (PFS). The study retrospectively enrolled 202 consecutive patients with pathologically diagnosed sarcoma who had pre-treatment axial fat-suppressed T2-weighted images (FS-T2WI); these were used to train the ROI-Net segmentation model. Among them, 120 patients, all of whom had pre-treatment axial T1-weighted and FS-T2WI images, were included in the radiomics analysis and randomly divided into a development group (n = 96) and a validation group (n = 24). In the development cohort, least absolute shrinkage and selection operator (LASSO) Cox regression was used to build the radiomics signature for PFS prediction. By combining significant clinical features with radiomics features, a nomogram was constructed using Cox regression. The proposed ROI-Net framework achieved a Dice coefficient of 0.820 (0.791-0.848). The radiomics signature based on 21 features could distinguish high-risk patients with poor PFS. Univariate Cox analysis revealed that peritumoral edema, metastases, and the radiomics score were associated with poor PFS and were included in the construction of the nomogram. The Radiomics-T1WI-Clinical model exhibited the best performance, with AUC values of 0.947, 0.907, and 0.924 at 300 days, 600 days, and 900 days, respectively. The proposed ROI-Net framework demonstrated high consistency between its segmentation results and expert annotations. The radiomics features and the combined nomogram have the potential to aid in predicting PFS for patients with sarcoma.
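
The signature-building step uses LASSO-penalized Cox regression, which keeps only features with non-zero coefficients in a proportional-hazards model of PFS. Below is a minimal sketch of that step on synthetic survival data; the cohort, features, and penalty are invented, and it uses the lifelines package rather than whatever toolkit the authors used.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(7)

# Synthetic development cohort: 96 patients x 20 candidate radiomic features,
# PFS time in days and a progression event indicator.
n, p = 96, 20
X = rng.standard_normal((n, p))
risk = 0.7 * X[:, 0] - 0.5 * X[:, 3]
df = pd.DataFrame(X, columns=[f"radiomic_{i}" for i in range(p)])
df["PFS_days"] = rng.exponential(scale=600.0 * np.exp(-risk))
df["progressed"] = (rng.random(n) < 0.7).astype(int)

# L1-penalized (LASSO) Cox regression; non-zero coefficients form the signature.
cph = CoxPHFitter(penalizer=0.1, l1_ratio=1.0)
cph.fit(df, duration_col="PFS_days", event_col="progressed")

signature = cph.params_[cph.params_.abs() > 1e-6]
print(f"{signature.size} features retained in the radiomics signature")
```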

Automatic Multiclass Tissue Segmentation Using Deep Learning in Brain MR Images of Tumor Patients.

Kandpal A, Kumar P, Gupta RK, Singh A

pubmed paper · Jun 30 2025
Precise delineation of brain tissues, including lesions, in MR images is crucial for data analysis and for objectively assessing conditions such as neurological disorders and brain tumors. Existing methods for tissue segmentation often fall short in addressing patients with lesions, particularly those with brain tumors. This study aimed to develop and evaluate a robust pipeline utilizing convolutional neural networks for rapid and automatic segmentation of whole-brain tissues, including tumor lesions. The proposed pipeline was developed using BraTS'21 data (1251 patients) and tested on local hospital data (100 patients). Ground-truth masks for lesions as well as brain tissues were generated. Two convolutional neural networks based on a deep residual U-Net framework were trained for segmenting brain tissues and tumor lesions. The performance of the pipeline was evaluated on independent test data using the Dice similarity coefficient (DSC) and volume similarity (VS). The proposed pipeline achieved a mean DSC of 0.84 and a mean VS of 0.93 on the BraTS'21 test data set. On the local hospital test data set, it attained a mean DSC of 0.78 and a mean VS of 0.91. The proposed pipeline also generated satisfactory masks in cases where the SPM12 software performed inadequately. The proposed pipeline offers a reliable and automatic solution for segmenting brain tissues and tumor lesions in MR images. Its adaptability makes it a valuable tool for both research and clinical applications, potentially streamlining workflows and enhancing the precision of analyses in neurological and oncological studies.
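
Both reported metrics have simple closed forms: DSC = 2|P ∩ T| / (|P| + |T|) and, in one common convention, VS = 1 - |V_P - V_T| / (V_P + V_T). The sketch below computes them on toy binary masks; the exact volume-similarity definition used by the authors is an assumption, not confirmed by the abstract.

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    return 2.0 * np.logical_and(pred, truth).sum() / (pred.sum() + truth.sum())

def volume_similarity(pred, truth):
    """Volume similarity, here defined as 1 - |Vp - Vt| / (Vp + Vt)."""
    vp, vt = float(pred.astype(bool).sum()), float(truth.astype(bool).sum())
    return 1.0 - abs(vp - vt) / (vp + vt)

# Toy 3-D masks: a "ground truth" and a slightly perturbed "prediction"
rng = np.random.default_rng(1)
truth = rng.random((64, 64, 64)) > 0.7
pred = np.logical_xor(truth, rng.random((64, 64, 64)) > 0.97)
print(f"DSC: {dice_coefficient(pred, truth):.3f}  VS: {volume_similarity(pred, truth):.3f}")
```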

Three-dimensional end-to-end deep learning for brain MRI analysis

Radhika Juglan, Marta Ligero, Zunamys I. Carrero, Asier Rabasco, Tim Lenz, Leo Misera, Gregory Patrick Veldhuizen, Paul Kuntke, Hagen H. Kitzler, Sven Nebelung, Daniel Truhn, Jakob Nikolas Kather

arxiv preprint · Jun 30 2025
Deep learning (DL) methods are increasingly outperforming classical approaches in brain imaging, yet their generalizability across diverse imaging cohorts remains inadequately assessed. As age and sex are key neurobiological markers in clinical neuroscience, influencing brain structure and disease risk, this study evaluates three existing three-dimensional architectures, namely the Simple Fully Connected Network (SFCN), DenseNet, and Shifted Window (Swin) Transformers, for age and sex prediction using T1-weighted MRI from four independent cohorts: UK Biobank (UKB, n=47,390), Dallas Lifespan Brain Study (DLBS, n=132), Parkinson's Progression Markers Initiative (PPMI, n=108 healthy controls), and Information eXtraction from Images (IXI, n=319). We found that SFCN consistently outperformed the more complex architectures, with an AUC of 1.00 [1.00-1.00] in UKB (internal test set) and 0.85-0.91 in external test sets for sex classification. For the age prediction task, SFCN demonstrated a mean absolute error (MAE) of 2.66 (r=0.89) in UKB and 4.98-5.81 (r=0.55-0.70) across external datasets. Pairwise DeLong and Wilcoxon signed-rank tests with Bonferroni corrections confirmed SFCN's superiority over the Swin Transformer across most cohorts (p<0.017 for three comparisons). Explainability analysis further demonstrated that model attention was regionally consistent across cohorts and specific to each task. Our findings reveal that simpler convolutional networks outperform denser, more complex attention-based DL architectures in brain image analysis by demonstrating better generalizability across different datasets.
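
The statistical comparison pairs the same subjects across models, so a Wilcoxon signed-rank test with a Bonferroni-adjusted alpha (0.05 / 3 ≈ 0.017 for three comparisons) is the natural tool. The sketch below illustrates that test on synthetic per-subject errors; the numbers are invented and this is not the study's analysis code.

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(3)

# Synthetic paired absolute age-prediction errors for two models on one cohort
err_sfcn = rng.gamma(shape=2.0, scale=2.5, size=120)
err_swin = err_sfcn + rng.normal(loc=0.8, scale=1.5, size=120)

stat, p_value = wilcoxon(err_sfcn, err_swin)

# Bonferroni correction for three pairwise model comparisons
alpha = 0.05 / 3
print(f"p = {p_value:.4g}; significant after correction: {p_value < alpha}")
```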

MDPG: Multi-domain Diffusion Prior Guidance for MRI Reconstruction

Lingtong Zhang, Mengdie Song, Xiaohan Hao, Huayu Mai, Bensheng Qiu

arxiv preprint · Jun 30 2025
Magnetic resonance imaging (MRI) reconstruction is essential in medical diagnostics. Diffusion models (DMs), the latest class of generative models, have struggled to produce high-fidelity images when applied directly in the image domain because of their stochastic nature. Latent diffusion models (LDMs) provide compact yet detailed prior knowledge in the latent domain, which can effectively guide the model toward learning the original data distribution. Inspired by this, we propose Multi-domain Diffusion Prior Guidance (MDPG), provided by pre-trained LDMs, to enhance data consistency in MRI reconstruction tasks. Specifically, we first construct a Visual-Mamba-based backbone, which enables efficient encoding and reconstruction of under-sampled images. Pre-trained LDMs are then integrated to provide conditional priors in both the latent and image domains. A novel Latent Guided Attention (LGA) module is proposed for efficient fusion across multi-level latent domains. Simultaneously, to effectively utilize the prior in both the k-space and image domains, under-sampled images are fused with generated fully-sampled images by the Dual-domain Fusion Branch (DFB) for self-adaptive guidance. Finally, to further enhance data consistency, we propose a k-space regularization strategy based on the non-auto-calibration signal (NACS) set. Extensive experiments on two public MRI datasets fully demonstrate the effectiveness of the proposed method. The code is available at https://github.com/Zolento/MDPG.
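
The data-consistency idea that dual-domain guidance builds on can be illustrated in a few lines: wherever k-space was actually sampled, the generated image's spectrum is overwritten with the measured values before returning to image space. The sketch below shows only this generic hard data-consistency step on a toy 2-D example; the DFB, LGA, and NACS components described above are specific to the paper and are not reproduced here.

```python
import numpy as np

def kspace_data_consistency(image, measured_kspace, mask):
    """Replace generated k-space samples with measured ones where acquired."""
    kspace = np.fft.fftshift(np.fft.fft2(image))
    kspace = np.where(mask, measured_kspace, kspace)
    return np.fft.ifft2(np.fft.ifftshift(kspace))

# Toy example: fully sampled "truth", 30% random k-space mask, zero-filled recon
rng = np.random.default_rng(0)
truth = rng.standard_normal((128, 128))
full_k = np.fft.fftshift(np.fft.fft2(truth))
mask = rng.random((128, 128)) < 0.3
measured = full_k * mask
zero_filled = np.fft.ifft2(np.fft.ifftshift(measured)).real
consistent = kspace_data_consistency(zero_filled, measured, mask)

residual = np.abs(np.fft.fftshift(np.fft.fft2(consistent)) - full_k)[mask]
print(f"max residual at sampled locations: {residual.max():.2e}")  # ~ numerical zero
```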

Artificial Intelligence-assisted Pixel-level Lung (APL) Scoring for Fast and Accurate Quantification in Ultra-short Echo-time MRI

Bowen Xin, Rohan Hickey, Tamara Blake, Jin Jin, Claire E Wainwright, Thomas Benkert, Alto Stemmer, Peter Sly, David Coman, Jason Dowling

arxiv preprint · Jun 30 2025
Lung magnetic resonance imaging (MRI) with ultrashort echo time (UTE) represents a recent breakthrough in lung structure imaging, providing image resolution and quality comparable to computed tomography (CT). Due to the absence of ionising radiation, MRI is often preferred over CT in paediatric diseases such as cystic fibrosis (CF), one of the most common genetic disorders in Caucasians. To assess structural lung damage in CF imaging, CT scoring systems provide valuable quantitative insights for disease diagnosis and progression. However, few quantitative scoring systems are available for structural lung MRI (e.g., UTE-MRI). To provide fast and accurate quantification in lung MRI, we investigated the feasibility of novel Artificial intelligence-assisted Pixel-level Lung (APL) scoring for CF. APL scoring consists of five stages: 1) image loading, 2) AI lung segmentation, 3) lung-bounded slice sampling, 4) pixel-level annotation, and 5) quantification and reporting. The results show that our APL scoring took 8.2 minutes per subject, more than twice as fast as the previous grid-level scoring. Additionally, our pixel-level scoring was statistically more accurate (p=0.021), while correlating strongly with grid-level scoring (R=0.973, p=5.85e-9). This tool has great potential to streamline the workflow of UTE lung MRI in clinical settings and could be extended to other structural lung MRI sequences (e.g., BLADE MRI) and to other lung diseases (e.g., bronchopulmonary dysplasia).
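
The quantification step reduces, in spirit, to computing a per-subject score from the pixel-level annotations and then correlating those scores with the grid-level scores. The sketch below uses an invented score definition (percentage of lung area annotated as abnormal) and synthetic numbers purely to illustrate the comparison; it is not the APL implementation.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(5)

def pixel_level_score(abnormal_mask, lung_mask):
    """Illustrative score: percentage of lung area annotated as abnormal."""
    lung = lung_mask.astype(bool)
    return 100.0 * np.logical_and(abnormal_mask, lung).sum() / lung.sum()

# Toy per-slice masks
lung = rng.random((256, 256)) > 0.5
abnormal = np.logical_and(lung, rng.random((256, 256)) > 0.8)
print(f"example score: {pixel_level_score(abnormal, lung):.1f}% of lung area")

# Synthetic per-subject scores from two scoring approaches, compared by correlation
pixel_scores = rng.uniform(0, 40, size=12)
grid_scores = pixel_scores + rng.normal(0, 2.0, size=12)
r, p = pearsonr(pixel_scores, grid_scores)
print(f"Pearson R = {r:.3f} (p = {p:.2e})")
```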

GUSL: A Novel and Efficient Machine Learning Model for Prostate Segmentation on MRI

Jiaxin Yang, Vasileios Magoulianitis, Catherine Aurelia Christie Alexander, Jintang Xue, Masatomo Kaneko, Giovanni Cacciamani, Andre Abreu, Vinay Duddalwar, C. -C. Jay Kuo, Inderbir S. Gill, Chrysostomos Nikias

arxiv preprint · Jun 30 2025
Prostate and zonal segmentation is a crucial step in the clinical diagnosis of prostate cancer (PCa). Computer-aided diagnosis tools for prostate segmentation are based on the deep learning (DL) paradigm. However, deep neural networks are perceived as "black-box" solutions by physicians, which makes them less practical for deployment in the clinical setting. In this paper, we introduce a feed-forward machine learning model, named Green U-shaped Learning (GUSL), suitable for medical image segmentation without backpropagation. GUSL introduces a multi-layer regression scheme for coarse-to-fine segmentation. Its feature extraction is based on a linear model, which enables seamless interpretability during feature extraction. GUSL also introduces a mechanism for attention on the prostate boundaries, an error-prone region, by employing regression to refine the predictions through residue correction. In addition, a two-step pipeline is used to mitigate class imbalance, an issue inherent in medical imaging problems. After experiments on two publicly available datasets and one private dataset, covering both prostate gland and zonal segmentation tasks, GUSL achieves state-of-the-art performance compared with DL-based models. Notably, GUSL features a very energy-efficient pipeline, with a model size several times smaller and lower complexity than the other solutions. On all datasets, GUSL achieved a Dice Similarity Coefficient (DSC) greater than $0.9$ for gland segmentation. Considering also its lightweight model size and transparency in feature extraction, it offers a competitive and practical package for medical imaging applications.
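
The "residue correction" idea, as described, amounts to fitting a regression model to the difference between the ground truth and a coarse prediction and adding the predicted residual back. The sketch below illustrates that general pattern on synthetic per-voxel features with an ordinary linear regression; the actual GUSL stages, features, and regressors differ and are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(9)

# Synthetic per-voxel features and labels; the "coarse" model ignores feature 1
n_voxels, n_features = 5000, 16
features = rng.standard_normal((n_voxels, n_features))
truth = (features[:, 0] + 0.8 * features[:, 1] > 0).astype(float)
coarse = 1.0 / (1.0 + np.exp(-2.0 * features[:, 0]))  # crude probability map

# Residue correction: regress (truth - coarse) on the features, then refine
residue_model = LinearRegression().fit(features, truth - coarse)
refined = np.clip(coarse + residue_model.predict(features), 0.0, 1.0)

def dice(pred, target, thr=0.5):
    p, t = pred > thr, target > 0.5
    return 2.0 * np.logical_and(p, t).sum() / (p.sum() + t.sum())

print(f"Dice before: {dice(coarse, truth):.3f}  after: {dice(refined, truth):.3f}")
```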