Using deep feature distances for evaluating the perceptual quality of MR image reconstructions.

Adamson PM, Desai AD, Dominic J, Varma M, Bluethgen C, Wood JP, Syed AB, Boutin RD, Stevens KJ, Vasanawala S, Pauly JM, Gunel B, Chaudhari AS

PubMed · Jul 1, 2025
Commonly used MR image quality (IQ) metrics have poor concordance with radiologist-perceived diagnostic IQ. Here, we develop and explore deep feature distances (DFDs), i.e., distances computed in a lower-dimensional feature space encoded by a convolutional neural network (CNN), as improved perceptual IQ metrics for MR image reconstruction. We further explore the impact of distribution shifts between the images used to train the DFD CNN encoders and the images on which the IQ metrics are evaluated. We compare commonly used IQ metrics (PSNR and SSIM) to two "out-of-domain" DFDs with encoders trained on natural images, an "in-domain" DFD trained on MR images alone, and two domain-adjacent DFDs trained on large medical imaging datasets. We additionally compare these with several state-of-the-art but less commonly reported IQ metrics: visual information fidelity (VIF), the noise quality metric (NQM), and the high-frequency error norm (HFEN). IQ metric performance is assessed via correlations with five expert radiologist readers' scores of the perceived diagnostic IQ of various accelerated MR image reconstructions. We characterize the behavior of these IQ metrics under common distortions expected during image acquisition, including their sensitivity to acquisition noise. All DFDs and HFEN correlate more strongly with radiologist-perceived diagnostic IQ than SSIM, PSNR, and the other state-of-the-art metrics, with correlations comparable to radiologist inter-reader variability. Surprisingly, out-of-domain DFDs perform comparably to in-domain and domain-adjacent DFDs. A suite of IQ metrics, including DFDs and HFEN, should be used alongside commonly reported IQ metrics for a more holistic evaluation of MR image reconstruction perceptual quality. We also observe that general vision encoders are capable of assessing visual IQ even for MR images.
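As a rough illustration of the idea (not the paper's implementation), the sketch below computes an "out-of-domain" deep feature distance using a torchvision ResNet-18 pretrained on natural images, with PSNR as the conventional pixel-space baseline; the encoder choice, layer cut-off, and preprocessing are assumptions.

```python
# Minimal DFD sketch: distance between CNN feature maps of a reference and a
# reconstructed MR slice, alongside PSNR. Encoder and normalization are
# illustrative assumptions, not the paper's exact DFD implementations.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18, ResNet18_Weights

encoder = torch.nn.Sequential(
    *list(resnet18(weights=ResNet18_Weights.DEFAULT).children())[:-2]  # keep conv trunk
).eval()

def deep_feature_distance(ref: torch.Tensor, recon: torch.Tensor) -> torch.Tensor:
    """Mean squared distance between channel-normalized feature maps (lower = closer)."""
    with torch.no_grad():
        f_ref = F.normalize(encoder(ref), dim=1)
        f_recon = F.normalize(encoder(recon), dim=1)
    return (f_ref - f_recon).pow(2).mean()

def psnr(ref: torch.Tensor, recon: torch.Tensor, max_val: float = 1.0) -> torch.Tensor:
    return 10.0 * torch.log10(max_val ** 2 / F.mse_loss(recon, ref))

# Toy usage: a single-channel slice replicated to 3 channels for the encoder.
ref = torch.rand(1, 1, 256, 256).repeat(1, 3, 1, 1)
recon = (ref + 0.05 * torch.randn_like(ref)).clamp(0, 1)
print(float(deep_feature_distance(ref, recon)), float(psnr(ref, recon)))
```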

Redefining prostate cancer care: innovations and future directions in active surveillance.

Koett M, Melchior F, Artamonova N, Bektic J, Heidegger I

PubMed · Jul 1, 2025
This review provides a critical analysis of recent advancements in active surveillance (AS), emphasizing updates from major international guidelines and their implications for clinical practice. Recent revisions to international guidelines have broadened the eligibility criteria for AS to include selected patients with ISUP grade group 2 prostate cancer. This adjustment acknowledges that certain intermediate-risk cancers may be appropriate for AS, reflecting a heightened focus on balancing oncologic control with maintaining quality of life by minimizing the risk of overtreatment. This review explores key innovations in AS for prostate cancer, including multiparametric magnetic resonance imaging (mpMRI), genomic biomarkers, and risk calculators, which enhance patient selection and monitoring. While promising, their routine use remains debated due to guideline inconsistencies, cost, and accessibility. Special focus is given to biomarkers for identifying ISUP grade group 2 cancers suitable for AS. Additionally, the potential of artificial intelligence to improve diagnostic accuracy and risk stratification is examined. By integrating these advancements, this review provides a critical perspective on optimizing AS for more personalized and effective prostate cancer management.

Deformation registration based on reconstruction of brain MRI images with pathologies.

Lian L, Chang Q

PubMed · Jul 1, 2025
Deformable registration between brain tumor images and a brain atlas is an important tool for facilitating pathological analysis. However, registration of images with tumors is challenging due to the absent correspondences induced by the tumor. Furthermore, tumor growth may displace surrounding tissue, causing larger deformations than those observed in healthy brains. We therefore propose a new reconstruction-driven cascade feature warping (RCFW) network for brain tumor images. We first introduce a symmetric-constrained feature reasoning (SFR) module that reconstructs the missing normal appearance within tumor regions, allowing a dense spatial correspondence between the reconstructed quasi-normal appearance and the atlas. A dilated multi-receptive feature fusion module is further introduced, which collects long-range features from different dimensions to facilitate tumor region reconstruction, especially for large tumors. The reconstructed tumor images and the atlas are then jointly fed into a multi-stage feature warping (MFW) module to progressively predict spatial transformations. The method was evaluated on the Multimodal Brain Tumor Segmentation (BraTS) 2021 challenge database and compared with six existing methods. Experimental results showed that the proposed method effectively handles brain tumor image registration, maintaining smooth deformation in the tumor region while maximizing image similarity in normal regions.
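For orientation, the sketch below shows only the generic warping step shared by learning-based deformable registration methods, applying a predicted displacement field to a moving image with grid_sample; the paper's SFR, feature-fusion, and MFW modules are not reproduced, and the tensors are toy placeholders.

```python
# Generic 2D spatial-warping step used by learning-based deformable registration.
# This is a sketch of the warping operation only, not the RCFW architecture.
import torch
import torch.nn.functional as F

def warp(moving: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """moving: (N, C, H, W); flow: (N, 2, H, W) displacements in pixels (x, y)."""
    n, _, h, w = moving.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().unsqueeze(0)   # identity grid (1, 2, H, W)
    new = grid + flow                                          # displaced coordinates
    # normalize to [-1, 1] for grid_sample (x first, then y)
    new_x = 2.0 * new[:, 0] / (w - 1) - 1.0
    new_y = 2.0 * new[:, 1] / (h - 1) - 1.0
    sample_grid = torch.stack((new_x, new_y), dim=-1)          # (N, H, W, 2)
    return F.grid_sample(moving, sample_grid, align_corners=True)

atlas_like = torch.rand(1, 1, 128, 128)
flow = torch.zeros(1, 2, 128, 128)   # zero displacement = identity transform
assert torch.allclose(warp(atlas_like, flow), atlas_like, atol=1e-4)
```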

Identifying Primary Sites of Spinal Metastases: Expert-Derived Features vs. ResNet50 Model Using Nonenhanced MRI.

Liu K, Ning J, Qin S, Xu J, Hao D, Lang N

PubMed · Jul 1, 2025
The spinal column is a frequent site of metastases, affecting over 30% of patients with solid tumors. Identifying the primary tumor is essential for guiding clinical decisions but often requires resource-intensive diagnostics. To develop and validate artificial intelligence (AI) models using noncontrast MRI to identify primary sites of spinal metastases, aiming to enhance diagnostic efficiency. Retrospective. A total of 514 patients with pathologically confirmed spinal metastases (mean age, 59.3 ± 11.2 years; 294 males) were included, split into a development set (360) and a test set (154). Noncontrast sagittal MRI sequences (T1-weighted, T2-weighted, and fat-suppressed T2-weighted) were acquired using 1.5 T and 3 T scanners. Two models were evaluated for identifying primary sites of spinal metastases: an expert-derived features (EDF) model using radiologist-identified imaging features and a ResNet50-based deep learning (DL) model trained on noncontrast MRI. Performance was assessed using accuracy, precision, recall, F1 score, and the area under the receiver operating characteristic curve (ROC-AUC) for top-1, top-2, and top-3 indicators. Statistical analyses included the Shapiro-Wilk test, t tests, the Mann-Whitney U test, and chi-squared tests. ROC-AUCs were compared via DeLong tests, with 95% confidence intervals from 1000 bootstrap replications and significance at P < 0.05. The EDF model outperformed the DL model in top-3 accuracy (0.88 vs. 0.69) and AUC (0.80 vs. 0.71). Subgroup analysis showed superior EDF performance for common sites such as lung and kidney (e.g., kidney F1: 0.94 vs. 0.76), while the DL model had higher recall for rare sites such as thyroid (0.80 vs. 0.20). SHapley Additive exPlanations (SHAP) analysis identified sex (SHAP: -0.57 to 0.68), age (-0.48 to 0.98), T1WI signal intensity (-0.29 to 0.72), and pathological fractures (-0.76 to 0.25) as key features. AI techniques using noncontrast MRI improve diagnostic efficiency for spinal metastases. The EDF model outperformed the DL model, showing greater clinical potential. Spinal metastases, or cancer spreading to the spine, are common in patients with advanced cancer and often require extensive tests to determine the original tumor site. Our study explored whether artificial intelligence could make this process faster and more accurate using noncontrast MRI scans. We tested two methods: one based on radiologists' expertise in identifying imaging features and another using a deep learning model trained to analyze MRI images. The expert-based method was more reliable, correctly identifying the tumor site in 88% of cases when considering the top three likely diagnoses. This approach may help doctors reduce diagnostic time and improve patient care. EVIDENCE LEVEL: 3. TECHNICAL EFFICACY: Stage 2.
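For readers unfamiliar with the top-1/top-2/top-3 indicators reported above, the snippet below shows how top-k accuracy can be computed from any classifier's per-class probabilities (e.g., a ResNet50 softmax output or calibrated EDF-model scores); the class count and arrays are synthetic illustrations, not the study's data.

```python
# Top-k accuracy over per-class probabilities; data below are synthetic.
import numpy as np

def top_k_accuracy(probs: np.ndarray, labels: np.ndarray, k: int) -> float:
    """probs: (n_samples, n_classes); labels: (n_samples,) integer class ids."""
    top_k = np.argsort(probs, axis=1)[:, -k:]          # k highest-probability classes
    hits = np.any(top_k == labels[:, None], axis=1)
    return float(hits.mean())

rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(6), size=100)            # 6 hypothetical primary sites
labels = rng.integers(0, 6, size=100)
for k in (1, 2, 3):
    print(f"top-{k} accuracy: {top_k_accuracy(probs, labels, k):.2f}")
```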

Automated vs manual cardiac MRI planning: a single-center prospective evaluation of reliability and scan times.

Glessgen C, Crowe LA, Wetzl J, Schmidt M, Yoon SS, Vallée JP, Deux JF

PubMed · Jul 1, 2025
To evaluate the impact of an AI-based automated cardiac MRI (CMR) planning software on procedure errors and scan times compared with manual planning alone. Consecutive patients undergoing non-stress CMR were prospectively enrolled at a single center (August 2023-February 2024) and randomized to manual or automated scan execution using prototype software. Patients with pacemakers, targeted indications, or inability to consent were excluded. All patients underwent the same contrast-enhanced CMR protocol, with breath-hold (BH) or free-breathing (FB) acquisitions. Supervising radiologists recorded procedure errors (plane prescription, forgotten views, incorrect propagation of cardiac planes, and field-of-view mismanagement). Scan times and the idle phase (non-acquisition portion) were computed from scanner logs. Most data were non-normally distributed and were compared using non-parametric tests. Eighty-two patients (mean age, 51.6 ± 17.5 years; 56 men) were included. Forty-four patients underwent automated and 38 manual CMR. The mean rate of procedure errors was significantly (p = 0.01) lower in the automated group (0.45) than in the manual group (1.13). The rate of error-free examinations was higher (p = 0.03) in the automated group (31/44; 70.5%) than in the manual group (17/38; 44.7%). Automated studies were shorter than manual studies in FB (30.3 vs 36.5 min, p < 0.001) but had similar durations in BH (42.0 vs 43.5 min, p = 0.42). The idle phase was lower in automated studies for both FB and BH strategies (both p < 0.001). An AI-based automated software performed CMR at a clinical level, with fewer planning errors and improved efficiency compared with manual planning. Question What is the impact of an AI-based automated CMR planning software on procedure errors and scan times compared with manual planning alone? Findings Software-driven examinations were more reliable (71% error-free) than human-planned ones (45% error-free) and showed improved efficiency with reduced idle time. Clinical relevance CMR examinations require extensive technologist training and continuous attention, and involve many planning steps. A fully automated software reliably acquired non-stress CMR, potentially reducing the risk of mistakes and increasing data homogeneity.
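A minimal sketch of the style of comparison reported above, assuming SciPy: a chi-squared test on the error-free counts quoted in the abstract and a Mann-Whitney U test on illustrative (not actual) scan-duration samples.

```python
# Chi-squared test on error-free examination counts (taken from the abstract)
# and a Mann-Whitney U test on synthetic durations standing in for the
# free-breathing automated vs. manual groups.
import numpy as np
from scipy import stats

# rows: automated, manual; columns: error-free, with errors
table = np.array([[31, 44 - 31],
                  [17, 38 - 17]])
chi2, p_chi2, dof, _ = stats.chi2_contingency(table)

rng = np.random.default_rng(1)
auto_minutes = rng.normal(30.3, 4.0, size=44)    # illustrative durations only
manual_minutes = rng.normal(36.5, 4.0, size=38)
u_stat, p_mwu = stats.mannwhitneyu(auto_minutes, manual_minutes)

print(f"error-free rate: chi2={chi2:.2f}, p={p_chi2:.3f}")
print(f"scan duration:   U={u_stat:.0f}, p={p_mwu:.2e}")
```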

Noninvasive identification of HER2 status by integrating multiparametric MRI-based radiomics model with the vesical imaging-reporting and data system (VI-RADS) score in bladder urothelial carcinoma.

Luo C, Li S, Han Y, Ling J, Wu X, Chen L, Wang D, Chen J

PubMed · Jul 1, 2025
HER2 expression is crucial for the application of HER2-targeted antibody-drug conjugates. This study aims to construct a predictive model integrating multiparametric magnetic resonance imaging (mpMRI)-based multimodal radiomics and the Vesical Imaging-Reporting and Data System (VI-RADS) score for noninvasive identification of HER2 status in bladder urothelial carcinoma (BUC). A total of 197 patients were retrospectively enrolled and randomly divided into a training cohort (n = 145) and a testing cohort (n = 52). Multimodal radiomics features were derived from mpMRI, which was also used for VI-RADS score evaluation. The LASSO algorithm and six machine learning methods were applied for radiomics feature screening and model construction. The optimal radiomics model was then integrated with the VI-RADS score to predict HER2 status, which was determined by immunohistochemistry. The performance of the predictive model was evaluated by receiver operating characteristic curve analysis with the area under the curve (AUC). Among the enrolled patients, 110 (55.8%) were HER2-positive and 87 (44.2%) were HER2-negative. Eight features were selected to establish the radiomics signature. The optimal radiomics signature achieved AUC values of 0.841 (95% CI 0.779-0.904) in the training cohort and 0.794 (95% CI 0.650-0.938) in the testing cohort. The KNN model was selected to evaluate the significance of the radiomics signature and VI-RADS score, which were integrated as a predictive nomogram. The AUC values for the nomogram in the training and testing cohorts were 0.889 (95% CI 0.840-0.938) and 0.826 (95% CI 0.702-0.950), respectively. Our study indicates that a predictive model integrating mpMRI-based radiomics and the VI-RADS score can accurately predict HER2 status in BUC. The model may aid clinicians in tailoring individualized therapeutic strategies.
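The sketch below illustrates the general shape of such a pipeline, LASSO screening of radiomics features followed by a KNN classifier with the VI-RADS score appended as an extra predictor, using synthetic placeholder data; the study's actual features, tuning, and nomogram construction are not reproduced.

```python
# LASSO feature screening + KNN classification with VI-RADS appended; all
# arrays are synthetic stand-ins for the study's radiomics features and labels.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.metrics import roc_auc_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
X_radiomics = rng.normal(size=(197, 120))     # stand-in mpMRI radiomics features
vi_rads = rng.integers(1, 6, size=197)        # VI-RADS scores (1-5)
y = rng.integers(0, 2, size=197)              # HER2 status (0 = negative, 1 = positive)
train, test = np.arange(145), np.arange(145, 197)

# LASSO screening on the training cohort: keep features with nonzero coefficients
lasso = make_pipeline(StandardScaler(), LassoCV(cv=5)).fit(X_radiomics[train], y[train])
keep = np.flatnonzero(lasso[-1].coef_)
X_all = np.column_stack([X_radiomics[:, keep], vi_rads])  # append VI-RADS as a predictor

knn = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
knn.fit(X_all[train], y[train])
probs = knn.predict_proba(X_all[test])[:, 1]
print("testing-cohort AUC:", round(roc_auc_score(y[test], probs), 3))
```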

Learning-based motion artifact correction in the Z-spectral domain for chemical exchange saturation transfer MRI.

Singh M, Mahmud SZ, Yedavalli V, Zhou J, Kamson DO, van Zijl P, Heo HY

PubMed · Jul 1, 2025
To develop and evaluate a physics-driven, saturation-contrast-aware, deep-learning-based framework for motion artifact correction in CEST MRI. A neural network was designed to correct motion artifacts directly in the Z-spectrum frequency (Ω) domain rather than in the image spatial domain. Motion artifacts were simulated by modeling 3D rigid-body motion and readout-related motion during k-space sampling. A saturation-contrast-specific loss function was added to preserve amide proton transfer (APT) contrast and to enforce image alignment between motion-corrected and ground-truth images. The proposed neural network was evaluated on simulation data and demonstrated in healthy volunteers and brain tumor patients. The experimental results showed the effectiveness of motion artifact correction in the Z-spectrum frequency domain (MOCOΩ) compared with correction in the image spatial domain. In addition, a temporal convolution applied to the dynamic saturation image series further improved the reconstruction results, acting as a denoising process. MOCOΩ outperformed existing motion-correction techniques in terms of image quality and computational efficiency. At 3 T, human experiments showed that, after motion artifact correction, the root mean squared error (RMSE) of APT images decreased from 4.7% to 2.1% at 1 μT and from 6.2% to 3.5% at 1.5 μT in the case of "moderate" motion, and from 8.7% to 2.8% at 1 μT and from 12.7% to 4.5% at 1.5 μT in the case of "severe" motion. MOCOΩ could effectively correct motion artifacts in CEST MRI without compromising saturation transfer contrast.
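As a conceptual sketch of working in the Z-spectrum (Ω) domain, the toy model below applies a small residual 1D CNN along the saturation-offset dimension of per-voxel spectra; the layer sizes, number of offsets, and training loss are placeholders, not the paper's MOCOΩ architecture or saturation-contrast-aware loss.

```python
# Toy residual 1D CNN operating along the Z-spectrum (saturation offset) axis.
import torch
import torch.nn as nn

class ZSpectrumCorrector(nn.Module):
    def __init__(self, channels: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, channels, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(channels, 1, kernel_size=5, padding=2),
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        """z: (batch_of_voxels, 1, n_offsets) motion-corrupted Z-spectra."""
        return z + self.net(z)  # residual correction of each spectrum

model = ZSpectrumCorrector()
corrupted = torch.rand(8, 1, 53)       # e.g., 53 saturation frequency offsets (assumed)
corrected = model(corrupted)
print(corrected.shape)                 # torch.Size([8, 1, 53])
```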

Visualizing Preosteoarthritis: Updates on UTE-Based Compositional MRI and Deep Learning Algorithms.

Sun D, Wu G, Zhang W, Gharaibeh NM, Li X

PubMed · Jul 1, 2025
Osteoarthritis (OA) is heterogeneous and involves structural changes across the whole joint, including cartilage, meniscus/labrum, ligaments, and tendons, tissues that mainly have short T2 relaxation times. Detecting OA before the onset of irreversible changes is crucial for early proactive management and for limiting the growing disease burden. Recent advances in quantitative imaging techniques and deep learning (DL) algorithms in musculoskeletal imaging have shown great potential for visualizing "pre-OA." In this review, we first focus on ultrashort echo time-based magnetic resonance imaging (MRI) techniques for direct visualization as well as quantitative morphological and compositional assessment of both short- and long-T2 musculoskeletal tissues, and second explore how DL is revolutionizing MRI analysis (eg, automatic tissue segmentation and extraction of quantitative image biomarkers) and the classification, prediction, and management of OA. PLAIN LANGUAGE SUMMARY: Detecting osteoarthritis (OA) before the onset of irreversible changes is crucial for early proactive management. OA is heterogeneous and involves structural changes across the whole joint, including cartilage, meniscus/labrum, ligaments, and tendons, mainly tissues with short T2 relaxation times. Ultrashort echo time-based magnetic resonance imaging (MRI), in particular, enables direct visualization and quantitative compositional assessment of short-T2 tissues. Deep learning is revolutionizing MRI analysis (eg, automatic tissue segmentation and extraction of quantitative image biomarkers) and the detection, classification, and prediction of disease. Together, they have advanced the identification of imaging biomarkers/features for pre-OA. LEVEL OF EVIDENCE: 5 TECHNICAL EFFICACY: Stage 2.

Multiparametric MRI for Assessment of the Biological Invasiveness and Prognosis of Pancreatic Ductal Adenocarcinoma in the Era of Artificial Intelligence.

Zhao B, Cao B, Xia T, Zhu L, Yu Y, Lu C, Tang T, Wang Y, Ju S

PubMed · Jul 1, 2025
Pancreatic ductal adenocarcinoma (PDAC) is one of the deadliest malignant tumors, with a grim 5-year overall survival rate of about 12%. As its incidence and mortality rates rise, it is likely to become the second-leading cause of cancer-related death. Radiological assessment determines the stage and management of PDAC. However, PDAC is a highly heterogeneous disease with a complex tumor microenvironment, and it is challenging to accurately reflect its biological aggressiveness and prognosis through morphological evaluation alone. With the rapid development of artificial intelligence (AI), multiparametric magnetic resonance imaging (mpMRI) using specific contrast media and specialized techniques can provide morphological and functional information with high image quality and has become a powerful tool for quantifying intratumoral characteristics. In addition, AI has become widespread in medical imaging analysis. Radiomics is the high-throughput mining of quantitative image features from medical imaging, enabling data to be extracted and applied for better decision support. Deep learning is a subset of artificial neural network algorithms that can automatically learn feature representations from data. AI-enabled imaging biomarkers derived from mpMRI hold enormous promise to bridge the gap between medical imaging and personalized medicine, and they demonstrate substantial advantages in predicting the biological characteristics and prognosis of PDAC. However, current AI-based models of PDAC operate mainly on a single modality with relatively small sample sizes, and technical reproducibility and biological interpretation present new challenges. In the future, the integration of multi-omics data, such as radiomics and genomics, alongside the establishment of standardized analytical frameworks, will provide opportunities to increase the robustness and interpretability of AI-enabled imaging biomarkers and bring them closer to clinical practice. EVIDENCE LEVEL: 3 TECHNICAL EFFICACY: Stage 4.

Preoperative discrimination of absence or presence of myometrial invasion in endometrial cancer with an MRI-based multimodal deep learning radiomics model.

Chen Y, Ruan X, Wang X, Li P, Chen Y, Feng B, Wen X, Sun J, Zheng C, Zou Y, Liang B, Li M, Long W, Shen Y

PubMed · Jul 1, 2025
Accurate preoperative evaluation of myometrial invasion (MI) is essential for treatment decisions in endometrial cancer (EC). However, the diagnostic accuracy of commonly used magnetic resonance imaging (MRI) techniques for this assessment varies considerably. This study aims to enhance preoperative discrimination of the absence or presence of MI by developing and validating a multimodal deep learning radiomics (MDLR) model based on MRI. Between March 2010 and February 2023, 1139 EC patients (age 54.771 ± 8.465 years; range 24-89 years) from five independent centers were retrospectively enrolled. We utilized ResNet18 to extract multi-scale deep learning features from T2-weighted imaging, followed by feature selection via the Mann-Whitney U test. A Deep Learning Signature (DLS) was then formulated using an Integrated Sparse Bayesian Extreme Learning Machine. Furthermore, we developed a Clinical Model (CM) based on clinical characteristics and an MDLR model integrating clinical characteristics with the DLS. The area under the curve (AUC) was used to evaluate the diagnostic performance of the models. Decision curve analysis (DCA) and the integrated discrimination index (IDI) were used to assess clinical benefit and compare the predictive performance of the models. The MDLR model, comprising age, histopathologic grade, subjective MR findings (TMD and Reading for MI status), and the DLS, demonstrated the best predictive performance. The AUC values for the MDLR in the training set, internal validation set, external validation set 1, and external validation set 2 were 0.899 (95% CI, 0.866-0.926), 0.874 (95% CI, 0.829-0.912), 0.862 (95% CI, 0.817-0.899), and 0.867 (95% CI, 0.806-0.914), respectively. The IDI and DCA showed higher diagnostic performance and greater clinical net benefit for the MDLR than for the CM or DLS, indicating that the MDLR may enhance decision-making support. The MDLR, which incorporates clinical characteristics and the DLS, could improve preoperative accuracy in discriminating the absence or presence of MI. This improvement may facilitate individualized treatment decision-making for EC.
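The sketch below illustrates the deep-feature pathway described in the abstract, ResNet18 features from T2-weighted slices screened with the Mann-Whitney U test, on synthetic placeholder data; the Integrated Sparse Bayesian Extreme Learning Machine and the clinical integration are not shown.

```python
# ResNet18 global feature extraction from T2WI slices followed by univariate
# Mann-Whitney U screening against MI status; all data are synthetic stand-ins.
import numpy as np
import torch
from scipy.stats import mannwhitneyu
from torchvision.models import resnet18, ResNet18_Weights

backbone = resnet18(weights=ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()          # expose the 512-dim pooled features
backbone.eval()

def extract_features(slices: torch.Tensor) -> np.ndarray:
    """slices: (n, 1, H, W) T2WI, replicated to 3 channels for the backbone."""
    with torch.no_grad():
        return backbone(slices.repeat(1, 3, 1, 1)).numpy()

t2_slices = torch.rand(40, 1, 224, 224)    # toy stand-in for T2-weighted slices
mi_status = np.random.default_rng(0).integers(0, 2, size=40)
feats = extract_features(t2_slices)

# Univariate screening: keep features whose distributions differ by MI status
p_values = np.ones(feats.shape[1])
for j in np.flatnonzero(feats.std(axis=0) > 0):    # skip constant features
    p_values[j] = mannwhitneyu(feats[mi_status == 0, j],
                               feats[mi_status == 1, j]).pvalue
selected = np.flatnonzero(p_values < 0.05)
print(f"{selected.size} of {feats.shape[1]} deep features retained")
```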