
Helo M, Nickel D, Kannengiesser S, Kuestner T

PubMed | Aug 29, 2025
The emergence of new medications for fatty liver conditions has increased the need for reliable and widely available assessment of the MRI proton density fat fraction (MRI-PDFF). While low-field MRI presents a promising solution, its utilization is challenging due to the low SNR. This work aims to enhance SNR and enable precise PDFF quantification at low-field MRI using a novel locally low-rank deep learning-based (LLR-DL) reconstruction. LLR-DL alternates between regularized SENSE and a neural network (U-Net) over several iterations, operating on complex-valued data. The network processes the spectral projection onto singular-value bases, which are computed on local patches across the echo dimension. The output of the network is recast into the basis of the original echoes and used as a prior for the following iteration. The final echoes are processed by a multi-echo Dixon algorithm. Two different protocols were proposed for imaging at 0.55 T. An iron-and-fat phantom and 10 volunteers were scanned on both 0.55 T and 1.5 T systems. Linear regression, t-statistics, and Bland-Altman analyses were conducted. LLR-DL achieved significantly improved image quality compared to the conventional reconstruction technique, with a 32.7% increase in peak SNR and a 25% improvement in structural similarity index. PDFF repeatability was 2.33% in phantoms (0% to 100%) and 0.79% in vivo (3% to 18%), with narrow cross-field-strength limits of agreement below 1.67% in phantoms and 1.75% in vivo. An LLR-DL reconstruction was developed and investigated to enable precise PDFF quantification at 0.55 T and improve consistency with 1.5 T results.
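
The description above amounts to: for each local patch, stack the echoes into a Casorati matrix, project it onto its singular-value basis, let the network operate on those coefficients, and recast the result into the original echo basis. A minimal NumPy sketch of that projection/recast step follows; the patch size, patch tiling, and function names are illustrative assumptions, not the authors' implementation.

import numpy as np

def llr_spectral_projection(echoes: np.ndarray, patch: int = 8):
    """echoes: complex array (n_echoes, H, W). For each non-overlapping patch, build a
    Casorati matrix (pixels x echoes), take its SVD, and return the projection onto the
    singular-value basis plus the basis needed to recast network outputs into echoes.
    Assumes patch*patch >= n_echoes; edge pixels outside full patches are skipped."""
    n_e, H, W = echoes.shape
    coeffs = np.zeros_like(echoes)
    bases = {}
    for y in range(0, H - patch + 1, patch):
        for x in range(0, W - patch + 1, patch):
            block = echoes[:, y:y + patch, x:x + patch].reshape(n_e, -1).T  # pixels x echoes
            U, s, Vh = np.linalg.svd(block, full_matrices=False)
            proj = block @ Vh.conj().T                 # spectral projection onto singular basis
            coeffs[:, y:y + patch, x:x + patch] = proj.T.reshape(n_e, patch, patch)
            bases[(y, x)] = Vh                         # stored to recast outputs later
    return coeffs, bases

def recast_to_echoes(coeffs: np.ndarray, bases: dict, patch: int = 8):
    """Invert the projection by multiplying patch coefficients with the stored basis."""
    n_e, H, W = coeffs.shape
    out = np.zeros_like(coeffs)
    for (y, x), Vh in bases.items():
        block = coeffs[:, y:y + patch, x:x + patch].reshape(n_e, -1).T
        out[:, y:y + patch, x:x + patch] = (block @ Vh).T.reshape(n_e, patch, patch)
    return out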

Nghiem B, Wu Z, Kashyap S, Kasper L, Uludağ K

PubMed | Aug 29, 2025
The purpose of this work was to develop and evaluate a novel method that leverages neural networks and physical modeling for 3D motion correction at different levels of corruption. The novel method ("UNet+JE") combines an existing neural network ("UNet_mag") with a physics-informed algorithm for jointly estimating motion parameters and the motion-compensated image ("JE"). UNet_mag and UNet+JE were trained separately on two training datasets with different distributions of motion corruption severity and compared to JE as a benchmark. All five resulting methods were tested on T1w 3D MPRAGE scans of healthy participants with simulated (n = 40) and in vivo (n = 10) motion corruption ranging from mild to severe. UNet+JE provided better motion correction than UNet_mag (p < 10⁻² for all metrics for both simulated and in vivo data) under both training datasets. UNet_mag exhibited residual image artifacts and blurring, as well as greater susceptibility to data distribution shifts than UNet+JE. UNet+JE and JE did not differ significantly in image correction quality (p > 0.05 for all metrics), even under strong distribution shifts for UNet+JE. However, UNet+JE reduced runtimes by median factors of 2.00 to 3.80 and 4.05 for the simulation and in vivo studies, respectively. UNet+JE benefitted from the robustness of joint estimation and the fast image improvement provided by the neural network, enabling the method to provide high-quality 3D image correction under a wide range of motion corruption within shorter runtimes.
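
The physics behind joint estimation can be illustrated with the simplest motion model: a rigid translation of one k-space shot appears as a linear phase ramp, and the motion parameter can be recovered by optimizing an image-quality criterion. The toy NumPy example below is a simplified stand-in for that idea (a single translation, gradient-entropy autofocus via grid search), not the authors' JE algorithm, which jointly solves for full rigid-body parameters and the motion-compensated image.

import numpy as np

N = 128
x = np.zeros((N, N)); x[32:96, 40:88] = 1.0           # simple rectangular "phantom"
k_true = np.fft.fftshift(np.fft.fft2(x))

# Shot 2 = odd k-space lines, acquired after the object shifted by dy_true pixels.
dy_true = 3.0
ky = np.fft.fftshift(np.fft.fftfreq(N))[:, None]      # cycles/pixel along y
phase = np.exp(-2j * np.pi * ky * dy_true)            # translation <-> linear phase ramp
k_corrupt = k_true.copy()
k_corrupt[1::2, :] *= phase[1::2, :]

def recon(k, dy):
    """Undo an assumed shift dy on the odd lines, then inverse FFT."""
    kc = k.copy()
    kc[1::2, :] *= np.exp(+2j * np.pi * ky[1::2, :] * dy)
    return np.fft.ifft2(np.fft.ifftshift(kc))

def gradient_entropy(img):
    """Autofocus metric: sharp, artifact-free images concentrate their gradients."""
    g = np.abs(np.diff(np.abs(img), axis=0))
    p = g / (g.sum() + 1e-12)
    return -(p * np.log(p + 1e-12)).sum()

# "Motion estimation" step: grid search for the shift giving the sharpest image.
candidates = np.linspace(-5, 5, 101)
dy_est = min(candidates, key=lambda d: gradient_entropy(recon(k_corrupt, d)))
print(f"estimated shift: {dy_est:.1f} px (true {dy_true} px)")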

Zakkar, A., Perwaiz, N., Harikrishnan, V., Zhong, W., Narra, V., Krule, A., Yousef, F., Kim, D., Burrage-Burton, M., Lawal, A. A., Gadi, V., Korpics, M. C., Kim, S. J., Chen, Z., Khan, A. A., Molina, Y., Dai, Y., Marai, E., Meidani, H., Nguyen, R., Salahudeen, A. A.

medRxiv preprint | Aug 29, 2025
PURPOSE Disparities in lung cancer incidence exist in Black populations, and screening criteria underserve Black populations due to disparately elevated risk in the screening-eligible population. Prediction models that integrate clinical and imaging-based features to individualize lung cancer risk are a potential means to mitigate these disparities. PATIENTS AND METHODS This multicenter (NLST) and catchment-population-based (UIH, urban and suburban Cook County) study utilized participants at risk of lung cancer with available lung CT imaging and follow-up between the years 2015 and 2024. A total of 53,452 participants in NLST and 11,654 in UIH were included based on age- and tobacco-use-based risk factors for lung cancer. Cohorts were used for training and testing of deep and machine learning models using clinical features alone or combined with CT image features (hybrid computer vision). RESULTS An optimized 7-clinical-feature model achieved ROC-AUC values of 0.64-0.67 in the NLST and 0.60-0.65 in the UIH cohorts across multiple years. Incorporation of imaging features to form a hybrid computer vision model significantly improved ROC-AUC values to 0.78-0.91 in NLST but deteriorated in UIH, with ROC-AUC values of 0.68-0.80, attributable to Black participants, for whom ROC-AUC values ranged from 0.63-0.72 across multiple years. Retraining the hybrid computer vision model by incorporating Black and other participants from the UIH cohort improved performance, with ROC-AUC values of 0.70-0.87 in a held-out UIH test set. CONCLUSION Hybrid computer vision predicted risk with improved accuracy compared to clinical risk models alone. However, potential biases in image training data reduced model generalizability in Black participants. Performance was improved upon retraining with a subset of the UIH cohort, suggesting that inclusive training and validation datasets can minimize racial disparities. Future studies incorporating vision models trained on representative datasets may demonstrate improved health equity upon clinical use.
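
As a rough illustration of the "hybrid computer vision" pattern (clinical variables concatenated with an image embedding from a convolutional backbone, feeding a simple classifier), the sketch below uses placeholder data. The backbone choice, feature names, and fusion scheme are assumptions for illustration, not the study's architecture.

import numpy as np
import torch, torchvision
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

backbone = torchvision.models.resnet18(weights=None)   # in practice, load pretrained weights
backbone.fc = torch.nn.Identity()                      # use the 512-d penultimate features
backbone.eval()

def image_embedding(ct_slice_3ch: torch.Tensor) -> np.ndarray:
    """ct_slice_3ch: (3, 224, 224) tensor, e.g. a windowed axial slice repeated on 3 channels."""
    with torch.no_grad():
        return backbone(ct_slice_3ch.unsqueeze(0)).squeeze(0).numpy()

# Toy data: 100 "participants" with 7 clinical features and one CT slice each.
rng = np.random.default_rng(0)
clinical = rng.normal(size=(100, 7))                   # age, pack-years, etc. (placeholder)
images = torch.rand(100, 3, 224, 224)
emb = np.stack([image_embedding(images[i]) for i in range(100)])
X = np.concatenate([clinical, emb], axis=1)            # hybrid clinical + imaging feature vector
y = rng.integers(0, 2, size=100)                       # lung-cancer outcome (placeholder)

clf = LogisticRegression(max_iter=1000).fit(X[:80], y[:80])
print("toy ROC-AUC:", roc_auc_score(y[80:], clf.predict_proba(X[80:])[:, 1]))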

Xu S, Ying Y, Hu Q, Li X, Li Y, Xiong H, Chen Y, Ye Q, Li X, Liu Y, Ai T, Du Y

PubMed | Aug 29, 2025
This study aimed to develop a predictive model integrating multi-sequence MRI radiomics, deep learning features, and habitat imaging to forecast pathological complete response (pCR) in breast cancer patients undergoing neoadjuvant therapy (NAT). A retrospective analysis included 203 breast cancer patients treated with NAT from May 2018 to January 2023. Patients were divided into training (n = 162) and test (n = 41) sets. Radiomics features were extracted from intratumoral and peritumoral regions in multi-sequence MRI (T2WI, DWI, and DCE-MRI) datasets. Habitat imaging was employed to analyze tumor subregions, characterizing heterogeneity within the tumor. We constructed and validated machine learning models, including a fusion model integrating all features, using Receiver Operating Characteristic (ROC) and Precision-Recall (PR) curves, decision curve analysis (DCA), and confusion matrices. Shapley Additive Explanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME) analyses were performed for model interpretability. The fusion model achieved superior predictive performance compared to the single-region models, with an AUC of 0.913 (95% CI: 0.770-1.000) in the test set. PR curve analysis showed an improved precision-recall balance, while DCA indicated higher clinical benefit. Confusion matrix analysis confirmed the model's classification accuracy. SHAP revealed DCE_LLL_DependenceUniformity as the most critical feature for predicting pCR and PC72 for non-pCR. LIME provided patient-specific insights into feature contributions. Integrating multi-dimensional MRI features with habitat imaging enhances pCR prediction in breast cancer. The fusion model offers a robust, non-invasive tool for guiding individualized treatment strategies while providing transparent interpretability through SHAP and LIME analyses.
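
The fusion-plus-interpretation workflow can be sketched as follows, assuming the radiomics, deep-learning, and habitat features have already been extracted into arrays; the gradient-boosting classifier and the placeholder data are illustrative, not the paper's exact pipeline.

import numpy as np, shap
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
radiomics = rng.normal(size=(203, 50))                 # placeholder feature blocks
deep_feats = rng.normal(size=(203, 32))
habitat = rng.normal(size=(203, 12))
y = rng.integers(0, 2, size=203)                       # pCR vs non-pCR (placeholder labels)

X = np.concatenate([radiomics, deep_feats, habitat], axis=1)   # fusion of all feature sets
X_train, X_test = X[:162], X[162:]                     # 162 training / 41 test, as in the study
y_train, y_test = y[:162], y[162:]

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
print("test AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

explainer = shap.TreeExplainer(model)                  # per-feature contribution estimates
shap_values = explainer.shap_values(X_test)
print("mean |SHAP| of most influential feature:", np.abs(shap_values).mean(axis=0).max())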

Mao Y, Heuvelmans MA, van Tuinen M, Yu D, Yi J, Oudkerk M, Ye Z, de Bock GH, Dorrius MD

PubMed | Aug 29, 2025
To assess the impact of reconstruction parameters on AI's performance in detecting and classifying risk-dominant nodules in baseline low-dose CT (LDCT) screening among a Chinese general population. Baseline LDCT scans from 300 consecutive participants in the Netherlands and China Big-3 (NELCIN-B3) trial were included. AI analyzed each scan reconstructed with four settings: 1 mm/0.7 mm thickness/interval with medium-soft and hard kernels (D45f/1 mm, B80f/1 mm) and 2 mm/1 mm with soft and medium-soft kernels (B30f/2 mm, D45f/2 mm). Results from a consensus read by two radiologists served as the reference standard. At the scan level, inter-reader agreement between AI and the reference standard, sensitivity, and specificity in determining the presence of a risk-dominant nodule were evaluated. For reference-standard risk-dominant nodules, the nodule detection rate and the agreement in nodule type classification between AI and the reference standard were assessed. AI-D45f/1 mm demonstrated a significantly higher sensitivity than AI-B80f/1 mm in determining the presence of a risk-dominant nodule per scan (77.5% vs. 31.5%, p < 0.0001). For reference-standard risk-dominant nodules (111/300, 37.0%), kernel variations (AI-D45f/1 mm vs. AI-B80f/1 mm) did not significantly affect AI's nodule detection rate (87.4% vs. 82.0%, p = 0.26) but substantially influenced the agreement in nodule type classification between AI and the reference standard (87.7% [50/57] vs. 17.7% [11/62], p < 0.0001). Change in thickness/interval (AI-D45f/1 mm vs. AI-D45f/2 mm) had no substantial influence on any aspect of AI's performance (p > 0.05). Variations in reconstruction kernels significantly affected AI's performance in risk-dominant nodule type classification, but not nodule detection. Ensuring consistency with radiologist-preferred kernels significantly improved agreement in nodule type classification and may help integrate AI more smoothly into clinical workflows. Question Patient management in lung cancer screening depends on the risk-dominant nodule, yet no prior studies have assessed the impact of reconstruction parameters on AI performance for these nodules. Findings The difference between reconstruction kernels (AI-D45f/1 mm vs. AI-B80f/1 mm, or AI-B30f/2 mm vs. AI-D45f/2 mm) significantly affected AI's performance in risk-dominant nodule type classification, but not nodule detection. Clinical relevance Using a kernel for AI that is consistent with the radiologist's choice is likely to improve the overall performance of AI-based CAD systems as an independent reader and support greater clinical acceptance and integration of AI tools into routine practice.
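
Because the same AI reads the same scans under different reconstructions, the sensitivity comparison is a paired one; McNemar's test is one standard choice for such data (the abstract does not state which test was used). The sketch below uses placeholder counts, not study data.

from statsmodels.stats.contingency_tables import mcnemar

# 2x2 table over scans with a reference-standard risk-dominant nodule:
# rows = detected on kernel A (yes/no), columns = detected on kernel B (yes/no)
table = [[30, 56],   # detected on both, detected on A only
         [5,  20]]   # detected on B only, missed on both
result = mcnemar(table, exact=True)
print(f"McNemar p-value: {result.pvalue:.4g}")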

Khorasani A

PubMed | Aug 29, 2025
Gliomas are known to have different sub-regions within the tumor, including the edema, necrotic, and active tumor regions. Segmenting these regions is very important for glioma treatment decisions and management. This paper aims to demonstrate the application of U-Net and pre-trained-backbone U-Net networks in glioma semantic segmentation, utilizing different magnetic resonance imaging (MRI) image weightings. The data used in this study for network training, validation, and testing come from the Multimodal Brain Tumor Segmentation (BraTS) 2021 challenge. In this study, we applied U-Net and different pre-trained-backbone U-Nets for the semantic segmentation of glioma regions. The ResNet, Inception, and VGG networks, pre-trained on the ImageNet dataset, were used as backbones in the U-Net architecture. Accuracy (ACC) and Intersection over Union (IoU) were employed to assess the performance of the networks. The most prominent finding to emerge from this study is that a trained ResNet-U-Net with T1 post-contrast enhancement (T1Gd) has the highest ACC and IoU for semantic segmentation of the necrotic and active tumor regions in glioma. It was also demonstrated that a trained ResNet-U-Net with T2 Fluid-Attenuated Inversion Recovery (T2-FLAIR) is a suitable combination for edema segmentation in glioma. Our study further validates that the proposed framework's architecture and modules are scientifically grounded and practical, enabling the extraction and aggregation of valuable semantic information to enhance glioma semantic segmentation capability. It also demonstrates how useful ResNet-U-Net can be for physicians to extract glioma regions automatically.
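
One convenient way to build a U-Net with an ImageNet-pretrained ResNet encoder is the segmentation_models_pytorch package, sketched below; the paper does not specify its implementation, and the channel and class counts here are assumptions.

import torch
import segmentation_models_pytorch as smp

model = smp.Unet(
    encoder_name="resnet50",        # ImageNet-pretrained ResNet backbone
    encoder_weights="imagenet",
    in_channels=1,                  # e.g. a single MRI weighting such as T1Gd or T2-FLAIR
    classes=4,                      # background, edema, necrotic, active tumor
)

x = torch.rand(2, 1, 128, 128)      # batch of 2D slices (spatial dims divisible by 32)
logits = model(x)                   # (2, 4, 128, 128) per-class logits
print(logits.shape)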

Alanazi TM

PubMed | Aug 29, 2025
Precise and early detection and diagnosis of lung diseases reduce life-threatening risk and the further spread of infection in patients. Computer-based image processing techniques use magnetic resonance imaging (MRI) as input for computation, detection, and segmentation processes to improve processing efficacy. This article introduces a Multimodal Feature Distinguishing Method (MFDM) for augmenting lung disease detection precision. The method distinguishes the extractable features of an MRI lung input using a homogeneity measure. Based on the possible differentiations for heterogeneity feature detection, training with a transformer network is pursued. This network performs differentiation verification and training classification independently and integrates the two for identifying heterogeneous features. The integrated classifications are used for detecting the infected region based on feature precision. If the differentiation fails, the transformer process restarts from the last known homogeneity feature between successive segments. Therefore, the distinguishing multimodal features between successive segments are validated for different differentiation levels, augmenting the accuracy. Thus, the introduced system achieves improvements of 8.78% in sensitivity, 8.81% in precision, and 9.75% in differentiation time while analyzing various lung features. These results indicate that the MFDM model can be successfully utilized in medical applications to improve the disease recognition rate.
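
The paper does not specify its homogeneity measure; one common choice for distinguishing homogeneous from heterogeneous lung segments is gray-level co-occurrence matrix (GLCM) homogeneity, sketched below as an illustrative stand-in.

import numpy as np
from skimage.feature import graycomatrix, graycoprops

def segment_homogeneity(segment: np.ndarray) -> float:
    """segment: 2D uint8 image patch; returns GLCM homogeneity in [0, 1]."""
    glcm = graycomatrix(segment, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return float(graycoprops(glcm, "homogeneity").mean())

rng = np.random.default_rng(0)
uniform_patch = np.full((64, 64), 120, dtype=np.uint8)             # homogeneous region
noisy_patch = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # heterogeneous region
print(segment_homogeneity(uniform_patch), ">", segment_homogeneity(noisy_patch))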

Jeong H, Lee JM, Kim HS, Chae H, Yoon SJ, Shin SH, Han IW, Heo JS, Min JH, Hyun SH, Kim H

PubMed | Aug 29, 2025
Pancreatic cancer is aggressive, with high recurrence rates, necessitating accurate prediction models for effective treatment planning, particularly for neoadjuvant chemotherapy or upfront surgery. This study explores the use of variational autoencoder (VAE)-generated synthetic data to predict early tumor recurrence (within six months) in pancreatic cancer patients who underwent upfront surgery. Preoperative data of 158 patients from January 2021 to December 2022 were analyzed, and machine learning models, including Logistic Regression, Random Forest (RF), Gradient Boosting Machine (GBM), and Deep Neural Networks (DNN), were trained on both the original and synthetic datasets. The VAE-generated dataset (n = 94) closely matched the original data (p > 0.05) and enhanced model performance, improving accuracy (GBM: 0.81 to 0.87; RF: 0.84 to 0.87) and sensitivity (GBM: 0.73 to 0.91; RF: 0.82 to 0.91). PET/CT-derived metabolic parameters were the strongest predictors, accounting for 54.7% of the model's predictive power, with the maximum standardized uptake value (SUVmax) showing the highest importance (0.182, 95% CI: 0.165-0.199). This study demonstrates that synthetic data can significantly enhance predictive models for pancreatic cancer recurrence, especially in data-limited scenarios, offering a promising strategy for oncology prediction models.
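
A minimal sketch of a variational autoencoder for generating synthetic tabular records, of the kind one might use to augment a small preoperative dataset, is shown below; the layer sizes, feature count, and training details are assumptions rather than the study's configuration.

import torch
import torch.nn as nn

class TabularVAE(nn.Module):
    def __init__(self, n_features: int, latent_dim: int = 8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU())
        self.mu = nn.Linear(32, latent_dim)
        self.logvar = nn.Linear(32, latent_dim)
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(),
                                     nn.Linear(32, n_features))

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization trick
        return self.decoder(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    recon_loss = nn.functional.mse_loss(recon, x, reduction="mean")
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_loss + kld

model = TabularVAE(n_features=20)
x = torch.randn(158, 20)                         # standardized preoperative features (placeholder)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):                             # brief illustrative training loop
    recon, mu, logvar = model(x)
    loss = vae_loss(recon, x, mu, logvar)
    opt.zero_grad(); loss.backward(); opt.step()

synthetic = model.decoder(torch.randn(94, 8)).detach()   # sample new synthetic records
print(synthetic.shape)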

Liu J, Zhang L, Yuan Y, Tang J, Liu Y, Xia L, Zhang J

PubMed | Aug 29, 2025
Osteoporotic vertebral fractures (OVFs) are common in older adults and often lead to disability if not properly diagnosed and classified. With the increased use of computed tomography (CT) imaging and the development of radiomics and deep learning technologies, there is potential to improve the classification accuracy of OVFs. This study aims to evaluate the efficacy of a deep learning radiomics model, derived from CT imaging, in accurately classifying OVFs. The study analyzed 981 patients (aged 50-95 years; 687 women, 294 men), involving 1098 vertebrae, from 3 medical centers who underwent both CT and magnetic resonance imaging examinations. The Assessment System of Thoracolumbar Osteoporotic Fractures (ASTLOF) classified OVFs into Classes 0, 1, and 2. The data were categorized into 4 cohorts: training (n=750), internal validation (n=187), external validation (n=110), and prospective validation (n=51). Deep transfer learning used the ResNet-50 architecture, pretrained on RadImageNet and ImageNet, to extract imaging features. Deep transfer learning-based features were combined with radiomics features and refined using Least Absolute Shrinkage and Selection Operator (LASSO) regression. The performance of 8 machine learning classifiers for OVF classification was assessed using receiver operating characteristic metrics and the "One-vs-Rest" approach. Performance comparisons between the RadImageNet- and ImageNet-based models were performed using the DeLong test. Shapley Additive Explanations (SHAP) analysis was used to interpret feature importance and the predictive rationale of the optimal fusion model. Feature selection and fusion yielded 33 and 54 fused features for the RadImageNet- and ImageNet-based models, respectively, following pretraining on the training set. The best-performing machine learning algorithms for these 2 deep learning radiomics models were the multilayer perceptron and the Light Gradient Boosting Machine (LightGBM). The macro-average area under the curve (AUC) values for the fused models based on RadImageNet and ImageNet were 0.934 and 0.996, respectively, with the DeLong test showing no statistically significant difference (P=2.34). The RadImageNet-based model significantly surpassed the ImageNet-based model across the internal, external, and prospective validation sets, with macro-average AUCs of 0.837 versus 0.648, 0.773 versus 0.633, and 0.852 versus 0.648, respectively (P<.05). Using the binary "One-vs-Rest" approach, the RadImageNet-based fused model achieved superior predictive performance for Class 2 (AUC=0.907, 95% CI 0.805-0.999), with Classes 0 and 1 following (AUC/accuracy=0.829/0.803 and 0.794/0.768, respectively). SHAP analysis provided a visualization of feature importance in the RadImageNet-based fused model, highlighting the 3 most influential features: cluster shade, mean, and large area low gray level emphasis, and their respective impacts on predictions. The RadImageNet-based fused model using CT imaging data exhibited superior predictive performance compared to the ImageNet-based model, demonstrating significant utility in OVF classification and aiding clinical decision-making for treatment planning. Among the 3 classes, the model performed best in identifying Class 2, followed by Class 0 and Class 1.
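
The feature-fusion and evaluation pattern described above (deep transfer-learning features combined with radiomics features, LASSO selection, a boosted-tree classifier, and macro-average one-vs-rest AUC over the three ASTLOF classes) can be sketched as follows; the data shapes, LASSO penalty, and the LightGBM-style classifier stand-in are illustrative assumptions.

import numpy as np
from sklearn.linear_model import Lasso
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
deep_feats = rng.normal(size=(750, 256))        # ResNet-50 transfer-learning features (placeholder)
radiomics = rng.normal(size=(750, 100))
y = rng.integers(0, 3, size=750)                # ASTLOF Class 0 / 1 / 2 (placeholder labels)

X = np.concatenate([deep_feats, radiomics], axis=1)
selector = Lasso(alpha=0.05).fit(X, y)          # LASSO-based feature selection
keep = np.flatnonzero(selector.coef_ != 0)
X_sel = X[:, keep]

clf = HistGradientBoostingClassifier(random_state=0).fit(X_sel[:600], y[:600])
proba = clf.predict_proba(X_sel[600:])
macro_auc = roc_auc_score(y[600:], proba, multi_class="ovr", average="macro")
print(f"selected {keep.size} features, macro one-vs-rest AUC = {macro_auc:.3f}")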

Sun H, Sanaat A, Yi W, Salimi Y, Huang Y, Decorads CE, Castarède I, Wu H, Lu L, Zaidi H

PubMed | Aug 29, 2025
Reducing PET scan acquisition time to minimize motion-related artifacts and improve patient comfort is an ongoing demand. This study proposes a deep-learning framework for synthesizing diagnostic-quality PET images from ultrafast scans in multi-tracer total-body PET imaging. A retrospective analysis was conducted on clinical uEXPLORER PET/CT datasets from a single institution, including [¹⁸F]FDG (N=50), [¹⁸F]FAPI (N=45) and [⁶⁸Ga]FAPI (N=60) studies. Standard 300-s acquisitions were performed for each patient, with ultrafast-scan PET images (3, 6, 15, 30, and 40 s) generated through list-mode data truncation. We developed two variants of a 3D SwinUNETR-V2 architecture: Model 1 (PET-only input) and Model 2 (PET+CT fusion input). The proposed methodology was trained and tested on all three datasets using 5-fold cross-validation. The proposed Model 1 and Model 2 significantly enhanced subjective image quality and lesion detectability in multi-tracer PET images compared to the original ultrafast scans. Model 1 and Model 2 also improved objective image quality metrics. For the [¹⁸F]FDG datasets, both approaches improved peak signal-to-noise ratio (PSNR) metrics across ultra-short acquisitions: 3 s: 48.169±6.121 (Model 1) vs. 48.123±6.103 (Model 2) vs. 44.092±7.508 (ultrafast), p < 0.001; 6 s: 48.997±5.960 vs. 48.461±5.897 vs. 46.503±7.190, p < 0.001; 15 s: 50.310±5.674 vs. 50.042±5.734 vs. 49.331±6.732, p < 0.001. The proposed Model 1 and Model 2 effectively enhance the image quality of multi-tracer total-body PET scans with ultrafast acquisition times. The predicted PET images demonstrate comparable performance in terms of image quality and lesion detectability.
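
A hedged sketch of a 3D SwinUNETR-style network mapping an ultrafast PET volume (optionally concatenated with CT, as in Model 2) to a standard-quality PET volume is shown below. It assumes MONAI's SwinUNETR, which offers a V2 variant in recent releases; the exact architecture, loss, and training setup used by the authors are not reproduced here.

import torch
from monai.networks.nets import SwinUNETR

# Model 2-style input: ultrafast PET + CT as two channels; Model 1 would use in_channels=1.
net = SwinUNETR(img_size=(96, 96, 96), in_channels=2, out_channels=1,
                feature_size=24, use_v2=True)

pet_ct_patch = torch.rand(1, 2, 96, 96, 96)      # (batch, channels, D, H, W) training patch
pred = net(pet_ct_patch)                         # predicted standard-quality PET patch
loss = torch.nn.functional.l1_loss(pred, torch.rand_like(pred))  # placeholder target volume
print(pred.shape, loss.item())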