Clinical Metadata Guided Limited-Angle CT Image Reconstruction

Yu Shi, Shuyi Fan, Changsheng Fang, Shuo Han, Haodong Li, Li Zhou, Bahareh Morovati, Dayang Wang, Hengyong Yu

arXiv preprint · Sep 1, 2025
Limited-angle computed tomography (LACT) offers improved temporal resolution and reduced radiation dose for cardiac imaging, but suffers from severe artifacts due to truncated projections. To address the ill-posedness of LACT reconstruction, we propose a two-stage diffusion framework guided by structured clinical metadata. In the first stage, a transformer-based diffusion model conditioned exclusively on metadata, including acquisition parameters, patient demographics, and diagnostic impressions, generates coarse anatomical priors from noise. The second stage further refines the images by integrating both the coarse prior and metadata to produce high-fidelity results. Physics-based data consistency is enforced at each sampling step in both stages using an Alternating Direction Method of Multipliers module, ensuring alignment with the measured projections. Extensive experiments on both synthetic and real cardiac CT datasets demonstrate that incorporating metadata significantly improves reconstruction fidelity, particularly under severe angular truncation. Compared to existing metadata-free baselines, our method achieves superior performance in SSIM, PSNR, nMI, and PCC. Ablation studies confirm that different types of metadata contribute complementary benefits, particularly diagnostic and demographic priors under limited-angle conditions. These findings highlight the dual role of clinical metadata in improving both reconstruction quality and efficiency, supporting their integration into future metadata-guided medical imaging frameworks.
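
The physics-based data-consistency step described above can be illustrated with a toy ADMM update that pulls a diffusion-stage estimate back toward the measured projections. This is a minimal numpy sketch under simplifying assumptions (a small dense matrix stands in for the limited-angle projection operator, and `admm_data_consistency`, `rho`, and `n_iters` are illustrative names), not the authors' implementation.

```python
# Toy ADMM data-consistency block: keep x close to the diffusion prior x_prior
# while enforcing A @ x ≈ y (the measured limited-angle projections).
import numpy as np

def admm_data_consistency(x_prior, A, y, rho=1.0, n_iters=10):
    n = x_prior.size
    x = x_prior.copy()
    z = x.copy()
    u = np.zeros(n)
    AtA, Aty = A.T @ A, A.T @ y
    lhs = AtA + rho * np.eye(n)            # x-update: (A^T A + rho I) x = A^T y + rho (z - u)
    for _ in range(n_iters):
        x = np.linalg.solve(lhs, Aty + rho * (z - u))
        z = (x_prior + rho * (x + u)) / (1.0 + rho)   # z-update pulls back toward the prior
        u = u + x - z                                  # dual update
    return x

rng = np.random.default_rng(0)
x_true = rng.random(64)                        # toy "image" (flattened)
A = rng.normal(size=(32, 64))                  # under-determined system ~ truncated angles
y = A @ x_true                                 # measured projections
x_prior = x_true + 0.3 * rng.normal(size=64)   # noisy estimate from the diffusion stage
x_dc = admm_data_consistency(x_prior, A, y)
print(np.linalg.norm(A @ x_dc - y), np.linalg.norm(A @ x_prior - y))
```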

Uncovering novel functions of NUF2 in glioblastoma and MRI-based expression prediction.

Zhong RD, Liu YS, Li Q, Kou ZW, Chen FF, Wang H, Zhang N, Tang H, Zhang Y, Huang GD

PubMed · Sep 1, 2025
Glioblastoma multiforme (GBM) is a lethal brain tumor with limited therapies. NUF2, a kinetochore protein involved in cell cycle regulation, shows oncogenic potential in various cancers; however, its role in GBM pathogenesis remains unclear. In this study, we investigated NUF2's function and mechanisms in GBM, developed an MRI-based machine learning model to predict its expression non-invasively, and evaluated its potential as a therapeutic target and prognostic biomarker. Functional assays (proliferation, colony formation, migration, and invasion) and cell cycle analysis were conducted using NUF2-knockdown U87/U251 cells. Western blotting was performed to assess the expression levels of β-catenin and MMP-9. Bioinformatic analyses included pathway enrichment, immune infiltration, and single-cell subtype characterization. Using preoperative contrast-enhanced T1-weighted (T1CE) MRI sequences from 61 patients, we extracted 1037 radiomic features and developed a predictive model using least absolute shrinkage and selection operator (LASSO) regression for feature selection and a random forest classifier, with rigorous cross-validation. NUF2 overexpression in GBM tissues and cells was correlated with poor survival (p < 0.01). Knockdown of NUF2 significantly suppressed malignant phenotypes (p < 0.05), induced G0/G1 arrest (p < 0.01), and increased sensitivity to temozolomide (TMZ) treatment via the β-catenin/MMP-9 pathway. The radiomic model achieved superior NUF2 prediction (AUC = 0.897) using six optimized features. Key features demonstrated associations with MGMT methylation and 1p/19q co-deletion, serving as independent prognostic markers. NUF2 drives GBM progression through β-catenin/MMP-9 activation, establishing its dual role as a therapeutic target and a prognostic biomarker. The developed radiogenomic model enables precise non-invasive NUF2 evaluation, thereby advancing personalized GBM management. This study highlights the translational value of integrating molecular biology with artificial intelligence in neuro-oncology.
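
The radiomic half of this pipeline (LASSO feature selection followed by a random forest classifier, with cross-validation) maps directly onto standard scikit-learn components. Below is a hedged sketch with synthetic feature values standing in for the 1,037 T1CE radiomic features; it is not the authors' code, and the 20-feature cap is an illustrative choice.

```python
# LASSO-based feature selection feeding a random forest, evaluated with
# cross-validated AUC. All data are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LassoCV
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
X = rng.normal(size=(61, 1037))                        # 61 patients x 1037 radiomic features
y = (X[:, 0] + X[:, 1] - X[:, 2] + rng.normal(scale=0.5, size=61) > 0).astype(int)

pipe = Pipeline([
    ("scale", StandardScaler()),
    # keep the 20 features with the largest |LASSO coefficient| (illustrative cap)
    ("lasso_select", SelectFromModel(LassoCV(cv=5, max_iter=5000),
                                     threshold=-np.inf, max_features=20)),
    ("rf", RandomForestClassifier(n_estimators=500, random_state=0)),
])

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
auc = cross_val_score(pipe, X, y, cv=cv, scoring="roc_auc")
print("cross-validated AUC:", auc.mean())
```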

Automated rating of Fazekas scale in fluid-attenuated inversion recovery MRI for ischemic stroke or transient ischemic attack using machine learning.

Jeon ET, Kim SM, Jung JM

PubMed · Sep 1, 2025
White matter hyperintensities (WMH) are commonly assessed using the Fazekas scale, a subjective visual grading system. Despite the emergence of deep learning models for automatic WMH grading, their application in stroke patients remains limited. This study aimed to develop and validate an automatic segmentation and grading model for WMH in stroke patients, utilizing spatial-probabilistic methods. We developed a two-step deep learning pipeline to predict Fazekas scale scores from T2-weighted FLAIR images. First, WMH segmentation was performed using a residual neural network based on the U-Net architecture. Then, Fazekas scale grading was carried out using a 3D convolutional neural network trained on the segmented WMH probability volumes. A total of 471 stroke patients from three different sources were included in the analysis. The performance metrics included area under the precision-recall curve (AUPRC), Dice similarity coefficient, and absolute error for WMH volume prediction. In addition, agreement analysis and quadratic weighted kappa were calculated to assess the accuracy of the Fazekas scale predictions. The WMH segmentation model achieved an AUPRC of 0.81 (95% CI, 0.55-0.95) and a Dice similarity coefficient of 0.73 (95% CI, 0.49-0.87) in the internal test set. The mean absolute error between the true and predicted WMH volumes was 3.1 ml (95% CI, 0.0 ml-15.9 ml), with no significant variation across Fazekas scale categories. The agreement analysis demonstrated strong concordance, with an R-squared value of 0.91, a concordance correlation coefficient of 0.96, and a systematic difference of 0.33 ml in the internal test set, and 0.94, 0.97, and 0.40 ml, respectively, in the external validation set. In predicting Fazekas scores, the 3D convolutional neural network achieved quadratic weighted kappa values of 0.951 for regression tasks and 0.956 for classification tasks in the internal test set, and 0.898 and 0.956, respectively, in the external validation set. The proposed deep learning pipeline demonstrated robust performance in automatic WMH segmentation and Fazekas scale grading from FLAIR images in stroke patients. This approach offers a reliable and efficient tool for evaluating WMH burden, which may assist in predicting future vascular events.
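
For readers who want to reproduce the headline metrics, the Dice similarity coefficient and the quadratic weighted kappa quoted above take only a few lines of Python. The sketch below uses toy masks and grades; it only shows the metric definitions, not the study's segmentation or grading pipeline.

```python
# Dice coefficient for WMH masks and quadratic weighted kappa for Fazekas grades.
import numpy as np
from sklearn.metrics import cohen_kappa_score

def dice_coefficient(pred_mask, true_mask, eps=1e-8):
    pred, true = pred_mask.astype(bool), true_mask.astype(bool)
    intersection = np.logical_and(pred, true).sum()
    return (2.0 * intersection + eps) / (pred.sum() + true.sum() + eps)

# Toy binary WMH masks
pred = np.zeros((64, 64), dtype=np.uint8); pred[20:40, 20:40] = 1
true = np.zeros((64, 64), dtype=np.uint8); true[22:42, 22:42] = 1
print("Dice:", dice_coefficient(pred, true))

# Toy Fazekas grades (0-3) for a handful of patients
true_grades = [0, 1, 2, 3, 2, 1, 0, 3]
pred_grades = [0, 1, 2, 2, 2, 1, 1, 3]
print("quadratic weighted kappa:",
      cohen_kappa_score(true_grades, pred_grades, weights="quadratic"))
```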

Pulmonary T2* quantification of fetuses with congenital diaphragmatic hernia: a retrospective, case-controlled, MRI pilot study.

Avena-Zampieri CL, Uus A, Egloff A, Davidson J, Hutter J, Knight CL, Hall M, Deprez M, Payette K, Rutherford M, Greenough A, Story L

PubMed · Sep 1, 2025
Advanced MRI techniques, motion correction and T2* relaxometry, may provide information regarding functional properties of pulmonary tissue. We assessed whether lung volumes and pulmonary T2* values in fetuses with congenital diaphragmatic hernia (CDH) were lower than in controls and differed between survivors and non-survivors. Women with uncomplicated pregnancies (controls) and those with a CDH-affected pregnancy had a fetal MRI on a 1.5 T imaging system encompassing T2 single-shot fast spin echo sequences and gradient echo single-shot echo planar sequences providing T2* data. Motion correction was performed using slice-to-volume reconstruction, and T2* maps were generated using in-house pipelines. Lungs were segmented separately using a pre-trained 3D deep-learning pipeline. Datasets from 33 controls and 12 CDH fetuses were analysed. The mean ± SD gestation at scan was 28.3 ± 4.3 weeks for controls and 27.6 ± 4.9 weeks for CDH cases. CDH lung volumes were lower than those of controls in both non-survivors and survivors for both lungs combined (5.76 ± 3.59 cc, mean difference = 15.97, 95% CI: −24.51 to −12.9, p < 0.001 and 5.73 ± 2.96 cc, mean difference = 16, 95% CI: 1.91-11.53, p = 0.008) and for the ipsilateral lung (1.93 ± 2.09 cc, mean difference = 19.8, 95% CI: −28.48 to −16.45, p < 0.001 and 1.58 ± 1.18 cc, mean difference = 20.15, 95% CI: 5.96-15.97, p < 0.001). Mean pulmonary T2* values were lower in non-survivors than in the control group for both lungs combined, the ipsilateral lung, and the contralateral lung (81.83 ± 26.21 ms, mean difference = 31.13, 95% CI: −58.14 to −10.32, p = 0.006; 81.05 ± 26.84 ms, mean difference = 31.91, 95% CI: −59.02 to −10.82, p = 0.006; 82.62 ± 36.31 ms, mean difference = 30.34, 95% CI: −58.84 to −8.25, p = 0.011), but no difference was observed between controls and CDH cases that survived. Mean pulmonary T2* values were lower in CDH fetuses compared to controls and in CDH cases who died compared to survivors, and they may have a prognostic function in CDH fetuses. This study provides an original motion-corrected assessment of the morphologic and functional properties of the ipsilateral and contralateral fetal lungs in the context of CDH. Mean pulmonary T2* values were lower in CDH fetuses compared to controls and in cases who died compared to survivors, and may therefore have a role in prognostication. The reduction in pulmonary T2* values in CDH fetuses suggests altered pulmonary development, contributing new insights into antenatal assessment.
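
The T2* values reported here come from fitting a mono-exponential decay, S(TE) = S0·exp(−TE/T2*), to multi-echo gradient-echo signal. A minimal single-voxel sketch with synthetic echo times and signal is shown below; real relaxometry pipelines apply this fit voxel-wise to the motion-corrected volumes, and the echo times and T2* value used here are illustrative.

```python
# Single-voxel T2* estimate via a log-linear least-squares fit.
import numpy as np

def fit_t2star(echo_times_ms, signal):
    """Fit S(TE) = S0 * exp(-TE / T2*); returns (S0, T2* in ms)."""
    slope, intercept = np.polyfit(echo_times_ms, np.log(signal), 1)
    return np.exp(intercept), -1.0 / slope

rng = np.random.default_rng(1)
te = np.array([3.0, 10.0, 20.0, 35.0, 50.0])       # echo times in ms (assumed values)
true_t2star = 80.0                                  # ms, roughly the fetal-lung range above
signal = 1000.0 * np.exp(-te / true_t2star)
signal *= 1.0 + 0.02 * rng.normal(size=te.size)     # mild measurement noise
s0, t2star = fit_t2star(te, signal)
print(f"fitted T2* = {t2star:.1f} ms")
```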

TFKT V2: task-focused knowledge transfer from natural images for computed tomography perceptual image quality assessment.

Rifa KR, Ahamed MA, Zhang J, Imran A

PubMed · Sep 1, 2025
The accurate assessment of computed tomography (CT) image quality is crucial for ensuring diagnostic reliability while minimizing radiation dose. Radiologists' evaluations are time-consuming and labor-intensive. Existing automated approaches often require large CT datasets with predefined image quality assessment (IQA) scores, which often do not align well with clinical evaluations. We aim to develop a reference-free, automated method for CT IQA that closely reflects radiologists' evaluations, reducing the dependency on large annotated datasets. We propose Task-Focused Knowledge Transfer (TFKT), a deep learning-based IQA method leveraging knowledge transfer from task-similar natural image datasets. TFKT incorporates a hybrid convolutional neural network-transformer model, enabling accurate quality predictions by learning from natural image distortions with human-annotated mean opinion scores. The model is pre-trained on natural image datasets and fine-tuned on low-dose computed tomography perceptual image quality assessment data to ensure task-specific adaptability. Extensive evaluations demonstrate that the proposed TFKT method effectively predicts IQA scores aligned with radiologists' assessments on in-domain datasets and generalizes well to out-of-domain clinical pediatric CT exams. The model achieves robust performance without requiring high-dose reference images. Our model is capable of assessing the quality of ∼30 CT image slices in a second. The proposed TFKT approach provides a scalable, accurate, and reference-free solution for CT IQA. The model bridges the gap between traditional and deep learning-based IQA, offering clinically relevant and computationally efficient assessments applicable to real-world clinical settings.
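
The pre-train-then-fine-tune recipe described above can be sketched in a few lines of PyTorch. The snippet below substitutes a plain ResNet-18 for the paper's hybrid CNN-transformer and uses random tensors in place of the natural-image MOS data and CT slices, so it only illustrates the two-stage training pattern, not TFKT itself.

```python
# Two-stage training: "pre-train" a quality regressor on natural-image IQA data,
# then fine-tune the same network on CT slices at a lower learning rate.
import torch
import torch.nn as nn
from torchvision.models import resnet18

def make_quality_regressor():
    net = resnet18(weights=None)
    net.fc = nn.Linear(net.fc.in_features, 1)   # single perceptual-quality score
    return net

def train_step(model, images, scores, optimizer, loss_fn=nn.MSELoss()):
    optimizer.zero_grad()
    pred = model(images).squeeze(1)
    loss = loss_fn(pred, scores)
    loss.backward()
    optimizer.step()
    return loss.item()

model = make_quality_regressor()

# Stage 1: natural-image distortions with mean opinion scores (random stand-ins)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
nat_imgs, nat_mos = torch.randn(8, 3, 224, 224), torch.rand(8) * 5
print("pretrain loss:", train_step(model, nat_imgs, nat_mos, opt))

# Stage 2: CT IQA fine-tuning (CT slices would be replicated to 3 channels)
opt_ft = torch.optim.Adam(model.parameters(), lr=1e-5)
ct_imgs, ct_scores = torch.randn(8, 3, 224, 224), torch.rand(8) * 4
print("fine-tune loss:", train_step(model, ct_imgs, ct_scores, opt_ft))
```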

Comparison of the diagnostic performance of the artificial intelligence-based TIRADS algorithm with established classification systems for thyroid nodules.

Bozkuş A, Başar Y, Güven K

PubMed · Sep 1, 2025
This study aimed to evaluate and compare the diagnostic performance of various Thyroid Imaging Reporting and Data Systems (TIRADS), with a particular focus on the artificial intelligence-based TIRADS (AI-TIRADS), in characterizing thyroid nodules. In this retrospective study conducted between April 2016 and May 2022, 1,322 thyroid nodules from 1,139 patients with confirmed cytopathological diagnoses were included. Each nodule was assessed using TIRADS classifications defined by the American College of Radiology (ACR-TIRADS), the American Thyroid Association (ATA-TIRADS), the European Thyroid Association (EU-TIRADS), the Korean Thyroid Association (K-TIRADS), and the AI-TIRADS. Three radiologists independently evaluated the ultrasound (US) characteristics of the nodules using all classification systems. Diagnostic performance was assessed using sensitivity, specificity, positive predictive value (PPV), and negative predictive value, and comparisons were made using the McNemar test. Among the nodules, 846 (64%) were benign, 299 (22.6%) were of intermediate risk, and 147 (11.1%) were malignant. The AI-TIRADS demonstrated a PPV of 21.2% and a specificity of 53.6%, outperforming the other systems in specificity without compromising sensitivity. The specificities of the ACR-TIRADS, the ATA-TIRADS, the EU-TIRADS, and the K-TIRADS were 44.6%, 39.3%, 40.1%, and 40.1%, respectively (all pairwise comparisons with the AI-TIRADS: P < 0.001). The PPVs for the ACR-TIRADS, the ATA-TIRADS, the EU-TIRADS, and the K-TIRADS were 18.5%, 17.9%, 17.9%, and 17.4%, respectively (all pairwise comparisons with the AI-TIRADS, excluding the ACR-TIRADS: P < 0.05). The AI-TIRADS shows promise in improving diagnostic specificity and reducing unnecessary biopsies in thyroid nodule assessment while maintaining high sensitivity. The findings suggest that the AI-TIRADS may enhance risk stratification, leading to better patient management. Additionally, the study found that the presence of multiple suspicious US features markedly increases the risk of malignancy, whereas isolated features do not substantially elevate the risk. The AI-TIRADS can enhance thyroid nodule risk stratification by improving diagnostic specificity and reducing unnecessary biopsies, potentially leading to more efficient patient management and better utilization of healthcare resources.
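
The comparison statistics used here (sensitivity, specificity, PPV, and a McNemar test on paired reads of the same nodules) are straightforward to compute; a self-contained sketch on synthetic labels follows. The continuity-corrected McNemar statistic shown is one common variant and an illustrative choice, not necessarily the exact test configuration used in the study.

```python
# Per-system diagnostic measures plus a paired McNemar test on discordant calls.
import numpy as np
from scipy.stats import chi2

def sens_spec_ppv(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return tp / (tp + fn), tn / (tn + fp), tp / (tp + fp)

def mcnemar_test(pred_a, pred_b, y_true):
    """Continuity-corrected McNemar test on discordant pairs
    (one system correct, the other wrong) for the same cases."""
    correct_a = np.asarray(pred_a) == np.asarray(y_true)
    correct_b = np.asarray(pred_b) == np.asarray(y_true)
    b = np.sum(correct_a & ~correct_b)
    c = np.sum(~correct_a & correct_b)
    stat = (abs(b - c) - 1) ** 2 / (b + c) if (b + c) > 0 else 0.0
    return stat, chi2.sf(stat, df=1)

rng = np.random.default_rng(7)
y = rng.integers(0, 2, size=200)                        # 1 = malignant nodule (toy labels)
sys_a = np.where(rng.random(200) < 0.8, y, 1 - y)       # hypothetical system A calls
sys_b = np.where(rng.random(200) < 0.7, y, 1 - y)       # hypothetical system B calls
print("system A sens/spec/PPV:", sens_spec_ppv(y, sys_a))
print("McNemar stat, p:", mcnemar_test(sys_a, sys_b, y))
```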

Automated coronary analysis in ultrahigh-spatial resolution photon-counting detector CT angiography: Clinical validation and intra-individual comparison with energy-integrating detector CT.

Kravchenko D, Hagar MT, Varga-Szemes A, Schoepf UJ, Schoebinger M, O'Doherty J, Gülsün MA, Laghi A, Laux GS, Vecsey-Nagy M, Emrich T, Tremamunno G

PubMed · Sep 1, 2025
To evaluate a deep-learning algorithm for automated coronary artery analysis on ultrahigh-resolution photon-counting detector coronary computed tomography (CT) angiography and to compare its performance with that of expert readers, using invasive coronary angiography as the reference. Thirty-two patients (mean age 68.6 years; 81% male) underwent both energy-integrating detector and ultrahigh-resolution photon-counting detector CT within 30 days. Expert readers scored each image using the Coronary Artery Disease-Reporting and Data System classification, and readings were compared against invasive angiography. After a three-month wash-out, one reader reanalyzed the photon-counting detector CT images assisted by the algorithm. Sensitivity, specificity, accuracy, inter-reader agreement, and reading times were recorded for each method. Across 401 arterial segments, inter-reader agreement improved from substantial (κ = 0.75) on energy-integrating detector CT to near-perfect (κ = 0.86) on photon-counting detector CT. The algorithm alone achieved 85% sensitivity, 91% specificity, and 90% accuracy on energy-integrating detector CT, and 85%, 96%, and 95% on photon-counting detector CT. Compared with invasive angiography on photon-counting detector CT, manual and automated reads had similar sensitivity (67%), but manual assessment slightly outperformed the algorithm in specificity (85% vs. 79%) and accuracy (84% vs. 78%). When the reader was assisted by the algorithm, specificity rose to 97% (p < 0.001), accuracy rose to 95%, and reading time decreased by 54% (p < 0.001). This deep-learning algorithm demonstrates high agreement with experts and improved diagnostic performance on photon-counting detector CT. Expert review augmented by the algorithm further increases specificity and dramatically reduces interpretation time.
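
Inter-reader agreement of the kind reported above is typically quantified with Cohen's kappa on per-segment category assignments. The snippet below is a toy example with synthetic CAD-RADS-style labels; for ordinal categories a weighted kappa (weights="linear" or "quadratic") could equally be used.

```python
# Cohen's kappa between two readers plus per-segment accuracy against a reference.
import numpy as np
from sklearn.metrics import accuracy_score, cohen_kappa_score

rng = np.random.default_rng(3)
reference = rng.integers(0, 5, size=401)          # e.g. 401 arterial segments, grades 0-4
reader_1 = np.where(rng.random(401) < 0.85, reference, rng.integers(0, 5, 401))
reader_2 = np.where(rng.random(401) < 0.85, reference, rng.integers(0, 5, 401))

print("inter-reader kappa:", cohen_kappa_score(reader_1, reader_2))
print("reader 1 accuracy vs reference:", accuracy_score(reference, reader_1))
```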

Added prognostic value of histogram features from preoperative multi-modal diffusion MRI in predicting Ki-67 proliferation for adult-type diffuse gliomas.

Huang Y, He S, Hu H, Ma H, Huang Z, Zeng S, Mazu L, Zhou W, Zhao C, Zhu N, Wu J, Liu Q, Yang Z, Wang W, Shen G, Zhang N, Chu J

PubMed · Sep 1, 2025
Ki-67 labelling index (LI), a critical marker of tumor proliferation, is vital for grading adult-type diffuse gliomas and predicting patient survival. However, its accurate assessment currently relies on invasive biopsy or surgical resection, making it challenging to non-invasively predict Ki-67 LI and subsequent prognosis. Therefore, this study aimed to investigate whether histogram analysis of multi-parametric diffusion model metrics, specifically diffusion tensor imaging (DTI), diffusion kurtosis imaging (DKI), and neurite orientation dispersion and density imaging (NODDI), could help predict Ki-67 LI in adult-type diffuse gliomas and further predict patient survival. A total of 123 patients with diffuse gliomas who underwent preoperative bipolar spin-echo diffusion magnetic resonance imaging (MRI) were included. Diffusion metrics (DTI, DKI, and NODDI) and their histogram features were extracted and used to develop a nomogram model in the training set (n=86), and the performance was verified in the test set (n=37). The area under the receiver operating characteristic curve (AUC) of the nomogram model was calculated. The outcome cohort, comprising all 123 patients, was used to evaluate the predictive value of the diffusion nomogram model for overall survival (OS). Cox proportional hazards regression was performed to predict OS. Among the 123 patients, 87 exhibited high Ki-67 LI (Ki-67 LI >5%). The patients had a mean age of 46.08±13.24 years, and 39 were female. Tumor grading showed 46 cases of grade 2, 21 cases of grade 3, and 56 cases of grade 4. The nomogram model included eight histogram features from diffusion MRI and showed good performance for predicting Ki-67 LI, with AUCs of 0.92 [95% confidence interval (CI): 0.85-0.98, sensitivity = 0.85, specificity = 0.84] and 0.84 (95% CI: 0.64-0.98, sensitivity = 0.77, specificity = 0.73) in the training and test sets, respectively. The nomogram incorporating these variables showed good discrimination for Ki-67 LI prediction and glioma grading. A low nomogram model score relative to the median value in the outcome cohort was independently associated with OS (P<0.01). Accurate prediction of Ki-67 LI in adult-type diffuse glioma patients was achieved using a multi-modal diffusion MRI histogram radiomics model, which also reliably and accurately predicted survival. ClinicalTrials.gov Identifier: NCT06572592.
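
The "histogram features" in this kind of model are first-order statistics of a diffusion-metric map within a tumor mask. A small sketch of such a feature extractor on a synthetic parameter volume is shown below; the feature names and bin count are illustrative, not the study's exact feature set.

```python
# First-order histogram features from a diffusion-metric map inside a mask.
import numpy as np
from scipy.stats import kurtosis, skew

def histogram_features(metric_volume, mask, n_bins=64):
    vals = metric_volume[mask > 0].ravel()
    hist, _ = np.histogram(vals, bins=n_bins, density=True)
    p = hist[hist > 0] / hist[hist > 0].sum()
    return {
        "mean": float(vals.mean()),
        "p10": float(np.percentile(vals, 10)),
        "p90": float(np.percentile(vals, 90)),
        "skewness": float(skew(vals)),
        "kurtosis": float(kurtosis(vals)),
        "entropy": float(-(p * np.log2(p)).sum()),
    }

rng = np.random.default_rng(5)
fa_map = rng.normal(0.4, 0.1, size=(32, 32, 16))     # toy DTI fractional-anisotropy volume
tumor_mask = np.zeros_like(fa_map); tumor_mask[10:20, 10:20, 5:10] = 1
print(histogram_features(fa_map, tumor_mask))
```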

Prediction of lymphovascular invasion in invasive breast cancer via intratumoral and peritumoral multiparametric magnetic resonance imaging machine learning-based radiomics with Shapley additive explanations interpretability analysis.

Chen S, Zhong Z, Chen Y, Tang W, Fan Y, Sui Y, Hu W, Pan L, Liu S, Kong Q, Guo Y, Liu W

PubMed · Sep 1, 2025
The use of multiparametric magnetic resonance imaging (MRI) in predicting lymphovascular invasion (LVI) in breast cancer has been well-documented in the literature. However, the majority of the related studies have primarily focused on intratumoral characteristics, overlooking the potential contribution of peritumoral features. The aim of this study was to evaluate the effectiveness of multiparametric MRI in predicting LVI by analyzing both intratumoral and peritumoral radiomics features and to assess the added value of incorporating both regions in LVI prediction. A total of 366 patients underwent preoperative breast MRI at two centers and were divided into training (n=208), validation (n=70), and test (n=88) sets. Imaging features were extracted from intratumoral and peritumoral T2-weighted imaging, diffusion-weighted imaging, and dynamic contrast-enhanced MRI. Five logistic regression models were developed for predicting LVI status: the tumor area (TA) model, peritumoral area (PA) model, tumor-plus-peritumoral area (TPA) model, clinical model, and combined model. The combined model was created by incorporating the best-performing radiomics score and clinical factors. Predictive efficacy was evaluated via the receiver operating characteristic (ROC) curve and area under the curve (AUC). The Shapley additive explanations (SHAP) method was used to rank the features and explain the final model. The performance of the TPA model was superior to that of the TA and PA models. The combined model was therefore developed via multivariable logistic regression, incorporating the TPA radiomics score (radscore), MRI-assessed axillary lymph node (ALN) status, and peritumoral edema (PE). The combined model demonstrated good calibration and discrimination performance across the training, validation, and test datasets, with AUCs of 0.888 [95% confidence interval (CI): 0.841-0.934], 0.856 (95% CI: 0.769-0.943), and 0.853 (95% CI: 0.760-0.946), respectively. Furthermore, SHAP analysis was conducted to evaluate the contributions of the TPA radscore, MRI-ALN status, and PE to LVI status prediction. The combined model, incorporating clinical factors and the intratumoral plus peritumoral radscore, effectively predicts LVI and may potentially aid in tailored treatment planning.
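
A common way to obtain the peritumoral region used in models like the PA and TPA ones above is to dilate the tumor mask by a fixed margin and subtract the tumor itself. The sketch below shows that step on a toy 3D mask; the 5-voxel margin is an assumed value, not the margin defined in this study.

```python
# Derive a peritumoral shell from an intratumoral segmentation by dilation.
import numpy as np
from scipy.ndimage import binary_dilation, generate_binary_structure

def peritumoral_mask(tumor_mask, margin_voxels=5):
    struct = generate_binary_structure(rank=tumor_mask.ndim, connectivity=1)
    dilated = binary_dilation(tumor_mask, structure=struct, iterations=margin_voxels)
    return dilated & ~tumor_mask.astype(bool)

# Toy 3D tumor mask
tumor = np.zeros((64, 64, 32), dtype=bool)
tumor[25:40, 25:40, 12:20] = True
shell = peritumoral_mask(tumor)
print("tumor voxels:", tumor.sum(), "peritumoral voxels:", shell.sum())
# Radiomics features would then be extracted separately from `tumor` (TA),
# `shell` (PA), and their union (TPA) on each MRI sequence.
```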

Acoustic Interference Suppression in Ultrasound images for Real-Time HIFU Monitoring Using an Image-Based Latent Diffusion Model

Dejia Cai, Yao Ran, Kun Yang, Xinwang Shi, Yingying Zhou, Kexian Wu, Yang Xu, Yi Hu, Xiaowei Zhou

arXiv preprint · Sep 1, 2025
High-Intensity Focused Ultrasound (HIFU) is a non-invasive therapeutic technique widely used for treating various diseases. However, the success and safety of HIFU treatments depend on real-time monitoring, which is often hindered by interference when using ultrasound to guide HIFU treatment. To address these challenges, we developed HIFU-ILDiff, a novel deep learning-based approach leveraging latent diffusion models to suppress HIFU-induced interference in ultrasound images. The HIFU-ILDiff model employs a Vector Quantized Variational Autoencoder (VQ-VAE) to encode noisy ultrasound images into a lower-dimensional latent space, followed by a latent diffusion model that iteratively removes interference. The denoised latent vectors are then decoded to reconstruct high-resolution, interference-free ultrasound images. We constructed a comprehensive dataset comprising 18,872 image pairs from in vitro phantoms, ex vivo tissues, and in vivo animal data across multiple imaging modalities and HIFU power levels to train and evaluate the model. Experimental results demonstrate that HIFU-ILDiff significantly outperforms the commonly used Notch Filter method, achieving a Structural Similarity Index (SSIM) of 0.796 and Peak Signal-to-Noise Ratio (PSNR) of 23.780 compared to SSIM of 0.443 and PSNR of 14.420 for the Notch Filter under in vitro scenarios. Additionally, HIFU-ILDiff achieves real-time processing at 15 frames per second, markedly faster than the Notch Filter's 5 seconds per frame. These findings indicate that HIFU-ILDiff is able to denoise HIFU interference in ultrasound guiding images for real-time monitoring during HIFU therapy, which will greatly improve the treatment precision in current clinical applications.
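
The SSIM and PSNR figures used to benchmark HIFU-ILDiff against the notch filter are standard full-reference metrics. The sketch below shows how they are computed with scikit-image on synthetic frames; the "denoised" array is just a stand-in for a model output, not output from HIFU-ILDiff.

```python
# Full-reference image-quality metrics against an interference-free reference.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(11)
clean = rng.random((256, 256)).astype(np.float32)                       # interference-free frame
interfered = (clean + 0.3 * rng.normal(size=clean.shape)).astype(np.float32)
denoised = (clean + 0.05 * rng.normal(size=clean.shape)).astype(np.float32)  # stand-in output

for name, img in [("interfered", interfered), ("denoised", denoised)]:
    ssim = structural_similarity(clean, img, data_range=1.0)
    psnr = peak_signal_noise_ratio(clean, img, data_range=1.0)
    print(f"{name}: SSIM={ssim:.3f} PSNR={psnr:.2f} dB")
```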