
Zou LM, Xu C, Xu M, Xu KT, Wang M, Wang Y, Wang YN

PubMed · Sep 1, 2025
The super-resolution deep learning reconstruction (SR-DLR) algorithm has emerged as a promising image reconstruction technique for improving the image quality of coronary computed tomography angiography (CCTA) and ensuring accurate CCTA-derived fractional flow reserve (CT-FFR) assessment even in problematic scenarios (e.g., heavily calcified plaque and stent implantation). The purposes of this study were therefore to evaluate the image quality of CCTA obtained with SR-DLR in comparison with conventional reconstruction methods and to investigate the diagnostic performance of the different reconstruction approaches based on CT-FFR. Fifty patients who underwent CCTA and subsequent invasive coronary angiography (ICA) were retrospectively included. All images were reconstructed with hybrid iterative reconstruction (HIR), model-based iterative reconstruction (MBIR), conventional deep learning reconstruction (C-DLR), and SR-DLR algorithms. Objective parameters and subjective scores were compared. Among the patients, 22 (comprising 45 lesions) had invasive FFR results as a reference, and the diagnostic performance of the different reconstruction approaches based on CT-FFR was compared. SR-DLR achieved the lowest image noise, highest signal-to-noise ratio (SNR), and best edge sharpness (all P values <0.05), as well as the best subjective scores from both reviewers (all P values <0.001). With invasive FFR as the reference, SR-DLR improved specificity and positive predictive value (PPV) compared with HIR and C-DLR (72% vs. 36-44% and 73% vs. 53-58%, respectively); moreover, SR-DLR improved sensitivity and negative predictive value (NPV) compared with MBIR (95% vs. 70% and 95% vs. 68%, respectively; all P values <0.05).
The overall diagnostic accuracy and area under the curve (AUC) for SR-DLR were significantly higher than those of the HIR, MBIR, and C-DLR algorithms (82% vs. 60-67% and 0.84 vs. 0.61-0.70, respectively; all P values <0.05). SR-DLR yielded the best image quality on both objective and subjective evaluation. The diagnostic performance of CT-FFR was improved by SR-DLR, enabling more accurate assessment of flow-limiting lesions.
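The diagnostic metrics compared across reconstruction algorithms above follow directly from a 2x2 confusion matrix. As a minimal sketch (the labels below are illustrative toy data, not the study's lesions):

```python
# Hedged sketch: computing the per-lesion diagnostic metrics reported in the
# abstract (sensitivity, specificity, PPV, NPV, accuracy) from binary
# predictions, with 1 = flow-limiting per the invasive FFR reference.

def diagnostic_metrics(y_true, y_pred):
    """Return sensitivity, specificity, PPV, NPV and accuracy for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / len(y_true),
    }

# Toy example: 8 lesions with invasive FFR as the reference standard.
truth = [1, 1, 1, 1, 0, 0, 0, 0]
preds = [1, 1, 1, 0, 0, 0, 0, 1]
m = diagnostic_metrics(truth, preds)
print(m["sensitivity"], m["specificity"])  # 0.75 0.75
```

On real data the per-algorithm differences reported above would come from running this over each reconstruction's CT-FFR calls against the same FFR reference.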

Wang N, Liu Y, Ran J, An Q, Chen L, Zhao Y, Yu D, Liu A, Zhuang L, Song Q

PubMed · Sep 1, 2025
Magnetic resonance imaging (MRI) plays a crucial role in the diagnosis of abdominal conditions. A comprehensive assessment, especially of the liver, requires multi-planar T2-weighted sequences. To mitigate the effect of respiratory motion on image quality, the combination of acquisition and reconstruction with motion suppression (ARMS) and respiratory triggering (RT) is commonly employed. While this method maintains image quality, it does so at the expense of longer acquisition times. We evaluated the effectiveness of free-breathing, artificial intelligence-assisted compressed-sensing respiratory-triggered T2-weighted imaging (ACS-RT T2WI) compared to conventional acquisition and reconstruction with motion-suppression respiratory-triggered T2-weighted imaging (ARMS-RT T2WI) in abdominal MRI, assessing both qualitative and quantitative measures of image quality and lesion detection. In this retrospective study, 334 patients with upper abdominal discomfort were examined on a 3.0T MRI system. Each patient underwent both ARMS-RT T2WI and ACS-RT T2WI. Image quality was analyzed by two independent readers using a five-point Likert scale. The quantitative measurements included the signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), peak signal-to-noise ratio (PSNR), and sharpness. Lesion detection rates and contrast ratios (CRs) were also evaluated for liver, biliary system, and pancreatic lesions. The ACS-RT T2WI protocol had a significantly reduced median scanning time compared to the ARMS-RT T2WI protocol (13.86±1.72 vs. 148.22±38.37 seconds). However, ARMS-RT T2WI had a higher PSNR than ACS-RT T2WI (39.87±2.72 vs. 38.69±3.00, P<0.05). Of the 201 liver lesions, ARMS-RT T2WI detected 193 (96.0%) and ACS-RT T2WI detected 192 (95.5%) (P=0.787). Of the 97 biliary system lesions, ARMS-RT T2WI detected 92 (94.8%) and ACS-RT T2WI detected 94 (96.9%) (P=0.721).
Of the 110 pancreatic lesions, ARMS-RT T2WI detected 102 (92.7%) and ACS-RT T2WI detected 104 (94.5%) (P=0.784). The CR analysis showed superior performance of ACS-RT T2WI for certain lesion types (hemangioma, 0.58±0.11 vs. 0.55±0.12; biliary tumor, 0.47±0.09 vs. 0.38±0.09; pancreatic cystic lesions, 0.59±0.12 vs. 0.48±0.14; pancreatic cancer, 0.48±0.18 vs. 0.43±0.17), but no significant difference was found for others, including focal nodular hyperplasia (FNH), hepatic abscess, hepatocellular carcinoma (HCC), cholangiocarcinoma, metastatic tumors, and biliary calculus. ACS-RT T2WI ensures clinical reliability with a substantial scan time reduction (>80%). Despite minor losses in detail and a reduced SNR, ACS-RT T2WI does not impair lesion detection, demonstrating its efficacy in abdominal imaging.
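The quantitative measures named above (SNR, CNR, PSNR) have standard definitions. A minimal sketch follows; the arrays are synthetic stand-ins, since the study's ROI placement and normalization are not specified here:

```python
import numpy as np

# Hedged sketch of common definitions of the image-quality metrics in the
# abstract. ROI-based SNR/CNR and reference-based PSNR on synthetic data.

def snr(roi):
    """Mean signal over standard deviation within one ROI."""
    return float(np.mean(roi) / np.std(roi))

def cnr(roi_a, roi_b, background):
    """Absolute mean difference between two ROIs over background noise."""
    return float(abs(np.mean(roi_a) - np.mean(roi_b)) / np.std(background))

def psnr(reference, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB against a reference image."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return float(10 * np.log10(max_val ** 2 / mse))

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (64, 64)).astype(float)
noisy = ref + rng.normal(0, 2.0, ref.shape)
print(round(psnr(ref, noisy), 1))  # ~42 dB for Gaussian noise with sigma = 2
```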

Yang S, Bie Y, Zhao L, Luan K, Li X, Chi Y, Bian Z, Zhang D, Pang G, Zhong H

PubMed · Sep 1, 2025
Deep learning image reconstruction (DLIR) can enhance image quality and lower image dose, yet its impact on radiomics features (RFs) remains unclear. This study aimed to compare the effects of DLIR and conventional adaptive statistical iterative reconstruction-Veo (ASIR-V) algorithms on the robustness of RFs using standard- and low-dose abdominal clinical computed tomography (CT) scans. A total of 54 patients with hepatic masses who underwent abdominal contrast-enhanced CT scans were retrospectively analyzed. The raw data of the standard-dose venous phase and low-dose delayed phase were reconstructed using five reconstruction settings, including ASIR-V at 30% (ASIR-V30%) and 70% (ASIR-V70%) levels, and DLIR at low (DLIR-L), medium (DLIR-M), and high (DLIR-H) levels. The PyRadiomics platform was used to extract RFs in 18 regions of interest (ROIs) in different organs or tissues. The consistency of RFs among different algorithms and different strength levels was tested by the coefficient of variation (CV) and quartile coefficient of dispersion (QCD). The consistency of RFs among different strength levels of the same algorithm and clinically comparable levels across algorithms was evaluated by the intraclass correlation coefficient (ICC). Robust features were identified by Kruskal-Wallis and Mann-Whitney U tests. Among the five reconstruction methods, the mean CV and QCD in the standard-dose group were 0.364 and 0.213, respectively, and the corresponding values were 0.444 and 0.245 in the low-dose group. The mean ICC values between ASIR-V 30% and 70%, DLIR-L and M, DLIR-M and H, DLIR-L and H, ASIR-V30% and DLIR-M, and ASIR-V70% and DLIR-H were 0.672, 0.734, 0.756, 0.629, 0.724, and 0.651, respectively, in the standard-dose group, and the corresponding values were 0.500, 0.567, 0.700, 0.474, 0.499, and 0.650 in the low-dose group.
The ICC values between DLIR-M and H under low-dose conditions were even higher than those of ASIR-V30% and -V70% under standard-dose conditions. Among the five reconstruction settings, averages of 14.0% (117/837) and 10.3% (86/837) of RFs across the 18 ROIs exhibited robustness under standard-dose and low-dose conditions, respectively. Some 23.1% (193/837) of RFs demonstrated robustness between the low-dose DLIR-M and H groups, which was higher than the 21.0% (176/837) observed in the standard-dose ASIR-V30% and -V70% groups. Most of the RFs lacked reproducibility across algorithms and strength levels. However, DLIR at medium (M) and high (H) levels significantly improved RF consistency and robustness, even at reduced doses.
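The two dispersion statistics used to test RF consistency have simple closed forms: CV = sd / mean and QCD = (Q3 - Q1) / (Q3 + Q1). A minimal sketch, with five toy values standing in for one radiomic feature under the five reconstruction settings:

```python
import numpy as np

# Hedged sketch of the consistency statistics named in the abstract:
# coefficient of variation (CV) and quartile coefficient of dispersion (QCD).

def cv(values):
    """Sample standard deviation divided by the mean."""
    v = np.asarray(values, dtype=float)
    return float(np.std(v, ddof=1) / np.mean(v))

def qcd(values):
    """Interquartile spread normalized by the quartile sum."""
    q1, q3 = np.percentile(values, [25, 75])
    return float((q3 - q1) / (q3 + q1))

feature = [10.2, 10.8, 9.9, 11.1, 10.5]  # one RF across five reconstructions
print(round(cv(feature), 3), round(qcd(feature), 3))  # 0.045 0.029
```

Low CV and QCD across settings mark a feature as consistent; the abstract's per-group means aggregate these values over all 837 features and 18 ROIs.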

Chang Z, Shang J, Fan Y, Huang P, Hu Z, Zhang K, Dai J, Yan H

PubMed · Sep 1, 2025
Cone-beam computed tomography (CBCT) is a three-dimensional (3D) imaging method designed for routine target verification of cancer patients during radiotherapy. The images are reconstructed from a sequence of projection images obtained by the on-board imager attached to a radiotherapy machine. CBCT images are usually stored in a health information system, but the projection images are mostly abandoned due to their massive volume. To store them economically, in this study, a deep learning (DL)-based super-resolution (SR) method for compressing the projection images was investigated. In image compression, low-resolution (LR) images were down-sampled by a given factor from the high-resolution (HR) projection images and then encoded to a video file. In image restoration, LR images were decoded from the video file and then up-sampled to HR projection images via the DL network. Three SR DL networks, a convolutional neural network (CNN), residual network (ResNet), and generative adversarial network (GAN), were tested along with three video coding-decoding (CODEC) algorithms: Advanced Video Coding (AVC), High Efficiency Video Coding (HEVC), and AOMedia Video 1 (AV1). Based on two databases of natural and projection images, the performance of the SR networks and video codecs was evaluated with the compression ratio (CR), peak signal-to-noise ratio (PSNR), video quality metric (VQM), and structural similarity index measure (SSIM). The codec AV1 achieved the highest CR among the three codecs. The CRs of AV1 were 13.91, 42.08, 144.32, and 289.80 for the down-sampling factor (DSF) 0 (non-SR), 2, 4, and 6, respectively. The SR network ResNet achieved the best restoration accuracy among the three SR networks. Its PSNRs were 69.08, 41.60, 37.08, and 32.44 dB for the four DSFs, respectively; its VQMs were 0.06%, 3.65%, 6.95%, and 13.03% for the four DSFs, respectively; and its SSIMs were 0.9984, 0.9878, 0.9798, and 0.9518 for the four DSFs, respectively.
As the DSF increased, the CR increased proportionally with the modest degradation of the restored images. The application of the SR model can further improve the CR based on the current result achieved by the video encoders. This compression method is not only effective for the two-dimensional (2D) projection images, but also applicable to the 3D images used in radiotherapy.
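The compression bookkeeping described above reduces to two operations: computing the CR as raw size over encoded size, and generating LR frames by down-sampling. A minimal sketch under stated assumptions (the frame count, bit depth, and strided subsampling kernel below are illustrative, not the study's):

```python
import numpy as np

# Hedged sketch of the CR arithmetic and LR generation step described in the
# abstract. The study's exact down-sampling kernel and codec settings are
# unspecified here; this uses naive strided subsampling.

def compression_ratio(raw_bytes, encoded_bytes):
    """Raw projection-sequence size divided by encoded video-file size."""
    return raw_bytes / encoded_bytes

def downsample(frame, factor):
    """LR frame by taking every factor-th pixel in each dimension."""
    return frame[::factor, ::factor]

frames = 600                    # assumed CBCT projection count
raw = frames * 1024 * 768 * 2   # assumed 16-bit 1024x768 projections
encoded = raw // 290            # e.g., AV1 at DSF 6 reached CR ~289.8
print(round(compression_ratio(raw, encoded), 1))

lr = downsample(np.zeros((768, 1024)), 4)
print(lr.shape)  # (192, 256)
```

At restoration time, the SR network would map each decoded LR frame back to the HR grid, which is where the PSNR/VQM/SSIM figures above are measured.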

Zou D, Lyu F, Pan Y, Fan X, Du J, Mai X

PubMed · Sep 1, 2025
Accurate and timely diagnosis of thyroid cancer is critical for clinical care, and artificial intelligence can enhance this process. This study aims to develop and validate an intelligent assessment model called C-TNet, based on the Chinese Guidelines for Ultrasound Malignancy Risk Stratification of Thyroid Nodules (C-TIRADS) and real-time elasticity imaging. The goal is to differentiate between benign and malignant characteristics of thyroid nodules classified as C-TIRADS category 4. We evaluated the performance of C-TNet against ultrasonographers and BMNet, a model trained exclusively on histopathological findings indicating benign or malignant nature. The study included 3,545 patients with pathologically confirmed C-TIRADS category 4 thyroid nodules from two tertiary hospitals in China: the Affiliated Hospital of Integrated Traditional Chinese and Western Medicine, Nanjing University of Chinese Medicine (n=3,463 patients) and Jiangyin People's Hospital (n=82 patients). The cohort from the Affiliated Hospital of Integrated Traditional Chinese and Western Medicine, Nanjing University of Chinese Medicine was randomly divided into a training set and validation set (7:3 ratio), while the cohort from Jiangyin People's Hospital served as the external validation set. The C-TNet model was developed by extracting image features from the training set and integrating them with six commonly used classifier algorithms: logistic regression (LR), linear discriminant analysis (LDA), random forest (RF), kernel support vector machine (K-SVM), adaptive boosting (AdaBoost), and Naive Bayes (NB). Its performance was evaluated using both internal and external validation sets, with statistical differences analyzed through the Chi-squared test. The C-TNet model effectively integrates feature extraction from deep neural networks with an RF classifier, utilizing grayscale and elastography ultrasound data.
It successfully differentiates benign from malignant thyroid nodules, achieving an area under the curve (AUC) of 0.873, comparable to the performance of senior physicians (AUC: 0.868). The model demonstrates generalizability across diverse clinical settings, positioning itself as a transformative decision-support tool for enhancing the risk stratification of thyroid nodules.
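The design choice above, deep-network feature extraction feeding a classical classifier, can be sketched briefly. The 64-dimensional feature vectors and labels below are random stand-ins (not C-TNet's actual embeddings), and the RF hyperparameters are assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hedged sketch: deep features + random forest, as in the C-TNet design.
# Synthetic "embeddings" with a linearly separable benign/malignant signal.

rng = np.random.default_rng(42)
features = rng.normal(size=(200, 64))                 # stand-in CNN embeddings
labels = (features[:, 0] + 0.5 * features[:, 1] > 0)  # synthetic malignancy labels

X_tr, X_te, y_tr, y_te = train_test_split(
    features, labels, test_size=0.3, random_state=0
)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(round(clf.score(X_te, y_te), 2))  # well above chance on this toy data
```

Swapping `RandomForestClassifier` for any of the other five classifiers listed above (LR, LDA, K-SVM, AdaBoost, NB) is a one-line change in this pattern, which is presumably how the six candidates were compared.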

Li Z, Chen L, Zhang S, Zhang X, Zhang J, Ying M, Zhu J, Li R, Song M, Feng Z, Zhang J, Liang W

PubMed · Sep 1, 2025
Aortic dissection (AD) is a lethal emergency requiring prompt diagnosis. Current computed tomography angiography (CTA)-based diagnosis requires contrast agents, which adds time, whereas existing deep learning (DL) models only support single-modality inputs [non-contrast computed tomography (CT) or CTA]. In this study, we propose a bimodal DL framework to independently process both types, enabling dual-path detection and improving diagnostic efficiency. Patients who underwent non-contrast CT and CTA from February 2016 to September 2021 were retrospectively included from three institutions: the First Affiliated Hospital, Zhejiang University School of Medicine (Center I), Zhejiang Hospital (Center II), and Yiwu Central Hospital (Center III). A two-stage DL model for predicting AD was developed. The first stage used an aorta detection network (AoDN) to localize the aorta in non-contrast CT or CTA images. Image patches that contained the detected aorta were cut from the CT images and combined to form an image patch sequence, which was input to an aortic dissection diagnosis network (ADDiN) to diagnose AD in the second stage. The following performances were assessed: aorta detection using average precision at the intersection-over-union threshold 0.5 (AP@0.5) and diagnosis using the area under the receiver operating characteristic curve (AUC). The first cohort, comprising 102 patients (53±15 years, 80 men) from two institutions, was used for the AoDN, whereas the second cohort, consisting of 861 cases (55±15 years, 623 men) from three institutions, was used for the ADDiN. For the aorta detection task, the AoDN achieved an AP@0.5 of 99.14% on the non-contrast CT test set and 99.34% on the CTA test set. For the AD diagnosis task, the ADDiN achieved AUCs of 0.98 on the non-contrast CT test set and 0.99 on the CTA test set. The proposed bimodal CT data-driven DL model accurately diagnoses AD, facilitating prompt hospital diagnosis and treatment of AD.
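The AP@0.5 detection metric above starts from box intersection over union (IoU): a predicted aorta box counts as a true positive only if its IoU with the ground truth reaches 0.5. A minimal sketch for axis-aligned boxes (x1, y1, x2, y2):

```python
# Hedged sketch of the IoU computation underlying the AP@0.5 metric named
# in the abstract; box coordinates are illustrative.

def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union

pred = (10, 10, 50, 50)  # predicted aorta box
gt = (20, 20, 60, 60)    # ground-truth box
print(round(iou(pred, gt), 3))  # 0.391 -> below the 0.5 matching threshold
```

Average precision then sweeps the detector's confidence threshold and averages precision over recall, with matches decided by this IoU test.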

Huang Y, He S, Hu H, Ma H, Huang Z, Zeng S, Mazu L, Zhou W, Zhao C, Zhu N, Wu J, Liu Q, Yang Z, Wang W, Shen G, Zhang N, Chu J

PubMed · Sep 1, 2025
Ki-67 labelling index (LI), a critical marker of tumor proliferation, is vital for grading adult-type diffuse gliomas and predicting patient survival. However, its accurate assessment currently relies on invasive biopsy or surgical resection, making it challenging to non-invasively predict Ki-67 LI and subsequent prognosis. Therefore, this study aimed to investigate whether histogram analysis of multi-parametric diffusion model metrics, specifically diffusion tensor imaging (DTI), diffusion kurtosis imaging (DKI), and neurite orientation dispersion and density imaging (NODDI), could help predict Ki-67 LI in adult-type diffuse gliomas and further predict patient survival. A total of 123 patients with diffuse gliomas who underwent preoperative bipolar spin-echo diffusion magnetic resonance imaging (MRI) were included. Diffusion metrics (DTI, DKI, and NODDI) and their histogram features were extracted and used to develop a nomogram model in the training set (n=86), and the performance was verified in the test set (n=37). The area under the receiver operating characteristic curve of the nomogram model was calculated. The outcome cohort, including all 123 patients, was used to evaluate the predictive value of the diffusion nomogram model for overall survival (OS). Cox proportional hazards regression was performed to predict OS. Among the 123 patients, 87 exhibited a high Ki-67 LI (Ki-67 LI >5%). The patients had a mean age of 46.08±13.24 years, and 39 were female. Tumor grading showed 46 cases of grade 2, 21 of grade 3, and 56 of grade 4. The nomogram model included eight histogram features from diffusion MRI and showed good performance for predicting Ki-67 LI, with areas under the receiver operating characteristic curve (AUCs) of 0.92 [95% confidence interval (CI): 0.85-0.98; sensitivity =0.85; specificity =0.84] and 0.84 (95% CI: 0.64-0.98; sensitivity =0.77; specificity =0.73) in the training and test sets, respectively.
The nomogram incorporating these variables showed good discrimination for Ki-67 LI prediction and glioma grading. A low nomogram model score relative to the median value in the outcome cohort was independently associated with OS (P<0.01). Accurate prediction of the Ki-67 LI in adult-type diffuse glioma patients was achieved using a multi-parametric diffusion MRI histogram radiomics model, which also reliably and accurately predicted survival. ClinicalTrials.gov Identifier: NCT06572592.
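First-order histogram features of the kind extracted from the diffusion metric maps are cheap to compute per ROI. A minimal sketch, assuming a typical feature set (mean, percentiles, skewness, kurtosis); the study's exact eight features are not listed in the abstract:

```python
import numpy as np

# Hedged sketch of per-ROI histogram features computed from a diffusion
# metric map (e.g., an FA map). The synthetic values below are stand-ins.

def histogram_features(roi_values):
    """Common first-order histogram features of the voxel values in one ROI."""
    v = np.asarray(roi_values, dtype=float)
    mean, sd = v.mean(), v.std()
    return {
        "mean": float(mean),
        "p10": float(np.percentile(v, 10)),
        "p90": float(np.percentile(v, 90)),
        "skewness": float(np.mean(((v - mean) / sd) ** 3)),
        "kurtosis": float(np.mean(((v - mean) / sd) ** 4)),
    }

rng = np.random.default_rng(1)
fa_map = rng.normal(0.45, 0.08, 500)  # toy FA values inside a tumor ROI
feats = histogram_features(fa_map)
print(round(feats["p90"] - feats["p10"], 3))
```

In the nomogram, features like these from each diffusion model (DTI, DKI, NODDI) become the predictors whose weighted sum gives the model score.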

Alibabaei S, Yousefipour M, Rahmani M, Raminfard S, Tahmasbi M

PubMed · Sep 1, 2025
Developing quantitative methods to assess post-surgery treatment response in Glioblastoma Multiforme (GBM) is critical for improving patient outcomes and refining current subjective approaches. This study analyzes the performance of machine learning models trained on radiomic datasets derived from magnetic resonance imaging (MRI) scans of GBM patients. MRI scans from 143 GBM patients receiving adjuvant therapy post-surgery were acquired and preprocessed. A total of 92 radiomic features, including 68 gray-level co-occurrence matrix (GLCM)-based features calculated in four directions (0°, 45°, 90°, and 135°) and 24 Curvelet coefficient-based features, were extracted from each patient's segmented tumor cavity. Machine learning classifiers, including Support Vector Machine (SVM), Random Forest, K-Nearest Neighbors (KNN), AdaBoost, CatBoost, LightGBM, XGBoost, Gaussian Naïve Bayes (GNB), and Logistic Regression (LR), were trained on the extracted radiomics selected using sequential feature selection, LASSO, and PCA. Validation was performed with 10-fold cross-validation. The proposed pipeline achieved an accuracy of 87% in classifying post-surgery treatment responses in GBM patients. This accuracy was achieved with the SVM trained on a combination of GLCM and Curvelet-based radiomics selected via forward sequential algorithm-8, and with KNN trained on the GLCM and Curvelet radiomics combination selected using LASSO (alpha = 0.01). The LR model trained on Curvelet-based LASSO-selected radiomics (alpha = 0.01) also showed strong performance. The results demonstrate that MRI-based radiomics, specifically GLCM and Curvelet features, can effectively train machine learning models to quantitatively assess GBM treatment response. These models serve as valuable tools to complement qualitative evaluations, enhancing accuracy and objectivity in post-surgery outcome assessment. Trial registration: not applicable.
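A GLCM counts how often pairs of gray levels co-occur at a given offset; texture features like contrast are then sums over that matrix. A minimal hand-rolled sketch for the 0° direction (the study used a larger PyRadiomics-style feature set across four directions):

```python
import numpy as np

# Hedged sketch of one GLCM-based feature of the kind counted above: a
# symmetric, normalized co-occurrence matrix for horizontal neighbors
# (the 0-degree direction) and its contrast.

def glcm(image, levels):
    """Symmetric, normalized GLCM for horizontally adjacent pixel pairs."""
    m = np.zeros((levels, levels))
    for i, j in zip(image[:, :-1].ravel(), image[:, 1:].ravel()):
        m[i, j] += 1
        m[j, i] += 1  # symmetrize
    return m / m.sum()

def glcm_contrast(p):
    """Sum of (i - j)^2 * p(i, j): large when distant gray levels co-occur."""
    idx = np.arange(p.shape[0])
    return float(((idx[:, None] - idx[None, :]) ** 2 * p).sum())

img = np.array([[0, 0, 1], [1, 2, 2], [0, 1, 2]])  # toy 3-level image
p = glcm(img, levels=3)
print(round(glcm_contrast(p), 3))  # 0.667
```

Energy, homogeneity, correlation, and the other GLCM features are different weighted sums over the same matrix `p`, and repeating the pair-walk with offsets for 45°, 90°, and 135° yields the four directions described above.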

Song D, Yang S, Han JY, Kim KG, Kim ST, Yi WJ

PubMed · Sep 1, 2025
Accurate segmentation of the paranasal sinuses, including the frontal sinus (FS), ethmoid sinus (ES), sphenoid sinus (SS), and maxillary sinus (MS), plays an important role in supporting image-guided surgery (IGS) for sinusitis, facilitating safer intraoperative navigation by identifying anatomical variations and delineating surgical landmarks on CT imaging. To the best of our knowledge, no comparative studies of convolutional neural networks (CNNs), vision transformers (ViTs), and hybrid networks for segmenting each paranasal sinus in patients with sinusitis have been conducted. Therefore, the objective of this study was to compare the segmentation performance of CNNs, ViTs, and hybrid networks for individual paranasal sinuses with varying degrees of anatomical complexity and morphological and textural variations caused by sinusitis on CT images. The performance of CNNs, ViTs, and hybrid networks was compared using the Jaccard index (JI), Dice similarity coefficient (DSC), precision (PR), recall (RC), and 95% Hausdorff distance (HD95) as segmentation accuracy metrics, and the number of parameters (Params) and inference time (IT) for computational efficiency. The Swin UNETR hybrid network outperformed the other networks, achieving the highest segmentation scores, with a JI of 0.719, a DSC of 0.830, a PR of 0.935, and an RC of 0.758, the lowest HD95 value of 10.529, and the smallest number of model parameters (15.705 M Params). Also, CoTr, another hybrid network, demonstrated superior segmentation performance compared to CNNs and ViTs and achieved the fastest inference time (0.149 IT). Compared with CNNs and ViTs, hybrid networks significantly reduced false positives and enabled more precise boundary delineation, effectively capturing anatomical relationships among the sinuses and surrounding structures. This resulted in the lowest segmentation errors near critical surgical landmarks.
In conclusion, hybrid networks may provide a more balanced trade-off between segmentation accuracy and computational efficiency, with potential applicability in clinical decision support systems for sinusitis.
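The overlap metrics used above have direct set-based definitions, with DSC = 2·JI/(1 + JI). A minimal sketch on toy binary masks:

```python
import numpy as np

# Hedged sketch of the segmentation overlap metrics named in the abstract:
# Jaccard index (JI) and Dice similarity coefficient (DSC) on binary masks.

def jaccard(pred, gt):
    """Intersection over union of two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(inter / union)

def dice(pred, gt):
    """Twice the intersection over the sum of mask sizes."""
    inter = np.logical_and(pred, gt).sum()
    return float(2 * inter / (pred.sum() + gt.sum()))

pred = np.zeros((8, 8), bool); pred[2:6, 2:6] = True  # 16-voxel prediction
gt = np.zeros((8, 8), bool); gt[3:7, 3:7] = True      # 16-voxel truth, shifted
print(round(jaccard(pred, gt), 3), round(dice(pred, gt), 3))  # 0.391 0.562
```

HD95, the boundary metric also reported above, instead takes the 95th percentile of surface-to-surface distances between the two masks, which is why it penalizes boundary outliers that JI and DSC barely register.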

Hussien A, Elkhateb A, Saeed M, Elsabawy NM, Elnakeeb AE, Elrashidy N

PubMed · Sep 1, 2025
Medical images have become indispensable for decision-making and significantly affect treatment planning. However, the growth of medical imaging has widened the gap between the volume of images and the number of available radiologists, leading to delays and diagnostic errors. Recent studies highlight the potential of deep learning (DL) in medical image diagnosis, but their reliance on labelled data limits their applicability in various clinical settings. As a result, recent studies explore the role of self-supervised learning (SSL) to overcome these challenges. Our study addresses these challenges by examining the performance of SSL on diverse medical image datasets and comparing it with traditional pre-trained supervised learning (SL) models. Unlike prior SSL methods that focus solely on classification, our framework leverages DINOv2's embeddings to enable semantic search in medical databases (via Qdrant), allowing clinicians to retrieve similar cases efficiently; this addresses a critical gap in clinical workflows where rapid retrieval of similar cases is needed. The results affirmed the ability of SSL, especially DINOv2, to overcome the challenges associated with labelling data and to provide diagnoses more accurate than traditional SL. DINOv2 achieved classification accuracies of 100%, 99%, 99%, 100%, and 95% on the lung cancer, brain tumour, leukaemia, and eye retina disease datasets, respectively. While existing SSL models (e.g., BYOL, SimCLR) lack interpretability, we uniquely combine DINOv2 with ViT-CX, a causal explanation method tailored for transformers.
This provides clinically actionable heatmaps revealing how the model localizes tumors and cellular patterns, a feature absent in prior SSL medical imaging studies. Furthermore, our research explores the impact of semantic search in the medical imaging domain and how it can revolutionize the querying process by returning semantically similar results: a Qdrant database stores the embeddings of the developed model after training, and cosine similarity measures the distance between the query image's embedding and the stored embeddings. Our study aims to enhance the efficiency and accuracy of medical image analysis, ultimately improving the decision-making process.
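The retrieval step described above reduces to ranking stored embeddings by cosine similarity to a query embedding. A minimal sketch in plain NumPy (Qdrant performs this at scale with approximate indexes; the embedding dimension and perturbation below are illustrative):

```python
import numpy as np

# Hedged sketch of embedding-based semantic retrieval: store case embeddings,
# then rank them by cosine similarity to a query image's embedding.

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(7)
database = rng.normal(size=(100, 384))          # stand-in DINOv2-style embeddings
query = database[42] + rng.normal(0, 0.1, 384)  # a slightly perturbed known case

scores = [cosine_similarity(query, e) for e in database]
best = int(np.argmax(scores))
print(best)  # the perturbed source case ranks first: 42
```

A production system would return the top-k case IDs with their scores, so the clinician sees the most similar prior studies rather than a single match.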