
Cycle-conditional diffusion model for noise correction of diffusion-weighted images using unpaired data.

Zhu P, Liu C, Fu Y, Chen N, Qiu A

Jul 1, 2025
Diffusion-weighted imaging (DWI) is a key modality for studying brain microstructure, but its signals are highly susceptible to noise due to the thermal motion of water molecules and interactions with tissue microarchitecture, leading to significant signal attenuation and a low signal-to-noise ratio (SNR). In this paper, we propose a novel approach, a Cycle-Conditional Diffusion Model (Cycle-CDM) using unpaired data learning, aimed at improving DWI quality and reliability through noise correction. Cycle-CDM leverages a cycle-consistent translation architecture to bridge the domain gap between noise-contaminated and noise-free DWIs, enabling the restoration of high-quality images without requiring paired datasets. By utilizing two conditional diffusion models, Cycle-CDM establishes data interrelationships between the two types of DWIs, while incorporating synthesized anatomical priors from the cycle translation process to guide noise removal. In addition, we introduce specific constraints to preserve anatomical fidelity, allowing Cycle-CDM to effectively learn the underlying noise distribution and achieve accurate denoising. Our experiments were conducted on simulated datasets as well as clinically relevant datasets of children and adolescents. Our results demonstrate that Cycle-CDM outperforms comparative methods, such as U-Net, CycleGAN, Pix2Pix, MUNIT and MPPCA, in terms of noise correction performance. We also demonstrated that Cycle-CDM generalizes to DWIs with head motion acquired using different MRI scanners. Importantly, the denoised DWI data produced by Cycle-CDM exhibit accurate preservation of the underlying tissue microstructure, substantially improving their medical applicability.
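The cycle-consistency constraint that makes unpaired training possible can be illustrated with a toy sketch; the linear maps below are hypothetical stand-ins for the two conditional diffusion models, not the authors' networks:

```python
import numpy as np

# Toy forward/backward mappings standing in for the two conditional
# diffusion models (the linear scalings are illustrative only).
def noisy_to_clean(x):
    return 0.5 * x  # hypothetical "denoising" direction

def clean_to_noisy(x):
    return 2.0 * x  # hypothetical "noising" direction

def cycle_consistency_loss(x):
    # ||G_B(G_A(x)) - x||^2 averaged over voxels: translating to the
    # other domain and back must recover the input, which is what lets
    # the model train without paired noisy/clean examples.
    return float(np.mean((clean_to_noisy(noisy_to_clean(x)) - x) ** 2))

x = np.ones((4, 4))
loss = cycle_consistency_loss(x)  # 0.0 for this perfectly inverse pair
```

Since the two toy maps are exact inverses, the cycle loss is zero; in training, minimizing this term pushes the two learned translators toward mutual consistency.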

Artificial intelligence-powered coronary artery disease diagnosis from SPECT myocardial perfusion imaging: a comprehensive deep learning study.

Hajianfar G, Gharibi O, Sabouri M, Mohebi M, Amini M, Yasemi MJ, Chehreghani M, Maghsudi M, Mansouri Z, Edalat-Javid M, Valavi S, Bitarafan Rajabi A, Salimi Y, Arabi H, Rahmim A, Shiri I, Zaidi H

Jul 1, 2025
Myocardial perfusion imaging (MPI) using single-photon emission computed tomography (SPECT) is a well-established modality for noninvasive diagnostic assessment of coronary artery disease (CAD). However, the time-consuming and experience-dependent visual interpretation of SPECT images remains a limitation in the clinic. We aimed to develop advanced models to diagnose CAD using different supervised and semi-supervised deep learning (DL) algorithms and training strategies, including transfer learning and data augmentation, with SPECT-MPI and invasive coronary angiography (ICA) as the standard of reference. A total of 940 patients who underwent SPECT-MPI were enrolled (ICA was available for 281 patients). Quantitative perfusion SPECT (QPS) was used to extract polar maps of rest and stress states. We defined two different tasks: (1) automated CAD diagnosis with expert reader (ER) assessment of SPECT-MPI as reference, and (2) CAD diagnosis from SPECT-MPI based on reference ICA reports. In task 2, we used 6 strategies for training DL models. We implemented 13 different DL models along with 4 input types, with and without data augmentation (WAug and WoAug), to train, validate, and test the DL models (728 models). One hundred patients with ICA as the standard of reference (the same patients as in task 1) were used to evaluate models per vessel and per patient. Metrics such as the area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, specificity, precision, and balanced accuracy were reported. DeLong and pairwise Wilcoxon rank sum tests were used, respectively, to compare models and strategies after 1000 bootstraps on the test data for all models. We also compared the performance of our best DL model to the ER's diagnosis. In task 1, the DenseNet201 Late Fusion (AUC = 0.89) and ResNet152V2 Late Fusion (AUC = 0.83) models outperformed the other models in per-vessel and per-patient analyses, respectively.
In task 2, the best models for CAD prediction based on ICA were Strategy 3 (a combination of ER- and ICA-based diagnosis in the training data) with WoAug InceptionResNetV2 EarlyFusion (AUC = 0.71) and Strategy 5 (a semi-supervised approach) with WoAug ResNet152V2 EarlyFusion (AUC = 0.77) in per-vessel and per-patient analyses, respectively. Moreover, saliency maps showed that the models focus on relevant regions for decision making. Our study confirmed the potential of DL-based analysis of SPECT-MPI polar maps in CAD diagnosis. In the automation of ER-based diagnosis, model performance was promising, with accuracy close to expert-level analysis. The study demonstrated that different data-combination strategies, such as including patients with and without ICA, along with different training methods, such as semi-supervised learning, can increase the performance of DL models. The proposed DL models could be coupled with computer-aided diagnosis systems and used as an assistant to nuclear medicine physicians to improve their diagnosis and reporting, although only in the LAD territory.
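The 1000-bootstrap model comparison described in the abstract can be sketched as follows; the labels and scores are synthetic, since the per-vessel data are not public:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def bootstrap_auc(y_true, y_score, n_boot=1000):
    # Resample the test set with replacement and recompute the AUC each
    # time, mirroring the 1000-bootstrap procedure in the abstract;
    # returns a 95% percentile confidence interval.
    aucs = []
    n = len(y_true)
    while len(aucs) < n_boot:
        idx = rng.integers(0, n, n)
        if len(np.unique(y_true[idx])) < 2:
            continue  # AUC is undefined unless both classes are present
        aucs.append(roc_auc_score(y_true[idx], y_score[idx]))
    return np.percentile(aucs, [2.5, 97.5])

# Synthetic, perfectly separated scores for illustration.
y = np.array([0, 0, 0, 1, 1, 1, 0, 1])
s = np.array([0.1, 0.2, 0.3, 0.8, 0.7, 0.9, 0.15, 0.85])
lo, hi = bootstrap_auc(y, s, n_boot=200)  # both 1.0 here
```

On real, imperfect scores the interval widens; comparing two models' bootstrap distributions is what the paired tests in the abstract formalize.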

Deep learning-based lung cancer classification of CT images.

Faizi MK, Qiang Y, Wei Y, Qiao Y, Zhao J, Aftab R, Urrehman Z

Jul 1, 2025
Lung cancer remains a leading cause of cancer-related deaths worldwide, with accurate classification of lung nodules being critical for early diagnosis. Traditional radiological methods often struggle with high false-positive rates, underscoring the need for advanced diagnostic tools. In this work, we introduce DCSwinB, a novel deep learning-based lung nodule classifier designed to improve the accuracy and efficiency of benign and malignant nodule classification in CT images. Built on the Swin-Tiny Vision Transformer (ViT), DCSwinB incorporates several key innovations: a dual-branch architecture that combines CNNs for local feature extraction and Swin Transformer for global feature extraction, and a Conv-MLP module that enhances connections between adjacent windows to capture long-range dependencies in 3D images. Pretrained on the LUNA16 and LUNA16-K datasets, which consist of annotated CT scans from thousands of patients, DCSwinB was evaluated using ten-fold cross-validation. The model demonstrated superior performance, achieving 90.96% accuracy, 90.56% recall, 89.65% specificity, and an AUC of 0.94, outperforming existing models such as ResNet50 and Swin-T. These results highlight the effectiveness of DCSwinB in enhancing feature representation while optimizing computational efficiency. By improving the accuracy and reliability of lung nodule classification, DCSwinB has the potential to assist radiologists in reducing diagnostic errors, enabling earlier intervention and improved patient outcomes.
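Ten-fold cross-validation, the evaluation protocol used for DCSwinB, can be sketched with scikit-learn on synthetic features; the logistic regression is only a placeholder for the actual network, which would require a DL framework to reproduce:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Synthetic stand-in for per-nodule feature vectors (LUNA16 CT data and
# the DCSwinB architecture itself are not reproduced here).
X, y = make_classification(n_samples=300, n_features=20, n_informative=8,
                           random_state=0)

# Ten-fold cross-validation: each fold is held out once for testing
# while the remaining nine folds train the model.
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                         cv=cv, scoring="accuracy")
mean_acc = scores.mean()
```

Averaging over ten held-out folds gives a lower-variance performance estimate than a single split, which is why it is standard for modest-sized nodule datasets.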

The value of machine learning based on spectral CT quantitative parameters in distinguishing benign from malignant thyroid micro-nodules.

Song Z, Liu Q, Huang J, Zhang D, Yu J, Zhou B, Ma J, Zou Y, Chen Y, Tang Z

Jul 1, 2025
More cases of thyroid micro-nodules have been diagnosed annually in recent years because of advancements in diagnostic technologies and increased public health awareness. We explored the application value of various machine learning (ML) algorithms based on dual-layer spectral computed tomography (DLCT) quantitative parameters in distinguishing benign from malignant thyroid micro-nodules. A total of 338 thyroid micro-nodules (177 malignant and 161 benign) were randomly divided into a training cohort (n = 237) and a testing cohort (n = 101) at a ratio of 7:3. Four typical radiological features and 19 DLCT quantitative parameters in the arterial and venous phases were measured. Recursive feature elimination was employed for variable selection. Three ML algorithms-support vector machine (SVM), logistic regression (LR), and naive Bayes (NB)-were implemented to construct predictive models. Predictive performance was evaluated via receiver operating characteristic (ROC) curve analysis. A variable set containing 6 key variables selected with the "one standard error" rule was identified in the SVM model, which performed well in the training and testing cohorts (area under the ROC curve (AUC): 0.924 and 0.931, respectively). A variable set containing 2 key variables was identified in the NB model, which performed well in the training and testing cohorts (AUC: 0.882 and 0.899, respectively). A variable set containing 8 key variables was identified in the LR model, which performed well in the training and testing cohorts (AUC: 0.924 and 0.925, respectively). In addition, nine ML models were developed with varying variable sets (2, 6, or 8 variables), all of which consistently achieved AUC values above 0.85 in the training, cross-validation (CV)-Training, CV-Validation, and testing cohorts. Artificial intelligence-based DLCT quantitative parameters are promising for distinguishing benign from malignant thyroid micro-nodules.
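The feature-selection-plus-classifier pipeline described above (recursive feature elimination feeding an SVM) can be sketched with scikit-learn; the synthetic matrix stands in for the 23 measured variables (19 DLCT parameters plus 4 radiological features), which are not public:

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in: 338 nodules x 23 candidate variables, mirroring
# the cohort size and 7:3 split in the abstract.
X, y = make_classification(n_samples=338, n_features=23, n_informative=6,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)

# RFE recursively drops the weakest features until 6 remain, matching
# the SVM model's 6-variable set reported above.
model = Pipeline([
    ("scale", StandardScaler()),
    ("rfe", RFE(SVC(kernel="linear"), n_features_to_select=6)),
    ("svm", SVC(kernel="linear", probability=True)),
])
model.fit(X_tr, y_tr)
acc = model.score(X_te, y_te)
```

Putting scaling and selection inside the pipeline ensures they are refit on each training fold, avoiding leakage into the test cohort.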

[A deep learning method for differentiating nasopharyngeal carcinoma and lymphoma based on MRI].

Tang Y, Hua H, Wang Y, Tao Z

Jul 1, 2025
<b>Objective:</b>To develop a deep learning (DL) model based on conventional MRI for automatic segmentation and differential diagnosis of nasopharyngeal carcinoma (NPC) and nasopharyngeal lymphoma (NPL). <b>Methods:</b>The retrospective study included 142 patients with NPL and 292 patients with NPC who underwent conventional MRI at Renmin Hospital of Wuhan University from June 2012 to February 2023. MRIs from 80 patients were manually segmented to train the segmentation model. The automatically segmented regions of interest (ROIs) formed four datasets: T1-weighted images (T1WI), T2-weighted images (T2WI), T1-weighted contrast-enhanced images (T1CE), and a combination of T1WI and T2WI. The ImageNet-pretrained ResNet101 model was fine-tuned for the classification task. Statistical analysis was conducted using SPSS 22.0. The Dice coefficient loss was used to evaluate the performance of the segmentation task. Diagnostic performance was assessed using receiver operating characteristic (ROC) curves. Gradient-weighted class activation mapping (Grad-CAM) was applied to visualize the model's decisions. <b>Results:</b>The Dice score of the segmentation model reached 0.876 in the testing set. The AUC values of the classification models in the testing set were as follows: T1WI: 0.78 (95%<i>CI</i> 0.67-0.81), T2WI: 0.75 (95%<i>CI</i> 0.72-0.86), T1CE: 0.84 (95%<i>CI</i> 0.76-0.87), and T1WI+T2WI: 0.93 (95%<i>CI</i> 0.85-0.94). The AUC values for the two clinicians were 0.77 (95%<i>CI</i> 0.72-0.82) for the junior and 0.84 (95%<i>CI</i> 0.80-0.89) for the senior. Grad-CAM analysis revealed that the central region of the tumor was highly correlated with the model's classification decisions, while the correlation was lower in the peripheral regions. <b>Conclusion:</b>The deep learning model performed well in differentiating NPC from NPL based on conventional MRI. The T1WI+T2WI combination model exhibited the best performance.
The model can assist in the early diagnosis of NPC and NPL, facilitating timely and standardized treatment, which may improve patient prognosis.
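Transfer learning of the kind described (an ImageNet-pretrained backbone adapted to a new binary task) can be sketched without a DL framework by training only a linear head on frozen features; the 2048-dimensional vectors below are hypothetical stand-ins for pooled ResNet101 activations, and the labels are toy labels:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical 2048-d pooled features, standing in for the output of a
# frozen ImageNet-pretrained ResNet101 backbone (one row per MRI study).
features = rng.normal(size=(120, 2048))
# Toy labels (0 = NPL, 1 = NPC) made linearly separable for illustration.
labels = (features[:, 0] > 0).astype(int)

# "Fine-tuning" reduced to its cheapest form: keep the representation
# fixed and fit only a new linear classification head on top of it.
head = LogisticRegression(max_iter=2000).fit(features, labels)
train_acc = head.score(features, labels)
```

In the actual study the backbone's weights are also updated; freezing them is the limiting case that still illustrates why pretrained representations help when labeled medical data are scarce.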

Multi-machine learning model based on radiomics features to predict prognosis of muscle-invasive bladder cancer.

Wang B, Gong Z, Su P, Zhen G, Zeng T, Ye Y

Jul 1, 2025
This study aims to construct a survival prognosis prediction model for muscle-invasive bladder cancer based on CT imaging features. A total of 91 patients with muscle-invasive bladder cancer were sourced from the TCGA and TCIA datasets and divided into a training group (64 cases) and a validation group (27 cases). Additionally, 54 patients with muscle-invasive bladder cancer were retrospectively collected from our hospital to serve as an external test group; their enhanced CT imaging data were analyzed and processed to identify the most relevant radiomic features. Five distinct machine learning methods were employed to develop the optimal radiomics model, which was then combined with clinical data to create a nomogram model aimed at accurately predicting the overall survival (OS) of patients with muscle-invasive bladder cancer. The model's performance was ultimately assessed using various evaluation methods, including the ROC curve, calibration curve, decision curve, and Kaplan-Meier (KM) analysis. Eight radiomic features were identified for modeling analysis. Among the models evaluated, the Gradient Boosting Machine (GBM) performed best in predicting OS: the 2-year AUCs were 0.859 (95% CI: 0.767-0.952) for the training group, 0.850 (95% CI: 0.705-0.995) for the validation group, and 0.700 (95% CI: 0.520-0.880) for the external test group. The 3-year AUCs were 0.809 (95% CI: 0.704-0.913) for the training group, 0.895 (95% CI: 0.768-1.000) for the validation group, and 0.730 (95% CI: 0.569-0.891) for the external test group.
The nomogram model incorporating clinical data achieved superior results: the AUCs for predicting 2-year OS were 0.913 (95% CI: 0.83-0.98) for the training group, 0.86 (95% CI: 0.78-0.96) for the validation group, and 0.778 (95% CI: 0.69-0.94) for the external test group; for predicting 3-year OS, the AUCs were 0.837 (95% CI: 0.83-0.98) for the training group, 0.982 (95% CI: 0.84-1.0) for the validation group, and 0.785 (95% CI: 0.75-0.96) for the external test group. The calibration curve demonstrated excellent calibration of the model, while the decision curve and KM analysis indicated that the model possesses substantial clinical utility. The GBM model, based on the radiomic features of enhanced CT imaging, holds significant potential for predicting the prognosis of patients with muscle-invasive bladder cancer. Furthermore, the combined model, which incorporates clinical features, demonstrates enhanced performance and is beneficial for clinical decision-making.
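A gradient-boosting classifier of the kind used for the 2-year OS endpoint can be sketched with scikit-learn on synthetic data; the 8 columns stand in for the selected radiomic features (the TCGA/TCIA feature tables are not reproduced here), and AUC is computed as in the abstract:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in: 145 patients x 8 radiomic features; the binary
# label plays the role of 2-year OS status.
X, y = make_classification(n_samples=145, n_features=8, n_informative=5,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)

# GBM: an ensemble of shallow trees, each fit to the residual errors of
# the ensemble so far.
gbm = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, gbm.predict_proba(X_te)[:, 1])
```

A full reproduction would also need time-to-event handling (e.g. censoring) and the clinical covariates that feed the nomogram, which this sketch omits.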

Automatic quality control of brain 3D FLAIR MRIs for a clinical data warehouse.

Loizillon S, Bottani S, Maire A, Ströer S, Chougar L, Dormont D, Colliot O, Burgos N

Jul 1, 2025
Clinical data warehouses, which have arisen over the last decade, bring together the medical data of millions of patients and offer the potential to train and validate machine learning models in real-world scenarios. The quality of MRIs collected in clinical data warehouses differs significantly from that generally observed in research datasets, reflecting the variability inherent to clinical practice. Consequently, the use of clinical data requires the implementation of robust quality control tools. By using a substantial number of pre-existing manually labelled T1-weighted MR images (5,500) alongside a smaller set of newly labelled FLAIR images (926), we present a novel semi-supervised adversarial domain adaptation architecture designed to exploit shared representations between MRI sequences through a shared feature extractor, while accounting for the specificities of FLAIR through a sequence-specific classification head. This architecture thus consists of a common invariant feature extractor, a domain classifier and two classification heads specific to the source and target, all designed to effectively deal with potential class distribution shifts between the source and target data classes. The primary objectives of this paper were: (1) to identify images which are not proper 3D FLAIR brain MRIs; (2) to rate the overall image quality. For the first objective, our approach demonstrated excellent results, with a balanced accuracy of 89%, comparable to that of human raters. For the second objective, our approach achieved good performance, although lower than that of human raters. Nevertheless, the automatic approach accurately identified poor-quality images (balanced accuracy >79%). In conclusion, our proposed approach overcomes the initial barrier of heterogeneous image quality in clinical data warehouses, thereby facilitating the development of new research using clinical routine 3D FLAIR brain images.
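The described architecture (shared feature extractor, two sequence-specific heads, and a domain classifier) can be sketched with toy linear maps; in the real model these are deep networks trained adversarially, which this forward-pass sketch omits:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear stand-ins for the architecture's components (shapes and
# weights are illustrative, not the paper's).
W_feat = rng.normal(size=(64, 16))  # shared, sequence-invariant extractor
W_src = rng.normal(size=(16, 2))    # head for T1-weighted (source) labels
W_tgt = rng.normal(size=(16, 2))    # head for FLAIR (target) labels
W_dom = rng.normal(size=(16, 2))    # domain classifier: T1 vs FLAIR

def forward(x, domain):
    # Both sequences pass through the same extractor; only the
    # classification head is sequence-specific. The domain classifier
    # sees the shared features (adversarial gradient reversal omitted).
    z = np.tanh(x @ W_feat)
    head = W_src if domain == "t1" else W_tgt
    return z @ head, z @ W_dom

x = rng.normal(size=(8, 64))  # a batch of hypothetical image descriptors
cls_logits, dom_logits = forward(x, "flair")
```

During training, the domain classifier is fooled (via gradient reversal) so that the shared features carry no T1-vs-FLAIR information, which is what lets the small FLAIR set piggyback on the large T1 set.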

Self-supervised network predicting neoadjuvant chemoradiotherapy response to locally advanced rectal cancer patients.

Chen Q, Dang J, Wang Y, Li L, Gao H, Li Q, Zhang T, Bai X

Jul 1, 2025
Radiographic imaging is a non-invasive technique of considerable importance for evaluating tumor treatment response. However, redundancy in CT data and the lack of labeled data make it challenging to accurately assess the response of locally advanced rectal cancer (LARC) patients to neoadjuvant chemoradiotherapy (nCRT) using current imaging indicators. In this study, we propose a novel learning framework to automatically predict the response of LARC patients to nCRT. Specifically, we develop a deep learning network called the Expand Intensive Attention Network (EIA-Net), which enhances the network's feature extraction capability through cascaded 3D convolutions and coordinate attention. Instance-oriented collaborative self-supervised learning (IOC-SSL) is proposed to leverage unlabeled data for training, reducing the reliance on labeled data. In a dataset consisting of 1,575 volumes, the proposed method achieves an AUC score of 0.8562. The dataset includes two distinct parts: the self-supervised dataset containing 1,394 volumes and the supervised dataset comprising 195 volumes. Analysis of the survival predictions reveals that patients predicted as pathological complete response (pCR) by EIA-Net exhibit better overall survival (OS) than non-pCR patients with LARC. The retrospective study demonstrates that imaging-based pCR prediction for patients with low rectal cancer can assist clinicians in making informed decisions regarding the need for Miles' operation, thereby improving the likelihood of anal preservation, with an AUC of 0.8222. These results underscore the potential of our method to enhance clinical decision-making, offering a promising tool for personalized treatment and improved patient outcomes in LARC management.

CQENet: A segmentation model for nasopharyngeal carcinoma based on confidence quantitative evaluation.

Qi Y, Wei L, Yang J, Xu J, Wang H, Yu Q, Shen G, Cao Y

Jul 1, 2025
Accurate segmentation of the tumor regions of nasopharyngeal carcinoma (NPC) is of significant importance for radiotherapy of NPC. However, the precision of existing automatic segmentation methods for NPC remains inadequate, primarily manifested in the difficulty of tumor localization and the challenges in delineating blurred boundaries. Additionally, the black-box nature of deep learning models leads to insufficient quantification of the confidence in the results, preventing users from directly understanding the model's confidence in its predictions, which severely impacts the clinical application of deep learning models. This paper proposes an automatic segmentation model for NPC based on confidence quantitative evaluation (CQENet). To address the issue of insufficient confidence quantification in NPC segmentation results, we introduce a confidence assessment module (CAM) that enables the model to output not only the segmentation results but also the confidence in those results, aiding users in understanding the uncertainty risks associated with model outputs. To address the difficulty in localizing the position and extent of tumors, we propose a tumor feature adjustment module (FAM) for precise tumor localization and extent determination. To address the challenge of delineating blurred tumor boundaries, we introduce a variance attention mechanism (VAM) to assist in edge delineation during fine segmentation. We conducted experiments on a multicenter NPC dataset, validating that our proposed method is effective and superior to existing state-of-the-art models, possessing considerable clinical application value.
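The paper's confidence assessment module (CAM) is not public, but one generic way to attach a confidence score to a classifier's output, normalized predictive entropy, can be sketched as follows:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def voxel_confidence(logits):
    # Normalized predictive entropy mapped to a confidence score:
    # 1.0 means a one-hot (fully confident) prediction, 0.0 means the
    # model assigns equal probability to every class. This is a generic
    # uncertainty proxy, not the paper's CAM.
    p = softmax(logits)
    entropy = -(p * np.log(p + 1e-12)).sum(axis=-1)
    return 1.0 - entropy / np.log(p.shape[-1])

logits = np.array([[10.0, -10.0],   # near-certain voxel
                   [0.0, 0.0]])     # maximally uncertain voxel
conf = voxel_confidence(logits)     # high for row 0, ~0 for row 1
```

Surfacing a per-voxel score like this is what lets a clinician triage which predicted boundaries deserve manual review, the clinical motivation given in the abstract.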

Application and optimization of the U-Net++ model for cerebral artery segmentation based on computed tomographic angiography images.

Kim H, Seo KH, Kim K, Shim J, Lee Y

Jul 1, 2025
Accurate segmentation of cerebral arteries on computed tomography angiography (CTA) images is essential for the diagnosis and management of cerebrovascular diseases, including ischemic stroke. This study implemented a deep learning-based U-Net++ model for cerebral artery segmentation in CTA images, focusing on optimizing pruning levels by analyzing the trade-off between segmentation performance and computational cost. Dual-energy CTA and direct subtraction CTA datasets were utilized to segment the internal carotid and vertebral arteries in close proximity to the bone. We implemented four pruning levels (L1-L4) in the U-Net++ model and evaluated the segmentation performance using accuracy, intersection over union, F1-score, boundary F1-score, and Hausdorff distance. Statistical analyses were conducted to assess the significance of segmentation performance differences across pruning levels. In addition, we measured training and inference times to evaluate the trade-off between segmentation performance and computational efficiency. Applying deep supervision improved segmentation performance across all factors. While the L4 pruning level achieved the highest segmentation performance, L3 significantly reduced training and inference times (by an average of 51.56 % and 22.62 %, respectively), while incurring only a small decrease in segmentation performance (7.08 %) compared to L4. These results suggest that L3 achieves an optimal balance between performance and computational cost. This study demonstrates that pruning levels in U-Net++ models can be optimized to reduce computational cost while maintaining effective segmentation performance. By simplifying deep learning models, this approach can improve the efficiency of cerebrovascular segmentation, contributing to faster and more accurate diagnoses in clinical settings.
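Pruning in U-Net++ relies on deep supervision: each nested decoder level carries its own segmentation head, so at inference one can read the prediction from level L1-L4 and skip the deeper, more expensive branches. A toy sketch of that trade-off (the per-level outputs are hypothetical numbers, not a real network):

```python
import numpy as np

def unetpp_heads(x):
    # Hypothetical per-level predictions; in a real U-Net++ these come
    # from the deep-supervision heads on decoder nodes X^{0,1}..X^{0,4},
    # with deeper levels slightly more accurate but costlier.
    return {f"L{j}": np.clip(x * (0.7 + 0.075 * j), 0, 1)
            for j in range(1, 5)}

def predict_pruned(x, level="L3"):
    # Pruning at Lj means reading head j and discarding deeper
    # sub-networks at inference time; the study found L3 keeps most of
    # L4's accuracy while cutting training/inference time substantially.
    return unetpp_heads(x)[level]

x = np.full((2, 2), 0.5)
p3 = predict_pruned(x, "L3")
```

Because all heads are trained jointly under deep supervision, the pruning level can be chosen after training, per deployment budget, without refitting the model.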
