
Stein T, Lang F, Rau S, Reisert M, Russe MF, Schürmann T, Fink A, Kellner E, Weiss J, Bamberg F, Urbach H, Rau A

PubMed · Jul 1, 2025
Distinguishing gray matter (GM) from white matter (WM) is essential for CT of the brain. The recently established photon-counting detector CT (PCD-CT) technology employs a novel detection technique that might allow more precise measurement of tissue attenuation, for improved delineation of attenuation values (Hounsfield units, HU) and improved image quality in comparison with energy-integrating detector CT (EID-CT). To investigate this, we compared HU, GM vs. WM contrast, and image noise using automated deep learning-based brain segmentations. We retrospectively included patients who received either PCD-CT or EID-CT and did not display a cerebral pathology. A deep learning-based segmentation of the GM and WM was used to extract HU. From this, the gray-to-white matter ratio and contrast-to-noise ratio were calculated. We included 329 patients with EID-CT (mean age 59.8 ± 20.2 years) and 180 with PCD-CT (mean age 64.7 ± 16.5 years). GM and WM showed significantly lower HU in PCD-CT (GM: 40.4 ± 2.2 HU; WM: 33.4 ± 1.5 HU) compared to EID-CT (GM: 45.1 ± 1.6 HU; WM: 37.4 ± 1.6 HU; p < .001). Standard deviations of HU were also lower in PCD-CT (GM and WM both p < .001), and the contrast-to-noise ratio was significantly higher in PCD-CT compared to EID-CT (p < .001). Gray-to-white matter ratios were not significantly different between the two modalities (p > .99). In an age-matched subset (n = 157 patients from both cohorts), all findings were replicated. This comprehensive comparison of HU in cerebral gray and white matter revealed substantially reduced image noise and an average offset toward lower HU in PCD-CT, while the ratio between GM and WM remained constant. The potential need to adapt windowing presets based on this finding should be investigated in future studies.
CNR = Contrast-to-Noise Ratio; CTDIvol = Volume Computed Tomography Dose Index; EID = Energy-Integrating Detector; GWR = Gray-to-White Matter Ratio; HU = Hounsfield Units; PCD = Photon-Counting Detector; ROI = Region of Interest; VMI = Virtual Monoenergetic Images.
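The two derived quantities in this abstract, the gray-to-white matter ratio (GWR) and the contrast-to-noise ratio (CNR), follow directly from the reported attenuation statistics. A minimal sketch of the standard definitions (the study's exact noise estimate may differ; the numbers below are the reported cohort means):

```python
def gray_white_ratio(gm_hu: float, wm_hu: float) -> float:
    """GWR: ratio of mean gray-matter to mean white-matter attenuation (HU)."""
    return gm_hu / wm_hu

def contrast_to_noise(gm_hu: float, wm_hu: float, noise_sd: float) -> float:
    """CNR: GM-WM attenuation difference divided by image noise (HU SD)."""
    return (gm_hu - wm_hu) / noise_sd

# Reported cohort means: the GWR is essentially identical on both scanners
# (~1.21), consistent with the unchanged gray-to-white contrast despite the
# overall HU offset.
gwr_pcd = gray_white_ratio(40.4, 33.4)  # PCD-CT
gwr_eid = gray_white_ratio(45.1, 37.4)  # EID-CT
```

With the lower noise reported for PCD-CT, the same GM-WM attenuation difference yields a higher CNR, matching the study's finding.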

Faizi MK, Qiang Y, Wei Y, Qiao Y, Zhao J, Aftab R, Urrehman Z

PubMed · Jul 1, 2025
Lung cancer remains a leading cause of cancer-related deaths worldwide, with accurate classification of lung nodules being critical for early diagnosis. Traditional radiological methods often struggle with high false-positive rates, underscoring the need for advanced diagnostic tools. In this work, we introduce DCSwinB, a novel deep learning-based lung nodule classifier designed to improve the accuracy and efficiency of benign and malignant nodule classification in CT images. Built on the Swin-Tiny Vision Transformer (ViT), DCSwinB incorporates several key innovations: a dual-branch architecture that combines CNNs for local feature extraction and Swin Transformer for global feature extraction, and a Conv-MLP module that enhances connections between adjacent windows to capture long-range dependencies in 3D images. Pretrained on the LUNA16 and LUNA16-K datasets, which consist of annotated CT scans from thousands of patients, DCSwinB was evaluated using ten-fold cross-validation. The model demonstrated superior performance, achieving 90.96% accuracy, 90.56% recall, 89.65% specificity, and an AUC of 0.94, outperforming existing models such as ResNet50 and Swin-T. These results highlight the effectiveness of DCSwinB in enhancing feature representation while optimizing computational efficiency. By improving the accuracy and reliability of lung nodule classification, DCSwinB has the potential to assist radiologists in reducing diagnostic errors, enabling earlier intervention and improved patient outcomes.
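Ten-fold cross-validation, as used to evaluate DCSwinB, partitions the cases into ten disjoint folds and rotates the held-out fold so every case is tested exactly once. A stdlib-only sketch of the index split (the authors' exact protocol, e.g. any stratification by class, is not specified):

```python
import random

def k_fold_indices(n_samples: int, k: int = 10, seed: int = 0):
    """Yield (train, test) index lists for k-fold cross-validation."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)       # reproducible shuffle
    folds = [idx[i::k] for i in range(k)]  # round-robin fold assignment
    for i in range(k):
        test = folds[i]
        train = [j for fold in folds[:i] + folds[i + 1:] for j in fold]
        yield train, test
```

Each sample lands in exactly one test fold, so the k test sets together cover the whole dataset without overlap.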

Le Dinh T, Lee S, Park H, Lee S, Choi H, Chun KS, Jung JY

PubMed · Jul 1, 2025
Classifying chondroid tumors is an essential step for effective treatment planning. Recently, with advances in computer-aided diagnosis and the increasing availability of medical imaging data, automated tumor classification using deep learning shows promise in assisting clinical decision-making. In this study, we propose a hybrid approach that integrates deep learning and radiomics for chondroid tumor classification. First, we performed tumor segmentation using the nnUNetv2 framework, which provided three-dimensional (3D) delineation of tumor regions of interest (ROIs). From these ROIs, we extracted a set of radiomics features and deep learning-derived features. After feature selection, we identified 15 radiomics and 15 deep features to build classification models. We developed five machine learning classifiers: Random Forest, XGBoost, Gradient Boosting, LightGBM, and CatBoost. The approach integrating radiomics features, ROI-derived deep learning features, and clinical variables yielded the best overall classification results. Among the classifiers, the CatBoost classifier achieved the highest accuracy of 0.90 (95% CI 0.90-0.93), a weighted kappa of 0.85, and an AUC of 0.91. These findings highlight the potential of integrating 3D U-Net-assisted segmentation with radiomics and deep learning features to improve classification of chondroid tumors.
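The weighted kappa of 0.85 reported for the CatBoost model is presumably Cohen's kappa with distance-based disagreement weights over the ordered tumor classes; the abstract does not state the weighting scheme, so the quadratic weighting shown here is an assumption. A stdlib sketch:

```python
def quadratic_weighted_kappa(y_true, y_pred, n_classes):
    """Cohen's kappa with quadratic disagreement weights for ordinal labels."""
    n = len(y_true)
    observed = [[0] * n_classes for _ in range(n_classes)]
    for t, p in zip(y_true, y_pred):
        observed[t][p] += 1
    hist_true = [sum(row) for row in observed]
    hist_pred = [sum(observed[r][c] for r in range(n_classes))
                 for c in range(n_classes)]
    num = den = 0.0
    for i in range(n_classes):
        for j in range(n_classes):
            # quadratic penalty grows with the distance between classes
            weight = (i - j) ** 2 / (n_classes - 1) ** 2
            expected = hist_true[i] * hist_pred[j] / n
            num += weight * observed[i][j]
            den += weight * expected
    return 1.0 - num / den
```

Perfect agreement gives 1.0; agreement at chance level gives 0.0, and systematic disagreement goes negative.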

Gupta A, Dhanakshirur RR, Jain K, Garg S, Yadav N, Seth A, Das CJ

PubMed · Jul 1, 2025
<b>Objective</b>  The aim of this study was to explore an innovative approach for developing a deep learning (DL) algorithm for renal cell carcinoma (RCC) detection and subtyping on computed tomography (CT): clear cell RCC (ccRCC) versus non-ccRCC, using a two-dimensional (2D) neural network architecture and feature consistency modules. <b>Materials and Methods</b>  This retrospective study included baseline CT scans from 196 histopathologically proven RCC patients: 143 ccRCCs and 53 non-ccRCCs. Manual tumor annotations were performed on axial slices of corticomedullary phase images, serving as ground truth. After image preprocessing, the dataset was divided into training, validation, and testing subsets. The study tested multiple 2D DL architectures, with FocalNet-DINO demonstrating the highest effectiveness in detecting and classifying RCC. The study further incorporated spatial and class consistency modules to enhance prediction accuracy. The models' performance was evaluated using free-response receiver operating characteristic curves, recall rates, specificity, accuracy, F1 scores, and area under the curve (AUC) scores. <b>Results</b>  The FocalNet-DINO architecture achieved the highest recall rate of 0.823 at 0.025 false positives per image (FPI) for RCC detection. The integration of spatial and class consistency modules into the architecture led to a 0.2% increase in recall rate at 0.025 FPI, along with improvements of 0.1% in both accuracy and AUC scores for RCC classification. These enhancements allowed detection of cancer in an additional 21 slices and reduced false positives in 126 slices. <b>Conclusion</b>  This study demonstrates high performance for RCC detection and classification using a DL algorithm that leverages 2D neural networks and spatial and class consistency modules, offering a novel, computationally simpler, and accurate DL approach to RCC characterization.
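The evaluation metrics listed in this abstract (recall, specificity, accuracy, F1) all derive from the same slice-level confusion-matrix counts. A minimal sketch with illustrative counts (not the study's data):

```python
def detection_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Standard classification metrics from confusion-matrix counts."""
    recall = tp / (tp + fn)                      # sensitivity / recall rate
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return {"recall": recall, "specificity": specificity,
            "accuracy": accuracy, "f1": f1}
```

The free-response ROC analysis in the paper additionally fixes an operating point (here 0.025 false positives per image) and reads off the recall achieved there.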

Wang T, Tang X, Du J, Jia Y, Mou W, Lu G

PubMed · Jul 1, 2025
Accurate quantitative assessment using gadolinium-contrast magnetic resonance imaging (MRI) is crucial in therapy planning, surveillance, and prognostic assessment of primary central nervous system lymphoma (PCNSL). The present study aimed to develop a multimodal artificial intelligence deep learning segmentation model to address the challenges associated with traditional 2D measurements and manual volume assessments in MRI. Data from 49 pathologically confirmed patients with PCNSL from six Chinese medical centers were analyzed, and regions of interest were manually segmented on contrast-enhanced T1-weighted and T2-weighted MRI scans for each patient, followed by fully automated voxel-wise segmentation of tumor components using a three-dimensional convolutional deep neural network. Furthermore, the efficiency of the model was evaluated using practical indicators, and its consistency and accuracy were compared with those of traditional methods. The performance of the models was assessed using the Dice similarity coefficient (DSC). The Mann-Whitney U test was used to compare continuous clinical variables and the χ<sup>2</sup> test was used for comparisons between categorical clinical variables. T1WI sequences exhibited the best performance (training Dice: 0.923, testing Dice: 0.830, outer validation Dice: 0.801), while T2WI showed relatively poor performance (training Dice: 0.761, testing Dice: 0.647, outer validation Dice: 0.643). In conclusion, the automatic multi-sequence MRI segmentation model for PCNSL in the present study displayed a high spatial overlap ratio and similar tumor volume compared with routine manual segmentation, indicating its significant potential.
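The Dice similarity coefficient used here is twice the voxel overlap divided by the total size of the two masks. A minimal sketch on flattened binary voxel masks:

```python
def dice_coefficient(mask_a, mask_b):
    """DSC = 2|A ∩ B| / (|A| + |B|) for binary masks (0/1 sequences)."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    total = sum(mask_a) + sum(mask_b)
    # Two empty masks are conventionally treated as perfect agreement.
    return 2 * inter / total if total else 1.0
```

A DSC of 0.830 on the test set, as reported for T1WI, therefore means the automated and manual masks share 83% of their combined voxel mass.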

Chen J

PubMed · Jul 1, 2025
The development of artificial intelligence has revolutionized the field of dentistry. Medical image segmentation is a vital part of AI applications in dentistry, as it can assist medical practitioners in accurately diagnosing diseases. Detection of the maxillary sinus (MS) is important in surgical procedures such as dental implant placement, tooth extraction, and endoscopic surgery, and accurate segmentation of the MS in radiological images is a prerequisite for diagnosis and treatment planning. This study aims to investigate the feasibility of applying a CNN algorithm based on the U-Net architecture to MS segmentation in individuals from the Chinese population. A total of 300 CBCT images in the axial, coronal, and sagittal planes were used in this study. These images were divided into a training set and a test set at a ratio of 8:2. The maxillary sinus regions were labelled in the original images for training and testing. The training process was performed for 40 epochs with a learning rate of 0.00001. Computation was performed on a GeForce RTX 3060 GPU. The best model was retained for predicting MS in the test set and calculating the model parameters. The trained U-Net model achieved high segmentation accuracy across the three imaging planes. The IoU values were 0.942, 0.937, and 0.916 in the axial, sagittal, and coronal planes, respectively, with F1 scores across all planes exceeding 0.95. The accuracies of the U-Net model were 0.997, 0.998, and 0.995 in the axial, sagittal, and coronal planes, respectively. The trained U-Net model achieved highly accurate segmentation of the MS across the three planes on the basis of 2D CBCT images in the Chinese population, and the model has shown promising potential for daily clinical practice.
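The reported IoU and F1 values are tied by a fixed relationship: F1 (equivalently the Dice score) equals 2·IoU/(1 + IoU), so the axial IoU of 0.942 by itself implies an F1 above 0.95. A minimal sketch:

```python
def iou(mask_a, mask_b):
    """Intersection over union for binary masks (0/1 sequences)."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    union = sum(1 for a, b in zip(mask_a, mask_b) if a or b)
    return inter / union if union else 1.0

def f1_from_iou(j: float) -> float:
    """Dice/F1 score implied by an IoU value: F1 = 2J / (1 + J)."""
    return 2 * j / (1 + j)
```

Because the conversion is monotonic, ranking segmentation models by IoU or by F1 always gives the same order.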

Hausmann AC, Rubbert C, Querbach SK, Ivan VL, Schnitzler A, Hartmann CJ, Caspers J

PubMed · Jul 1, 2025
Although brain atrophy is a prevalent finding in Wilson disease (WD), its role as a contributing factor to clinical symptoms, especially cognitive decline, remains unclear. The objective of this study was to investigate different neuroimaging biomarkers related to grey matter atrophy and their relationship with neurological and cognitive impairment in WD. In this study, 30 WD patients and 30 age- and sex-matched healthy controls were enrolled prospectively and underwent structural magnetic resonance imaging (MRI). Regional atrophy was evaluated using established linear radiological measurements and the automated workflow for volumetric estimation of gross atrophy and brain age longitudinally (veganbagel) for age- and sex-specific estimations of regional brain volume changes. The Brain Age Gap Estimate (BrainAGE), defined as the discrepancy between machine-learning-predicted brain age from structural MRI and chronological age, was assessed using an established model. Atrophy markers and clinical scores were compared between 19 WD patients with a neurological phenotype (neuro-WD), 11 WD patients with a hepatic phenotype (hep-WD), and a healthy control group using Welch's ANOVA or the Kruskal-Wallis test. Correlations between atrophy markers and neurological and neuropsychological scores were investigated using Spearman's correlation coefficients. Patients with neuro-WD demonstrated increased third ventricle width and bicaudate index, along with significant striatal-thalamic atrophy patterns that correlated with global cognitive function, mental processing speed, and verbal memory. Median BrainAGE was significantly higher in patients with neuro-WD (8.97 years, interquartile range [IQR] = 5.62 to 15.73) compared to those with hep-WD (4.72 years, IQR = 0.00 to 5.48) and healthy controls (0.46 years, IQR = -4.11 to 4.24). Striatal-thalamic atrophy and BrainAGE were significantly correlated with neurological symptom severity.
Our findings indicate advanced predicted brain age and substantial striatal-thalamic atrophy patterns in patients with neuro-WD, which serve as promising neuroimaging biomarkers for neurological and cognitive functions in treated, chronic WD.
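BrainAGE as defined above is simply the signed difference between model-predicted and chronological age, summarized per patient group by median and IQR. A minimal sketch (the age-prediction model itself is out of scope, and the ages below are illustrative, not the study's data):

```python
import statistics

def brain_age_gap(predicted_age: float, chronological_age: float) -> float:
    """BrainAGE: predicted brain age minus chronological age, in years."""
    return predicted_age - chronological_age

def summarize(gaps):
    """Median and interquartile range of a group's BrainAGE values."""
    q1, median, q3 = statistics.quantiles(gaps, n=4)  # exclusive quartiles
    return median, (q1, q3)
```

A positive median gap, as in the neuro-WD group, indicates brains that look older to the model than the patients' chronological ages.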

Song D, Cai R, Lou Y, Zhang K, Xu D, Yan D, Guo F

PubMed · Jul 1, 2025
Meningiomas are among the most common intracranial tumors, and challenges remain in tumor classification, treatment, and management. With the popularization of artificial intelligence technology, radiomics has been further developed and more extensively applied in the study of meningiomas. This objective and quantitative technique has played an important role in the identification, classification, grading, pathology, treatment, and prognosis of meningiomas, although new problems have also emerged. This review examines the application of magnetic resonance imaging (MRI) radiomics in meningioma research. A database search was conducted for articles published between November 2017 and April 2025, with a total of 87 studies included after screening. These studies were summarized in detail, and the risk of bias and the certainty of the evidence were assessed using the Quality Assessment of Diagnostic Accuracy Studies version 2 (QUADAS-2) and the radiomics quality score (RQS). All the studies were retrospective, with most being single-center studies. Contrast-enhanced T1-weighted imaging (T1C) and T2-weighted imaging (T2WI) are the most commonly used MRI sequences. Current research focuses on five topics: differentiation, grade and subtypes, molecular pathology, biological behavior, and treatment and complications, with 14, 32, 14, 12, and 19 studies addressing these topics, respectively (some studies address multiple topics). Models combining imaging features with clinical or pathological features often outperform traditional clinical models. Most studies show a low to moderate risk of bias. Large, prospective, multicenter studies are needed to validate the performance of radiomic models in diverse patient populations before their clinical implementation can be considered.

Zhu L, Shi X, Tang L, Machida H, Yang L, Ma M, Ha R, Shen Y, Wang F, Chen D

PubMed · Jul 1, 2025
Deep learning image reconstruction (DLIR) technology effectively improves image quality while maintaining spatial resolution, but its impact on the quantification of coronary artery calcium (CAC) is still unclear. The purpose of this study was to investigate the effect of DLIR on the quantification of coronary calcium in high-risk populations. A retrospective study was conducted on patients who underwent coronary CT angiography (CCTA) at our hospital (China) from February 2022 to September 2022. Raw data were reconstructed with filtered back projection (FBP), 40% and 80% level adaptive statistical iterative reconstruction-Veo (ASiR-V 40%, ASiR-V 80%), and low-, medium-, and high-level deep learning image reconstruction (DLIR-L, DLIR-M, and DLIR-H). The signal-to-noise ratio, contrast-to-noise ratio, volumetric score, mass score, and Agatston score of the six image sets were calculated and compared. There were 178 patients (107 female; mean age 62.43 ± 9.26 years; mean BMI 25.33 ± 3.18 kg/m<sup>2</sup>). Compared with FBP, the image noise of ASiR-V and DLIR was significantly reduced (P < 0.001). There was no significant difference in Agatston score, volumetric score, or mass score among the six reconstruction algorithms (all P > 0.05). Bland-Altman plots indicated that the Agatston scores of the five reconstruction algorithms showed good agreement with FBP, with DLIR-L (AUC, 110.08; 95% CI: 26.48, 432.92) and ASiR-V 40% (AUC, 110.96; 95% CI: 26.23, 431.34) having the highest consistency with FBP. Compared with FBP, DLIR and ASiR-V improve CT image quality to varying degrees while having no impact on Agatston-score-based risk stratification. CACS is a powerful tool for cardiovascular risk stratification, and DLIR can improve image quality without affecting CACS, making it widely applicable in clinical practice.
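The Agatston score compared across reconstruction algorithms above is computed per lesion as the calcified plaque area multiplied by a weight set by the lesion's peak attenuation (threshold 130 HU), summed over all slices. A minimal sketch of that standard definition (independent of the reconstruction algorithm used):

```python
def agatston_weight(peak_hu: float) -> int:
    """Density weight for a calcified lesion, per the standard Agatston scheme."""
    if peak_hu < 130:
        return 0  # below threshold: not scored as calcium
    if peak_hu < 200:
        return 1
    if peak_hu < 300:
        return 2
    if peak_hu < 400:
        return 3
    return 4

def agatston_score(lesions) -> float:
    """Sum of lesion area (mm^2) times density weight over all slices.

    lesions: iterable of (area_mm2, peak_hu) pairs, one per lesion per slice.
    """
    return sum(area * agatston_weight(peak_hu) for area, peak_hu in lesions)
```

Because the weight jumps at fixed HU cut-offs, a reconstruction algorithm that shifts attenuation values or noise near a boundary could in principle move a lesion between weight bins, which is why the study checks score agreement explicitly.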