
Initial Recurrence Risk Stratification of Papillary Thyroid Cancer Based on Intratumoral and Peritumoral Dual Energy CT Radiomics.

Zhou Y, Xu Y, Si Y, Wu F, Xu X

pubmed · Aug 21 2025
This study aims to evaluate the potential of Dual-Energy Computed Tomography (DECT)-based radiomics in preoperative risk stratification for the prediction of initial recurrence in Papillary Thyroid Carcinoma (PTC). The retrospective analysis included 236 PTC cases (165 in the training cohort, 71 in the validation cohort) collected between July 2020 and June 2021. Tumor segmentation was carried out in both intratumoral and peritumoral areas (1 mm inner and outer to the tumor boundary). Three region-specific rad-scores were developed: rad-score [VOI<sup>whole</sup>], rad-score [VOI<sup>outer layer</sup>], and rad-score [VOI<sup>inner layer</sup>]. Three radiomics models incorporating these rad-scores and additional risk factors were compared to a clinical model alone. The optimal radiomics model was presented as a nomogram. Rad-scores from peritumoral regions (VOI<sup>outer layer</sup> and VOI<sup>inner layer</sup>) outperformed the intratumoral rad-score (VOI<sup>whole</sup>). All radiomics models surpassed the clinical model, with peritumoral-based models (radiomics models 2 and 3) outperforming the intratumoral-based model (radiomics model 1). The top-performing nomogram, which included tumor size, tumor site, and rad-score (VOI<sup>inner layer</sup>), achieved an Area Under the Curve (AUC) of 0.877 in the training cohort and 0.876 in the validation cohort. The nomogram demonstrated good calibration, clinical utility, and stability. DECT-based intratumoral and peritumoral radiomics advance PTC initial recurrence risk prediction, providing clinical radiology with precise predictive tools. Further work is needed to refine the model and enhance its clinical application. Radiomics analysis of DECT, particularly in peritumoral regions, offers valuable predictive information for assessing the risk of initial recurrence in PTC.
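
A nomogram of this kind is, at its core, a logistic combination of the selected predictors (here: tumor size, tumor site, and the inner-layer rad-score). A minimal sketch, with entirely made-up coefficients since the paper reports only the nomogram's AUC, not its fitted weights:

```python
import math

def nomogram_risk(rad_score, tumor_size_cm, site_high_risk,
                  b0=-3.2, b_rad=2.1, b_size=0.8, b_site=0.9):
    """Logistic combination of predictors, nomogram-style.

    All coefficient values are illustrative, not the paper's fitted weights.
    """
    logit = (b0 + b_rad * rad_score
             + b_size * tumor_size_cm
             + b_site * site_high_risk)
    return 1.0 / (1.0 + math.exp(-logit))

# A higher inner-layer rad-score should raise the predicted recurrence risk.
low = nomogram_risk(rad_score=0.1, tumor_size_cm=1.0, site_high_risk=0)
high = nomogram_risk(rad_score=0.9, tumor_size_cm=1.0, site_high_risk=0)
```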

Hierarchical Multi-Label Classification Model for CBCT-Based Extraction Socket Healing Assessment and Stratified Diagnostic Decision-Making to Assist Implant Treatment Planning.

Li Q, Han R, Huang J, Liu CB, Zhao S, Ge L, Zheng H, Huang Z

pubmed · Aug 21 2025
Dental implant treatment planning requires assessing extraction socket healing, yet current methods face challenges distinguishing soft tissue from woven bone on cone beam computed tomography (CBCT) imaging and lack standardized classification systems. In this study, we propose a hierarchical multilabel classification model for CBCT-based extraction socket healing assessment. We established a novel classification system dividing extraction socket healing status into two levels: Level 1 distinguishes physiological healing (Type I) from pathological healing (Type II); Level 2 further subdivides these into five subtypes. The HierTransFuse-Net architecture integrates ResNet50 with a two-dimensional transformer module for hierarchical multilabel classification. Additionally, a stratified diagnostic principle coupled with random forest algorithms supported personalized implant treatment planning. The HierTransFuse-Net model performed excellently in classifying extraction socket healing, achieving an mAccuracy of 0.9705, with mPrecision, mRecall, and mF1 scores of 0.9156, 0.9376, and 0.9253, respectively. The HierTransFuse-Net model demonstrated superior diagnostic reliability (κω = 0.9234), significantly exceeding that of clinical practitioners (mean κω = 0.7148, range: 0.6449-0.7843). The random forest model based on stratified diagnostic decision indicators achieved an accuracy of 81.48% and an mF1 score of 82.55% in predicting 12 clinical treatment pathways. This study successfully developed HierTransFuse-Net, which demonstrated excellent performance in distinguishing different extraction socket healing statuses and subtypes. Random forest algorithms based on stratified diagnostic indicators have shown potential for clinical pathway prediction. The hierarchical multilabel classification system simulates clinical diagnostic reasoning, enabling precise disease stratification and providing a scientific basis for personalized treatment decisions.
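
The reported κω is a weighted Cohen's kappa comparing two raters on an ordinal scale. A dependency-free sketch of the quadratic-weighted variant commonly used for such comparisons (assuming that is the weighting the authors applied):

```python
def quadratic_weighted_kappa(a, b, n_classes):
    """Quadratic-weighted Cohen's kappa for two raters' labels in 0..n_classes-1."""
    n = len(a)
    # observed confusion matrix
    obs = [[0.0] * n_classes for _ in range(n_classes)]
    for x, y in zip(a, b):
        obs[x][y] += 1.0
    # marginal label counts for the chance-expected matrix
    hist_a = [a.count(k) for k in range(n_classes)]
    hist_b = [b.count(k) for k in range(n_classes)]
    num = den = 0.0
    for i in range(n_classes):
        for j in range(n_classes):
            w = (i - j) ** 2 / (n_classes - 1) ** 2   # quadratic disagreement weight
            num += w * obs[i][j]
            den += w * hist_a[i] * hist_b[j] / n
    return 1.0 - num / den

agree = quadratic_weighted_kappa([0, 1, 2, 1, 0], [0, 1, 2, 1, 0], 3)    # perfect: 1.0
partial = quadratic_weighted_kappa([0, 1, 2], [0, 2, 1], 3)
```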

Dynamic-Attentive Pooling Networks: A Hybrid Lightweight Deep Model for Lung Cancer Classification.

Ayivi W, Zhang X, Ativi WX, Sam F, Kouassi FAP

pubmed · Aug 21 2025
Lung cancer is one of the leading causes of cancer-related mortality worldwide. The diagnosis of this disease remains a challenge due to the subtle and ambiguous nature of early-stage symptoms and imaging findings. Deep learning approaches, specifically Convolutional Neural Networks (CNNs), have significantly advanced medical image analysis. However, conventional architectures such as ResNet50 that rely on first-order pooling often fall short, as first-order pooling discards higher-order feature statistics. This study aims to overcome the limitations of CNNs in lung cancer classification by proposing a novel and dynamic model named LungSE-SOP. The model is based on Second-Order Pooling (SOP) and Squeeze-and-Excitation Networks (SENet) within a ResNet50 backbone to improve feature representation and class separation. A novel Dynamic Feature Enhancement (DFE) module is also introduced, which dynamically adjusts the flow of information through SOP and SENet blocks based on learned importance scores. The model was trained using the publicly available IQ-OTH/NCCD lung cancer dataset. The performance of the model was assessed using various metrics, including accuracy, precision, recall, F1-score, ROC curves, and confidence intervals. For multiclass tumor classification, our model achieved 98.6% accuracy for benign, 98.7% for malignant, and 99.9% for normal cases. Corresponding F1-scores were 99.2%, 99.8%, and 99.9%, respectively, reflecting the model's high precision and recall across all tumor types and its strong potential for clinical deployment.
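
Second-order pooling replaces the usual global average with the channel covariance of the feature map, so pairwise feature interactions survive the pooling step. A small NumPy sketch of the idea (independent of the paper's exact implementation):

```python
import numpy as np

def second_order_pool(feat):
    """Second-order (covariance) pooling of a CNN feature map.

    feat: array of shape (C, H, W). Returns the C x C channel covariance,
    which keeps pairwise channel statistics that average/max pooling discard.
    """
    c, h, w = feat.shape
    x = feat.reshape(c, h * w)               # each column is one spatial location
    x = x - x.mean(axis=1, keepdims=True)    # center per channel
    return (x @ x.T) / (h * w - 1)           # sample covariance, shape (C, C)

rng = np.random.default_rng(0)
cov = second_order_pool(rng.normal(size=(8, 4, 4)))
```

The output is symmetric positive semi-definite, which is why SOP variants often follow it with a matrix square-root or log-Euclidean normalization.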

Zero-shot Volumetric CT Super-Resolution using 3D Gaussian Splatting with Upsampled 2D X-ray Projection Priors

Jeonghyun Noh, Hyun-Jic Oh, Byungju Chae, Won-Ki Jeong

arxiv preprint · Aug 21 2025
Computed tomography (CT) is widely used in clinical diagnosis, but acquiring high-resolution (HR) CT is limited by radiation exposure risks. Deep learning-based super-resolution (SR) methods have been studied to reconstruct HR from low-resolution (LR) inputs. While supervised SR approaches have shown promising results, they require large-scale paired LR-HR volume datasets that are often unavailable. In contrast, zero-shot methods alleviate the need for paired data by using only a single LR input, but typically struggle to recover fine anatomical details due to limited internal information. To overcome these limitations, we propose a novel zero-shot 3D CT SR framework that leverages upsampled 2D X-ray projection priors generated by a diffusion model. Exploiting the abundance of HR 2D X-ray data, we train a diffusion model on large-scale 2D X-ray projections and introduce a per-projection adaptive sampling strategy. It adapts the generative process to each projection, thus providing HR projections as strong external priors for 3D CT reconstruction. These projections serve as inputs to 3D Gaussian splatting for reconstructing a 3D CT volume. Furthermore, we propose negative alpha blending (NAB-GS), which allows negative values in the Gaussian density representation. NAB-GS enables residual learning between LR and diffusion-based projections, thereby enhancing high-frequency structure reconstruction. Experiments on two datasets show that our method achieves superior quantitative and qualitative results for 3D CT SR.
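
Negative alpha blending relaxes the usual constraint that compositing weights lie in [0, 1]. A toy ray-compositing sketch of how a negative-alpha sample can subtract intensity, as we read the NAB-GS idea (this is not the authors' code):

```python
def composite(samples):
    """Front-to-back alpha compositing along one ray.

    samples: (color, alpha) pairs ordered front to back. Standard splatting
    requires alpha in [0, 1]; admitting negative alpha lets a Gaussian
    *subtract* intensity, so the model can fit a residual between the LR
    volume and the diffusion-upsampled projections.
    """
    color, transmittance = 0.0, 1.0
    for c, a in samples:
        color += transmittance * a * c
        transmittance *= (1.0 - a)
    return color

base = composite([(1.0, 0.5), (1.0, 0.5)])                 # → 0.75
corrected = composite([(1.0, 0.5), (1.0, -0.2), (1.0, 0.5)])  # negative sample darkens
```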

Validation of an artificial intelligence-based automated PRAGMA and mucus plugging algorithm in pediatric cystic fibrosis.

Raut P, Chen Y, Taleb A, Bonte M, Andrinopoulou ER, Ciet P, Charbonnier JP, Wainwright CE, Tiddens H, Caudri D

pubmed · Aug 20 2025
PRAGMA-CF is a clinically validated visual chest CT scoring method, quantifying relevant components of structural airway damage in CF. We aimed to validate a newly developed AI-based automated PRAGMA-AI and Mucus Plugging algorithm using visual PRAGMA-CF as the reference. The study included 363 retrospective chest CTs of 178 CF patients (100 from New Zealand and Australia, 78 from the Netherlands) with at least one inspiratory CT matching the image selection criteria. Eligible CT scans were analyzed using visual PRAGMA-CF and the automated PRAGMA-AI and Mucus Plugging algorithm. Outcomes were compared using descriptive statistics, correlation, intra- and interclass correlation, and Bland-Altman plots. Sensitivity analyses evaluated the impact of disease severity, study cohort, number of slices, and convolution kernel (soft vs. hard). The algorithm successfully analyzed 353 (97%) CT scans. A strong correlation between the methods was found for %bronchiectasis (%BE) and %disease (%DIS), but a weak one for %airway wall thickening (%AWT). The automated mucus plugging outcomes showed a strong correlation with visual %mucus plugging (%MP). ICCs between visual and automated sub-scores showed average agreement for %BE and %DIS, but weak agreement for %AWT. Sensitivity analyses revealed that the convolution kernel did not affect the correlation between visual and automated outcomes, but harder kernels yielded lower disease scores, especially for %BE and %AWT. Our results show that AI-derived outcomes are not identical in magnitude to visual PRAGMA-CF scores, but are strongly correlated on measures of bronchiectasis, bronchial disease, and mucus plugging. They could therefore be a promising alternative to time-consuming visual scoring, especially in larger studies.
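
The Bland-Altman comparison used here reduces to the mean difference between methods (bias) and its 95% limits of agreement. A minimal sketch on hypothetical %BE scores (the values below are illustrative, not study data):

```python
from statistics import mean, stdev

def bland_altman(visual, automated):
    """Bias and 95% limits of agreement between two scoring methods."""
    diffs = [v - a for v, a in zip(visual, automated)]
    bias = mean(diffs)
    sd = stdev(diffs)                       # sample SD of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

visual_be = [5.0, 7.5, 3.0, 10.0, 6.5]     # hypothetical visual %BE scores
auto_be = [5.5, 7.0, 3.5, 9.0, 6.0]        # hypothetical automated %BE scores
bias, (lo, hi) = bland_altman(visual_be, auto_be)
```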

Sarcopenia Assessment Using Fully Automated Deep Learning Predicts Cardiac Allograft Survival in Heart Transplant Recipients.

Lang FM, Liu J, Clerkin KJ, Driggin EA, Einstein AJ, Sayer GT, Takeda K, Uriel N, Summers RM, Topkara VK

pubmed · Aug 20 2025
Sarcopenia is associated with adverse outcomes in patients with end-stage heart failure. Muscle mass can be quantified via manual segmentation of computed tomography images, but this approach is time-consuming and subject to interobserver variability. We sought to determine whether fully automated assessment of radiographic sarcopenia by deep learning would predict heart transplantation outcomes. This retrospective study included 164 adult patients who underwent heart transplantation between January 2013 and December 2022. A deep learning-based tool was utilized to automatically calculate cross-sectional skeletal muscle area at the T11, T12, and L1 levels on chest computed tomography. Radiographic sarcopenia was defined as skeletal muscle index (skeletal muscle area divided by height squared) in the lowest sex-specific quartile. The study population had a mean age of 53±14 years and was predominantly male (75%) with a nonischemic cause (73%). Mean skeletal muscle index was 28.3±7.6 cm<sup>2</sup>/m<sup>2</sup> for females versus 33.1±8.1 cm<sup>2</sup>/m<sup>2</sup> for males (<i>P</i><0.001). Cardiac allograft survival was significantly lower in heart transplant recipients with versus without radiographic sarcopenia at T11 (90% versus 98% at 1 year, 83% versus 97% at 3 years, log-rank <i>P</i>=0.02). After multivariable adjustment, radiographic sarcopenia at T11 was associated with an increased risk of cardiac allograft loss or death (hazard ratio, 3.86 [95% CI, 1.35-11.0]; <i>P</i>=0.01). Patients with radiographic sarcopenia also had a significantly increased hospital length of stay (28 [interquartile range, 19-33] versus 20 [interquartile range, 16-31] days; <i>P</i>=0.046). Fully automated quantification of radiographic sarcopenia using pretransplant chest computed tomography successfully predicts cardiac allograft survival. By avoiding interobserver variability and accelerating computation, this approach has the potential to improve candidate selection and outcomes in heart transplantation.
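
The sarcopenia definition used above (skeletal muscle index in the lowest sex-specific quartile) can be sketched directly; the records below are hypothetical, not study data:

```python
from statistics import quantiles

def flag_sarcopenia(records):
    """Flag radiographic sarcopenia: SMI in the lowest sex-specific quartile.

    Each record is (sex, skeletal_muscle_area_cm2, height_m);
    SMI = muscle area / height^2, in cm^2/m^2.
    """
    smi = [(sex, area / height ** 2) for sex, area, height in records]
    cutoff = {}
    for sex in {"M", "F"}:
        vals = [s for sx, s in smi if sx == sex]
        cutoff[sex] = quantiles(vals, n=4)[0]   # sex-specific 25th percentile
    return [s <= cutoff[sex] for sex, s in smi]

records = [
    ("M", 100, 1.80), ("M", 110, 1.80), ("M", 95, 1.80), ("M", 75, 1.80),
    ("F", 80, 1.65), ("F", 70, 1.65), ("F", 60, 1.65), ("F", 75, 1.65),
]
flags = flag_sarcopenia(records)
```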

Characterizing the Impact of Training Data on Generalizability: Application in Deep Learning to Estimate Lung Nodule Malignancy Risk.

Obreja B, Bosma J, Venkadesh KV, Saghir Z, Prokop M, Jacobs C

pubmed · Aug 20 2025
<i>"Just Accepted" papers have undergone full peer review and have been accepted for publication in <i>Radiology: Artificial Intelligence</i>. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content</i>. Purpose To investigate the relationship between training data volume and performance of a deep learning AI algorithm developed to assess the malignancy risk of pulmonary nodules detected on low-dose CT scans in lung cancer screening. Materials and Methods This retrospective study used a dataset of 16077 annotated nodules (1249 malignant, 14828 benign) from the National Lung Screening Trial (NLST) to systematically train an AI algorithm for pulmonary nodule malignancy risk prediction across various stratified subsets ranging from 1.25% to the full dataset. External testing was conducted using data from the Danish Lung Cancer Screening Trial (DLCST) to determine the amount of training data at which the performance of the AI was statistically non-inferior to the AI trained on the full NLST cohort. A size-matched cancer-enriched subset of DLCST, where each malignant nodule had been paired in diameter with the closest two benign nodules, was used to investigate the amount of training data at which the performance of the AI algorithm was statistically non-inferior to the average performance of 11 clinicians. Results The external testing set included 599 participants (mean age, 57.65 years [SD 4.84] for females and 59.03 years [SD 4.94] for males) with 883 nodules (65 malignant, 818 benign). The AI achieved a mean AUC of 0.92 [95% CI: 0.88, 0.96] on the DLCST cohort when trained on the full NLST dataset. Training with 80% of NLST data resulted in non-inferior performance (mean AUC 0.92 [95% CI: 0.89, 0.96], <i>P</i> = .005). On the size-matched DLCST subset (59 malignant, 118 benign), the AI reached non-inferior clinician-level performance (mean AUC 0.82 [95% CI: 0.77, 0.86]) with 20% of the training data (<i>P</i> = .02). Conclusion The deep learning AI algorithm demonstrated excellent performance in assessing pulmonary nodule malignancy risk, achieving clinician-level performance with a fraction of the training data and reaching peak performance before utilizing the full dataset. ©RSNA, 2025.
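
The AUC compared throughout is equivalent to the Mann-Whitney probability that a randomly chosen malignant nodule scores higher than a randomly chosen benign one. A dependency-free sketch:

```python
def auc(labels, scores):
    """AUC via the Mann-Whitney U statistic: the probability that a random
    positive (malignant) case receives a higher risk score than a random
    negative (benign) one; ties count one half."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

perfect = auc([1, 1, 0, 0], [0.9, 0.8, 0.3, 0.1])   # fully separated → 1.0
mixed = auc([1, 1, 0, 0], [0.9, 0.4, 0.5, 0.1])     # one misranked pair → 0.75
```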

Unexpected early pulmonary thrombi in war injured patients.

Sasson I, Sorin V, Ziv-Baran T, Marom EM, Czerniawski E, Adam SZ, Aviram G

pubmed · Aug 20 2025
Pulmonary embolism is commonly associated with deep vein thrombosis and the components of Virchow's triad: hypercoagulability, stasis, and endothelial injury. High-risk patients are traditionally those with prolonged immobility and hypercoagulability. Recent findings of pulmonary thrombosis (PT) in healthy combat soldiers, found on CT performed for initial trauma assessment, challenge this assumption. The aim of this study was to investigate the prevalence and characteristics of PT detected in acute traumatic war injuries, and to evaluate the effectiveness of an artificial intelligence (AI) algorithm in these settings. This retrospective study analyzed immediate post-trauma CT scans of war-injured patients aged 18-45, from two tertiary hospitals between October 7, 2023, and January 7, 2024. Thrombi were retrospectively detected using AI software and confirmed by two senior radiologists. Findings were compared to the original reports. Clinical and injury-related data were analyzed. Of 190 patients (median age 24 years, IQR 21.0-30.0; 183 males), AI identified 10 confirmed PT patients (5.6%), six (60%) of whom were not originally diagnosed. The only statistically significant difference between PT and non-PT patients was increased complexity and severity of injuries (higher Injury Severity Score, median [IQR] 21.0 [20.0-21.0] vs 9.0 [4.0-14.5], p = 0.01, respectively). Despite the presence of thrombi, significant right ventricular dilatation was absent in all patients. This report of early PT in war-injured patients provides a unique opportunity to characterize these findings. PT occurs more frequently than anticipated, without clinical suspicion, highlighting the need for improved radiologists' awareness and the crucial role of AI systems as diagnostic support tools. Question What is the prevalence, and what are the radiological characteristics, of arterial clotting within the pulmonary arteries in young acute trauma patients? Findings A surprisingly high occurrence of PT with a high rate of missed diagnoses by radiologists; no case presented right ventricular dysfunction. Clinical relevance PT is a distinct clinical entity separate from traditional venous thromboembolism, which raises the need for further investigation of the appropriate treatment paradigm.

[Preoperative discrimination of colorectal mucinous adenocarcinoma using enhanced CT-based radiomics and deep learning fusion model].

Wang BZ, Zhang X, Wang YL, Wang XY, Wang QG, Luo Z, Xu SL, Huang C

pubmed · Aug 20 2025
<b>Objective:</b> To develop a preoperative differentiation model for colorectal mucinous adenocarcinoma and non-mucinous adenocarcinoma using a combination of contrast-enhanced CT radiomics and deep learning methods. <b>Methods:</b> This is a retrospective case series study. Clinical data of colorectal cancer patients confirmed by postoperative pathological examination were retrospectively collected from January 2016 to December 2023 at Shanghai General Hospital Affiliated to Shanghai Jiao Tong University School of Medicine (Center 1, <i>n</i>=220) and the First Affiliated Hospital of Bengbu Medical University (Center 2, <i>n</i>=51). Among them, there were 108 patients diagnosed with mucinous adenocarcinoma, including 55 males and 53 females, with an age of (68.4±12.2) years (range: 38 to 96 years); and 163 patients diagnosed with non-mucinous adenocarcinoma, including 96 males and 67 females, with an age of (67.9±11.0) years (range: 43 to 94 years). The cases from Center 1 were divided into a training set (<i>n</i>=156) and an internal validation set (<i>n</i>=64) using stratified random sampling in a 7:3 ratio, and the cases from Center 2 were used as an independent external validation set (<i>n</i>=51). Three-dimensional tumor volumes of interest were manually segmented on venous-phase contrast-enhanced CT images. Radiomics features were extracted using PyRadiomics, and deep learning features were extracted using the ResNet-18 network. The two sets of features were then combined to form a joint feature set. The consistency of manual segmentation was assessed using the intraclass correlation coefficient. Feature dimensionality reduction was performed using the Mann-Whitney <i>U</i> test and the least absolute shrinkage and selection operator regression. Six machine learning algorithms were used to construct models based on radiomics features, deep learning features, and combined features: support vector machine, logistic regression, random forest, extreme gradient boosting, k-nearest neighbors, and decision tree. The discriminative performance of each model was evaluated using receiver operating characteristic curves, the area under the curve (AUC), the DeLong test, and decision curve analysis. <b>Results:</b> After feature selection, 22 features with the most discriminative value were finally retained, among which 12 were traditional radiomics features and 10 were deep learning features. In the internal validation set, the random forest algorithm based on the combined features model achieved the best performance (AUC=0.938, 95%<i>CI:</i> 0.875 to 0.984), which was superior to the single-modality radiomics feature model (AUC=0.817, 95%<i>CI:</i> 0.702 to 0.913, <i>P</i>=0.048) and the deep learning feature model (AUC=0.832, 95%<i>CI:</i> 0.727 to 0.926, <i>P</i>=0.087); in the independent external validation set, the random forest algorithm with the combined features model maintained the highest discriminative performance (AUC=0.891, 95%<i>CI:</i> 0.791 to 0.969), which was superior to the single-modality radiomics feature model (AUC=0.770, 95%<i>CI:</i> 0.636 to 0.890, <i>P</i>=0.045) and the deep learning feature model (AUC=0.799, 95%<i>CI:</i> 0.652 to 0.911, <i>P</i>=0.169). <b>Conclusion:</b> The combined model based on radiomics and deep learning features from venous-phase enhanced CT demonstrates good performance in the preoperative differentiation of colorectal mucinous from non-mucinous adenocarcinoma.
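
The Mann-Whitney U screening step used before LASSO can be sketched without SciPy; the feature values below are hypothetical, standing in for one radiomics feature measured in the two groups:

```python
def mann_whitney_u(a, b):
    """U statistic of group a vs b (ties receive averaged ranks).

    Features whose U is far from len(a)*len(b)/2 separate the two classes
    and survive this univariate screening step.
    """
    pooled = sorted(a + b)
    ranks, i = {}, 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        ranks[pooled[i]] = (i + 1 + j) / 2   # average of 1-based ranks i+1..j
        i = j
    r_a = sum(ranks[v] for v in a)
    return r_a - len(a) * (len(a) + 1) / 2

separable = mann_whitney_u([5.1, 6.2, 7.3], [1.0, 1.5, 2.0])   # U = n_a*n_b = 9
overlapping = mann_whitney_u([1.0, 2.0, 3.0], [1.5, 2.5, 3.5])
```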

S<sup>3</sup>TU-Net: Structured convolution and superpixel transformer for lung nodule segmentation.

Wu Y, Liu X, Shi Y, Chen X, Wang Z, Xu Y, Wang S

pubmed · Aug 20 2025
Accurate segmentation of lung adenocarcinoma nodules in computed tomography (CT) images is critical for clinical staging and diagnosis. However, irregular nodule shapes and ambiguous boundaries pose significant challenges for existing methods. This study introduces S<sup>3</sup>TU-Net, a hybrid CNN-Transformer architecture designed to enhance feature extraction, fusion, and global context modeling. The model integrates three key innovations: (1) structured convolution blocks (DWF-Conv/D<sup>2</sup>BR-Conv) for multi-scale feature extraction and overfitting mitigation; (2) S<sup>2</sup>-MLP Link, a spatial-shift-enhanced skip-connection module to improve multi-level feature fusion; and (3) residual-based superpixel vision transformer (RM-SViT) to capture long-range dependencies efficiently. Evaluated on the LIDC-IDRI dataset, S<sup>3</sup>TU-Net achieves a Dice score of 89.04%, precision of 90.73%, and IoU of 90.70%, outperforming recent methods by 4.52% in Dice. Validation on the EPDB dataset further confirms its generalizability (Dice, 86.40%). This work contributes to bridging the gap between local feature sensitivity and global context awareness by integrating structured convolutions and superpixel-based transformers, offering a robust tool for clinical decision support.
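
The reported Dice and IoU are two overlap measures related by Dice = 2·IoU/(1+IoU); a minimal sketch for binary segmentation masks:

```python
def dice_iou(pred, truth):
    """Dice coefficient and IoU for two binary masks (flattened 0/1 sequences)."""
    inter = sum(p and t for p, t in zip(pred, truth))
    p_sum, t_sum = sum(pred), sum(truth)
    union = p_sum + t_sum - inter
    dice = 2 * inter / (p_sum + t_sum) if (p_sum + t_sum) else 1.0
    iou = inter / union if union else 1.0
    return dice, iou

d, i = dice_iou([1, 1, 0, 0], [1, 0, 1, 0])   # overlap of 1 voxel out of 3 in union
```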
