Page 36 of 1411410 results

ConvTNet fusion: A robust transformer-CNN framework for multi-class classification, multimodal feature fusion, and tissue heterogeneity handling.

Mahmood T, Saba T, Rehman A, Alamri FS

pubmed logopapers · Aug 22 2025
Medical imaging is crucial for clinical practice, providing insight into organ structure and function. Advancements in imaging technologies enable automated image segmentation, which is essential for accurate diagnosis and treatment planning. However, challenges such as class imbalance, tissue boundary delineation, and the complexity of tissue interactions persist. This study introduces ConvTNet, a hybrid model that combines Transformer and CNN features to improve renal CT image segmentation, using attention mechanisms and feature fusion techniques to enhance precision. ConvTNet uses the KC module to focus on critical image regions, enabling precise delineation of noisy and ambiguous tissue boundaries. The Mix-KFCA module enhances feature fusion by combining multi-scale features and distinguishing healthy kidney tissue from surrounding structures. The study proposes preprocessing strategies, including noise reduction, data augmentation, and image normalization, that optimize image quality and ensure reliable inputs for accurate segmentation. ConvTNet further employs transfer learning, fine-tuning five pre-trained models to bolster performance and leverage knowledge from diverse feature extraction techniques. Empirical evaluations demonstrate that ConvTNet performs exceptionally well in multi-label classification and lesion segmentation, with an AUC of 0.9970, sensitivity of 0.9942, DSC of 0.9533, and accuracy of 0.9921, proving its efficacy for precise renal cancer diagnosis.
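The abstract names image normalization among its preprocessing steps without giving details; as a minimal sketch (the exact scheme used by ConvTNet is an assumption), per-slice z-score intensity normalization of a CT image could look like:

```python
import numpy as np

def zscore_normalize(image: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Normalize a CT slice to zero mean and unit variance.

    `eps` guards against division by zero on constant (e.g. padded) slices.
    """
    mean, std = image.mean(), image.std()
    return (image - mean) / (std + eps)
```

A common variation is to clip to a Hounsfield-unit window of interest before normalizing; the abstract does not specify which variant was used.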

Towards Diagnostic Quality Flat-Panel Detector CT Imaging Using Diffusion Models

Hélène Corbaz, Anh Nguyen, Victor Schulze-Zachau, Paul Friedrich, Alicia Durrer, Florentin Bieder, Philippe C. Cattin, Marios N Psychogios

arxiv logopreprint · Aug 22 2025
Patients undergoing a mechanical thrombectomy procedure usually have a multi-detector CT (MDCT) scan before and after the intervention. The image quality of the flat-panel detector CT (FDCT) present in the intervention room is generally much lower than that of an MDCT due to significant artifacts. However, using only FDCT images could improve patient management, as the patient would not need to be moved to the MDCT room. Several studies have evaluated the potential use of FDCT imaging alone and the time that could be saved by acquiring the images before and/or after the intervention only with the FDCT. This study proposes using a denoising diffusion probabilistic model (DDPM) to improve the image quality of FDCT scans, making them comparable to MDCT scans. Clinicians evaluated FDCT, MDCT, and our model's predictions for diagnostic purposes using a questionnaire. The DDPM eliminated most artifacts and improved anatomical visibility without reducing bleeding detection, provided that the input FDCT image quality was not too low. Our code can be found on GitHub.
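The DDPM itself is not detailed in the abstract; one standard building block of any DDPM is the closed-form forward (noising) process q(x_t | x_0), sketched here with an illustrative linear beta schedule (the paper's actual schedule, timestep count, and denoising network are not specified):

```python
import numpy as np

def ddpm_forward(x0, t, alpha_bar, rng):
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(abar_t) * x0, (1 - abar_t) * I).

    Returns the noised image and the noise sample, which the denoising
    network is trained to predict.
    """
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps
    return xt, eps

# Linear beta schedule (illustrative values, not taken from the paper).
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alpha_bar = np.cumprod(1.0 - betas)
```

For image-to-image translation tasks such as FDCT-to-MDCT, the reverse (denoising) process is typically conditioned on the low-quality input; that conditioning mechanism is an implementation detail not stated in the abstract.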

Application of contrast-enhanced CT-driven multimodal machine learning models for pulmonary metastasis prediction in head and neck adenoid cystic carcinoma.

Gong W, Cui Q, Fu S, Wu Y

pubmed logopapers · Aug 22 2025
This study explores radiomics and deep learning for predicting pulmonary metastasis in head and neck adenoid cystic carcinoma (ACC) and assesses the performance of machine learning (ML) algorithms. The study retrospectively analyzed contrast-enhanced CT imaging data and clinical records from 130 patients with pathologically confirmed ACC in the head and neck region. The dataset was randomly split into training and test sets at a 7:3 ratio. Radiomic features and deep learning-derived features were extracted and integrated through multi-feature fusion. Z-score normalization was applied to the training and test sets. Hypothesis testing was used to select significant features, followed by LASSO regression (5-fold cross-validation), which identified 7 predictive features. Nine ML algorithms were employed to build predictive models for ACC pulmonary metastasis: ada, KNN, rf, NB, GLM, LDA, rpart, SVM-RBF, and GBM. Models were trained on the training set and evaluated on the test set using metrics such as recall, sensitivity, PPV, F1-score, precision, prevalence, NPV, specificity, accuracy, detection rate, detection prevalence, and balanced accuracy. ML models based on multi-feature fusion of enhanced CT, using KNN, SVM, rpart, GBM, NB, GLM, and LDA, achieved AUC values in the test set of 0.687, 0.863, 0.737, 0.793, 0.763, 0.867, and 0.844, respectively; rf and ada showed significant overfitting. Among these, GBM and GLM showed the highest stability in predicting pulmonary metastasis of head and neck ACC. Radiomics and deep learning methods based on enhanced CT imaging can provide effective auxiliary tools for predicting pulmonary metastasis in head and neck ACC patients, showing promising potential for clinical application.
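The z-score step described above is straightforward but easy to get wrong: the mean and standard deviation must be estimated on the training set only and then applied unchanged to the test set, or the evaluation leaks test-set statistics. A minimal sketch (array names are illustrative):

```python
import numpy as np

def zscore_fit_transform(train, test, eps=1e-12):
    """Standardize feature columns using training-set statistics only.

    Fitting mu/sigma on the combined data would leak test information
    into the model evaluation.
    """
    mu = train.mean(axis=0)
    sigma = train.std(axis=0) + eps  # eps avoids division by zero
    return (train - mu) / sigma, (test - mu) / sigma
```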

Vision-Guided Surgical Navigation Using Computer Vision for Dynamic Intraoperative Imaging Updates.

Ruthberg J, Gunderson N, Chen P, Harris G, Case H, Bly R, Seibel EJ, Abuzeid WM

pubmed logopapers · Aug 22 2025
Residual disease after endoscopic sinus surgery (ESS) contributes to poor outcomes and revision surgery. Image-guided surgery systems cannot dynamically reflect intraoperative changes. We propose a sensorless, video-based method for intraoperative CT updating using neural radiance fields (NeRF), a deep learning algorithm used to create 3D surgical field reconstructions. Bilateral ESS was performed on three 3D-printed models (n = 6 sides). Postoperative endoscopic videos were processed through a custom NeRF pipeline to generate 3D reconstructions, which were co-registered to preoperative CT scans. Digitally updated CT models were created through algorithmic subtraction of resected regions, then volumetrically segmented and compared to ground-truth postoperative CT. Accuracy was assessed using the Hausdorff distance (surface alignment), the Dice similarity coefficient (DSC) (volumetric overlap), and Bland‒Altman analysis (BAA) (statistical agreement). Comparison of the updated CT and the ground-truth postoperative CT indicated an average Hausdorff distance of 0.27 ± 0.076 mm and a 95th-percentile Hausdorff distance of 0.82 ± 0.165 mm, indicating sub-millimeter surface alignment. The DSC was 0.93 ± 0.012, with values >0.9 indicative of excellent spatial overlap. BAA indicated modest underestimation of volume on the updated CT versus the ground-truth CT, with a mean volume difference of 0.40 cm<sup>3</sup> and 95% limits of agreement of 0.04‒0.76 cm<sup>3</sup>, indicating that all samples fell within acceptable bounds of variability. Computer vision can enable dynamic intraoperative imaging by generating highly accurate CT updates from monocular endoscopic video without external tracking. By directly visualizing resection progress, this software-driven tool has the potential to enhance surgical completeness in ESS for next-generation navigation platforms.
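The two geometric metrics reported above have simple definitions; a minimal sketch of the Dice similarity coefficient for binary masks and a (percentile) symmetric Hausdorff distance for point sets follows. This is a brute-force illustration, not the paper's implementation:

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks: 2|A&B| / (|A|+|B|)."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hausdorff(pts_a: np.ndarray, pts_b: np.ndarray, percentile: float = 100) -> float:
    """Symmetric (percentile) Hausdorff distance between point sets of shape (N, D).

    percentile=100 gives the classic Hausdorff distance; percentile=95 gives
    the robust 95th-percentile variant reported in the study.
    """
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    fwd = np.percentile(d.min(axis=1), percentile)  # each a-point to nearest b
    bwd = np.percentile(d.min(axis=0), percentile)  # each b-point to nearest a
    return max(fwd, bwd)
```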

Predicting Radiation Pneumonitis Integrating Clinical Information, Medical Text, and 2.5D Deep Learning Features in Lung Cancer.

Wang W, Ren M, Ren J, Dang J, Zhao X, Li C, Wang Y, Li G

pubmed logopapers · Aug 21 2025
To construct a prediction model for radiation pneumonitis (RP) in lung cancer patients based on clinical information, medical text, and 2.5D deep learning (DL) features. A total of 356 patients with lung cancer from the Heping Campus of the First Hospital of China Medical University were randomly divided at a 7:3 ratio into training and validation cohorts, and 238 patients from 3 other centers were included in the testing cohort for assessing model generalizability. We used the term frequency-inverse document frequency (TF-IDF) method to generate numerical vectors from computed tomography (CT) report texts. The CT and radiation therapy dose slices showing the largest lung region of interest in the coronal and transverse planes were taken as the central slice; the 3 slices above and below the central slice were also selected to create comprehensive 2.5D data. We extracted DL features via DenseNet121, DenseNet201, and Twins-SVT and integrated them via multi-instance learning (MIL) fusion. The performance of the 2D and 3D DL models was also compared with that of the 2.5D MIL model. Finally, RP prediction models based on clinical information, medical text, and 2.5D DL features were constructed, validated, and tested. The 2.5D MIL model based on CT was significantly better than the 2D and 3D DL models in the training, validation, and test cohorts. For the radiation therapy dose data, the 2.5D MIL model was optimal in the test1 cohort, the 2D model was optimal in the training, validation, and test3 cohorts, and the 3D model was optimal in the test2 cohort. A combined model achieved area under the curve (AUC) values of 0.964, 0.877, 0.868, 0.884, and 0.849 in the training, validation, test1, test2, and test3 cohorts, respectively.
We propose an RP prediction model that integrates clinical information, medical text, and 2.5D MIL features, offering a new approach to predicting the side effects of radiation therapy.
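The TF-IDF vectorization of CT report text mentioned above can be sketched with the classic tf × log(N/df) weighting; the paper's exact variant, smoothing, and tokenization are not specified, so this is illustrative:

```python
import math
from collections import Counter

def tfidf(docs):
    """Map each tokenized document to a {term: tf-idf} dictionary.

    tf is the term's relative frequency within the document; idf is
    log(N / df), so terms appearing in every document score zero.
    """
    n = len(docs)
    df = Counter(tok for doc in docs for tok in set(doc))
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vectors.append({t: (c / len(doc)) * math.log(n / df[t])
                        for t, c in tf.items()})
    return vectors
```

In practice the resulting sparse dictionaries would be mapped onto a fixed vocabulary to form the numerical vectors fed to the downstream model.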

Dynamic-Attentive Pooling Networks: A Hybrid Lightweight Deep Model for Lung Cancer Classification.

Ayivi W, Zhang X, Ativi WX, Sam F, Kouassi FAP

pubmed logopapers · Aug 21 2025
Lung cancer is one of the leading causes of cancer-related mortality worldwide. Diagnosis remains a challenge due to the subtle and ambiguous nature of early-stage symptoms and imaging findings. Deep learning approaches, specifically convolutional neural networks (CNNs), have significantly advanced medical image analysis. However, conventional architectures such as ResNet50 that rely on first-order pooling often fall short. This study aims to overcome these limitations in lung cancer classification by proposing a novel dynamic model named LungSE-SOP. The model is based on Second-Order Pooling (SOP) and Squeeze-and-Excitation Networks (SENet) within a ResNet50 backbone to improve feature representation and class separation. A novel Dynamic Feature Enhancement (DFE) module is also introduced, which dynamically adjusts the flow of information through the SOP and SENet blocks based on learned importance scores. The model was trained on the publicly available IQ-OTH/NCCD lung cancer dataset. Performance was assessed using various metrics, including accuracy, precision, recall, F1-score, ROC curves, and confidence intervals. For multiclass tumor classification, the model achieved 98.6% accuracy for benign, 98.7% for malignant, and 99.9% for normal cases. The corresponding F1-scores were 99.2%, 99.8%, and 99.9%, respectively, reflecting high precision and recall across all tumor types and strong potential for clinical deployment.
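Second-order pooling, the core ingredient named above, replaces first-order average/max pooling with a covariance of channel responses, capturing pairwise feature interactions. A minimal NumPy sketch on a raw feature map (the model itself applies this to learned ResNet50 features, and typically adds matrix normalization not shown here):

```python
import numpy as np

def second_order_pool(feat: np.ndarray) -> np.ndarray:
    """Second-order (covariance) pooling of a feature map of shape (H, W, C).

    Returns the C x C covariance of channel responses over spatial positions;
    first-order pooling would keep only the per-channel mean or max.
    """
    h, w, c = feat.shape
    x = feat.reshape(h * w, c)
    x = x - x.mean(axis=0, keepdims=True)  # center each channel
    return (x.T @ x) / (h * w - 1)
```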

Initial Recurrence Risk Stratification of Papillary Thyroid Cancer Based on Intratumoral and Peritumoral Dual Energy CT Radiomics.

Zhou Y, Xu Y, Si Y, Wu F, Xu X

pubmed logopapers · Aug 21 2025
This study aims to evaluate the potential of dual-energy computed tomography (DECT)-based radiomics for preoperative risk stratification in predicting initial recurrence of papillary thyroid carcinoma (PTC). The retrospective analysis included 236 PTC cases (165 in the training cohort, 71 in the validation cohort) collected between July 2020 and June 2021. Tumor segmentation was carried out in both intratumoral and peritumoral areas (1 mm inside and outside the tumor boundary). Three region-specific rad-scores were developed: rad-score (VOI<sup>whole</sup>), rad-score (VOI<sup>outer layer</sup>), and rad-score (VOI<sup>inner layer</sup>). Three radiomics models incorporating these rad-scores and additional risk factors were compared with a clinical model alone, and the optimal radiomics model was presented as a nomogram. Rad-scores from the peritumoral regions (VOI<sup>outer layer</sup> and VOI<sup>inner layer</sup>) outperformed the intratumoral rad-score (VOI<sup>whole</sup>). All radiomics models surpassed the clinical model, with the peritumoral-based models (radiomics models 2 and 3) outperforming the intratumoral-based model (radiomics model 1). The top-performing nomogram, which included tumor size, tumor site, and rad-score (VOI<sup>inner layer</sup>), achieved an area under the curve (AUC) of 0.877 in the training cohort and 0.876 in the validation cohort. The nomogram demonstrated good calibration, clinical utility, and stability. DECT-based intratumoral and peritumoral radiomics advance PTC initial-recurrence risk prediction, providing clinical radiology with precise predictive tools, although further work is needed to refine the model and enhance its clinical application. Radiomics analysis of DECT, particularly in peritumoral regions, offers valuable predictive information for assessing the risk of initial recurrence in PTC.
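The inner and outer peritumoral shells described above (1 mm inside and outside the tumor boundary) can be derived from a binary tumor mask by morphological erosion and dilation. A minimal NumPy sketch in pixel units, assuming the mask does not touch the image border (np.roll wraps around, which a production implementation would handle via padding):

```python
import numpy as np

def dilate(mask: np.ndarray) -> np.ndarray:
    """One-pixel binary dilation with a 4-connected structuring element."""
    out = mask.copy()
    for axis in (0, 1):
        for shift in (1, -1):
            out |= np.roll(mask, shift, axis=axis)
    return out

def peritumoral_shells(mask: np.ndarray, width: int = 1):
    """Shells of `width` pixels just inside and just outside the tumor boundary."""
    grown, shrunk = mask.copy(), mask.copy()
    for _ in range(width):
        grown = dilate(grown)
        shrunk = ~dilate(~shrunk)  # erosion, expressed as the dual of dilation
    inner = mask & ~shrunk    # inside the tumor, excluded from the eroded core
    outer = grown & ~mask     # inside the dilated mask, outside the tumor
    return inner, outer
```

With real CT data, `width` would be converted from millimeters to pixels via the image spacing.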

CT-based machine learning model integrating intra- and peri-tumoral radiomics features for predicting occult lymph node metastasis in peripheral lung cancer.

Lu X, Liu F, E J, Cai X, Yang J, Wang X, Zhang Y, Sun B, Liu Y

pubmed logopapers · Aug 21 2025
Accurate preoperative assessment of occult lymph node metastasis (OLNM) plays a crucial role in informing therapeutic decision-making for lung cancer patients. Computed tomography (CT) is the most widely used imaging modality for preoperative work-up. The aim of this study was to develop and validate a CT-based machine learning model integrating intra- and peri-tumoral features to predict OLNM in lung cancer patients. Eligible patients with peripheral lung cancer confirmed by radical surgical excision with systematic lymphadenectomy were retrospectively recruited from January 2019 to December 2021. A total of 1688 radiomics features were obtained from each manually segmented VOI, which was composed of the gross tumor volume (GTV) covering the entire tumor and three peritumoral volumes (PTV3, PTV6, and PTV9) capturing the region outside the tumor. A clinical-radiomics model incorporating the radiomics signature, independent clinical factors, and CT semantic features was established via multivariable logistic regression analysis and presented as a nomogram. Model performance was evaluated by discrimination, calibration, and clinical utility. Overall, 591 patients were recruited into the training cohort and 253 into the validation cohort. The radiomics signature of PTV9 showed superior diagnostic performance compared with the PTV3 and PTV6 models. Integrating the GPTV radiomics signature (incorporating the rad-scores of GTV and PTV9) with the clinical risk factor of serum CEA level and the CT imaging features of lobulation sign and tumor-pleura relationship demonstrated favorable accuracy in predicting OLNM in the training cohort (AUC, 0.819; 95% CI: 0.780-0.857) and validation cohort (AUC, 0.801; 95% CI: 0.741-0.860). The predictive performance of the clinical-radiomics model was statistically significantly superior to that of the clinical model in both cohorts (all p < 0.05).
The clinical-radiomics model can serve as a noninvasive preoperative prediction tool for personalized risk assessment of OLNM in peripheral lung cancer patients.
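The AUC values reported above can be computed without any library as the Mann-Whitney rank statistic: the probability that a randomly chosen positive case receives a higher model score than a randomly chosen negative one, with ties counted as half. A minimal (O(n²)) sketch:

```python
def auc(scores_pos, scores_neg):
    """AUC as the normalized Mann-Whitney U statistic.

    Equivalent to the area under the ROC curve; ties contribute 0.5.
    """
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))
```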

COVID19 Prediction Based On CT Scans Of Lungs Using DenseNet Architecture

Deborup Sanyal

arxiv logopreprint · Aug 21 2025
COVID-19 took the world by storm beginning in December 2019. A highly infectious communicable disease, COVID-19 is caused by the SARS-CoV-2 virus. By March 2020, the World Health Organization (WHO) had declared COVID-19 a global pandemic. A pandemic in the 21st century, after almost 100 years, was something the world was not prepared for, and it resulted in the deaths of around 1.6 million people worldwide. The most common symptoms of COVID-19 were associated with the respiratory system and resembled a cold, flu, or pneumonia. After extensive research, doctors and scientists concluded that the main reason for lives lost to COVID-19 was failure of the respiratory system: patients were dying gasping for breath. Top healthcare systems around the world were failing badly, with acute shortages of hospital beds, oxygen cylinders, and ventilators; many died without receiving any treatment at all. The aim of this project is to help doctors assess the severity of COVID-19 by reading the patient's computed tomography (CT) scans of the lungs. Computer models are less prone to human error, and machine learning and neural network models tend to give better accuracy as training improves over time. We use a convolutional neural network model. Given that a patient tests positive, our model analyzes the severity of COVID-19 infection within one month of the positive test result. The predicted outcome may be favorable or unfavorable (the latter defined by intubation or death), based entirely on the CT scans in the dataset.

Deep Learning-Assisted Skeletal Muscle Radiation Attenuation at C3 Predicts Survival in Head and Neck Cancer

Barajas Ordonez, F., Xie, K., Ferreira, A., Siepmann, R., Chargi, N., Nebelung, S., Truhn, D., Berge, S., Bruners, P., Egger, J., Hölzle, F., Wirth, M., Kuhl, C., Puladi, B.

medrxiv logopreprint · Aug 21 2025
Background: Head and neck cancer (HNC) patients face an increased risk of malnutrition due to lifestyle, tumor localization, and treatment effects. While skeletal muscle area (SMA) and radiation attenuation (SM-RA) at the third lumbar vertebra (L3) are established prognostic markers, L3 is not routinely available in head and neck imaging. The prognostic value of SM-RA at the third cervical vertebra (C3) remains unclear. This study assesses whether SMA and SM-RA at C3 predict locoregional control (LRC) and overall survival (OS) in HNC. Methods: We analyzed 904 HNC cases with head and neck CT scans. A deep learning pipeline identified C3, and SMA/SM-RA were quantified via automated segmentation with manual verification. Cox proportional hazards models assessed associations with LRC and OS, adjusting for clinical factors. Results: Median SMA and SM-RA were 36.64 cm<sup>2</sup> (IQR: 30.12-42.44) and 50.77 HU (IQR: 43.04-57.39). In multivariate analysis, lower SMA (HR 1.62, 95% CI: 1.02-2.58, p = 0.04), lower SM-RA (HR 1.89, 95% CI: 1.30-2.79, p < 0.001), and advanced T stage (HR 1.50, 95% CI: 1.06-2.12, p = 0.02) were prognostic for LRC. OS predictors included advanced T stage (HR 2.17, 95% CI: 1.64-2.87, p < 0.001), age ≥70 years (HR 1.40, 95% CI: 1.00-1.96, p = 0.05), male sex (HR 1.64, 95% CI: 1.02-2.63, p = 0.04), and lower SM-RA (HR 2.15, 95% CI: 1.56-2.96, p < 0.001). Conclusion: Deep learning-assisted SM-RA assessment at C3 outperforms SMA for LRC and OS in HNC, supporting its use as a routine biomarker and L3 alternative.
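Given a verified muscle segmentation at C3, the two markers reduce to simple statistics over the mask: SMA is the mask area scaled by the pixel spacing, and SM-RA is the mean attenuation (HU) within the mask. A minimal sketch using these standard body-composition definitions (variable names are illustrative, not from the paper's pipeline):

```python
import numpy as np

def muscle_metrics(hu: np.ndarray, mask: np.ndarray, pixel_mm2: float):
    """Skeletal muscle area (cm^2) and mean radiation attenuation (HU)
    within a segmented muscle mask on an axial CT slice.

    `hu` holds Hounsfield units; `pixel_mm2` is the in-plane pixel area.
    """
    sma_cm2 = mask.sum() * pixel_mm2 / 100.0  # mm^2 -> cm^2
    sm_ra = float(hu[mask].mean())
    return sma_cm2, sm_ra
```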