Page 35 of 141 · 1410 results

ESR Essentials: lung cancer screening with low-dose CT-practice recommendations by the European Society of Thoracic Imaging.

Revel MP, Biederer J, Nair A, Silva M, Jacobs C, Snoeckx A, Prokop M, Prosch H, Parkar AP, Frauenfelder T, Larici AR

PubMed · Aug 23 2025
Low-dose CT screening for lung cancer reduces the risk of death from lung cancer by at least 21% in high-risk participants and should be offered to people aged between 50 and 75 with at least 20 pack-years of smoking. Iterative reconstruction or deep learning algorithms should be used to keep the effective dose below 1 mSv. Deep learning algorithms are required to facilitate the detection of nodules and the measurement of their volumetric growth. Only solid nodules larger than 500 mm<sup>3</sup>, those with spiculations, bubble-like lucencies, or pleural indentation, and complex cysts should be investigated further. Short-term follow-up at 3 or 6 months is required for solid nodules of 100 to 500 mm<sup>3</sup>. A watchful waiting approach is recommended for most subsolid nodules, to limit the risk of overtreatment. Finally, the description of additional findings must be limited if lung cancer screening (LCS) is to be cost-effective. KEY POINTS: Low-dose CT screening reduces the risk of death from lung cancer by at least 21% in high-risk individuals, with a greater benefit in women. Quality assurance of screening is essential to control radiation dose and the number of false positives. Screening with low-dose CT scans detects incidental findings of variable clinical relevance; only those of importance should be reported.
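The volume thresholds above amount to a simple triage rule. A minimal sketch in Python (the function name and category labels are illustrative, not the official ESTI wording):

```python
def triage_solid_nodule(volume_mm3: float, suspicious_morphology: bool = False) -> str:
    """Volume-based triage of solid nodules per the thresholds in the abstract.

    suspicious_morphology covers spiculations, bubble-like lucencies,
    or pleural indentation.
    """
    if volume_mm3 > 500 or suspicious_morphology:
        return "further workup"
    if volume_mm3 >= 100:
        return "follow-up CT at 3-6 months"
    return "continue routine screening"
```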

NLSTseg: A Pixel-level Lung Cancer Dataset Based on NLST LDCT Images.

Chen KH, Lin YH, Wu S, Shih NW, Meng HC, Lin YY, Huang CR, Huang JW

PubMed · Aug 23 2025
Low-dose computed tomography (LDCT) is the most effective tool for early detection of lung cancer. With advancements in artificial intelligence, various computer-aided diagnosis (CAD) systems are now used in clinical practice. For radiologists dealing with a huge volume of CT scans, CAD systems are helpful. However, the development of these systems depends on precisely annotated datasets, which are currently limited. Although several lung imaging datasets exist, few publicly available datasets provide segmentation annotations on LDCT images. To address this problem, we developed a dataset based on NLST LDCT images with pixel-level annotations of lung lesions. The dataset includes LDCT scans from 605 patients and 715 annotated lesions, comprising 662 lung tumors and 53 lung nodules. Lesion volumes range from 0.03 cm<sup>3</sup> to 372.21 cm<sup>3</sup>, with 500 lesions smaller than 5 cm<sup>3</sup>, mostly located in the right upper lung. A 2D U-Net model trained on the dataset achieved an IoU of 0.95 on the training dataset. This dataset enhances the diversity and usability of lung cancer annotation resources.
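The IoU metric reported for the U-Net baseline can be computed directly from binary masks; a minimal sketch (not the authors' evaluation code):

```python
import numpy as np

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection-over-Union between two binary segmentation masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return np.logical_and(pred, gt).sum() / union
```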

Utility of machine learning for predicting severe chronic thromboembolic pulmonary hypertension based on CT metrics in a surgical cohort.

Grubert Van Iderstine M, Kim S, Karur GR, Granton J, de Perrot M, McIntosh C, McInnis M

PubMed · Aug 23 2025
The aim of this study was to develop machine learning (ML) models to explore the relationship between chronic pulmonary embolism (PE) burden and severe pulmonary hypertension (PH) in surgical chronic thromboembolic pulmonary hypertension (CTEPH). CTEPH patients with a preoperative CT pulmonary angiogram and pulmonary endarterectomy between 01/2017 and 06/2022 were included. A mean pulmonary artery pressure of >50 mmHg was classified as severe. CTs were scored by a blinded radiologist who recorded chronic pulmonary embolism extent in detail, and measured the right ventricle (RV), left ventricle (LV), main pulmonary artery (PA) and ascending aorta (Ao) diameters. XGBoost models were developed to identify CTEPH feature importance and compared to a logistic regression model. There were 184 patients included; 54.9% were female, and 21.7% had severe PH. The average age was 57 ± 15 years. PE burden alone was not helpful in identifying severe PH. The RV/LV ratio logistic regression model performed well (AUC 0.76) with a cutoff of 1.4. A baseline ML model (Model 1) including only the RV, LV, PA, and Ao measures and their ratios yielded an average AUC of 0.66 ± 0.10. The addition of demographics and statistics summarizing the CT findings raised the AUC to 0.75 ± 0.08 (F1 score 0.41). While measures of PE burden had little bearing on PH severity independently, the RV/LV ratio, extent of disease in various segments, total webs observed, and patient demographics improved performance of machine learning models in identifying severe PH. Question Can machine learning methods applied to CT-based cardiac measurements and detailed maps of chronic thromboembolism type and distribution predict pulmonary hypertension (PH) severity? Findings The right-to-left ventricle (RV/LV) ratio was predictive of PH severity with an optimal cutoff of 1.4, and detailed accounts of chronic thromboembolic burden improved model performance.
Clinical relevance The identification of a CT-based RV/LV ratio cutoff of 1.4 gives radiologists, clinicians, and patients a point of reference for chronic thromboembolic PH severity. Detailed chronic thromboembolic burden data are useful but cannot be used alone to predict PH severity.
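The reported RV/LV cutoff of 1.4 amounts to a one-line classifier; a minimal sketch (the choice of ≥ versus > at the boundary is an assumption, as the abstract does not specify it):

```python
def severe_ph_by_rv_lv(rv_diameter_mm: float, lv_diameter_mm: float,
                       cutoff: float = 1.4) -> bool:
    """Flag severe PH when the CT-measured RV/LV diameter ratio meets the cutoff."""
    return rv_diameter_mm / lv_diameter_mm >= cutoff
```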

Towards Diagnostic Quality Flat-Panel Detector CT Imaging Using Diffusion Models

Hélène Corbaz, Anh Nguyen, Victor Schulze-Zachau, Paul Friedrich, Alicia Durrer, Florentin Bieder, Philippe C. Cattin, Marios N Psychogios

arXiv preprint · Aug 22 2025
Patients undergoing a mechanical thrombectomy procedure usually have a multi-detector CT (MDCT) scan before and after the intervention. The image quality of the flat panel detector CT (FDCT) present in the intervention room is generally much lower than that of an MDCT due to significant artifacts. However, using only FDCT images could improve patient management as the patient would not need to be moved to the MDCT room. Several studies have evaluated the potential use of FDCT imaging alone and the time that could be saved by acquiring the images before and/or after the intervention only with the FDCT. This study proposes using a denoising diffusion probabilistic model (DDPM) to improve the image quality of FDCT scans, making them comparable to MDCT scans. Clinicians evaluated FDCT, MDCT, and our model's predictions for diagnostic purposes using a questionnaire. The DDPM eliminated most artifacts and improved anatomical visibility without reducing bleeding detection, provided that the input FDCT image quality is not too low. Our code can be found on GitHub.
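For orientation, the DDPM family works by learning to reverse a fixed Gaussian noising process; a minimal sketch of that forward process q(x_t | x_0) under a standard linear beta schedule (illustrative only, not the authors' model):

```python
import numpy as np

def ddpm_forward(x0: np.ndarray, t: int, betas: np.ndarray,
                 rng: np.random.Generator) -> np.ndarray:
    """Sample q(x_t | x_0) = N(sqrt(abar_t) * x0, (1 - abar_t) * I) in one shot."""
    abar_t = np.cumprod(1.0 - betas)[t]
    noise = rng.normal(size=x0.shape)
    return np.sqrt(abar_t) * x0 + np.sqrt(1.0 - abar_t) * noise

# A trained denoiser predicts the added noise so this process can be run backwards,
# moving a noisy (artifact-laden) input toward a clean sample.
```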

ConvTNet fusion: A robust transformer-CNN framework for multi-class classification, multimodal feature fusion, and tissue heterogeneity handling.

Mahmood T, Saba T, Rehman A, Alamri FS

PubMed · Aug 22 2025
Medical imaging is crucial for clinical practice, providing insight into organ structure and function. Advancements in imaging technologies enable automated image segmentation, which is essential for accurate diagnosis and treatment planning. However, challenges like class imbalance, tissue boundary delineation, and tissue interaction complexity persist. The study introduces ConvTNet, a hybrid model that combines Transformer and CNN features to improve renal CT image segmentation. It uses attention mechanisms and feature fusion techniques to enhance precision. ConvTNet uses the KC module to focus on critical image regions, enabling precise tissue boundary delineation in noisy and ambiguous boundaries. The Mix-KFCA module enhances feature fusion by combining multi-scale features and distinguishing between healthy kidney tissue and surrounding structures. The study proposes innovative preprocessing strategies, including noise reduction, data augmentation, and image normalization, that significantly optimize image quality and ensure reliable inputs for accurate segmentation. ConvTNet employs transfer learning, fine-tuning five pre-trained models to further bolster model performance and leverage knowledge from a vast array of feature extraction techniques. Empirical evaluations demonstrate that ConvTNet performs exceptionally well in multi-label classification and lesion segmentation, with an AUC of 0.9970, sensitivity of 0.9942, DSC of 0.9533, and accuracy of 0.9921, proving its efficacy for precise renal cancer diagnosis.

Application of contrast-enhanced CT-driven multimodal machine learning models for pulmonary metastasis prediction in head and neck adenoid cystic carcinoma.

Gong W, Cui Q, Fu S, Wu Y

PubMed · Aug 22 2025
This study explores radiomics and deep learning for predicting pulmonary metastasis in head and neck adenoid cystic carcinoma (ACC), assessing the performance of machine learning (ML) algorithms. The study retrospectively analyzed contrast-enhanced CT imaging data and clinical records from 130 patients with pathologically confirmed ACC in the head and neck region. The dataset was randomly split into training and test sets at a 7:3 ratio. Radiomic features and deep learning-derived features were extracted and subsequently integrated through multi-feature fusion. Z-score normalization was applied to the training and test sets. Hypothesis testing was used to select significant features, followed by LASSO regression with 5-fold cross-validation, which identified 7 predictive features. Nine machine learning algorithms were employed to build predictive models for ACC pulmonary metastasis: ada, KNN, rf, NB, GLM, LDA, rpart, SVM-RBF, and GBM. Models were trained on the training set and evaluated on the test set using metrics such as recall, sensitivity, PPV, F1-score, precision, prevalence, NPV, specificity, accuracy, detection rate, detection prevalence, and balanced accuracy. Machine learning models based on multi-feature fusion of enhanced CT, utilizing KNN, SVM, rpart, GBM, NB, GLM, and LDA, demonstrated AUC values in the test set of 0.687, 0.863, 0.737, 0.793, 0.763, 0.867, and 0.844, respectively. The rf and ada models showed significant overfitting. Among these, GBM and GLM showed higher stability in predicting pulmonary metastasis of head and neck ACC. Radiomics and deep learning methods based on enhanced CT imaging can provide effective auxiliary tools for predicting pulmonary metastasis in head and neck ACC patients, showing promising potential for clinical application.
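The feature-selection step described above (z-scoring followed by LASSO with 5-fold cross-validation) can be sketched with scikit-learn on synthetic data; all data, shapes, and variable names here are illustrative, not the study's:

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(130, 20))                # 130 patients, 20 candidate features
y = X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=130)  # 2 informative features

X_z = StandardScaler().fit_transform(X)       # z-score normalization
lasso = LassoCV(cv=5).fit(X_z, y)             # 5-fold CV selects the penalty strength
selected = np.flatnonzero(lasso.coef_ != 0)   # features surviving the L1 penalty
```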

A Disease-Centric Vision-Language Foundation Model for Precision Oncology in Kidney Cancer

Yuhui Tao, Zhongwei Zhao, Zilong Wang, Xufang Luo, Feng Chen, Kang Wang, Chuanfu Wu, Xue Zhang, Shaoting Zhang, Jiaxi Yao, Xingwei Jin, Xinyang Jiang, Yifan Yang, Dongsheng Li, Lili Qiu, Zhiqiang Shao, Jianming Guo, Nengwang Yu, Shuo Wang, Ying Xiong

arXiv preprint · Aug 22 2025
The non-invasive assessment of increasingly incidentally discovered renal masses is a critical challenge in urologic oncology, where diagnostic uncertainty frequently leads to the overtreatment of benign or indolent tumors. In this study, we developed and validated RenalCLIP, a vision-language foundation model for the characterization, diagnosis, and prognosis of renal masses, using a dataset of 27,866 CT scans from 8,809 patients across nine Chinese medical centers and the public TCIA cohort. The model was developed via a two-stage pre-training strategy that first enhances the image and text encoders with domain-specific knowledge before aligning them through a contrastive learning objective, to create robust representations for superior generalization and diagnostic precision. RenalCLIP achieved better performance and superior generalizability across 10 core tasks spanning the full clinical workflow of kidney cancer, including anatomical assessment, diagnostic classification, and survival prediction, compared with other state-of-the-art general-purpose CT foundation models. In particular, for a complex task like recurrence-free survival prediction in the TCIA cohort, RenalCLIP achieved a C-index of 0.726, a substantial improvement of approximately 20% over the leading baselines. Furthermore, RenalCLIP's pre-training imparted remarkable data efficiency: in the diagnostic classification task, it needed only 20% of the training data to reach the peak performance of all baseline models, even after they were fully fine-tuned on 100% of the data. Additionally, it achieved superior performance in report generation, image-text retrieval, and zero-shot diagnosis tasks. Our findings establish that RenalCLIP provides a robust tool with the potential to enhance diagnostic accuracy, refine prognostic stratification, and personalize the management of patients with kidney cancer.
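The C-index used for the survival-prediction comparison measures how often a model ranks patient pairs correctly; a minimal O(n²) sketch (assuming a higher risk score implies shorter survival, and ignoring ties in time):

```python
from itertools import combinations

def c_index(times, events, scores):
    """Concordance index over comparable pairs (the earlier time must be an event)."""
    concordant = comparable = 0.0
    for i, j in combinations(range(len(times)), 2):
        if times[i] == times[j]:
            continue
        early, late = (i, j) if times[i] < times[j] else (j, i)
        if not events[early]:  # censored before the other's time: pair not comparable
            continue
        comparable += 1
        if scores[early] > scores[late]:
            concordant += 1
        elif scores[early] == scores[late]:
            concordant += 0.5
    return concordant / comparable
```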

Vision-Guided Surgical Navigation Using Computer Vision for Dynamic Intraoperative Imaging Updates.

Ruthberg J, Gunderson N, Chen P, Harris G, Case H, Bly R, Seibel EJ, Abuzeid WM

PubMed · Aug 22 2025
Residual disease after endoscopic sinus surgery (ESS) contributes to poor outcomes and revision surgery. Image-guided surgery systems cannot dynamically reflect intraoperative changes. We propose a sensorless, video-based method for intraoperative CT updating using neural radiance fields (NeRF), a deep learning algorithm used to create 3D surgical field reconstructions. Bilateral ESS was performed on three 3D-printed models (n = 6 sides). Postoperative endoscopic videos were processed through a custom NeRF pipeline to generate 3D reconstructions, which were co-registered to preoperative CT scans. Digitally updated CT models were created through algorithmic subtraction of resected regions, then volumetrically segmented, and compared to ground-truth postoperative CT. Accuracy was assessed using the Hausdorff distance (surface alignment), Dice similarity coefficient (DSC) (volumetric overlap), and Bland‒Altman analysis (BAA) (statistical agreement). Comparison of the updated CT and the ground-truth postoperative CT indicated an average Hausdorff distance of 0.27 ± 0.076 mm and a 95th percentile Hausdorff distance of 0.82 ± 0.165 mm, indicating sub-millimeter surface alignment. The DSC was 0.93 ± 0.012, with values >0.9 suggestive of excellent spatial overlap. BAA indicated modest underestimation of volume on the updated CT versus the ground-truth CT, with a mean volume difference of 0.40 cm<sup>3</sup> and 95% limits of agreement of 0.04‒0.76 cm<sup>3</sup>; all samples fell within acceptable bounds of variability. Computer vision can enable dynamic intraoperative imaging by generating highly accurate CT updates from monocular endoscopic video without external tracking. By directly visualizing resection progress, this software-driven tool has the potential to enhance surgical completeness in ESS for next-generation navigation platforms.
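The two geometric accuracy metrics above are standard and easy to reproduce; a minimal sketch using NumPy and SciPy (binary masks for the DSC, point clouds such as mesh vertices for the Hausdorff distance; not the authors' pipeline):

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hausdorff(pts_a: np.ndarray, pts_b: np.ndarray) -> float:
    """Symmetric Hausdorff distance between two point sets."""
    return max(directed_hausdorff(pts_a, pts_b)[0],
               directed_hausdorff(pts_b, pts_a)[0])
```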

Linking morphometric variations in human cranial bone to mechanical behavior using machine learning.

Guo W, Bhagavathula KB, Adanty K, Rabey KN, Ouellet S, Romanyk DL, Westover L, Hogan JD

PubMed · Aug 22 2025
With the development of increasingly detailed imaging techniques, there is a need to update the methodology and evaluation criteria for bone analysis to understand the influence of bone microarchitecture on mechanical response. The present study aims to develop a machine learning-based approach to investigate the link between the morphology of the human calvarium and its mechanical response under quasi-static uniaxial compression. Micro-computed tomography is used to capture the microstructure, at a resolution of 18 μm, of male (n=5) and female (n=5) formalin-fixed calvarium specimens from the frontal and parietal regions. Image processing-based machine learning methods using convolutional neural networks are developed to isolate and calculate specific morphometric properties, such as porosity, trabecular thickness, and trabecular spacing. An ensemble method using gradient-boosted decision trees (XGBoost) is then used to predict mechanical strength from the morphological results; mean and minimum porosity in the diploë were found to be the most relevant factors for the mechanical strength of cranial bones under the studied conditions. Overall, this study provides new tools that can predict the mechanical response of the human calvarium a priori. In addition, the quantitative morphology of the human calvarium can be used as input data in finite element models and can contribute to efforts to develop cranial simulant materials.
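The final step (a gradient-boosted tree ensemble ranking morphometric predictors of strength) can be sketched on synthetic data; scikit-learn's GradientBoostingRegressor stands in for XGBoost here, and the features, weights, and sample size are invented for illustration:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
# synthetic morphometrics: mean porosity, min porosity, trabecular thickness, spacing
X = rng.uniform(size=(200, 4))
# strength dominated by the porosity terms (illustrative relationship only)
y = 100.0 - 60.0 * X[:, 0] - 30.0 * X[:, 1] + rng.normal(scale=2.0, size=200)

model = GradientBoostingRegressor(random_state=0).fit(X, y)
importance = model.feature_importances_  # which morphometrics drive predicted strength
```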