
A Preoperative CT-based Multiparameter Deep Learning and Radiomic Model with Extracellular Volume Parameter Images Can Predict the Tumor Budding Grade in Rectal Cancer Patients.

Tang X, Zhuang Z, Jiang L, Zhu H, Wang D, Zhang L

PubMed · Jul 1 2025
To investigate a computed tomography (CT)-based multiparameter deep learning-radiomic model (DLRM) for predicting the preoperative tumor budding (TB) grade in patients with rectal cancer. Data from 135 patients with histologically confirmed rectal cancer (85 in the Bd1+2 group and 50 in the Bd3 group) were retrospectively included. Deep learning (DL) features and hand-crafted radiomic (HCR) features were separately extracted and selected from preoperative CT-based extracellular volume (ECV) parameter images and venous-phase images. Six predictive signatures were then constructed with machine learning classification algorithms. Finally, a combined DL and HCR model, the DLRM, was established to predict the TB grade of rectal cancer patients by merging the DL and HCR features from the two image sets. In the training and test cohorts, the AUC values of the DLRM were 0.976 [95% CI: 0.942-0.997] and 0.976 [95% CI: 0.942-1.00], respectively. The DLRM showed good calibration on calibration curve analysis and good clinical applicability on decision curve analysis (DCA). The DLRM outperformed the individual DL and HCR signatures in predicting the TB grade of rectal cancer patients (p < 0.05). The DLRM can be used to evaluate the TB grade of rectal cancer patients noninvasively before surgery, thereby supporting clinical treatment decision-making for these patients.
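
The abstract includes no code; the following is a minimal, hypothetical sketch of the feature-level fusion it describes: hand-crafted radiomic features and deep-learning features are concatenated and passed to a simple classifier, with AUC as the evaluation metric. The arrays, dimensions, and choice of logistic regression are illustrative assumptions, not the authors' pipeline.

```python
# Hedged sketch of DL + hand-crafted radiomic feature fusion for a binary grade task.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 135                                  # cohort size reported in the abstract
deep_feats = rng.normal(size=(n, 64))    # placeholder deep-learning features
hcr_feats = rng.normal(size=(n, 20))     # placeholder hand-crafted radiomic features
y = rng.integers(0, 2, size=n)           # 0 = Bd1+2, 1 = Bd3 (tumor budding grade)

X = np.hstack([deep_feats, hcr_feats])   # feature-level fusion of the two sets
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"test AUC: {auc:.3f}")
```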

Automated Fast Prediction of Bone Mineral Density From Low-dose Computed Tomography.

Zhou K, Xin E, Yang S, Luo X, Zhu Y, Zeng Y, Fu J, Ruan Z, Wang R, Geng D, Yang L

PubMed · Jul 1 2025
Low-dose chest CT (LDCT) is commonly employed for the early screening of lung cancer. However, it has rarely been utilized for assessing volumetric bone mineral density (vBMD) and diagnosing osteoporosis (OP). This study investigated the feasibility of using deep learning to establish a system for vBMD prediction and OP classification based on LDCT scans. The study included 551 subjects who underwent both LDCT and QCT examinations. First, a U-Net was developed to automatically segment lumbar vertebrae from single 2D LDCT slices near the mid-vertebral level. Then, a prediction model was proposed to estimate vBMD, which was subsequently employed for detecting OP and osteopenia (OA). Specifically, two input modalities were constructed for the prediction model. The performance metrics of the models were calculated and evaluated. The segmentation model showed strong agreement with manual segmentation, achieving a mean Dice similarity coefficient (DSC) of 0.974, sensitivity of 0.964, positive predictive value (PPV) of 0.985, and Hausdorff distance of 3.261 in the test set. Linear regression and Bland-Altman analysis demonstrated strong agreement between the predicted vBMD from two-channel inputs and QCT-derived vBMD, with a root mean square error of 8.958 mg/mm³ and an R² of 0.944. The areas under the curve for detecting OP and OA were 0.800 and 0.878, respectively, with an overall accuracy of 94.2%. The average processing time of this system was 1.5 s. This prediction system can automatically estimate vBMD and detect OP and OA on LDCT scans, offering great potential for osteoporosis screening.
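
As a rough illustration of the agreement analysis reported above (RMSE, R², and Bland-Altman bias with limits of agreement), here is a short, self-contained sketch on synthetic vBMD values; it is not the study's code, and the numbers are placeholders.

```python
# Hedged sketch: agreement between predicted vBMD and reference QCT vBMD.
import numpy as np
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(1)
qct_vbmd = rng.uniform(60, 180, size=200)            # reference QCT vBMD (synthetic)
pred_vbmd = qct_vbmd + rng.normal(0, 9, size=200)    # model predictions with noise

rmse = np.sqrt(mean_squared_error(qct_vbmd, pred_vbmd))
r2 = r2_score(qct_vbmd, pred_vbmd)

diff = pred_vbmd - qct_vbmd        # Bland-Altman: bias and 95% limits of agreement
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)
print(f"RMSE={rmse:.2f}  R2={r2:.3f}  bias={bias:.2f}  LoA=+/-{loa:.2f}")
```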

Photon-counting detector CT of the brain reduces variability of Hounsfield units and has a mean offset compared with energy-integrating detector CT.

Stein T, Lang F, Rau S, Reisert M, Russe MF, Schürmann T, Fink A, Kellner E, Weiss J, Bamberg F, Urbach H, Rau A

PubMed · Jul 1 2025
Distinguishing gray matter (GM) from white matter (WM) is essential in CT of the brain. The recently established photon-counting detector CT (PCD-CT) technology employs a novel detection technique that might allow more precise measurement of tissue attenuation (Hounsfield units, HU) and improved image quality in comparison with energy-integrating detector CT (EID-CT). To investigate this, we compared HU, GM vs. WM contrast, and image noise using automated deep learning-based brain segmentations. We retrospectively included patients who received either PCD-CT or EID-CT and did not display a cerebral pathology. A deep learning-based segmentation of the GM and WM was used to extract HU. From this, the gray-to-white matter ratio and contrast-to-noise ratio were calculated. We included 329 patients with EID-CT (mean age 59.8 ± 20.2 years) and 180 with PCD-CT (mean age 64.7 ± 16.5 years). GM and WM showed significantly lower HU on PCD-CT (GM: 40.4 ± 2.2 HU; WM: 33.4 ± 1.5 HU) than on EID-CT (GM: 45.1 ± 1.6 HU; WM: 37.4 ± 1.6 HU; p < .001). Standard deviations of HU were also lower on PCD-CT (GM and WM both p < .001), and the contrast-to-noise ratio was significantly higher on PCD-CT than on EID-CT (p < .001). Gray-to-white matter ratios did not differ significantly between the two modalities (p > .99). In an age-matched subset (n = 157 patients from both cohorts), all findings were replicated. This comprehensive comparison of HU in cerebral gray and white matter revealed substantially reduced image noise and an average offset toward lower HU on PCD-CT, while the ratio between GM and WM remained constant. The potential need to adapt windowing presets based on this finding should be investigated in future studies. CNR = Contrast-to-Noise Ratio; CTDIvol = Volume Computed Tomography Dose Index; EID = Energy-Integrating Detector; GWR = Gray-to-White Matter Ratio; HU = Hounsfield Units; PCD = Photon-Counting Detector; ROI = Region of Interest; VMI = Virtual Monoenergetic Images.
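
A small sketch of how the reported quantities can be derived from a labeled brain CT volume: mean GM and WM attenuation, the gray-to-white matter ratio (GWR), and a contrast-to-noise ratio (CNR) using the pooled standard deviation as the noise term. The label convention and the pooled-SD noise definition are assumptions for illustration, not the study's exact method.

```python
# Hedged sketch: GM/WM attenuation, GWR, and CNR from a labeled CT volume.
import numpy as np

def gm_wm_metrics(ct_hu, labels):
    """labels: 1 = gray matter, 2 = white matter (assumed label convention)."""
    gm = ct_hu[labels == 1]
    wm = ct_hu[labels == 2]
    noise = np.sqrt((gm.std(ddof=1) ** 2 + wm.std(ddof=1) ** 2) / 2)  # pooled SD as noise term
    return {
        "gm_mean_hu": gm.mean(),
        "wm_mean_hu": wm.mean(),
        "gwr": gm.mean() / wm.mean(),
        "cnr": (gm.mean() - wm.mean()) / noise,
    }

# toy volume: GM around 40 HU and WM around 33 HU, as in the PCD-CT values above
rng = np.random.default_rng(2)
labels = rng.integers(1, 3, size=(32, 32, 32))
ct = rng.normal(40.4, 2.2, size=labels.shape)
ct[labels == 2] -= 7.0
print(gm_wm_metrics(ct, labels))
```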

Deep Learning Estimation of Small Airway Disease from Inspiratory Chest Computed Tomography: Clinical Validation, Repeatability, and Associations with Adverse Clinical Outcomes in Chronic Obstructive Pulmonary Disease.

Chaudhary MFA, Awan HA, Gerard SE, Bodduluri S, Comellas AP, Barjaktarevic IZ, Barr RG, Cooper CB, Galban CJ, Han MK, Curtis JL, Hansel NN, Krishnan JA, Menchaca MG, Martinez FJ, Ohar J, Vargas Buonfiglio LG, Paine R, Bhatt SP, Hoffman EA, Reinhardt JM

PubMed · Jul 1 2025
Rationale: Quantifying functional small airway disease (fSAD) requires additional expiratory computed tomography (CT) scans, limiting clinical applicability. Artificial intelligence (AI) could enable fSAD quantification from chest CT scans at total lung capacity (TLC) alone (fSADTLC). Objectives: To evaluate an AI model for estimating fSADTLC, compare it with dual-volume parametric response mapping fSAD (fSADPRM), and assess its clinical associations and repeatability in chronic obstructive pulmonary disease (COPD). Methods: We analyzed 2,513 participants from SPIROMICS (the Subpopulations and Intermediate Outcome Measures in COPD Study). Using a randomly sampled subset (n = 1,055), we developed a generative model to produce virtual expiratory CT scans for estimating fSADTLC in the remaining 1,458 SPIROMICS participants. We compared fSADTLC with dual-volume fSADPRM. We investigated univariate and multivariable associations of fSADTLC with FEV1, FEV1/FVC ratio, 6-minute-walk distance, St. George's Respiratory Questionnaire score, and FEV1 decline. The results were validated in a subset of patients from the COPDGene (Genetic Epidemiology of COPD) study (n = 458). Multivariable models were adjusted for age, race, sex, body mass index, baseline FEV1, smoking pack-years, smoking status, and percent emphysema. Measurements and Main Results: Inspiratory fSADTLC showed a strong correlation with fSADPRM in the SPIROMICS (Pearson's R = 0.895) and COPDGene (R = 0.897) cohorts. Higher fSADTLC was significantly associated with lower lung function, including lower postbronchodilator FEV1 (in liters) and FEV1/FVC ratio, and with poorer quality of life reflected by higher total St. George's Respiratory Questionnaire scores, independent of percent CT emphysema. In SPIROMICS, FEV1 declined by an additional 1.156 ml/yr (relative decrease; 95% confidence interval [CI], 0.613-1.699; P < 0.001) for every 1% increase in fSADTLC. The rate of decline in the COPDGene cohort was slightly lower, at 0.866 ml/yr (relative decrease; 95% CI, 0.345-1.386; P < 0.001) per 1% increase in fSADTLC. Inspiratory fSADTLC demonstrated greater consistency between repeated measurements, with a higher intraclass correlation coefficient of 0.99 (95% CI, 0.98-0.99) compared with fSADPRM (0.83; 95% CI, 0.76-0.88). Conclusions: Small airway disease can be reliably assessed from a single inspiratory CT scan using generative AI, eliminating the need for an additional expiratory CT scan. fSAD estimation from inspiratory CT correlates strongly with fSADPRM, demonstrates a significant association with FEV1 decline, and offers greater repeatability.
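
For context, the dual-volume reference standard (fSADPRM) classifies co-registered inspiratory/expiratory voxels against fixed attenuation thresholds. Below is a hedged sketch using the commonly cited cutoffs of -950 HU (inspiration) and -856 HU (expiration) on synthetic volumes; it is not the SPIROMICS processing pipeline, and registration and lung masking are assumed to have been done upstream.

```python
# Hedged sketch of PRM-style fSAD percentage from registered inspiratory/expiratory CT.
import numpy as np

def prm_fsad_percent(insp_hu, exp_hu, lung_mask):
    """Percentage of lung voxels classified as functional small airway disease."""
    lung = lung_mask.astype(bool)
    fsad = (insp_hu >= -950) & (exp_hu < -856) & lung   # commonly cited PRM cutoffs
    return 100.0 * fsad.sum() / lung.sum()

rng = np.random.default_rng(3)
insp = rng.normal(-870, 60, size=(64, 64, 64))          # synthetic inspiratory HU
exp = insp + rng.normal(80, 40, size=insp.shape)        # lung densifies on expiration
mask = np.ones_like(insp, dtype=bool)                   # placeholder lung mask
print(f"fSAD: {prm_fsad_percent(insp, exp, mask):.1f}% of lung")
```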

CT Differentiation and Prognostic Modeling in COVID-19 and Influenza A Pneumonia.

Chen X, Long Z, Lei Y, Liang S, Sima Y, Lin R, Ding Y, Lin Q, Ma T, Deng Y

PubMed · Jul 1 2025
This study aimed to compare CT features of COVID-19 and Influenza A pneumonia, develop a diagnostic differential model, and explore a prognostic model for lesion resolution. A total of 446 patients diagnosed with COVID-19 pneumonia and 80 with Influenza A pneumonia underwent baseline chest CT evaluation. Logistic regression analysis was conducted after multivariate analysis, and the results were presented as nomograms. Machine learning models were also evaluated for their diagnostic performance. Prognostic factors for lesion resolution were analyzed using Cox regression after excluding patients who were lost to follow-up, and a nomogram was created. COVID-19 patients more frequently showed features such as thickening of bronchovascular bundles, crazy-paving sign, and traction bronchiectasis. Influenza A patients more frequently exhibited features such as consolidation, coarse banding, and pleural effusion (P < 0.05). The logistic regression model achieved AUC values of 0.937 (training) and 0.931 (validation). Machine learning models exhibited area under the curve values ranging from 0.8486 to 0.9017. COVID-19 patients showed better lesion resolution. Independent prognostic factors for resolution at baseline included age, sex, lesion distribution, morphology, coarse banding, and widening of the main pulmonary artery. Distinct imaging features can differentiate COVID-19 from Influenza A pneumonia. The logistic discriminative model and each machine learning model constructed in this study demonstrated efficacy, and the nomogram for the logistic discriminative model exhibited high utility. Patients with COVID-19 may exhibit better resolution of lesions, and certain baseline characteristics may act as independent prognostic factors for complete resolution of lesions.
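
A toy sketch of the prognostic step described above: fitting a Cox proportional-hazards model for time to lesion resolution with a few baseline covariates. It uses the third-party lifelines package and entirely synthetic data; the covariate names are illustrative, not the study's variable set.

```python
# Hedged sketch: Cox regression for time to lesion resolution on synthetic data.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter   # third-party survival-analysis package

rng = np.random.default_rng(4)
n = 300
df = pd.DataFrame({
    "age": rng.integers(20, 85, n),
    "male": rng.integers(0, 2, n),
    "coarse_banding": rng.integers(0, 2, n),           # illustrative baseline CT feature
    "days_to_resolution": rng.exponential(30, n) + 1,  # follow-up time (synthetic)
    "resolved": rng.integers(0, 2, n),                 # 1 = complete resolution observed
})

cph = CoxPHFitter()
cph.fit(df, duration_col="days_to_resolution", event_col="resolved")
cph.print_summary()   # hazard ratios for each baseline covariate
```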

Deep learning-based lung cancer classification of CT images.

Faizi MK, Qiang Y, Wei Y, Qiao Y, Zhao J, Aftab R, Urrehman Z

PubMed · Jul 1 2025
Lung cancer remains a leading cause of cancer-related deaths worldwide, and accurate classification of lung nodules is critical for early diagnosis. Traditional radiological methods often struggle with high false-positive rates, underscoring the need for advanced diagnostic tools. In this work, we introduce DCSwinB, a novel deep learning-based lung nodule classifier designed to improve the accuracy and efficiency of benign and malignant nodule classification in CT images. Built on the Swin-Tiny Vision Transformer (ViT), DCSwinB incorporates two key innovations: a dual-branch architecture that combines CNNs for local feature extraction with a Swin Transformer for global feature extraction, and a Conv-MLP module that enhances connections between adjacent windows to capture long-range dependencies in 3D images. Pretrained on the LUNA16 and LUNA16-K datasets, which consist of annotated CT scans from thousands of patients, DCSwinB was evaluated using ten-fold cross-validation. The model demonstrated superior performance, achieving 90.96% accuracy, 90.56% recall, 89.65% specificity, and an AUC of 0.94, outperforming existing models such as ResNet50 and Swin-T. These results highlight the effectiveness of DCSwinB in enhancing feature representation while optimizing computational efficiency. By improving the accuracy and reliability of lung nodule classification, DCSwinB has the potential to assist radiologists in reducing diagnostic errors, enabling earlier intervention and improved patient outcomes.
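
The published DCSwinB code is not reproduced here; the sketch below only illustrates the dual-branch idea (a CNN branch for local features fused with a transformer branch for global context) in plain PyTorch. Patch size, layer widths, and the fusion head are arbitrary choices for illustration, not the paper's architecture.

```python
# Hedged sketch: dual-branch CNN + transformer classifier for 2D nodule patches.
import torch
import torch.nn as nn

class DualBranchNoduleNet(nn.Module):
    def __init__(self, num_classes=2, embed_dim=64):
        super().__init__()
        # CNN branch: local texture features
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Transformer branch: global context over 8x8 patches of a 64x64 input
        self.patch_embed = nn.Conv2d(1, embed_dim, kernel_size=8, stride=8)
        encoder_layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.head = nn.Linear(64 + embed_dim, num_classes)

    def forward(self, x):                                         # x: (B, 1, 64, 64) CT patch
        local_feat = self.cnn(x)                                  # (B, 64)
        tokens = self.patch_embed(x).flatten(2).transpose(1, 2)   # (B, 64 tokens, embed_dim)
        global_feat = self.transformer(tokens).mean(dim=1)        # (B, embed_dim)
        return self.head(torch.cat([local_feat, global_feat], dim=1))

logits = DualBranchNoduleNet()(torch.randn(2, 1, 64, 64))
print(logits.shape)   # torch.Size([2, 2]) -> benign vs. malignant logits
```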

Lung cancer screening with low-dose CT: definition of positive, indeterminate, and negative screen results. A nodule management recommendation from the European Society of Thoracic Imaging.

Snoeckx A, Silva M, Prosch H, Biederer J, Frauenfelder T, Gleeson F, Jacobs C, Kauczor HU, Parkar AP, Schaefer-Prokop C, Prokop M, Revel MP

PubMed · Jul 1 2025
Early detection of lung cancer through low-dose CT lung cancer screening in a high-risk population has been proven to reduce lung cancer-specific mortality. Nodule management plays a pivotal role in early detection and further diagnostic approaches. The European Society of Thoracic Imaging (ESTI) has established a nodule management recommendation to improve the handling of pulmonary nodules detected during screening. For solid nodules, the primary method for assessing the likelihood of malignancy is to monitor nodule growth using volumetry software. For subsolid nodules, aggressiveness is determined by measuring the solid part. The ESTI recommendation enhances existing protocols but puts a stronger focus on lesion aggressiveness. The main goals are to minimise the overall number of follow-up examinations while preventing the risk of a major stage shift and reducing the risk of overtreatment. KEY POINTS: Question: Assessment of nodule growth and management according to guidelines is essential in lung cancer screening. Findings: Assessment of nodule aggressiveness defines follow-up in lung cancer screening. Clinical relevance: The ESTI nodule management recommendation aims to reduce follow-up examinations while preventing major stage shift and overtreatment.
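
Since growth assessment of solid nodules relies on volumetry, the standard volume-doubling-time (VDT) formula is the usual quantity of interest; a minimal sketch follows. The example values are arbitrary, and the actual positive/indeterminate/negative thresholds are defined by the ESTI recommendation itself and are not reproduced here.

```python
# Hedged sketch: standard volume-doubling-time calculation from two volumetry measurements.
import math

def volume_doubling_time(v1_mm3: float, v2_mm3: float, interval_days: float) -> float:
    """VDT in days: interval * ln(2) / ln(V2 / V1)."""
    return interval_days * math.log(2) / math.log(v2_mm3 / v1_mm3)

# example: a nodule growing from 100 to 160 mm^3 over 90 days
print(f"VDT = {volume_doubling_time(100, 160, 90):.0f} days")
```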

Personalized prediction model generated with machine learning for kidney function one year after living kidney donation.

Oki R, Hirai T, Iwadoh K, Kijima Y, Hashimoto H, Nishimura Y, Banno T, Unagami K, Omoto K, Shimizu T, Hoshino J, Takagi T, Ishida H, Hirai T

PubMed · Jul 1 2025
Living kidney donors typically experience approximately a 30% reduction in kidney function after donation, although the degree of reduction varies among individuals. This study aimed to develop a machine learning (ML) model to predict serum creatinine (Cre) levels at one year post-donation using preoperative clinical data, including kidney-, fat-, and muscle-volumetry values from computed tomography. A total of 204 living kidney donors were included. Symbolic regression via genetic programming was employed to create an ML-based Cre prediction model using preoperative clinical variables. Validation was conducted using a 7:3 training-to-test data split. The ML model demonstrated a median absolute error of 0.079 mg/dL for predicting Cre. In the validation cohort, it outperformed conventional methods (which assume post-donation eGFR to be 70% of the preoperative value) with higher R² (0.58 vs. 0.27), lower root mean squared error (5.27 vs. 6.89), and lower mean absolute error (3.92 vs. 5.8). Key predictive variables included preoperative Cre and remnant kidney volume. The model was deployed as a web application for clinical use. The ML model offers accurate predictions of post-donation kidney function and may assist in monitoring donor outcomes, enhancing personalized care after kidney donation.
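
The paper does not state its software stack; as one possible way to run symbolic regression via genetic programming, here is a sketch using the third-party gplearn package on synthetic donor data. The variable names and the toy target are assumptions, not the study's data or learned formula.

```python
# Hedged sketch: symbolic regression (genetic programming) for post-donation creatinine.
import numpy as np
from gplearn.genetic import SymbolicRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(5)
n = 204
pre_cre = rng.uniform(0.5, 1.1, n)          # preoperative creatinine (mg/dL), synthetic
remnant_vol = rng.uniform(120, 220, n)      # remnant kidney volume (mL), synthetic
X = np.column_stack([pre_cre, remnant_vol])
y = pre_cre * 1.3 + 0.002 * (170 - remnant_vol) + rng.normal(0, 0.05, n)  # toy target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
sr = SymbolicRegressor(population_size=500, generations=20,
                       function_set=("add", "sub", "mul", "div"), random_state=0)
sr.fit(X_tr, y_tr)
print("learned expression:", sr._program)
print("test MAE:", round(mean_absolute_error(y_te, sr.predict(X_te)), 3))
```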

Auto-Segmentation via deep-learning approaches for the assessment of flap volume after reconstructive surgery or radiotherapy in head and neck cancer.

Thariat J, Mesbah Z, Chahir Y, Beddok A, Blache A, Bourhis J, Fatallah A, Hatt M, Modzelewski R

PubMed · Jul 1 2025
Reconstructive flap surgery aims to restore the substance and function losses associated with tumor resection. Automatic flap segmentation could allow quantification of flap volume and correlations with functional outcomes after surgery or post-operative RT (poRT). Because flaps are ectopic tissues made of various components (fat, skin, fascia, muscle, bone) with varying volume, shape, and texture, and because the postoperative bed shows anatomical modifications, inflammation, and edema, the segmentation task is challenging. We built an artificial intelligence-enabled automatic soft-tissue flap segmentation method from CT scans of head and neck cancer (HNC) patients. Ground-truth flap segmentation masks were delineated by two experts on postoperative CT scans of 148 HNC patients undergoing poRT. All CTs and flaps (free or pedicled, soft tissue only or bone) were kept, including those with artefacts, to ensure generalizability. A deep-learning nnUNetv2 framework was built using Hounsfield unit (HU) windowing to mimic radiological assessment. A transformer-based 2D "Segment Anything Model" (MedSAM) was also built and fine-tuned on medical CTs. Models were compared with the Dice similarity coefficient (DSC) and 95th-percentile Hausdorff distance (HD95) metrics. Flaps were in the oral cavity (N = 102), oropharynx (N = 26), or larynx/hypopharynx (N = 20). There were free flaps (N = 137) and pedicled flaps (N = 11), comprising soft tissue only (N = 92), reconstructed bone (N = 42), or bone resected without reconstruction (N = 40). The nnUNet-windowing model outperformed the plain nnUNetv2 and MedSAM models, achieving a mean DSC of 0.69 and an HD95 of 25.6 mm under 5-fold cross-validation. Segmentation performed better in the absence of artifacts and worse in rare situations such as pedicled flaps, laryngeal primaries, and bone resected without reconstruction (p < 0.01). Automatic flap segmentation demonstrates clinical performance that allows quantification of spontaneous and radiation-induced volume shrinkage of flaps. Free flaps achieved excellent performance; rare situations will be addressed by fine-tuning the network.
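
A compact sketch of the two evaluation metrics used above, the Dice similarity coefficient and the 95th-percentile Hausdorff distance, computed on binary masks. The HD95 here uses a point-set (distance-transform) approximation rather than a strict surface-based definition, and isotropic voxel spacing is assumed; the masks are synthetic.

```python
# Hedged sketch: DSC and approximate HD95 between a ground-truth and a predicted mask.
import numpy as np
from scipy.ndimage import distance_transform_edt

def dice(a: np.ndarray, b: np.ndarray) -> float:
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hd95(a: np.ndarray, b: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    a, b = a.astype(bool), b.astype(bool)
    # distance from every voxel to the nearest foreground voxel of the other mask
    dt_a = distance_transform_edt(~a, sampling=spacing)
    dt_b = distance_transform_edt(~b, sampling=spacing)
    dists = np.concatenate([dt_b[a], dt_a[b]])
    return float(np.percentile(dists, 95))

gt = np.zeros((40, 40, 40), bool); gt[10:30, 10:30, 10:30] = True   # ground-truth flap mask
pred = np.zeros_like(gt); pred[12:32, 10:30, 10:30] = True          # predicted mask
print(f"DSC={dice(gt, pred):.3f}  HD95={hd95(gt, pred):.1f} mm")
```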