Page 94 of 102 · 1015 results

Application of Quantitative CT and Machine Learning in the Evaluation and Diagnosis of Polymyositis/Dermatomyositis-Associated Interstitial Lung Disease.

Yang K, Chen Y, He L, Sheng Y, Hei H, Zhang J, Jin C

PubMed · May 16, 2025
To investigate lung changes in patients with polymyositis/dermatomyositis-associated interstitial lung disease (PM/DM-ILD) using quantitative CT, and to construct a diagnostic model evaluating the application of quantitative CT and machine learning in diagnosing PM/DM-ILD. Chest CT images from 348 PM/DM patients were quantitatively analyzed to obtain the lung volume (LV), mean lung density (MLD), and intrapulmonary vascular volume (IPVV) of the whole lung and of each lung lobe. The percentage of high attenuation area (HAA%) was determined from the lung density histogram. Patients hospitalized from 2016 to 2021 formed the training set (n=258), and those hospitalized from 2022 to 2023 formed the temporal test set (n=90). Seven classification models were established, and their performance was evaluated using ROC analysis, decision curve analysis, calibration, and precision-recall curves. The optimal model was selected and interpreted with the SHAP package in Python. Compared with the non-ILD group, the ILD group showed significantly increased mean lung density and percentage of high attenuation area, and significantly decreased lung volume and intrapulmonary vascular volume, in the whole lung and in each lung lobe. The Random Forest (RF) model demonstrated superior performance, with a test-set area under the curve of 0.843 (95% CI: 0.821-0.865), accuracy of 0.778, sensitivity of 0.784, and specificity of 0.750. Quantitative CT serves as an objective and precise method to assess pulmonary changes in PM/DM-ILD patients. The RF model based on quantitative CT parameters displayed strong diagnostic efficiency in identifying ILD, offering a new and convenient approach for evaluating and diagnosing PM/DM-ILD patients.
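As a sketch of how a lung density histogram yields the HAA% metric described in the abstract above: the fraction of lung voxels falling inside a "high attenuation" Hounsfield-unit window is reported as a percentage. The window bounds below (-600 to -250 HU) are a common convention in quantitative CT of interstitial disease, not thresholds stated in this abstract.

```python
def haa_percent(hu_values, lo=-600, hi=-250):
    """Percent of lung voxels inside a 'high attenuation area' HU window.

    The -600..-250 HU window is a common quantitative-CT convention
    (not necessarily the thresholds used in this study).
    """
    n_high = sum(1 for v in hu_values if lo <= v <= hi)
    return 100.0 * n_high / len(hu_values)

# Toy histogram of four voxels; two fall inside the window.
print(haa_percent([-700, -500, -300, -100]))  # 50.0
```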

Enhancing Craniomaxillofacial Surgeries with Artificial Intelligence Technologies.

Do W, van Nistelrooij N, Bergé S, Vinayahalingam S

PubMed · May 16, 2025
Artificial intelligence (AI) can be applied across multiple subspecialties of craniomaxillofacial (CMF) surgery. This article overviews AI fundamentals, focusing on classification, object detection, and segmentation, the core tasks underlying CMF applications. It then explores the development and integration of AI in dentoalveolar surgery, implantology, traumatology, oncology, craniofacial surgery, and orthognathic and feminization surgery, highlighting AI-driven advances in diagnosis, pre-operative planning, intra-operative assistance, post-operative management, and outcome prediction. Finally, the challenges to AI adoption are discussed, including data limitations, algorithm validation, and clinical integration.

Deep learning progressive distill for predicting clinical response to conversion therapy from preoperative CT images of advanced gastric cancer patients.

Han S, Zhang T, Deng W, Han S, Wu H, Jiang B, Xie W, Chen Y, Deng T, Wen X, Liu N, Fan J

PubMed · May 16, 2025
Identifying patients suitable for conversion therapy through early non-invasive screening is crucial for tailoring treatment in advanced gastric cancer (AGC). This study aimed to develop and validate a deep learning method, utilizing preoperative computed tomography (CT) images, to predict the response to conversion therapy in AGC patients. This retrospective study involved 140 patients. We utilized the Progressive Distill (PD) methodology to construct a deep learning model for predicting clinical response to conversion therapy from preoperative CT images. Patients in the training set (n = 112) and the test set (n = 28) were sourced from The First Affiliated Hospital of Wenzhou Medical University between September 2017 and November 2023. Our PD model's performance was compared with baseline models and models using Knowledge Distillation (KD), with evaluation metrics including accuracy, sensitivity, specificity, receiver operating characteristic curves, areas under the receiver operating characteristic curve (AUCs), and heat maps. The PD model exhibited the best performance, demonstrating robust discrimination of clinical response to conversion therapy with an AUC of 0.99 and accuracy of 99.11% in the training set, and an AUC of 0.87 and accuracy of 85.71% in the test set. Sensitivity and specificity were 97.44% and 100%, respectively, in the training set, and 85.71% each in the test set, suggesting the absence of discernible bias. The PD deep learning model accurately predicts clinical response to conversion therapy in AGC patients. Further investigation is warranted to assess its clinical utility alongside clinicopathological parameters.
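The abstract does not detail the Progressive Distill training objective, but distillation approaches in general build on the classic knowledge-distillation loss: a KL divergence between temperature-softened teacher and student output distributions. A minimal, illustrative sketch (not the paper's method):

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; higher T gives softer distributions."""
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on temperature-softened distributions,
    scaled by T^2 as in classic knowledge distillation."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return T * T * sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
```

The loss is zero when student and teacher logits agree and positive otherwise, so minimizing it pulls the student's class probabilities toward the teacher's.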

Multicenter development of a deep learning radiomics and dosiomics nomogram to predict radiation pneumonia risk in non-small cell lung cancer.

Wang X, Zhang A, Yang H, Zhang G, Ma J, Ye S, Ge S

PubMed · May 16, 2025
Radiation pneumonitis (RP) is the most common side effect of chest radiotherapy and can affect patients' quality of life. This study aimed to establish a combined model of radiomics, dosiomics, and deep learning (DL) features, based on simulation CT and dose-distribution images together with clinical parameters, to improve the prediction of grade ≥ 2 RP (RP2) in patients with non-small cell lung cancer (NSCLC). This study retrospectively collected 245 NSCLC patients who received radiotherapy at three hospitals. The 162 patients from Hospital I were randomly divided into a training cohort and an internal validation cohort in a 7:3 ratio; 83 patients from the two other hospitals served as an external validation cohort. Multivariate analysis was used to screen independent clinical predictors and establish a clinical model (CM). Radiomics and dosiomics (RD) features and DL features were extracted from the simulation CT and dose-distribution images within the region of interest (ROI) of total lung minus PTV (TL-PTV). Features screened by the t-test and least absolute shrinkage and selection operator (LASSO) were used to construct the RD and DL models, from which an RD-score and a DL-score were calculated. The RD-score, DL-score, and independent clinical features were combined to establish a deep learning radiomics and dosiomics nomogram (DLRDN). Model performance was evaluated by the area under the curve (AUC). Three clinical factors, V20, V30, and mean lung dose (MLD), were used to establish the CM. Seven RD features (four radiomics and three dosiomics features) were selected to establish the RD model, and ten DL features were selected to establish the DL model. Among the different models, DLRDN showed the best predictions, with AUCs of 0.891 (0.826-0.957), 0.825 (0.693-0.957), and 0.801 (0.698-0.904) in the training, internal validation, and external validation cohorts, respectively. Decision curve analysis (DCA) showed that DLRDN had a higher overall net benefit than the other models. The calibration curve showed good agreement between the predicted and actual values of DLRDN. Overall, radiomics, dosiomics, and DL features based on simulation CT and dose-distribution images have the potential to help predict RP2. The combination of multi-dimensional data produced the optimal predictive model, which could provide guidance for clinicians.
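LASSO, used above to screen the RD and DL features, selects features by driving most coefficients to exactly zero via the L1 proximal (soft-thresholding) operator. A minimal sketch of that selection mechanism, with hypothetical feature names:

```python
def soft_threshold(w, lam):
    """Proximal operator of the L1 penalty: shrink toward zero, clipping
    small coefficients to exactly zero (this is what makes LASSO select)."""
    if w > lam:
        return w - lam
    if w < -lam:
        return w + lam
    return 0.0

def selected_features(names, coefs, lam):
    """Names of features whose thresholded coefficient is nonzero."""
    return [n for n, c in zip(names, coefs) if soft_threshold(c, lam) != 0.0]

# Hypothetical coefficients: only features with |coef| > lam survive.
print(selected_features(["glcm_entropy", "dose_v20", "firstorder_mean"],
                        [0.9, 0.1, -0.8], lam=0.3))  # ['glcm_entropy', 'firstorder_mean']
```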

Artificial intelligence generated 3D body composition predicts dose modifications in patients undergoing neoadjuvant chemotherapy for rectal cancer.

Besson A, Cao K, Mardinli A, Wirth L, Yeung J, Kokelaar R, Gibbs P, Reid F, Yeung JM

PubMed · May 16, 2025
Chemotherapy administration is a balancing act between giving enough to achieve the desired tumour response while limiting adverse effects. Chemotherapy dosing is based on body surface area (BSA). Emerging evidence suggests that body composition plays a crucial role in the pharmacokinetic and pharmacodynamic profiles of cytotoxic agents and could inform optimal dosing. This study aimed to assess how lumbosacral body composition influences adverse events in patients receiving neoadjuvant chemotherapy for rectal cancer. A retrospective study (February 2013 to March 2023) examined the impact of body composition on neoadjuvant treatment outcomes in rectal cancer patients. Staging CT scans were analysed using a validated AI model to measure lumbosacral skeletal muscle (SM), intramuscular adipose tissue (IMAT), visceral adipose tissue (VAT), and subcutaneous adipose tissue volume and density. Multivariate analyses explored the relationship between body composition and chemotherapy outcomes. A total of 242 patients were included (164 male, 78 female), with a median age of 63.4 years. Chemotherapy dose reductions occurred more frequently in females (26.9% vs. 15.9%, p = 0.042) and in females with greater VAT density (-82.7 vs. -89.1, p = 0.007) and SM:IMAT+VAT volume ratio (1.99 vs. 1.36, p = 0.042). BSA was a poor predictor of dose reduction in female patients (AUC 0.397, sensitivity 38%, specificity 60%), whereas the SM:IMAT+VAT volume ratio (AUC 0.651, sensitivity 76%, specificity 61%) and VAT density (AUC 0.699, sensitivity 57%, specificity 74%) showed greater predictive ability. Body composition did not influence dose adjustment in male patients. Lumbosacral body composition outperformed BSA in predicting adverse events in female patients with rectal cancer undergoing neoadjuvant chemotherapy.
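For context on the BSA dosing baseline the study compares against: BSA is conventionally estimated from height and weight, most famously with the Du Bois formula. A short sketch (the abstract does not state which BSA formula the institution uses):

```python
def bsa_du_bois(height_cm, weight_kg):
    """Body surface area (m^2) by the Du Bois formula, the classic basis
    for BSA-based chemotherapy dosing:
    BSA = 0.007184 * height^0.725 * weight^0.425."""
    return 0.007184 * height_cm ** 0.725 * weight_kg ** 0.425

# A 170 cm, 70 kg patient has a BSA of roughly 1.81 m^2.
print(round(bsa_du_bois(170, 70), 2))
```

Because two patients with the same BSA can have very different muscle and fat distributions, a BSA-only dose ignores exactly the compartments (SM, IMAT, VAT) the study found predictive.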

Impact of sarcopenia and obesity on mortality in older adults with SARS-CoV-2 infection: automated deep learning body composition analysis in the NAPKON-SUEP cohort.

Schluessel S, Mueller B, Tausendfreund O, Rippl M, Deissler L, Martini S, Schmidmaier R, Stoecklein S, Ingrisch M, Blaschke S, Brandhorst G, Spieth P, Lehnert K, Heuschmann P, de Miranda SMN, Drey M

PubMed · May 16, 2025
Severe respiratory infections pose a major challenge in clinical practice, especially in older adults, and body composition analysis could play a crucial role in risk assessment and therapeutic decision-making. This study investigates whether obesity or sarcopenia has a greater impact on mortality in patients with severe respiratory infections. It focuses on the National Pandemic Cohort Network (NAPKON-SUEP) cohort, which includes patients over 60 years of age with confirmed severe COVID-19 pneumonia. An innovative approach was adopted: pre-trained deep learning models were used for automated analysis of body composition based on routine thoracic CT scans. The study included 157 hospitalized patients (mean age 70 ± 8 years, 41% women, mortality rate 39%) from the NAPKON-SUEP cohort at 57 study sites. A pre-trained deep learning model analyzed body composition (muscle, bone, fat, and intramuscular fat volumes) from the thoracic CT images. Binary logistic regression was performed to investigate the association between obesity, sarcopenia, and mortality. Non-survivors exhibited lower muscle volume (p = 0.043), higher intramuscular fat volume (p = 0.041), and a higher BMI (p = 0.031) than survivors. Among all body composition parameters, muscle volume adjusted for weight was the strongest predictor of mortality in the logistic regression model, even after adjusting for sex, age, diabetes, chronic lung disease, and chronic kidney disease (odds ratio = 0.516). In contrast, BMI showed no significant difference after adjustment for comorbidities. This study identifies muscle volume derived from routine CT scans as a major predictor of survival in patients with severe respiratory infections. The results underscore the potential of AI-supported, CT-based body composition analysis for risk stratification and clinical decision-making, not only in COVID-19 patients but in all patients over 60 years of age with severe acute respiratory infections. The innovative application of pre-trained deep learning models opens up new possibilities for automated, standardized assessment in clinical practice.
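To unpack the reported odds ratio of 0.516: in logistic regression, the odds ratio is exp(coefficient), and an OR below 1 means the odds of the outcome fall as the predictor rises. A brief sketch of the interpretation:

```python
import math

def odds_ratio(beta):
    """Odds ratio for a one-unit increase in a logistic-regression predictor."""
    return math.exp(beta)

def pct_odds_change(or_value):
    """Percent change in the odds implied by an odds ratio."""
    return (or_value - 1.0) * 100.0

# The reported OR of 0.516 for weight-adjusted muscle volume implies
# roughly 48% lower odds of death per unit increase in that predictor.
print(round(pct_odds_change(0.516), 1))  # -48.4
```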

High-Performance Prompting for LLM Extraction of Compression Fracture Findings from Radiology Reports.

Kanani MM, Monawer A, Brown L, King WE, Miller ZD, Venugopal N, Heagerty PJ, Jarvik JG, Cohen T, Cross NM

PubMed · May 16, 2025
Extracting information from radiology reports can provide critical data to empower many radiology workflows. For spinal compression fractures, these data can facilitate evidence-based care for at-risk populations. Manual extraction from free-text reports is laborious and error-prone. Large language models (LLMs) have shown promise; however, fine-tuning strategies to optimize performance on specific tasks can be resource-intensive, and a variety of prompting strategies have achieved similar results with fewer demands. Our study pioneers the use of Meta's Llama 3.1, together with prompt-based strategies, for automated extraction of compression fracture findings from free-text radiology reports, outputting structured data without model training. We tested performance on a time-based sample of CT exams covering the spine, acquired across our healthcare enterprise from 2/20/2024 to 2/22/2024 (637 anonymized reports; ages 18-102; 47% female). Ground-truth annotations were manually generated and compared against the performance of three models (Llama 3.1 70B, Llama 3.1 8B, and Vicuna 13B) with nine different prompting configurations, for a total of 27 model/prompt experiments. The highest F1 score (0.91) was achieved by the Llama 3.1 70B model when provided with a radiologist-written background, with similar results when the background was written by a separate LLM (0.86). Adding few-shot examples to these prompts had a variable impact on F1 (0.89 and 0.84, respectively). Comparable ROC-AUC and PR-AUC performance was observed. Our work demonstrates that an open-weights LLM excelled at extracting compression fracture findings from free-text radiology reports using prompt-based techniques, without requiring extensive manually labeled examples for model training.
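The prompting configurations described above combine a task background with optional few-shot examples ahead of the target report. A minimal sketch of that prompt layout; the section wording, JSON schema, and function name are hypothetical, since the study's exact prompt text is not given in the abstract:

```python
def build_extraction_prompt(background, examples, report):
    """Assemble a zero-/few-shot extraction prompt: task background first,
    then optional worked examples, then the target report.

    Hypothetical layout; the study's actual prompt wording is not published
    in the abstract.
    """
    parts = [background]
    for ex_report, ex_findings in examples:
        parts.append(f"Report:\n{ex_report}\nFindings (JSON):\n{ex_findings}")
    parts.append(f"Report:\n{report}\nFindings (JSON):")
    return "\n\n".join(parts)

prompt = build_extraction_prompt(
    "You are a radiologist. Extract compression fracture findings as JSON.",
    [("CT spine: L1 compression deformity.", '{"fracture": true, "level": "L1"}')],
    "CT chest: no acute osseous abnormality.",
)
```

Ending the prompt at "Findings (JSON):" cues the model to complete the structured output, which is what lets extraction run without any model training.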

A deep learning-based approach to automated rib fracture detection and CWIS classification.

Marting V, Borren N, van Diepen MR, van Lieshout EMM, Wijffels MME, van Walsum T

PubMed · May 16, 2025
Trauma-induced rib fractures are a common injury. The number and characteristics of these fractures influence whether a patient is treated nonoperatively or surgically. Rib fractures are typically diagnosed using CT scans, yet 19.2-26.8% of fractures are still missed during assessment. Another challenge in managing rib fractures is the interobserver variability in their classification. The purpose of this study was to develop and assess an automated method that detects rib fractures in CT scans and classifies them according to the Chest Wall Injury Society (CWIS) classification. In total, 198 CT scans were collected, of which 170 were used for training and internal validation and 28 for external validation. Fractures and their classifications were manually annotated in each of the scans. A detection and classification network was trained for each of the three components of the CWIS classification. In addition, a rib-numbering network was trained to obtain the rib number of each fracture. Experiments were performed to assess the method's performance. On the internal test set, the method achieved a detection sensitivity of 80% at a precision of 87% and an F1-score of 83%, with a mean of 1.11 false positives per scan (FPPS). Classification sensitivity varied, from 25% for complex fractures to 97% for posterior fractures. The correct rib number was assigned to 94% of the detected fractures. The custom-trained nnU-Net correctly labeled 95.5% of all ribs and 98.4% of fractured ribs in 30 patients. Detection and classification performance on the external validation dataset was slightly better, with a fracture detection sensitivity of 84%, precision of 85%, F1-score of 84%, and FPPS of 0.96; 95% of the fractures were assigned the correct rib number. The method is able to accurately detect and classify rib fractures in CT scans, although there is room for improvement for the rare and underrepresented classes in the training set.
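The detection metrics reported above (sensitivity, precision, F1, FPPS) all derive from the per-fracture true/false positive and false negative counts. A short sketch of those definitions, with illustrative counts rather than the study's raw numbers:

```python
def detection_summary(tp, fp, fn, n_scans):
    """Sensitivity, precision, F1, and false positives per scan (FPPS)
    for an object-detection task (no true negatives exist for detection)."""
    sensitivity = tp / (tp + fn)                 # recall: fraction of fractures found
    precision = tp / (tp + fp)                   # fraction of detections that are real
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    fpps = fp / n_scans                          # false alarms per CT scan
    return sensitivity, precision, f1, fpps

# Illustrative counts: 80 of 100 fractures found, 12 false alarms over 25 scans.
sens, prec, f1, fpps = detection_summary(tp=80, fp=12, fn=20, n_scans=25)
```

Note there is no specificity here: in detection, "true negatives" (correctly detecting nothing, everywhere) are uncountable, which is why FPPS replaces the false-positive rate.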

Automated CT segmentation for lower extremity tissues in lymphedema evaluation using deep learning.

Na S, Choi SJ, Ko Y, Urooj B, Huh J, Cha S, Jung C, Cheon H, Jeon JY, Kim KW

PubMed · May 16, 2025
Clinical assessment of lymphedema, particularly of lymphedema severity and fluid-fibrotic lesions, remains challenging with traditional methods. We aimed to develop and validate a deep learning segmentation tool for automated tissue component analysis in lower extremity CT scans. For the development dataset, lower extremity CT venography scans were collected from 118 patients with gynecologic cancers for algorithm training. Reference standards were created by segmenting the fat, muscle, and fluid-fibrotic tissue components in 3D Slicer. A deep learning model based on the Unet++ architecture with an EfficientNet-B7 encoder was developed and trained. Segmentation accuracy was validated in an internal validation set (n = 10) and an external validation set (n = 10) using the Dice similarity coefficient (DSC) and volumetric similarity (VS). A graphical user interface (GUI) tool was developed for visualization of the segmentation results. Our deep learning algorithm achieved high segmentation accuracy: mean DSCs for the individual components and for all components combined ranged from 0.945 to 0.999 in the internal validation set and from 0.946 to 0.999 in the external validation set. Similar performance was observed for VS, with mean values for all components ranging from 0.97 to 0.999. In volumetric analysis, mean volumes of the entire leg and of each component did not differ significantly between the reference standard and the deep learning measurements (p > 0.05). The GUI displays lymphedema mapping, highlighting the segmented fat, muscle, and fluid-fibrotic components across the entire leg. Our deep learning algorithm provides an automated segmentation tool enabling accurate segmentation, volume measurement of tissue components, and lymphedema mapping. Question: Clinical assessment of lymphedema remains challenging, particularly for tissue segmentation and quantitative severity evaluation. Findings: A deep learning algorithm achieved DSCs > 0.95 and VS > 0.97 for fat, muscle, and fluid-fibrotic components in internal and external validation datasets. Clinical relevance: The developed deep learning tool accurately segments and quantifies lower extremity tissue components on CT scans, enabling automated lymphedema evaluation and mapping with high segmentation accuracy.
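The two validation metrics used above measure different things: DSC rewards spatial overlap between predicted and reference masks, while VS compares only total volumes. A minimal sketch with masks represented as sets of voxel indices:

```python
def dice(a, b):
    """Dice similarity coefficient between two binary masks (sets of
    voxel indices): 2 * |A intersect B| / (|A| + |B|)."""
    return 2 * len(a & b) / (len(a) + len(b))

def volumetric_similarity(a, b):
    """VS compares volumes only, ignoring spatial overlap:
    1 - |V_A - V_B| / (V_A + V_B)."""
    return 1 - abs(len(a) - len(b)) / (len(a) + len(b))

# Two equal-volume masks that only half overlap: DSC penalizes the
# misalignment (0.5) while VS is perfect (1.0).
m1 = {(0, 0), (0, 1), (1, 0), (1, 1)}
m2 = {(1, 0), (1, 1), (2, 0), (2, 1)}
print(dice(m1, m2), volumetric_similarity(m1, m2))  # 0.5 1.0
```

Reporting both, as the study does, guards against a model that gets the volume right for the wrong voxels.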

A computed tomography-based radiomics prediction model for BRAF mutation status in colorectal cancer.

Zhou B, Tan H, Wang Y, Huang B, Wang Z, Zhang S, Zhu X, Wang Z, Zhou J, Cao Y

PubMed · May 15, 2025
The aim of this study was to develop and validate venous-phase CT image-based radiomics to predict BRAF gene mutation status in preoperative colorectal cancer patients. In this study, 301 patients with pathologically confirmed colorectal cancer were retrospectively enrolled, comprising 225 from Centre I (73 mutant and 152 wild-type) and 76 from Centre II (36 mutant and 40 wild-type). The Centre I cohort was randomly divided into a training set (n = 158) and an internal validation set (n = 67) in a 7:3 ratio, while Centre II served as an independent external validation set (n = 76). The whole-tumor region of interest was segmented and radiomics features were extracted. To explore whether peritumoral expansion could improve performance, the tumor contour was also dilated by 3 mm. Finally, a t-test, Pearson correlation, and LASSO regression were used to screen features strongly associated with BRAF mutation. Based on these features, six classifiers were constructed: Support Vector Machine (SVM), Decision Tree (DT), Random Forest (RF), Logistic Regression (LR), K-Nearest Neighbors (KNN), and Extreme Gradient Boosting (XGBoost). Model performance and clinical utility were evaluated using receiver operating characteristic (ROC) curves, decision curve analysis, accuracy, sensitivity, and specificity. Gender was an independent predictor of BRAF mutation. The unexpanded RF model, constructed using 11 radiomic features, demonstrated the best predictive performance: in the training cohort, an AUC of 0.814 (95% CI 0.732-0.895), accuracy of 0.810, and sensitivity of 0.620; in the internal validation cohort, an AUC of 0.798 (95% CI 0.690-0.907), accuracy of 0.761, and sensitivity of 0.609; and in the external validation cohort, an AUC of 0.737 (95% CI 0.616-0.847), accuracy of 0.658, and sensitivity of 0.667. A machine learning model based on CT radiomics can effectively predict BRAF mutation in patients with colorectal cancer, and the unexpanded RF model demonstrated optimal predictive performance.
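The AUCs used throughout these abstracts to compare classifiers have a simple rank interpretation: the probability that a randomly chosen positive case scores higher than a randomly chosen negative case. A pure-Python sketch of that Mann-Whitney formulation:

```python
def roc_auc(pos_scores, neg_scores):
    """AUC via its rank interpretation: the probability that a random
    positive case outscores a random negative case (Mann-Whitney U
    divided by n_pos * n_neg); ties count one half."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Perfectly separated scores give AUC 1.0; indistinguishable scores give 0.5.
print(roc_auc([0.9, 0.8], [0.1, 0.2]), roc_auc([0.5], [0.5]))
```

This is why AUC, unlike accuracy, is insensitive to the mutant/wild-type class imbalance seen in the cohorts above.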