
Predicting treatment response to systemic therapy in advanced gallbladder cancer using multiphase enhanced CT images.

Wu J, Zheng Z, Li J, Shen X, Huang B

PubMed · May 8, 2025
Accurate estimation of treatment response can help clinicians identify patients who would potentially benefit from systemic therapy. This study aimed to develop and externally validate a model for predicting treatment response to systemic therapy in advanced gallbladder cancer (GBC). We recruited 399 eligible GBC patients across four institutions. Multivariable logistic regression analysis was performed to identify independent clinical factors related to therapeutic efficacy. A deep learning (DL) radiomics signature was developed for predicting treatment response using multiphase enhanced CT images. The DL radiomic-clinical (DLRSC) model was then built by combining the DL signature with the significant clinical factors, and its predictive performance was evaluated using the area under the curve (AUC). Gradient-weighted class activation mapping (Grad-CAM) analysis was performed to help clinicians better understand the predictive results. Furthermore, patients were stratified into low- and high-score groups by the DLRSC model, and progression-free survival (PFS) and overall survival (OS) were compared between the two groups. Multivariable analysis revealed that tumor size was a significant predictor of efficacy. The DLRSC model showed strong predictive performance, with AUCs of 0.86 (95% CI, 0.82-0.89) and 0.84 (95% CI, 0.80-0.87) in the internal and external test datasets, respectively, along with good discrimination, calibration, and clinical utility. Moreover, Kaplan-Meier analysis revealed that patients in the low-score group, predicted by the DLRSC model to be insensitive to systemic therapy, had worse PFS and OS. The DLRSC model allows treatment response to be predicted in advanced GBC patients receiving systemic therapy, and the survival benefit associated with the model was also assessed.
Question: No effective tools exist in clinical practice for identifying patients who would potentially benefit from systemic therapy.
Findings: Our combined model predicts treatment response to systemic therapy in advanced gallbladder cancer.
Clinical relevance: With the help of this model, clinicians could inform patients of the risk of potentially ineffective treatment, reducing unnecessary adverse events and helping reallocate healthcare resources.
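As a rough illustration of the combined-model idea in this abstract, the sketch below fuses a hypothetical deep-learning signature score with one clinical factor (tumor size) in a logistic model, reports a test AUC, and splits patients into low- and high-score groups. All data, thresholds, and variable names are synthetic stand-ins, not the authors' DLRSC pipeline.

```python
# Minimal sketch: combine a DL signature score with a clinical factor in a
# logistic model, evaluate with AUC, and stratify by predicted score.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 300
dl_signature = rng.normal(size=n)          # stand-in for the DL radiomics score
tumor_size_cm = rng.uniform(1, 8, size=n)  # clinical factor named in the abstract
response = (0.8 * dl_signature - 0.3 * tumor_size_cm + rng.normal(size=n) > -1.2).astype(int)

X = np.column_stack([dl_signature, tumor_size_cm])
train, test = slice(0, 200), slice(200, n)

combined_model = LogisticRegression().fit(X[train], response[train])
pred = combined_model.predict_proba(X[test])[:, 1]
print("test AUC:", round(roc_auc_score(response[test], pred), 3))

# Stratify into low-/high-score groups at the median, standing in for the
# Kaplan-Meier comparison of PFS/OS described in the abstract.
high_score = pred >= np.median(pred)
print("high-score group size:", int(high_score.sum()))
```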

An automated hip fracture detection, classification system on pelvic radiographs and comparison with 35 clinicians.

Yilmaz A, Gem K, Kalebasi M, Varol R, Gencoglan ZO, Samoylenko Y, Tosyali HK, Okcu G, Uvet H

PubMed · May 8, 2025
Accurate diagnosis of orthopedic injuries, especially pelvic and hip fractures, is vital in trauma management. While pelvic radiographs (PXRs) are widely used, misdiagnosis is common. This study proposes an automated system that uses convolutional neural networks (CNNs) to detect potential fracture areas and predict fracture conditions, aiming to outperform traditional object detection-based systems. We developed two deep learning models for hip fracture detection and prediction, trained on PXRs from three hospitals. The first model performed automated hip area detection and cropping, followed by classification of the resulting patches. Images were preprocessed using the Contrast Limited Adaptive Histogram Equalization (CLAHE) algorithm. The YOLOv5 architecture was employed for the object detection model, while three different pre-trained deep neural network (DNN) architectures were used for classification with transfer learning. Performance was evaluated on a test dataset and compared with that of 35 clinicians. YOLOv5 achieved 92.66% accuracy on regular images and 88.89% on CLAHE-enhanced images. The classifier models, MobileNetV2, Xception, and InceptionResNetV2, achieved accuracies between 94.66% and 97.67%. In contrast, the clinicians demonstrated a mean accuracy of 84.53% and longer prediction times. The DNN models showed significantly better accuracy and speed than the human evaluators (p < 0.0005, p < 0.01). These DNN models show promising utility in trauma diagnosis due to their high accuracy and speed; integrating such systems into clinical practice may enhance the diagnostic efficiency of PXRs.
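The sketch below illustrates the general shape of such a pipeline: CLAHE enhancement, cropping a detected hip region, and classifying the patch with a pre-trained CNN adapted for two classes. The random input image, the fixed crop box, and the class meaning are placeholders; YOLOv5 detection and model training are not shown.

```python
# Minimal sketch: CLAHE preprocessing, hip-patch cropping, transfer-learning classifier.
import cv2
import numpy as np
import torch
import torchvision.transforms as T
from torchvision.models import mobilenet_v2

# Stand-in for a loaded pelvic radiograph (grayscale uint8).
pxr = np.random.default_rng(0).integers(0, 256, (512, 512), dtype=np.uint8)

# CLAHE contrast enhancement, as described in the abstract.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(pxr)

# Stand-in for a YOLOv5-detected hip bounding box; crop the patch.
x0, y0, x1, y1 = 150, 100, 450, 400
patch = enhanced[y0:y1, x0:x1]

# Transfer learning: reuse a pre-trained backbone with a new 2-class head.
model = mobilenet_v2(weights=None)  # load ImageNet weights in practice
model.classifier[1] = torch.nn.Linear(model.last_channel, 2)
model.eval()

rgb = cv2.cvtColor(patch, cv2.COLOR_GRAY2RGB)
x = T.Compose([T.ToPILImage(), T.Resize((224, 224)), T.ToTensor()])(rgb)
with torch.no_grad():
    probs = model(x.unsqueeze(0)).softmax(dim=1)
print("fracture probability (untrained placeholder):", float(probs[0, 1]))
```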

Construction of risk prediction model of sentinel lymph node metastasis in breast cancer patients based on machine learning algorithm.

Yang Q, Liu C, Wang Y, Dong G, Sun J

PubMed · May 8, 2025
The aim of this study was to develop and validate a machine learning (ML)-based prediction model for sentinel lymph node metastasis in breast cancer, in order to identify patients at high risk of sentinel lymph node metastasis. In this machine learning study, we retrospectively collected 225 female breast cancer patients who underwent sentinel lymph node biopsy (SLNB). Feature screening was performed using logistic regression analysis. Subsequently, five ML algorithms, namely LOGIT, LASSO, XGBoost, random forest, and GBM, were employed to train and develop the models. In addition, model interpretation was performed with Shapley Additive Explanations (SHAP) analysis to clarify the importance of each feature and the basis of the model's decisions. Combined univariate and multivariate logistic regression analysis identified multifocality, LVI, maximum diameter, US shape, and maximum cortical thickness as significant predictors. We then leveraged the ML algorithms, particularly the random forest model, to develop a predictive model for sentinel lymph node metastasis in breast cancer. Finally, SHAP analysis identified maximum diameter and maximum cortical thickness as the primary factors driving the model's predictions. By integrating pathological and imaging characteristics, ML algorithms can accurately predict sentinel lymph node metastasis in breast cancer patients, with the random forest model showing the most favorable performance. Incorporating these models into the clinic could help clinicians identify patients at risk of sentinel lymph node metastasis of breast cancer and make more reasonable treatment decisions.
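For orientation, the sketch below shows the core modeling and interpretation step on synthetic data: a random forest fit on tabular features named after the abstract's predictors, with SHAP used to inspect per-feature contributions. The data, coefficients, and threshold are invented placeholders, not the study's cohort.

```python
# Minimal sketch: random forest on tabular predictors plus SHAP interpretation.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
import shap

rng = np.random.default_rng(42)
n = 225
X = pd.DataFrame({
    "multifocal": rng.integers(0, 2, n),
    "LVI": rng.integers(0, 2, n),
    "max_diameter_mm": rng.uniform(5, 50, n),
    "shape_US": rng.integers(0, 2, n),
    "max_cortical_thickness_mm": rng.uniform(1, 8, n),
})
y = (0.05 * X["max_diameter_mm"] + 0.4 * X["max_cortical_thickness_mm"]
     + rng.normal(size=n) > 3.5).astype(int)

rf = RandomForestClassifier(n_estimators=300, random_state=42).fit(X, y)
print("impurity-based importances:", dict(zip(X.columns, rf.feature_importances_.round(3))))

# SHAP contributions for the positive class; shap's return shape differs by version.
explainer = shap.TreeExplainer(rf)
sv = explainer.shap_values(X)
sv_pos = sv[1] if isinstance(sv, list) else np.asarray(sv)[:, :, 1]
shap.summary_plot(sv_pos, X, show=False)
```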

Quantitative analysis and clinical determinants of orthodontically induced root resorption using automated tooth segmentation from CBCT imaging.

Lin J, Zheng Q, Wu Y, Zhou M, Chen J, Wang X, Kang T, Zhang W, Chen X

PubMed · May 8, 2025
Orthodontically induced root resorption (OIRR) is difficult to assess accurately using traditional 2D imaging due to distortion and low sensitivity. While CBCT offers more precise 3D evaluation, manual segmentation remains labor-intensive and prone to variability. Recent advances in deep learning enable automatic, accurate tooth segmentation from CBCT images. This study applies deep learning and CBCT technology to quantify OIRR and analyze its risk factors, aiming to improve assessment accuracy, efficiency, and clinical decision-making. We retrospectively analyzed CBCT scans of 108 orthodontic patients to assess OIRR using deep learning-based tooth segmentation and volumetric analysis. Statistical analysis was performed using linear regression to evaluate the influence of patient-related factors, with p < 0.05 considered statistically significant. Root volume decreased significantly after orthodontic treatment (p < 0.001). Age, gender, open (deep) bite, severe crowding, and other factors significantly influenced root resorption rates at different tooth positions. Multivariable regression analysis showed that these factors can predict root resorption, explaining 3% to 15.4% of the variance. This study applied a deep learning model to accurately assess root volume changes from CBCT, revealing significant root volume reduction after orthodontic treatment. Underage patients experienced less root resorption, and factors such as anterior open bite and deep overbite influenced resorption in specific teeth, whereas skeletal pattern, overjet, and underbite were not significant predictors.
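The volumetric step implied above reduces to counting voxels in a binary root segmentation and multiplying by the voxel volume, then comparing pre- and post-treatment scans. The sketch below uses synthetic masks and an assumed voxel spacing; the deep-learning segmentation itself is not shown.

```python
# Minimal sketch: root volume from a binary segmentation mask and resorption rate.
import numpy as np

def root_volume_mm3(mask: np.ndarray, spacing_mm=(0.3, 0.3, 0.3)) -> float:
    """Volume of a binary mask in mm^3 given the CBCT voxel spacing (assumed)."""
    voxel_mm3 = float(np.prod(spacing_mm))
    return float(mask.astype(bool).sum()) * voxel_mm3

# Synthetic pre/post masks standing in for deep-learning segmentations.
rng = np.random.default_rng(1)
pre_mask = rng.random((64, 64, 64)) < 0.10
post_mask = pre_mask & (rng.random(pre_mask.shape) > 0.08)  # simulate volume loss

v_pre = root_volume_mm3(pre_mask)
v_post = root_volume_mm3(post_mask)
resorption_pct = 100.0 * (v_pre - v_post) / v_pre
print(f"pre {v_pre:.1f} mm^3, post {v_post:.1f} mm^3, resorption {resorption_pct:.1f}%")
```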

nnU-Net-based high-resolution CT features quantification for interstitial lung diseases.

Lin Q, Zhang Z, Xiong X, Chen X, Ma T, Chen Y, Li T, Long Z, Luo Q, Sun Y, Jiang L, He W, Deng Y

PubMed · May 8, 2025
To develop a new high-resolution CT (HRCT) abnormality quantification tool (CVILDES) for interstitial lung diseases (ILDs) based on the nnU-Net network structure, and to determine whether the quantitative parameters derived from this new software offer a reliable and precise assessment in a clinical setting, in line with expert visual evaluation. HRCT scans from 83 cases of ILDs and 20 cases of other diffuse lung diseases were labeled section by section by multiple radiologists and used as training data for a deep learning model based on nnU-Net, employing a supervised learning approach. For clinical validation, a cohort including 51 cases of interstitial pneumonia with autoimmune features (IPAF) and 14 cases of idiopathic pulmonary fibrosis (IPF) had CT parenchymal patterns evaluated quantitatively with CVILDES and by visual evaluation. We then assessed the correlation between the two methodologies for ILD feature quantification and compared the correlation of each method's quantitative results with pulmonary function parameters (DLCO%, FVC%, and FEV%). All CT data were successfully quantified using CVILDES. CVILDES-quantified results (total ILD extent, ground-glass opacity, consolidation, reticular pattern, and honeycombing) showed a strong correlation with visual evaluation and were numerically close to the visual evaluation results (r = 0.64-0.89, p < 0.0001), particularly for the extent of fibrosis (r = 0.82, p < 0.0001). As judged by correlation with pulmonary function parameters, CVILDES quantification was comparable or even superior to visual evaluation. nnU-Net-based CVILDES was comparable to visual evaluation for quantifying ILD abnormalities.
Question: Visual assessment of ILD on HRCT is time-consuming and exhibits poor inter-observer agreement, making it challenging to accurately evaluate therapeutic efficacy.
Findings: The nnU-Net-based computer vision ILD evaluation system (CVILDES) accurately segmented and quantified the HRCT features of ILD, with results comparable to visual evaluation.
Clinical relevance: This study developed a new tool with the potential to be applied in the quantitative assessment of ILD.
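The quantification step described here amounts to converting a labeled segmentation into per-pattern extents (percentage of lung volume) and correlating them with visual scores. The sketch below uses hypothetical integer labels and synthetic scores; the nnU-Net model and CVILDES software are not shown.

```python
# Minimal sketch: per-pattern extent from a labeled segmentation, plus correlation
# between software-derived and visual scores.
import numpy as np
from scipy.stats import pearsonr

LABELS = {1: "ground_glass", 2: "consolidation", 3: "reticular", 4: "honeycombing"}

def pattern_extents(label_map: np.ndarray, lung_mask: np.ndarray) -> dict:
    lung_voxels = lung_mask.sum()
    return {name: 100.0 * ((label_map == lab) & lung_mask).sum() / lung_voxels
            for lab, name in LABELS.items()}

rng = np.random.default_rng(0)
lung = np.ones((32, 32, 32), dtype=bool)
labels = rng.integers(0, 5, size=lung.shape)   # 0 = normal lung (assumed coding)
print(pattern_extents(labels, lung))

# Correlate per-patient software extents with radiologists' visual scores.
software_scores = rng.uniform(0, 60, 50)
visual_scores = software_scores + rng.normal(0, 8, 50)
r, p = pearsonr(software_scores, visual_scores)
print(f"r = {r:.2f}, p = {p:.1e}")
```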

Predicting the efficacy of bevacizumab on peritumoral edema based on imaging features and machine learning.

Bai X, Feng M, Ma W, Wang S

PubMed · May 8, 2025
This study proposes a novel approach to predicting the efficacy of bevacizumab (BEV) in treating peritumoral edema in metastatic brain tumor patients by integrating advanced machine learning (ML) techniques with comprehensive imaging and clinical data. A retrospective analysis was performed on 300 patients who received BEV treatment from September 2013 to January 2024. The dataset incorporated 13 predictive features: 8 clinical variables and 5 radiological variables. It was divided into a training set (70%) and a test set (30%) using stratified sampling. Data preprocessing included imputation of missing values with the MICE method, outlier detection and adjustment, and feature scaling. Four algorithms, namely random forest (RF), logistic regression, gradient boosting tree, and naive Bayes, were selected to construct binary classification models. A tenfold cross-validation strategy was implemented during training, and regularization, hyperparameter optimization, and oversampling were used to mitigate overfitting. The RF model demonstrated superior performance, achieving an accuracy of 0.89, a precision of 0.94, and an F1-score of 0.92, with both AUC-ROC and AUC-PR values reaching 0.91. Feature importance analysis consistently identified edema volume as the most significant predictor, followed by edema index, patient age, and tumor volume. Traditional multivariate logistic regression corroborated these findings, confirming that edema volume and edema index were independent predictors (p < 0.01). Our results highlight the potential of ML-driven predictive models in optimizing BEV treatment selection, reducing unnecessary treatment risks, and improving clinical decision-making in neuro-oncology.
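The sketch below walks through the same general workflow on synthetic data: MICE-style imputation, a stratified 70/30 split, a random forest with tenfold cross-validation, and ROC/PR AUC on the held-out set. The data, feature count, and hyperparameters are placeholders, not the study's cohort or tuned models.

```python
# Minimal sketch: imputation, stratified split, RF with 10-fold CV, ROC/PR AUC.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.metrics import roc_auc_score, average_precision_score

rng = np.random.default_rng(7)
X = rng.normal(size=(300, 13))
X[rng.random(X.shape) < 0.05] = np.nan            # simulate missing values
y = (np.nan_to_num(X[:, 0]) + 0.5 * np.nan_to_num(X[:, 1]) + rng.normal(size=300) > 0).astype(int)

X = IterativeImputer(random_state=7).fit_transform(X)  # MICE-like imputation
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=7)

rf = RandomForestClassifier(n_estimators=500, random_state=7)
cv_auc = cross_val_score(rf, X_tr, y_tr, cv=10, scoring="roc_auc")
rf.fit(X_tr, y_tr)
prob = rf.predict_proba(X_te)[:, 1]
print(f"10-fold CV AUC {cv_auc.mean():.2f}, test ROC-AUC {roc_auc_score(y_te, prob):.2f}, "
      f"test PR-AUC {average_precision_score(y_te, prob):.2f}")
```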

Weakly supervised language models for automated extraction of critical findings from radiology reports.

Das A, Talati IA, Chaves JMZ, Rubin D, Banerjee I

PubMed · May 8, 2025
Critical findings in radiology reports are life-threatening conditions that need to be communicated promptly to physicians for timely patient management. Although challenging, advancements in natural language processing (NLP), particularly large language models (LLMs), now enable the automated identification of key findings from verbose reports. Given the scarcity of labeled critical-findings data, we implemented a two-phase, weakly supervised fine-tuning approach on 15,000 unlabeled Mayo Clinic reports. The fine-tuned model then automatically extracted critical terms on internal (Mayo Clinic, n = 80) and external (MIMIC-III, n = 123) test datasets, validated against expert annotations. Model performance was further assessed on 5000 MIMIC-IV reports using the LLM-aided metrics G-Eval and Prometheus. Both manual and LLM-based evaluations showed improved task alignment with weak supervision. The pipeline and model, publicly available under an academic license, can aid critical-finding extraction for research and clinical use (https://github.com/dasavisha/CriticalFindings_Extract).
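The weak-supervision idea can be pictured as a first pass that derives pseudo-labels from unlabeled reports and writes instruction-style pairs for a later fine-tuning phase. The sketch below uses a simple keyword matcher as the weak labeler; the term list, file names, and example reports are hypothetical, and the paper's actual two-phase LLM fine-tuning is not shown.

```python
# Minimal sketch: keyword-based weak labels turned into (report -> findings) pairs.
import json
import re

CRITICAL_TERMS = [
    "pneumothorax", "pulmonary embolism", "aortic dissection",
    "intracranial hemorrhage", "free air", "bowel obstruction",
]

def weak_label(report: str) -> list[str]:
    """Return critical terms mentioned in the report (case-insensitive)."""
    return [t for t in CRITICAL_TERMS if re.search(rf"\b{re.escape(t)}\b", report, re.I)]

reports = [
    "Large right pneumothorax with mediastinal shift. Recommend immediate chest tube.",
    "No acute cardiopulmonary abnormality.",
]

with open("weak_labels.jsonl", "w") as f:
    for r in reports:
        f.write(json.dumps({
            "instruction": "Extract critical findings from the radiology report.",
            "input": r,
            "output": "; ".join(weak_label(r)) or "none",
        }) + "\n")
print(open("weak_labels.jsonl").read())
```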

From Genome to Phenome: Opportunities and Challenges of Molecular Imaging.

Tian M, Hood L, Chiti A, Schwaiger M, Minoshima S, Watanabe Y, Kang KW, Zhang H

PubMed · May 8, 2025
The study of the human phenome is essential for understanding the complexities of wellness and disease and their transitions, with molecular imaging being a vital tool in this exploration. Molecular imaging embodies the 4 principles of human phenomics: precise measurement, accurate calculation or analysis, well-controlled manipulation or intervention, and innovative invention or creation. Its application has significantly enhanced the precision, individualization, and effectiveness of medical interventions. This article provides an overview of molecular imaging's technologic advancements and presents the potential use of molecular imaging in human phenomics and precision medicine. The integration of molecular imaging with multiomics data and artificial intelligence has the potential to transform health care, promoting proactive and preventive strategies. This evolving approach promises to deepen our understanding of the human phenome, lead to preclinical diagnostics and treatments, and establish quantitative frameworks for precision health management.

Systematic review and epistemic meta-analysis to advance binomial AI-radiomics integration for predicting high-grade glioma progression and enhancing patient management.

Chilaca-Rosas MF, Contreras-Aguilar MT, Pallach-Loose F, Altamirano-Bustamante NF, Salazar-Calderon DR, Revilla-Monsalve C, Heredia-Gutiérrez JC, Conde-Castro B, Medrano-Guzmán R, Altamirano-Bustamante MM

PubMed · May 8, 2025
High-grade gliomas, particularly glioblastoma, are among the most aggressive and lethal central nervous system tumors, necessitating advanced diagnostic and prognostic strategies. This systematic review and epistemic meta-analysis explores the integration of Artificial Intelligence and Radiomics Inter-field (AIRI) to enhance predictive modeling for tumor progression. A comprehensive literature search identified 19 high-quality studies, which were analyzed to evaluate radiomic features and machine learning models in predicting overall survival (OS) and progression-free survival (PFS). Key findings highlight the predictive strength of specific MRI-derived radiomic features, such as log-filter and Gabor textures, and the superior performance of Support Vector Machine (SVM) and Random Forest (RF) models, which achieved high accuracy and AUC scores (e.g., 98% AUC and 98.7% accuracy for OS). This research characterizes the current state of the AIRI field and shows that current articles report their results with different performance indicators and metrics, making outcomes heterogeneous and knowledge difficult to integrate. Additionally, some current articles were found to use biased methodologies. This study therefore proposes a structured AIRI development roadmap and guidelines to avoid bias and make results comparable, emphasizing standardized feature extraction and AI model training to improve reproducibility across clinical settings. By advancing precision medicine, AIRI integration has the potential to refine clinical decision-making and enhance patient outcomes.

Effective data selection via deep learning processes and corresponding learning strategies in ultrasound image classification.

Lee H, Kwak JY, Lee E

PubMed · May 8, 2025
In this study, we propose a novel approach to enhancing transfer learning by optimizing data selection through deep learning techniques and corresponding innovative learning strategies. This method is particularly beneficial when the available dataset has reached its limit and cannot be further expanded. Our approach focuses on maximizing the use of existing data to improve learning outcomes, offering an effective solution for data-limited applications in medical imaging classification. The proposed method consists of two stages. In the first stage, an original network performs the initial classification. When the original network exhibits low confidence in its predictions, ambiguous classifications are passed to a secondary decision-making step involving a newly trained network, referred to as the True network. The True network shares the same architecture as the original network but is trained on a subset of the original dataset selected based on consensus among multiple independent networks. It is then used to verify the classification results of the original network, identifying and correcting misclassified images. To evaluate the effectiveness of our approach, we conducted experiments on thyroid nodule ultrasound images with the ResNet101 and Vision Transformer architectures, along with eleven other pre-trained neural networks. The proposed method improved performance across all five key metrics (accuracy, sensitivity, specificity, F1-score, and AUC) compared with using only the original or True network with ResNet101. Additionally, the True network performed well when applied to the Vision Transformer, and similar enhancements were observed across multiple convolutional neural network architectures. Furthermore, to assess the robustness and adaptability of our method across different medical imaging modalities, we applied it to dermoscopic images and observed similar performance improvements. These results provide evidence of the effectiveness of our approach in improving transfer learning-based medical image classification without requiring additional training data.
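The two-stage routing described here can be sketched as follows: the original network classifies each image, and cases whose maximum softmax probability falls below a threshold are re-checked by the second ("True") network. Both models below are untrained placeholders, and the 0.80 threshold is an assumption for illustration only.

```python
# Minimal sketch: confidence-based routing between an original and a "True" network.
import torch
import torch.nn as nn

CONF_THRESHOLD = 0.80  # hypothetical cutoff for "ambiguous" predictions

def two_stage_predict(x, original: nn.Module, true_net: nn.Module):
    with torch.no_grad():
        p1 = original(x).softmax(dim=1)
        conf, pred = p1.max(dim=1)
        # Route low-confidence samples to the True network for a second opinion.
        ambiguous = conf < CONF_THRESHOLD
        if ambiguous.any():
            p2 = true_net(x[ambiguous]).softmax(dim=1)
            pred[ambiguous] = p2.argmax(dim=1)
    return pred

# Tiny stand-in classifiers for a 2-class ultrasound problem.
def make_net():
    return nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 2))

original, true_net = make_net(), make_net()
batch = torch.randn(8, 3, 64, 64)
print(two_stage_predict(batch, original, true_net))
```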