
A computed tomography-based radiomics prediction model for BRAF mutation status in colorectal cancer.

Zhou B, Tan H, Wang Y, Huang B, Wang Z, Zhang S, Zhu X, Wang Z, Zhou J, Cao Y

PubMed · May 15, 2025
The aim of this study was to develop and validate CT venous phase image-based radiomics to predict BRAF gene mutation status in preoperative colorectal cancer patients. In this study, 301 patients with pathologically confirmed colorectal cancer were retrospectively enrolled, comprising 225 from Centre I (73 mutant and 152 wild-type) and 76 from Centre II (36 mutant and 40 wild-type). The Centre I cohort was randomly divided into a training set (n = 158) and an internal validation set (n = 67) in a 7:3 ratio, while Centre II served as an independent external validation set (n = 76). The whole tumor region of interest was segmented, and radiomics characteristics were extracted. To explore whether including peritumoral tissue could improve predictive performance, the tumor contour was additionally dilated by 3 mm. Finally, a t-test, Pearson correlation, and LASSO regression were used to select features strongly associated with BRAF mutations. Based on these features, six classifiers were constructed: Support Vector Machine (SVM), Decision Tree (DT), Random Forest (RF), Logistic Regression (LR), K-Nearest Neighbors (KNN), and Extreme Gradient Boosting (XGBoost). Model performance and clinical utility were evaluated using receiver operating characteristic (ROC) curves, decision curve analysis, accuracy, sensitivity, and specificity. Gender was an independent predictor of BRAF mutations. The unexpanded RF model, constructed using 11 radiomics features, demonstrated the best predictive performance. For the training cohort, it achieved an AUC of 0.814 (95% CI 0.732-0.895), an accuracy of 0.810, and a sensitivity of 0.620. For the internal validation cohort, it achieved an AUC of 0.798 (95% CI 0.690-0.907), an accuracy of 0.761, and a sensitivity of 0.609. For the external validation cohort, it achieved an AUC of 0.737 (95% CI 0.616-0.847), an accuracy of 0.658, and a sensitivity of 0.667.
A machine learning model based on CT radiomics can effectively predict BRAF mutations in patients with colorectal cancer. The unexpanded RF model demonstrated optimal predictive performance.
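The AUC, accuracy, and sensitivity figures reported above can all be computed from the classifier's predicted mutation probabilities. A minimal sketch (not the authors' code; the function names and the 0.5 threshold are illustrative assumptions) using the rank-based Mann-Whitney formulation of AUC:

```python
def auc(y_true, scores):
    """Probability that a random mutant case scores above a random wild-type case."""
    pos = [s for y, s in zip(y_true, scores) if y == 1]
    neg = [s for y, s in zip(y_true, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def confusion_metrics(y_true, scores, threshold=0.5):
    """Accuracy, sensitivity, and specificity at a fixed decision threshold."""
    pred = [int(s >= threshold) for s in scores]
    tp = sum(p == 1 and y == 1 for p, y in zip(pred, y_true))
    tn = sum(p == 0 and y == 0 for p, y in zip(pred, y_true))
    fp = sum(p == 1 and y == 0 for p, y in zip(pred, y_true))
    fn = sum(p == 0 and y == 1 for p, y in zip(pred, y_true))
    accuracy = (tp + tn) / len(y_true)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return accuracy, sensitivity, specificity
```

In practice a library routine (e.g. scikit-learn's `roc_auc_score`) would be used; the sketch only makes the definitions behind the reported numbers explicit.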

Deep learning MRI-based radiomic models for predicting recurrence in locally advanced nasopharyngeal carcinoma after neoadjuvant chemoradiotherapy: a multi-center study.

Hu C, Xu C, Chen J, Huang Y, Meng Q, Lin Z, Huang X, Chen L

PubMed · May 15, 2025
Local recurrence and distant metastasis are common in locoregionally advanced nasopharyngeal carcinoma (LA-NPC) after neoadjuvant chemoradiotherapy (NACT). This study aimed to validate the clinical value of deep learning-based MRI radiomic models for predicting recurrence in LA-NPC patients. A total of 328 NPC patients from four hospitals were retrospectively included and randomly divided into training (n = 229) and validation (n = 99) cohorts. A total of 975 traditional radiomic features and 1000 deep radiomic features were extracted from contrast-enhanced T1-weighted (T1WI + C) and T2-weighted (T2WI) sequences, respectively. Least absolute shrinkage and selection operator (LASSO) regression was applied for feature selection. Five machine learning classifiers were used to develop three models for recurrence prediction in the training cohort: Model I, traditional radiomic features; Model II, deep radiomic features combined with Model I; and Model III, Model II combined with clinical features. The predictive performance of these models was evaluated by receiver operating characteristic (ROC) curve analysis, area under the curve (AUC), accuracy, sensitivity, and specificity in both cohorts. Clinical characteristics did not differ significantly between the two cohorts. Fifteen radiomic features and 6 deep radiomic features were selected from T1WI + C, and 9 radiomic features and 6 deep radiomic features from T2WI. For T2WI, Model II based on random forest (RF) performed best among all models in the validation cohort (AUC = 0.87). The traditional radiomic model combined with deep radiomic features shows excellent predictive performance and could assist clinicians in predicting treatment response for LA-NPC patients after NACT.
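LASSO, used here for feature selection, works by shrinking uninformative coefficients exactly to zero under an L1 penalty; the surviving nonzero coefficients define the selected features. A minimal coordinate-descent sketch (illustrative only, not the study's implementation; the toy data and the penalty λ are assumptions):

```python
def soft_threshold(z, lam):
    """L1 proximal operator: shrinks z toward zero, clipping to 0 inside [-lam, lam]."""
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

def lasso_select(X, y, lam, n_iter=100):
    """Coordinate-descent LASSO; zero coefficients mean the feature is dropped."""
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # residual with feature j's own contribution removed
            r = [y[i] - sum(beta[k] * X[i][k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n)) / n
            norm = sum(X[i][j] ** 2 for i in range(n)) / n
            beta[j] = soft_threshold(rho, lam) / norm
    return beta
```

On a toy design where only the first feature drives the outcome, the second coefficient is driven exactly to zero, which is the selection behaviour the pipeline relies on.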

A fully automatic radiomics pipeline for postoperative facial nerve function prediction of vestibular schwannoma.

Song G, Li K, Wang Z, Liu W, Xue Q, Liang J, Zhou Y, Geng H, Liu D

PubMed · May 14, 2025
Vestibular schwannoma (VS) is the most prevalent intracranial schwannoma. Surgery is one of the options for the treatment of VS, with the preservation of facial nerve (FN) function being the primary objective. Therefore, postoperative FN function prediction is essential. However, achieving automation for such a method remains a challenge. In this study, we proposed a fully automatic deep learning approach based on multi-sequence magnetic resonance imaging (MRI) to predict FN function after surgery in VS patients. We first developed a segmentation network, 2.5D Trans-UNet, which combines Transformer and U-Net to optimize contour segmentation for radiomic feature extraction. Next, we built a deep learning network based on the integration of a 1D Convolutional Neural Network (1DCNN) and a Gated Recurrent Unit (GRU) to predict postoperative FN function using the extracted features. We trained and tested the 2.5D Trans-UNet segmentation network on public and private datasets, achieving accuracies of 89.51% and 90.66%, respectively, confirming the model's strong performance. Feature extraction and selection were then performed on the private dataset's segmentation results from 2.5D Trans-UNet. The selected features were used to train the 1DCNN-GRU network for classification. The results showed that our proposed fully automatic radiomics pipeline outperformed the traditional radiomics pipeline on the test set, achieving an accuracy of 88.64% and demonstrating its effectiveness in predicting postoperative FN function in VS patients. Our proposed automatic method has the potential to become a valuable decision-making tool in neurosurgery, assisting neurosurgeons in making more informed decisions regarding surgical interventions and improving the treatment of VS patients.
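The GRU half of the 1DCNN-GRU classifier is built from a gated recurrence. A scalar sketch of one GRU update step (illustrative only; the parameter dictionary `p` is hypothetical, and the gate ordering follows the PyTorch convention h' = (1 - z)·n + z·h, which the paper may not use verbatim):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(x, h, p):
    """One scalar GRU update. p holds the (hypothetical) gate weights and biases."""
    z = sigmoid(p["wz"] * x + p["uz"] * h + p["bz"])          # update gate
    r = sigmoid(p["wr"] * x + p["ur"] * h + p["br"])          # reset gate
    n = math.tanh(p["wn"] * x + r * (p["un"] * h) + p["bn"])  # candidate state
    return (1.0 - z) * n + z * h
```

With all weights and biases at zero, both gates sit at 0.5 and the candidate state is zero, so the hidden state simply halves each step; real (vector-valued) GRUs learn these parameters from the radiomic feature sequence.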

Deep learning for cerebral vascular occlusion segmentation: A novel ConvNeXtV2 and GRN-integrated U-Net framework for diffusion-weighted imaging.

Ince S, Kunduracioglu I, Algarni A, Bayram B, Pacal I

PubMed · May 14, 2025
Cerebral vascular occlusion is a serious condition that can lead to stroke and permanent neurological damage due to insufficient oxygen and nutrients reaching brain tissue. Early diagnosis and accurate segmentation are critical for effective treatment planning. Due to its high soft tissue contrast, Magnetic Resonance Imaging (MRI) is commonly used for detecting such occlusions, for example in ischemic stroke. However, challenges such as low contrast, noise, and heterogeneous lesion structures in MRI images complicate manual segmentation and often lead to misinterpretations. As a result, deep learning-based Computer-Aided Diagnosis (CAD) systems are essential for faster and more accurate diagnosis and treatment, although they can sometimes face challenges such as high computational costs and difficulties in segmenting small or irregular lesions. This study proposes a novel U-Net architecture enhanced with ConvNeXtV2 blocks and GRN-based Multi-Layer Perceptrons (MLP) to address these challenges in cerebral vascular occlusion segmentation. This is the first application of ConvNeXtV2 in this domain. The proposed model significantly improves segmentation accuracy, even in low-contrast regions, while maintaining high computational efficiency, which is crucial for real-world clinical applications. To reduce false positives and improve overall accuracy, small lesions (≤5 pixels) were removed in the preprocessing step with the support of expert clinicians. Experimental results on the ISLES 2022 dataset showed superior performance with an Intersection over Union (IoU) of 0.8015 and a Dice coefficient of 0.8894. Comparative analyses indicate that the proposed model achieves higher segmentation accuracy than existing U-Net variants and other methods, offering a promising solution for clinical use.
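The reported IoU (0.8015) and Dice (0.8894) are overlap metrics on binary masks, related by Dice = 2·IoU / (1 + IoU). A minimal sketch over flattened masks (function names are illustrative, not from the paper):

```python
def iou(a, b):
    """Intersection over Union for flattened binary masks."""
    inter = sum(x and y for x, y in zip(a, b))
    union = sum(x or y for x, y in zip(a, b))
    return inter / union

def dice(a, b):
    """Dice coefficient; equals 2*IoU / (1 + IoU) for the same masks."""
    inter = sum(x and y for x, y in zip(a, b))
    return 2 * inter / (sum(a) + sum(b))
```

Note that the paper's figures are consistent with this identity: 2·0.8015 / 1.8015 ≈ 0.8899, close to the reported 0.8894 (metrics are typically averaged per-case, so the identity holds only approximately on aggregates).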

AI-based metal artefact correction algorithm for radiotherapy patients with dental hardware in head and neck CT: Towards precise imaging.

Yu X, Zhong S, Zhang G, Du J, Wang G, Hu J

PubMed · May 14, 2025
To investigate the clinical efficacy of an AI-based metal artefact correction algorithm (AI-MAC) for reducing dental metal artefacts in head and neck CT, compared to conventional interpolation-based MAC. We retrospectively collected 41 patients with non-removable dental hardware who underwent non-contrast head and neck CT prior to radiotherapy. All images were reconstructed with a standard reconstruction algorithm (SRA) and were additionally processed with both conventional MAC and AI-MAC. The image quality of SRA, MAC, and AI-MAC was compared by qualitative scoring on a 5-point scale, with scores ≥ 3 considered interpretable. This was followed by a quantitative evaluation, including signal-to-noise ratio (SNR) and artefact index (Idx_artefact). Organ contouring accuracy was quantified by calculating the Dice similarity coefficient (DSC) and Hausdorff distance (HD) for the oral cavity and teeth, using the clinically accepted contouring as reference. Moreover, the treatment planning dose distribution for the oral cavity was assessed. AI-MAC yielded superior qualitative image quality as well as quantitative metrics, including SNR and Idx_artefact, compared to SRA and MAC. Image interpretability significantly improved from 41.46% for SRA and 56.10% for MAC to 92.68% for AI-MAC (p < 0.05). Compared to SRA and MAC, the best DSC and HD for both oral cavity and teeth were obtained with AI-MAC (all p < 0.05). No significant differences in dose distribution were found among the three image sets. AI-MAC outperforms conventional MAC in metal artefact reduction, achieving superior image quality with high image interpretability for patients with dental hardware undergoing head and neck CT. Furthermore, the use of AI-MAC improves the accuracy of organ contouring while providing consistent dose calculation against metal artefacts in radiotherapy. AI-MAC is a novel deep learning-based technique for reducing metal artefacts on CT.
This in-vivo study is the first to demonstrate its capability of reducing metal artefacts while preserving organ visualization, as compared with conventional MAC.
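Two of the quantitative metrics named above can be sketched directly: SNR as mean intensity over its standard deviation within an ROI, and HD as the symmetric Hausdorff distance between contour point sets. The exact ROI and contour definitions used in the paper may differ; this is only an illustrative sketch:

```python
import math

def snr(roi):
    """Signal-to-noise ratio of an ROI: mean intensity over its standard deviation.
    (The paper's exact ROI placement and noise estimate may differ.)"""
    m = sum(roi) / len(roi)
    var = sum((v - m) ** 2 for v in roi) / len(roi)
    return m / math.sqrt(var)

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two 2D contour point sets:
    the worst-case distance from a point on one contour to the other contour."""
    def directed(u, v):
        return max(min(math.dist(p, q) for q in v) for p in u)
    return max(directed(a, b), directed(b, a))
```

For production use, a vectorised routine such as SciPy's `directed_hausdorff` is preferable; the brute-force version above is O(|a|·|b|) but makes the definition explicit.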

Comparative performance of large language models in structuring head CT radiology reports: multi-institutional validation study in Japan.

Takita H, Walston SL, Mitsuyama Y, Watanabe K, Ishimaru S, Ueda D

PubMed · May 14, 2025
To compare the diagnostic performance of three proprietary large language models (LLMs), Claude, GPT, and Gemini, in structuring free-text Japanese radiology reports for intracranial hemorrhage and skull fractures, and to assess the impact of three different prompting approaches on model accuracy. In this retrospective study, head CT reports from the Japan Medical Imaging Database between 2018 and 2023 were collected. Two board-certified radiologists established the ground truth regarding intracranial hemorrhage and skull fractures through independent review and consensus. Each radiology report was analyzed by three LLMs using three prompting strategies: Standard, Chain of Thought, and Self-Consistency prompting. Diagnostic performance (accuracy, precision, recall, and F1-score) was calculated for each LLM-prompt combination and compared using McNemar's tests with Bonferroni correction. Misclassified cases underwent qualitative error analysis. A total of 3949 head CT reports from 3949 patients (mean age 59 ± 25 years, 56.2% male) were enrolled. Across all institutions, 856 patients (21.6%) had intracranial hemorrhage and 264 patients (6.6%) had skull fractures. All nine LLM-prompt combinations achieved very high accuracy. Claude demonstrated significantly higher accuracy for intracranial hemorrhage than GPT and Gemini, and also outperformed Gemini for skull fractures (p < 0.0001). Gemini's performance improved notably with Chain of Thought prompting. Error analysis revealed common challenges, including ambiguous phrases and findings unrelated to intracranial hemorrhage or skull fractures, underscoring the importance of careful prompt design. All three proprietary LLMs exhibited strong performance in structuring free-text head CT reports for intracranial hemorrhage and skull fractures. While the choice of prompting method influenced accuracy, all models demonstrated robust potential for clinical and research applications.
Future work should refine the prompts and validate these approaches in prospective, multilingual settings.
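Self-Consistency prompting, one of the three strategies compared, samples the model several times on the same input and keeps the majority answer. A minimal sketch; `ask_llm` is a hypothetical stand-in for a sampled (temperature > 0) call to Claude, GPT, or Gemini, not a real API:

```python
from collections import Counter

def self_consistency(ask_llm, report, k=5):
    """Query the model k times on the same report and majority-vote the label.
    ask_llm(report) -> label is a placeholder for an actual LLM call."""
    votes = Counter(ask_llm(report) for _ in range(k))
    label, _count = votes.most_common(1)[0]
    return label
```

The vote smooths out occasional sampling errors on ambiguous reports, at the cost of k model calls per report.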

Large language models for efficient whole-organ MRI score-based reports and categorization in knee osteoarthritis.

Xie Y, Hu Z, Tao H, Hu Y, Liang H, Lu X, Wang L, Li X, Chen S

PubMed · May 14, 2025
To evaluate the performance of large language models (LLMs) in automatically generating whole-organ MRI score (WORMS)-based structured MRI reports and predicting osteoarthritis (OA) severity for the knee. A total of 160 consecutive patients suspected of OA were included. Knee MRI reports were reviewed by three radiologists to establish the WORMS reference standard for 39 key features. GPT-4o and GPT-4o mini were prompted using in-context knowledge (ICK) and chain-of-thought (COT) to generate WORMS-based structured reports from the original reports and to automatically predict OA severity. Four orthopedic surgeons reviewed original and LLM-generated reports to conduct pairwise preference and difficulty tests, and their review times were recorded. GPT-4o demonstrated perfect performance in extracting the laterality of the knee (accuracy = 100%). GPT-4o outperformed GPT-4o mini in generating WORMS reports (accuracy: 93.9% vs 76.2%). GPT-4o achieved higher recall (87.3% vs 46.7%, p < 0.001) while maintaining higher precision compared to GPT-4o mini (94.2% vs 71.2%, p < 0.001). For predicting OA severity, GPT-4o outperformed GPT-4o mini across all prompt strategies (best accuracy: 98.1% vs 68.7%). Surgeons found it easier to extract information from, and gave more preference to, LLM-generated reports over the original reports (both p < 0.001) while spending less time on each report (51.27 ± 9.41 vs 87.42 ± 20.26 s, p < 0.001). GPT-4o generated expert-level, multi-feature, WORMS-based reports from original free-text knee MRI reports. GPT-4o with COT achieved high accuracy in categorizing OA severity. Surgeons reported greater preference and higher efficiency when using LLM-generated reports. The strong performance in generating WORMS-based reports, together with the high efficiency and ease of use, suggests that integrating LLMs into clinical workflows could greatly enhance productivity and alleviate the documentation burden faced by clinicians in knee OA.
GPT-4o successfully generated WORMS-based knee MRI reports. GPT-4o with COT prompting achieved impressive accuracy in categorizing knee OA severity. Greater preference and higher efficiency were reported for LLM-generated reports.
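The precision, recall, and accuracy figures quoted for WORMS extraction come from comparing each extracted feature against the radiologist reference. A minimal sketch over per-feature binary labels (1 = feature reported present; names illustrative, not from the paper):

```python
def precision_recall_f1(pred, truth):
    """Per-feature extraction metrics against the radiologist reference standard."""
    tp = sum(p == 1 and t == 1 for p, t in zip(pred, truth))
    fp = sum(p == 1 and t == 0 for p, t in zip(pred, truth))
    fn = sum(p == 0 and t == 1 for p, t in zip(pred, truth))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```

Precision penalises hallucinated findings (false positives), while recall penalises missed ones, which is why the GPT-4o vs GPT-4o mini gap shows up on both axes.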

Total radius BMD correlates with the hip and lumbar spine BMD among post-menopausal patients with fragility wrist fracture in a machine learning model.

Ruotsalainen T, Panfilov E, Thevenot J, Tiulpin A, Saarakkala S, Niinimäki J, Lehenkari P, Valkealahti M

PubMed · May 14, 2025
Osteoporosis screening should be systematic in the group of over 50-year-old females with a radius fracture. We tested a custom phantom combined with a machine learning model and studied osteoporosis-related variables. This machine learning model for screening osteoporosis from plain radiographs requires further investigation in larger cohorts to assess its potential as a replacement for DXA measurements in settings where DXA is not available. The main purpose of this study was to improve osteoporosis screening, especially in post-menopausal patients with fragility wrist fractures. The secondary objective was to increase understanding of the connection between osteoporosis and aging, as well as other risk factors. We collected data on 83 females > 50 years old with a distal radius fracture treated at Oulu University Hospital in 2019-2020. The data included basic patient information, the WHO FRAX tool, blood tests, X-ray imaging of the fractured wrist, and DXA scanning of the non-fractured forearm, both hips, and the lumbar spine. Machine learning was used in combination with a custom phantom. Eighty-five percent of the study population had osteopenia or osteoporosis. Only 28.4% of patients had increased bone resorption activity as measured by ICTP values. Total radius BMD correlated with other osteoporosis-related variables (age r = -0.494, BMI r = 0.273, FRAX osteoporotic fracture risk r = -0.419, FRAX hip fracture risk r = -0.433, hip BMD r = 0.435, and lumbar spine BMD r = 0.645), but the ultra-distal (UD) radius BMD did not. Our custom phantom combined with a machine learning model showed potential for screening osteoporosis, with class-wise accuracies of 76% and 75% for "osteoporotic" versus "osteopenic & normal bone", respectively. We suggest osteoporosis screening for all females over 50 years old with wrist fractures. We found that the total radius BMD correlates with the central BMD.
Due to the limited sample size in the phantom and machine learning parts of the study, further research is needed to develop a clinically useful osteoporosis screening tool.
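The r values reported above are sample Pearson correlation coefficients between total radius BMD and each variable. A minimal sketch of the computation (illustrative; the study presumably used standard statistical software):

```python
import math

def pearson_r(x, y):
    """Sample Pearson correlation coefficient between two paired variables."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```

Values near +1 (e.g. lumbar spine BMD, r = 0.645) indicate a strong positive linear relationship; negative values (e.g. age, r = -0.494) indicate that BMD falls as the variable rises.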

Privacy-preserving Federated Learning and Uncertainty Quantification in Medical Imaging.

Koutsoubis N, Waqas A, Yilmaz Y, Ramachandran RP, Schabath MB, Rasool G

PubMed · May 14, 2025
Artificial Intelligence (AI) has demonstrated strong potential in automating medical imaging tasks, with potential applications across disease diagnosis, prognosis, treatment planning, and posttreatment surveillance. However, privacy concerns surrounding patient data remain a major barrier to the widespread adoption of AI in clinical practice, as large and diverse training datasets are essential for developing accurate, robust, and generalizable AI models. Federated Learning offers a privacy-preserving solution by enabling collaborative model training across institutions without sharing sensitive data. Instead, model parameters, such as model weights, are exchanged between participating sites. Despite its potential, federated learning is still in its early stages of development and faces several challenges. Notably, sensitive information can still be inferred from the shared model parameters. Additionally, postdeployment data distribution shifts can degrade model performance, making uncertainty quantification essential. In federated learning, this task is particularly challenging due to data heterogeneity across participating sites. This review provides a comprehensive overview of federated learning, privacy-preserving federated learning, and uncertainty quantification in federated learning. Key limitations in current methodologies are identified, and future research directions are proposed to enhance data privacy and trustworthiness in medical imaging applications.
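In federated learning only model parameters travel between sites; a central server then typically aggregates them by dataset-size-weighted averaging, as in the FedAvg algorithm. A minimal sketch with flat per-site parameter lists (the review itself does not prescribe this specific scheme):

```python
def fedavg(site_weights, site_sizes):
    """FedAvg aggregation: average each model parameter across sites,
    weighted by the size of each site's local dataset."""
    total = sum(site_sizes)
    n_params = len(site_weights[0])
    return [
        sum(w[j] * s for w, s in zip(site_weights, site_sizes)) / total
        for j in range(n_params)
    ]
```

Sites with more local data pull the global model toward their parameters; note that, as the review stresses, even these averaged parameters can leak information, motivating the privacy-preserving variants it surveys.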

The Future of Urodynamics: Innovations, Challenges, and Possibilities.

Chew LE, Hannick JH, Woo LL, Weaver JK, Damaser MS

PubMed · May 14, 2025
Urodynamic studies (UDS) are essential for evaluating lower urinary tract function but are limited by patient discomfort, lack of standardization, and diagnostic variability. Advances in technology aim to address these challenges and improve diagnostic accuracy and patient comfort. Ambulatory urodynamic monitoring (AUM) offers physiological assessment by allowing natural bladder filling and monitoring during daily activities. Compared to conventional UDS, AUM demonstrates higher sensitivity for detecting detrusor overactivity and underlying pathophysiology. However, it faces challenges like motion artifacts, catheter-related discomfort, and difficulty measuring continuous bladder volume. Emerging devices such as the Urodynamics Monitor and UroSound offer more patient-friendly alternatives. These tools have the potential to improve diagnostic accuracy for bladder pressure and voiding metrics but remain limited and still require further validation and testing. Ultrasound-based modalities, including dynamic ultrasonography and shear wave elastography, provide real-time, noninvasive assessment of bladder structure and function. These modalities are promising but will require further development of standardized protocols. AI and machine learning models enhance diagnostic accuracy and reduce variability in UDS interpretation. Applications include detecting detrusor overactivity and distinguishing bladder outlet obstruction from detrusor underactivity. However, further validation is required for clinical adoption. Advances in AUM, wearable technologies, ultrasonography, and AI demonstrate potential for transforming UDS into a more accurate, patient-centered tool. Despite significant progress, challenges such as technical complexity, standardization, and cost-effectiveness must be addressed to integrate these innovations into routine practice. Nonetheless, these technologies point toward a future of improved diagnosis and treatment of lower urinary tract dysfunction.