Artificial intelligence automated measurements of spinopelvic parameters in adult spinal deformity-a systematic review.

Bishara A, Patel S, Warman A, Jo J, Hughes LP, Khalifeh JM, Azad TD

PubMed · May 23, 2025
This review evaluates advances made in deep learning (DL) applications to automatic spinopelvic parameter estimation, comparing their accuracy to manual measurements performed by surgeons. The PubMed database was queried for studies on DL measurement of adult spinopelvic parameters between 2014 and 2024. Studies were excluded if they focused on pediatric patients, non-deformity-related conditions, or non-human subjects, or if they lacked sufficient quantitative data comparing DL models to human measurements. Included studies were assessed based on model architecture, patient demographics, training, validation, and testing methods, and sample sizes, as well as performance compared to manual methods. Of 442 screened articles, 16 were included, with sample sizes ranging from 15 to 9,832 radiographs and reported intraclass correlation coefficients (ICCs) of 0.56 to 1.00. Measurements of pelvic tilt, pelvic incidence, T4-T12 kyphosis, L1-L4 lordosis, and sagittal vertical axis (SVA) showed consistently high ICCs (>0.80) and low mean absolute deviations (MADs <6°), with a substantial number of studies reporting an excellent ICC of 0.90 or greater for pelvic tilt. In contrast, T1-T12 kyphosis and L4-S1 lordosis exhibited lower ICCs and higher measurement errors. Overall, most DL models demonstrated strong correlations (>0.80) with clinician measurements and minimal differences from manual references, except for T1-T12 kyphosis (average Pearson correlation: 0.68), L1-L4 lordosis (average Pearson correlation: 0.75), and L4-S1 lordosis (average Pearson correlation: 0.65). Novel computer vision algorithms show promising accuracy in measuring spinopelvic parameters, comparable to manual surgeon measurements. Future research should focus on external validation, additional imaging modalities, and the feasibility of integration in clinical settings to assess model reliability and predictive capacity.
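
As a worked illustration of the agreement statistics reported above, the following minimal Python sketch computes the MAD and the intraclass correlation coefficient for hypothetical paired pelvic-tilt measurements, using the pingouin library; all values and variable names are illustrative, not data from the review:

```python
import numpy as np
import pandas as pd
import pingouin as pg

# Hypothetical paired measurements of pelvic tilt (degrees):
# one value per radiograph from the DL model and from a surgeon.
dl_deg     = np.array([14.2, 22.5, 18.1, 30.4, 9.8])
manual_deg = np.array([13.6, 23.0, 17.5, 31.2, 10.5])

# Mean absolute deviation between the two raters (the MAD reported in the review).
mad = np.mean(np.abs(dl_deg - manual_deg))
print(f"MAD: {mad:.2f} deg")

# ICC expects long format: one row per (target, rater) pair.
n = len(dl_deg)
long = pd.DataFrame({
    "radiograph": np.tile(np.arange(n), 2),
    "rater":      ["dl"] * n + ["manual"] * n,
    "angle":      np.concatenate([dl_deg, manual_deg]),
})
icc = pg.intraclass_corr(data=long, targets="radiograph",
                         raters="rater", ratings="angle")
print(icc[["Type", "ICC", "CI95%"]])
```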

Evaluation of a deep-learning segmentation model for patients with colorectal cancer liver metastases (COALA) in the radiological workflow.

Zeeuw M, Bereska J, Strampel M, Wagenaar L, Janssen B, Marquering H, Kemna R, van Waesberghe JH, van den Bergh J, Nota I, Moos S, Nio Y, Kop M, Kist J, Struik F, Wesdorp N, Nelissen J, Rus K, de Sitter A, Stoker J, Huiskens J, Verpalen I, Kazemier G

PubMed · May 23, 2025
For patients with colorectal liver metastases (CRLM), total tumor volume (TTV) is prognostic. A deep-learning segmentation model for CRLM to assess TTV, called COlorectal cAncer Liver metastases Assessment (COALA), has been developed. This study evaluated COALA's performance and practical utility in the radiological picture archiving and communication system (PACS). A secondary aim was to provide lessons for future researchers on the implementation of artificial intelligence (AI) models. Patients discussed between January and December 2023 in a multidisciplinary meeting for CRLM were included. In these patients, CRLM were automatically segmented in portal-venous phase CT scans by COALA and integrated with PACS. Eight expert abdominal radiologists completed a questionnaire addressing segmentation accuracy and PACS integration; they were also asked to record general remarks. In total, 57 patients were evaluated, comprising 112 contrast-enhanced portal-venous phase CT scans. Six of the eight radiologists (75%) rated the model as user-friendly in their radiological workflow. Areas for improvement of the COALA model were the segmentation of small lesions, heterogeneous lesions, and lesions at the border of the liver with involvement of the diaphragm or heart. Key lessons for implementation were a multidisciplinary approach, a robust methodology prior to model development, and evaluation sessions with end-users early in the development phase. This study demonstrates that the deep-learning segmentation model for patients with CRLM (COALA) is user-friendly in the radiologist's PACS. Future researchers striving for implementation should take a multidisciplinary approach, propose a robust methodology, and involve end-users prior to model development. Many segmentation models are being developed, but none of these models has been evaluated in the (radiological) workflow or clinically implemented. Our model is implemented in the radiological workflow, providing valuable lessons for researchers aiming for clinical implementation. Developed segmentation models should be implemented in the radiological workflow. Our implemented segmentation model provides valuable lessons for future researchers. If implemented in clinical practice, our model could allow for objective radiological evaluation.

Lung volume assessment for mean dark-field coefficient calculation using different determination methods.

Gassert FT, Heuchert J, Schick R, Bast H, Urban T, Dorosti T, Zimmermann GS, Ziegelmayer S, Marka AW, Graf M, Makowski MR, Pfeiffer D, Pfeiffer F

PubMed · May 23, 2025
Accurate lung volume determination is crucial for reliable dark-field imaging. We compared different approaches to determining lung volume for mean dark-field coefficient calculation. In this retrospective analysis of data prospectively acquired between October 2018 and October 2020, patients at least 18 years of age who underwent chest computed tomography (CT) were screened for study participation. Inclusion criteria were the ability to consent and to stand upright without help. Exclusion criteria were pregnancy, lung cancer, pleural effusion, atelectasis, air space disease, ground-glass opacities, and pneumothorax. Lung volume was calculated using four methods: conventional radiography (CR) using shape information; a convolutional neural network (CNN) trained on CR; CT-based volume estimation; and results from pulmonary function testing (PFT). Results were compared using Student t-tests and Spearman ρ correlation statistics. We studied 81 participants (51 men, 30 women), aged 64 ± 12 years (mean ± standard deviation). The lung volumes derived from the four methods all differed from one another: CR, 7.27 ± 1.64 L; CNN, 4.91 ± 1.05 L; CT, 5.25 ± 1.36 L; PFT, 6.54 ± 1.52 L; p < 0.001 for all comparisons. A high positive correlation was found for all combinations (p < 0.001 for all), the highest between CT and CR (ρ = 0.88) and the lowest between PFT and CNN (ρ = 0.78). Lung volume, and therefore the calculated mean dark-field coefficient, depends strongly on the determination method used, reflecting differences in positioning and inhalation depth. This study underscores the impact of the method used for lung volume determination. In the context of mean dark-field coefficient calculation, CR-based methods are preferable because dark-field and conventional images are acquired at the same breathing state, eliminating biases due to differences in inhalation depth. Lung volume measurements vary significantly between determination methods. Mean dark-field coefficient calculations require the same method to ensure comparability. Radiography-based methods simplify workflows and minimize biases, making them most suitable.
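
The comparison statistics described above (paired Student t-test and Spearman ρ across methods) can be sketched as follows, assuming hypothetical per-participant volumes; the arrays are illustrative, not study data:

```python
import numpy as np
from scipy import stats

# Hypothetical lung volumes (litres) for the same participants,
# one array per determination method.
cr  = np.array([7.1, 8.3, 6.5, 7.9, 6.8])   # radiography, shape-based
cnn = np.array([4.8, 5.6, 4.3, 5.4, 4.6])   # CNN on radiographs
ct  = np.array([5.2, 6.0, 4.7, 5.8, 5.0])   # CT-based estimate

# Paired t-test: are two methods systematically different?
t, p = stats.ttest_rel(cr, cnn)
print(f"CR vs CNN: t = {t:.2f}, p = {p:.4f}")

# Spearman rank correlation: do two methods order participants alike?
rho, p_rho = stats.spearmanr(cr, ct)
print(f"CR vs CT: rho = {rho:.2f}, p = {p_rho:.4f}")
```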

Development and validation of a radiomics model using plain radiographs to predict spine fractures with posterior wall injury.

Liu W, Zhang X, Yu C, Chen D, Zhao K, Liang J

PubMed · May 23, 2025
When spine fractures involve posterior wall damage, they pose a heightened risk of instability, which in turn influences treatment strategy. To enhance early diagnosis and refine treatment planning for these fractures, we implemented a radiomics analysis using deep learning techniques, based on both anteroposterior and lateral plain X-ray images. Retrospective data were collected for 130 patients with spine fractures who underwent anteroposterior and lateral imaging at two centers (Center 1, training cohort; Center 2, validation cohort) between January 2010 and June 2024. The Vision Transformer (ViT) technique was employed to extract imaging features. The features selected through multiple methods were then used to construct machine learning models with Naïve Bayes and support vector machine (SVM) classifiers. Model performance was evaluated using the area under the curve (AUC). Twelve features were selected to form the deep-learning feature set. The SVM model combining anteroposterior and lateral plain images performed well in both centers, with high AUCs for predicting spine fractures with posterior wall injury (Center 1, AUC: 0.909, 95% CI: 0.763-1.000; Center 2, AUC: 0.837, 95% CI: 0.678-0.996). In classification performance, the SVM model based on the combined images outperformed both the single-view models and a spine surgeon with 3 years of clinical experience. Our study demonstrates that a radiomics model created by integrating anteroposterior and lateral plain X-ray images of the spine can more effectively predict spine fractures with posterior wall injury, aiding clinicians in making accurate diagnoses and treatment decisions.
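
The general pattern described here, a pretrained Vision Transformer as feature extractor feeding a classical SVM scored by AUC, might be sketched as below. This is a generic stand-in, not the authors' pipeline; the ImageNet weights, the extract helper, and the commented usage are all assumptions:

```python
import torch
import torch.nn as nn
from torchvision.models import vit_b_16, ViT_B_16_Weights
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

# Pretrained ViT as a frozen feature extractor (ImageNet weights as a stand-in;
# the study's own training details are not reproduced here).
weights = ViT_B_16_Weights.DEFAULT
vit = vit_b_16(weights=weights)
vit.heads = nn.Identity()          # drop the classification head -> 768-d features
vit.eval()
preprocess = weights.transforms()

@torch.no_grad()
def extract(images):
    """images: list of PIL images (AP or lateral radiographs)."""
    batch = torch.stack([preprocess(im) for im in images])
    return vit(batch).numpy()

# Hypothetical usage: concatenate AP and lateral features per patient,
# then classify posterior-wall involvement with an SVM scored by AUC.
# X = np.hstack([extract(ap_images), extract(lat_images)]); y = labels
# clf = SVC(probability=True).fit(X_train, y_train)
# auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
```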

End-to-end prognostication in pancreatic cancer by multimodal deep learning: a retrospective, multicenter study.

Schuurmans M, Saha A, Alves N, Vendittelli P, Yakar D, Sabroso-Lasa S, Xue N, Malats N, Huisman H, Hermans J, Litjens G

PubMed · May 23, 2025
Pancreatic cancer treatment plans involving surgery and/or chemotherapy are highly dependent on disease stage. However, current staging systems are ineffective and poorly correlated with survival outcomes. We investigate how artificial intelligence (AI) can enhance prognostic accuracy in pancreatic cancer by integrating multiple data sources. Patients with histopathology- and/or radiology/follow-up-confirmed pancreatic ductal adenocarcinoma (PDAC) from a Dutch center (2004-2023) were included in the development cohort. Two additional PDAC cohorts, from a Dutch and a Spanish center, were used for external validation. Prognostic models including clinical variables, contrast-enhanced CT images, and a combination of both were developed to predict high-risk short-term survival. All models were trained using five-fold cross-validation and assessed by the area under the time-dependent receiver operating characteristic curve (AUC). The models were developed on 401 patients (203 females, 198 males; median overall survival (OS) 347 days, IQR: 171-585), with 98 (24.4%) short-term survivors (OS < 230 days) and 303 (75.6%) long-term survivors. The external validation cohorts included 361 patients (165 females, 138 males; median OS 404 days, IQR: 173-736), with 110 (30.5%) short-term survivors and 251 (69.5%) long-term survivors. The best AUC for predicting short- vs. long-term survival was achieved with the multimodal model (AUC = 0.637, 95% CI: 0.500-0.774) in the internal validation set. External validation showed AUCs of 0.571 (95% CI: 0.453-0.689) and 0.675 (95% CI: 0.593-0.757). Multimodal AI can predict long- vs. short-term survival in PDAC patients, showing potential as a prognostic tool in clinical decision-making. Question: Prognostic tools for pancreatic ductal adenocarcinoma (PDAC) remain limited, with TNM staging offering suboptimal accuracy in predicting patient survival outcomes. Findings: The multimodal AI model demonstrated improved prognostic performance over TNM and unimodal models for predicting short- and long-term survival in PDAC patients. Clinical relevance: Multimodal AI provides enhanced prognostic accuracy compared to current staging systems, potentially improving clinical decision-making and personalized management strategies for PDAC patients.
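
A simplified sketch of the evaluation protocol (five-fold cross-validation of a short- vs. long-term survival classifier scored by AUC) is shown below; it uses a plain rather than time-dependent AUC, and synthetic features and labels stand in for the clinical and CT inputs:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Synthetic stand-in for per-patient features (clinical variables and/or
# CT-derived features); label is 1 for short-term survival (OS < 230 days).
rng = np.random.default_rng(0)
X = rng.normal(size=(401, 20))
y = (rng.random(401) < 0.25).astype(int)

# Five-fold cross-validated AUC, mirroring the paper's training protocol.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
aucs = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                       cv=cv, scoring="roc_auc")
print(f"5-fold AUC: {aucs.mean():.3f} +/- {aucs.std():.3f}")
```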

EnsembleEdgeFusion: advancing semantic segmentation in microvascular decompression imaging with innovative ensemble techniques.

Dhiyanesh B, Vijayalakshmi M, Saranya P, Viji D

PubMed · May 23, 2025
Semantic segmentation plays an important part in the analysis of medical images, particularly in the domain of microvascular decompression, where publicly available datasets are scarce and expert annotation is demanding. In response to this challenge, this study presents a meticulously curated dataset comprising 2003 RGB microvascular decompression images, each paired with an annotated mask. Extensive data preprocessing and augmentation strategies were employed to fortify the training dataset and enhance the robustness of the proposed deep learning models. Several state-of-the-art semantic segmentation approaches, including DeepLabv3+, U-Net, DilatedFastFCN with JPU, DANet, and a custom Vanilla architecture, were trained and evaluated using diverse performance metrics. Among these models, DeepLabv3+ emerged as a strong contender, notably excelling in F1 score. Ensemble techniques, such as stacking and bagging, were then introduced to further elevate segmentation performance. Bagging, notably with the Naïve Bayes approach, exhibited significant improvements, underscoring the potential of ensemble methods in medical image segmentation. The proposed EnsembleEdgeFusion technique exhibited superior loss reduction during training compared to DeepLabv3+ and achieved a maximum Mean Intersection over Union (MIoU) score of 77.73%, surpassing the other models. Category-wise analysis affirmed its superiority in accurately delineating the various categories within the test dataset.
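
The MIoU metric reported for EnsembleEdgeFusion is computed per class and averaged, as in this small sketch (the toy masks are illustrative):

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean Intersection over Union across classes.
    pred, target: integer label maps of equal shape."""
    ious = []
    for c in range(num_classes):
        p, t = (pred == c), (target == c)
        union = np.logical_or(p, t).sum()
        if union == 0:              # class absent in both maps -> skip
            continue
        ious.append(np.logical_and(p, t).sum() / union)
    return float(np.mean(ious))

# Hypothetical two-class toy masks (background vs. structure):
pred   = np.array([[0, 1], [1, 1]])
target = np.array([[0, 1], [0, 1]])
print(f"MIoU: {mean_iou(pred, target, num_classes=2):.3f}")
```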

Multimodal fusion model for prognostic prediction and radiotherapy response assessment in head and neck squamous cell carcinoma.

Tian R, Hou F, Zhang H, Yu G, Yang P, Li J, Yuan T, Chen X, Chen Y, Hao Y, Yao Y, Zhao H, Yu P, Fang H, Song L, Li A, Liu Z, Lv H, Yu D, Cheng H, Mao N, Song X

PubMed · May 23, 2025
Accurate prediction of prognosis and postoperative radiotherapy response is critical for personalized treatment in head and neck squamous cell carcinoma (HNSCC). We developed a multimodal deep learning model (MDLM) integrating computed tomography, whole-slide images, and clinical features from 1087 HNSCC patients across multiple centers. The MDLM exhibited good performance in predicting overall survival (OS) and disease-free survival in external test cohorts, and outperformed unimodal models. Patients with a high-risk score who underwent postoperative radiotherapy exhibited prolonged OS compared to those who did not (P = 0.016), whereas no significant improvement in OS was observed among patients with a low-risk score (P = 0.898). Biological exploration indicated that the model's predictions may be related to changes in the cytochrome P450 metabolic pathway, the tumor microenvironment, and myeloid-derived cell subpopulations. Overall, the MDLM effectively predicts prognosis and postoperative radiotherapy response, offering a promising tool for personalized HNSCC therapy.
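
The radiotherapy-benefit comparison quoted above (OS in high-risk patients with vs. without postoperative radiotherapy) is the kind of analysis typically run as a log-rank test; a minimal sketch with hypothetical follow-up data, using the lifelines library:

```python
from lifelines.statistics import logrank_test

# Hypothetical follow-up times (months) and event indicators (1 = death)
# for high-risk patients, split by postoperative radiotherapy.
os_rt       = [34, 48, 22, 60, 41, 55]
event_rt    = [1, 0, 1, 0, 1, 0]
os_no_rt    = [12, 25, 18, 30, 15, 20]
event_no_rt = [1, 1, 1, 0, 1, 1]

# Log-rank test comparing the two survival curves.
res = logrank_test(os_rt, os_no_rt,
                   event_observed_A=event_rt, event_observed_B=event_no_rt)
print(f"log-rank p = {res.p_value:.3f}")
```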

Development and validation of a multi-omics hemorrhagic transformation model based on hyperattenuated imaging markers following mechanical thrombectomy.

Jiang L, Zhu G, Wang Y, Hong J, Fu J, Hu J, Xiao S, Chu J, Hu S, Xiao W

PubMed · May 23, 2025
This study aimed to develop a predictive model integrating clinical, radiomics, and deep learning (DL) features of hyperattenuated imaging markers (HIM) from computed tomography scans obtained immediately after mechanical thrombectomy (MT) to predict hemorrhagic transformation (HT). A total of 239 patients with HIM who underwent MT were enrolled, with 191 patients (80%) in the training cohort and 48 patients (20%) in the validation cohort. Additionally, the model was tested on an internal prospective cohort of 49 patients. A total of 1834 radiomics features and 2048 DL features were extracted from HIM images. Statistical methods, including analysis of variance, Pearson's correlation coefficient, principal component analysis, and the least absolute shrinkage and selection operator (LASSO), were used to select the most significant features. A K-Nearest Neighbor classifier was employed to develop a combined model integrating clinical, radiomics, and DL features for HT prediction. Model performance was evaluated using accuracy, sensitivity, specificity, receiver operating characteristic curves, and the area under the curve (AUC). In the training, validation, and test cohorts, the combined model achieved AUCs of 0.926, 0.923, and 0.887, respectively, outperforming the clinical, radiomics, and DL models, as well as hybrid models combining subsets of features (Clinical + Radiomics, DL + Radiomics, and Clinical + DL), in predicting HT. The combined model, which integrates clinical, radiomics, and DL features derived from HIM, demonstrated efficacy in noninvasively predicting HT. These findings suggest its potential utility in guiding clinical decision-making for patients undergoing MT.
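
The described chain of filters feeding a K-Nearest Neighbor classifier might be approximated with a scikit-learn pipeline, as sketched below; the ANOVA filter and L1-penalized selection stand in for the paper's selection steps, and all parameter values (k, C, n_neighbors) are assumptions:

```python
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif, SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical pipeline echoing the described chain: ANOVA filter ->
# L1 (LASSO-style) selection -> K-Nearest Neighbor classifier.
# (The paper also uses Pearson filtering and PCA; omitted here for brevity.)
pipe = Pipeline([
    ("scale", StandardScaler()),
    ("anova", SelectKBest(f_classif, k=100)),
    ("lasso", SelectFromModel(LogisticRegression(penalty="l1",
                                                 solver="liblinear", C=0.1))),
    ("knn",   KNeighborsClassifier(n_neighbors=5)),
])
# Hypothetical usage on extracted radiomics + DL feature matrices:
# pipe.fit(X_train, y_train); proba = pipe.predict_proba(X_test)[:, 1]
```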

Multi-view contrastive learning and symptom extraction insights for medical report generation.

Bai Q, Zou X, Alhaskawi A, Dong Y, Zhou H, Ezzi SHA, Kota VG, AbdullaAbdulla MHH, Abdalbary SA, Hu X, Lu H

PubMed · May 23, 2025
The task of generating medical reports automatically is of paramount importance in modern healthcare, offering a substantial reduction in radiologists' workload and accelerating clinical diagnosis and treatment. Current challenges include handling limited sample sizes and interpreting intricate multi-modal and multi-view medical data; this investigation was conducted to improve accuracy and efficiency for radiologists. This study presents a novel methodology for medical report generation that leverages Multi-View Contrastive Learning (MVCL) applied to MRI data, combined with a Symptom Consultant (SC) for extracting medical insights, to improve the quality and efficiency of automated medical report generation. We introduce an advanced MVCL framework that maximizes the potential of multi-view MRI data to enhance visual feature extraction. Alongside it, the SC component distills critical medical insights from symptom descriptions. These components are integrated within a transformer decoder architecture, which is then applied to the Deep Wrist dataset for model training and evaluation. Our experimental analysis on the Deep Wrist dataset reveals that the proposed integration of MVCL and SC significantly outperforms the baseline model in terms of the accuracy and relevance of the generated medical reports. The results indicate that our approach is particularly effective in capturing and utilizing the complex information inherent in multi-modal and multi-view medical datasets. The combination of MVCL and SC constitutes a powerful approach to medical report generation, addressing existing challenges in the field. The demonstrated superiority of our model over traditional methods holds promise for substantial improvements in clinical diagnosis and automated report generation, marking a significant stride forward in medical technology.
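
Multi-view contrastive learning of this kind is commonly trained with an InfoNCE-style objective that pulls embeddings of two views of the same study together; a minimal PyTorch sketch of a generic symmetric version (not the paper's exact MVCL loss) follows:

```python
import torch
import torch.nn.functional as F

def multiview_infonce(z1, z2, temperature=0.1):
    """Symmetric InfoNCE between embeddings of two views of the same study.
    z1, z2: (batch, dim) projections from the image encoder. Matching rows
    are positives; all other rows in the batch serve as negatives."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature                 # pairwise similarities
    labels = torch.arange(z1.size(0), device=z1.device)  # positives on diagonal
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.t(), labels))

# Hypothetical usage: z1, z2 = encoder(view1), encoder(view2)
# loss = multiview_infonce(z1, z2)
```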

Ovarian Cancer Screening: Recommendations and Future Prospects.

Chiu S, Staley H, Jeevananthan P, Mascarenhas S, Fotopoulou C, Rockall A

PubMed · May 23, 2025
Ovarian cancer remains a significant cause of mortality among women, largely due to challenges in early detection. Current screening strategies, including transvaginal ultrasound and CA125 testing, have limited sensitivity and specificity, particularly in asymptomatic women or those with early-stage disease. The European Society of Gynaecological Oncology, the European Society for Medical Oncology, the European Society of Pathology, and other health organizations currently do not recommend routine population-based screening for ovarian cancer, owing to high false-positive rates and the absence of a reliable early detection method. This review examines existing ovarian cancer screening guidelines and explores recent advances in diagnostic technologies, including radiomics, artificial intelligence, point-of-care testing, and novel detection methods. Emerging technologies show promise for improving ovarian cancer detection by enhancing sensitivity and specificity compared to traditional methods. Artificial intelligence and radiomics have the potential to revolutionize ovarian cancer screening by identifying subtle diagnostic patterns, while liquid biopsy-based approaches and cell-free DNA profiling enable tumor-specific biomarker detection. Minimally invasive methods, such as intrauterine lavage and salivary diagnostics, provide avenues for population-wide applicability. However, large-scale validation is required to establish these techniques as effective and reliable screening options. · Current ovarian cancer screening methods lack sensitivity and specificity for early-stage detection. · Emerging technologies like artificial intelligence, radiomics, and liquid biopsy offer improved diagnostic accuracy. · Large-scale clinical validation is required, particularly for baseline-risk populations. · Chiu S, Staley H, Jeevananthan P et al. Ovarian Cancer Screening: Recommendations and Future Prospects. Rofo 2025; DOI 10.1055/a-2589-5696.