Page 86 of 132 · 1,316 results

Improvement of deep learning-based dose conversion accuracy to a Monte Carlo algorithm in proton beam therapy for head and neck cancers.

Kato R, Kadoya N, Kato T, Tozuka R, Ogawa S, Murakami M, Jingu K

PubMed · May 23, 2025
This study aimed to clarify the effectiveness of the image-rotation technique and zooming augmentation in improving the accuracy of deep learning (DL)-based dose conversion from pencil beam (PB) to Monte Carlo (MC) in proton beam therapy (PBT). We included 85 patients with head and neck cancers. The patient dataset was randomly divided into 101 plans (334 beams) for training/validation and 11 plans (34 beams) for testing. We then trained a DL model that takes a computed tomography (CT) image and the PB dose of a single-proton field as input and outputs the MC dose, applying the image-rotation technique and zooming augmentation, and evaluated the DL-based dose conversion accuracy in a single-proton field. The average γ-passing rates (3%/3 mm criterion) were 80.6 ± 6.6% for the PB dose, 87.6 ± 6.0% for the baseline model, 92.1 ± 4.7% for the image-rotation model, and 93.0 ± 5.2% for the data-augmentation model. The average range differences for R90 were -1.5 ± 3.6% for the PB dose, 0.2 ± 2.3% for the baseline model, -0.5 ± 1.2% for the image-rotation model, and -0.5 ± 1.1% for the data-augmentation model. Both the doses and the ranges were improved by the image-rotation technique and zooming augmentation, which substantially improved the DL-based dose conversion accuracy from PB to MC. These techniques can be powerful tools for improving DL-based dose calculation accuracy in PBT.
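The augmentation scheme hinges on applying the same geometric transform jointly to the CT, the PB dose, and the MC target, so that the input-output pairing survives the perturbation. A minimal sketch in Python, assuming 3D NumPy volumes; the rotation and zoom ranges are illustrative, not the authors' settings:

```python
import numpy as np
from scipy.ndimage import rotate, zoom

def augment_pair(ct, pb_dose, mc_dose, rng):
    angle = rng.uniform(-15, 15)      # hypothetical rotation range (degrees)
    factor = rng.uniform(0.9, 1.1)    # hypothetical zoom range

    def transform(vol):
        # rotate in the axial (y-x) plane, then zoom isotropically
        out = rotate(vol, angle, axes=(1, 2), reshape=False, order=1)
        return zoom(out, factor, order=1)

    # identical transform for inputs (CT, PB dose) and target (MC dose)
    return transform(ct), transform(pb_dose), transform(mc_dose)

rng = np.random.default_rng(0)
ct = np.zeros((32, 64, 64))
ct_a, pb_a, mc_a = augment_pair(ct, np.zeros_like(ct), np.zeros_like(ct), rng)
```

Since zooming changes the array shape, a real training pipeline would crop or pad the result back to the network's fixed input size.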

Multimodal ultrasound-based radiomics and deep learning for differential diagnosis of O-RADS 4-5 adnexal masses.

Zeng S, Jia H, Zhang H, Feng X, Dong M, Lin L, Wang X, Yang H

PubMed · May 23, 2025
Accurate differentiation between benign and malignant adnexal masses is crucial to spare patients unnecessary surgical interventions. Ultrasound (US) is the most widely used diagnostic and screening tool for gynecological diseases, with contrast-enhanced US (CEUS) offering enhanced diagnostic precision by clearly delineating blood flow within lesions. According to the Ovarian-Adnexal Reporting and Data System (O-RADS), masses classified as categories 4 and 5 carry the highest risk of malignancy. However, the diagnostic accuracy of US remains heavily reliant on the expertise and subjective interpretation of radiologists. Radiomics has demonstrated significant value in tumor differential diagnosis by extracting microscopic information imperceptible to the human eye. Despite this, no studies to date have explored CEUS-based radiomics for differentiating adnexal masses. This study aims to develop and validate a multimodal US-based nomogram that integrates clinical variables, radiomics, and deep learning (DL) features to distinguish adnexal masses classified as O-RADS 4-5. From November 2020 to March 2024, we enrolled 340 patients who underwent two-dimensional US (2DUS) and CEUS and had masses categorized as O-RADS 4-5. These patients were randomly divided into training and test cohorts in a 7:3 ratio. Adnexal masses were manually segmented from 2DUS and CEUS images. Using machine learning (ML) and DL techniques, five models were developed and validated to differentiate adnexal masses. Diagnostic performance was evaluated using the area under the receiver operating characteristic (ROC) curve (AUC), accuracy, sensitivity, specificity, precision, and F1-score, and a nomogram was constructed to visualize the outcome measures. The CEUS-based radiomics model outperformed the 2DUS model (AUC: 0.826 vs. 0.737), and the CEUS-based DL model likewise surpassed its 2DUS counterpart (AUC: 0.823 vs. 0.793). The ensemble model combining clinical variables, radiomics, and DL features achieved the highest AUC (0.929). Our study confirms that a multimodal US-based nomogram combining clinical variables, radiomics, and DL features can distinguish adnexal masses with high accuracy and specificity. This approach holds significant promise for improving the diagnostic precision of adnexal masses classified as O-RADS 4-5.
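The nomogram is a late-fusion design: features from each source are concatenated and fed to a linear classifier whose coefficients define the nomogram axes. A minimal sketch with scikit-learn, using synthetic data and hypothetical feature dimensions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 340
clinical  = rng.normal(size=(n, 5))    # clinical variables (illustrative count)
radiomics = rng.normal(size=(n, 20))   # selected 2DUS + CEUS radiomic features
dl_feats  = rng.normal(size=(n, 32))   # deep features from a CNN backbone
y = rng.integers(0, 2, size=n)         # benign (0) vs. malignant (1)

X = np.hstack([clinical, radiomics, dl_feats])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)  # nomogram backbone
print("test AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```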

A deep learning model integrating domain-specific features for enhanced glaucoma diagnosis.

Xu J, Jing E, Chai Y

PubMed · May 23, 2025
Glaucoma is a group of serious eye diseases that can cause incurable blindness. Despite the critical need for early detection, over 60% of cases remain undiagnosed, especially in less developed regions. Diagnosing glaucoma is costly, and several models have been proposed to automate diagnosis from retinal images, specifically from the optic cup and the surrounding disc where retinal blood vessels and nerves enter and leave the eye. Diagnosis is complicated, however, because both normal and glaucoma-affected eyes can vary greatly in appearance; some normal eyes, like glaucomatous ones, exhibit a large cup-to-disc ratio (one of the main diagnostic criteria), making the two hard to distinguish. We propose a deep learning model with domain features (DLMDF) that combines unstructured and structured features to distinguish glaucoma from physiologic large cups. The structured features were based on the known cup-to-disc ratios of the four quadrants of the optic disc in normal eyes, physiologic large cups, and glaucomatous optic cups. We segmented each cup and disc using a fully convolutional neural network and then calculated the cup size, disc size, and cup-to-disc ratio of each quadrant. The unstructured features were learned by a deep convolutional neural network. The average precision (AP) was 98.52% for disc segmentation and 98.57% for cup segmentation. These relatively high AP values enabled us to calculate 15 reliable features from each segmented disc and cup. In classification tasks, the DLMDF outperformed other models, achieving superior accuracy, precision, and recall. These results validate the effectiveness of combining deep learning-derived features with domain-specific structured features, underscoring the potential of this approach to advance glaucoma diagnosis.
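The structured features can be computed directly from binary cup and disc masks. A minimal sketch that splits the image into quadrants about the disc centroid (an illustrative choice): four quadrants times three values yield 12 features, with the remaining features of the 15 presumably being global cup, disc, and ratio measures:

```python
import numpy as np

def quadrant_features(cup_mask, disc_mask):
    ys, xs = np.nonzero(disc_mask)
    cy, cx = ys.mean(), xs.mean()            # disc centroid
    yy, xx = np.indices(disc_mask.shape)
    quadrants = [
        (yy <  cy) & (xx <  cx), (yy <  cy) & (xx >= cx),
        (yy >= cy) & (xx <  cx), (yy >= cy) & (xx >= cx),
    ]
    feats = []
    for q in quadrants:
        cup_a, disc_a = int(cup_mask[q].sum()), int(disc_mask[q].sum())
        feats += [cup_a, disc_a, cup_a / max(disc_a, 1)]  # cup, disc, CDR
    return np.array(feats, dtype=float)
```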

Development of a non-contrast CT-based radiomics nomogram for early prediction of delayed cerebral ischemia in aneurysmal subarachnoid hemorrhage.

Chen L, Wang X, Wang S, Zhao X, Yan Y, Yuan M, Sun S

PubMed · May 23, 2025
Delayed cerebral ischemia (DCI) is a significant complication following aneurysmal subarachnoid hemorrhage (aSAH), leading to poor prognosis and high mortality. This study developed a non-contrast CT (NCCT)-based radiomics nomogram for early DCI prediction in aSAH patients. Three hundred seventy-seven aSAH patients were included in this retrospective study. Radiomic features were extracted from the baseline CT scans using PyRadiomics. Feature selection was conducted using t-tests, Pearson correlation, and Lasso regression to identify the features most closely associated with DCI. Multivariable logistic regression was used to identify independent clinical and demographic risk factors. Eight machine learning algorithms were applied to construct radiomics-only and radiomics-clinical fusion nomogram models. The nomogram integrated the radscore and three clinically significant parameters (aneurysm, aneurysm treatment, and admission Hunt-Hess score), with the Support Vector Machine model yielding the highest performance in the validation set. The radiomics model and nomogram produced AUCs of 0.696 (95% CI: 0.578-0.815) and 0.831 (95% CI: 0.739-0.923), respectively. The nomogram achieved an accuracy of 0.775, a sensitivity of 0.750, a specificity of 0.795, and an F1 score of 0.750. The NCCT-based radiomics nomogram demonstrated high predictive performance for DCI in aSAH patients, providing a valuable tool for early DCI identification and for formulating appropriate treatment strategies.
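The three-stage feature selection (t-test, correlation filter, Lasso) is a standard radiomics recipe. A minimal sketch with SciPy and scikit-learn; the 0.05 and 0.9 thresholds are illustrative, not the study's values:

```python
import numpy as np
from scipy.stats import ttest_ind
from sklearn.linear_model import LassoCV

def select_features(X, y, p_thresh=0.05, corr_thresh=0.9):
    # 1) keep features that differ between DCI and non-DCI groups
    pvals = np.array([ttest_ind(X[y == 1, j], X[y == 0, j]).pvalue
                      for j in range(X.shape[1])])
    keep = list(np.where(pvals < p_thresh)[0])
    # 2) drop one feature of each highly correlated pair
    corr = np.corrcoef(X[:, keep], rowvar=False)
    drop = {b for a in range(len(keep)) for b in range(a + 1, len(keep))
            if abs(corr[a, b]) > corr_thresh}
    keep = [k for i, k in enumerate(keep) if i not in drop]
    # 3) Lasso retains features with nonzero coefficients
    lasso = LassoCV(cv=5).fit(X[:, keep], y)
    return [k for k, c in zip(keep, lasso.coef_) if c != 0]
```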

Artificial intelligence automated measurements of spinopelvic parameters in adult spinal deformity-a systematic review.

Bishara A, Patel S, Warman A, Jo J, Hughes LP, Khalifeh JM, Azad TD

PubMed · May 23, 2025
This review evaluates advances in deep learning (DL) applications for automatic spinopelvic parameter estimation, comparing their accuracy to manual measurements performed by surgeons. The PubMed database was queried for studies on DL measurement of adult spinopelvic parameters between 2014 and 2024. Studies were excluded if they focused on pediatric patients, non-deformity-related conditions, or non-human subjects, or if they lacked sufficient quantitative data comparing DL models to human measurements. Included studies were assessed based on model architecture, patient demographics, training, validation, and testing methods, and sample sizes, as well as performance compared to manual methods. Of 442 screened articles, 16 were included, with sample sizes ranging from 15 to 9,832 radiographs and intraclass correlation coefficients (ICCs) of 0.56 to 1.00. Measurements of pelvic tilt, pelvic incidence, T4-T12 kyphosis, L1-L4 lordosis, and sagittal vertical axis (SVA) showed consistently high ICCs (>0.80) and low mean absolute deviations (MADs <6°), with a substantial number of studies reporting pelvic tilt ICCs of 0.90 or greater. In contrast, T1-T12 kyphosis and L4-S1 lordosis exhibited lower ICCs and higher measurement errors. Overall, most DL models demonstrated strong correlations (>0.80) with clinician measurements and minimal differences compared to manual references, except for T1-T12 kyphosis (average Pearson correlation: 0.68), L1-L4 lordosis (average Pearson correlation: 0.75), and L4-S1 lordosis (average Pearson correlation: 0.65). These computer vision algorithms show promising accuracy in measuring spinopelvic parameters, comparable to manual surgeon measurements. Future research should focus on external validation, additional imaging modalities, and the feasibility of integration into clinical settings to assess model reliability and predictive capacity.
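Agreement in these studies is summarized by correlation and absolute-error statistics between model and surgeon measurements. A minimal sketch computing a Pearson correlation and mean absolute deviation on synthetic pelvic-tilt values:

```python
import numpy as np
from scipy.stats import pearsonr

manual = np.array([18.2, 25.1, 30.4, 12.9, 22.7, 35.0])  # surgeon, degrees
dl_out = np.array([17.8, 26.0, 29.5, 13.4, 23.1, 34.2])  # model, degrees

r, p = pearsonr(manual, dl_out)
mad = np.mean(np.abs(manual - dl_out))
print(f"Pearson r = {r:.2f} (p = {p:.3f}), MAD = {mad:.1f} degrees")
```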

Evaluation of a deep-learning segmentation model for patients with colorectal cancer liver metastases (COALA) in the radiological workflow.

Zeeuw M, Bereska J, Strampel M, Wagenaar L, Janssen B, Marquering H, Kemna R, van Waesberghe JH, van den Bergh J, Nota I, Moos S, Nio Y, Kop M, Kist J, Struik F, Wesdorp N, Nelissen J, Rus K, de Sitter A, Stoker J, Huiskens J, Verpalen I, Kazemier G

PubMed · May 23, 2025
For patients with colorectal liver metastases (CRLM), total tumor volume (TTV) is prognostic. A deep-learning segmentation model called COlorectal cAncer Liver metastases Assessment (COALA) has been developed to assess TTV in CRLM. This study evaluated COALA's performance and practical utility in the radiological picture archiving and communication system (PACS); a secondary aim was to draw lessons for future researchers on the implementation of artificial intelligence (AI) models. Patients discussed in a multidisciplinary meeting for CRLM between January and December 2023 were included. In these patients, CRLM were automatically segmented by COALA in portal-venous phase CT scans, and the results were integrated into PACS. Eight expert abdominal radiologists completed a questionnaire addressing segmentation accuracy and PACS integration and were also asked to record general remarks. In total, 57 patients were evaluated, covering 112 contrast-enhanced portal-venous phase CT scans. Six of the eight radiologists (75%) rated the model as user-friendly in their radiological workflow. Areas for improvement of the COALA model were the segmentation of small lesions, heterogeneous lesions, and lesions at the border of the liver involving the diaphragm or heart. Key lessons for implementation were a multidisciplinary approach, a robust methodology prior to model development, and evaluation sessions with end-users early in the development phase. This study demonstrates that the deep-learning segmentation model for patients with CRLM (COALA) is user-friendly in the radiologist's PACS. Future researchers striving for implementation should take a multidisciplinary approach, propose a robust methodology, and involve end-users prior to model development. Many segmentation models are being developed, but none of those models have been evaluated in the radiological workflow or clinically implemented. Our model is implemented in the radiological work system, providing valuable lessons for researchers aiming at clinical implementation. Developed segmentation models should be implemented in the radiological workflow; our implemented model offers valuable lessons for future researchers, and, if implemented in clinical practice, could allow for objective radiological evaluation.
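TTV itself follows directly from a binary segmentation: count foreground voxels and multiply by the voxel volume. A minimal sketch assuming a SimpleITK mask whose spacing is in millimeters:

```python
import numpy as np
import SimpleITK as sitk

def total_tumor_volume_ml(mask: sitk.Image) -> float:
    voxels = np.count_nonzero(sitk.GetArrayFromImage(mask))  # foreground count
    voxel_mm3 = float(np.prod(mask.GetSpacing()))            # mm^3 per voxel
    return voxels * voxel_mm3 / 1000.0                       # mm^3 -> mL
```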

Lung volume assessment for mean dark-field coefficient calculation using different determination methods.

Gassert FT, Heuchert J, Schick R, Bast H, Urban T, Dorosti T, Zimmermann GS, Ziegelmayer S, Marka AW, Graf M, Makowski MR, Pfeiffer D, Pfeiffer F

PubMed · May 23, 2025
Accurate lung volume determination is crucial for reliable dark-field imaging. We compared different approaches to determining lung volume for mean dark-field coefficient calculation. In this retrospective analysis of data prospectively acquired between October 2018 and October 2020, patients at least 18 years of age who underwent chest computed tomography (CT) were screened for study participation. Inclusion criteria were the ability to consent and to stand upright without help. Exclusion criteria were pregnancy, lung cancer, pleural effusion, atelectasis, air space disease, ground-glass opacities, and pneumothorax. Lung volume was calculated using four methods: conventional radiography (CR) using shape information; a convolutional neural network (CNN) trained on CR; CT-based volume estimation; and pulmonary function testing (PFT). Results were compared using Student t-tests and Spearman ρ correlation statistics. We studied 81 participants (51 men, 30 women) aged 64 ± 12 years (mean ± standard deviation). Lung volumes differed significantly between all methods: CR, 7.27 ± 1.64 L; CNN, 4.91 ± 1.05 L; CT, 5.25 ± 1.36 L; PFT, 6.54 ± 1.52 L; p < 0.001 for all comparisons. A high positive correlation was found for all combinations (p < 0.001 for all), the highest between CT and CR (ρ = 0.88) and the lowest between PFT and CNN (ρ = 0.78). Lung volume, and therefore the mean dark-field coefficient, depends strongly on the determination method, which must account for differences in positioning and inhalation depth. This study underscores the impact of the method used for lung volume determination. In the context of mean dark-field coefficient calculation, CR-based methods are preferable because dark-field and conventional images are acquired in the same breathing state, eliminating biases due to differences in inhalation depth. Lung volume measurements vary significantly between determination methods, so mean dark-field coefficient calculations require a consistent method to ensure comparability; radiography-based methods simplify workflows and minimize biases, making them the most suitable.
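The reported statistics are pairwise comparisons across the four volume estimates. A minimal sketch with SciPy on synthetic volumes; a paired t-test is assumed here, since all methods were applied to the same 81 participants:

```python
from itertools import combinations
import numpy as np
from scipy.stats import spearmanr, ttest_rel

rng = np.random.default_rng(0)
base = rng.normal(6.0, 1.5, size=81)   # latent "true" lung volume, liters
volumes = {                            # synthetic stand-ins for the 4 methods
    "CR":  base * 1.15 + rng.normal(0, 0.3, 81),
    "CNN": base * 0.80 + rng.normal(0, 0.3, 81),
    "CT":  base * 0.85 + rng.normal(0, 0.3, 81),
    "PFT": base * 1.05 + rng.normal(0, 0.3, 81),
}
for (a, va), (b, vb) in combinations(volumes.items(), 2):
    rho, _ = spearmanr(va, vb)
    _, p = ttest_rel(va, vb)
    print(f"{a} vs {b}: rho = {rho:.2f}, paired t-test p = {p:.3g}")
```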

Development and validation of a radiomics model using plain radiographs to predict spine fractures with posterior wall injury.

Liu W, Zhang X, Yu C, Chen D, Zhao K, Liang J

PubMed · May 23, 2025
When spine fractures involve posterior wall damage, they pose a heightened risk of instability, which in turn influences treatment strategy. To enhance early diagnosis and refine treatment planning for these fractures, we performed a radiomics analysis using deep learning techniques based on both anteroposterior and lateral plain X-ray images. Retrospective data were collected for 130 patients with spine fractures who underwent anteroposterior and lateral imaging at two centers (Center 1, training cohort; Center 2, validation cohort) between January 2010 and June 2024. The Vision Transformer (ViT) technique was employed to extract imaging features. Features selected through multiple methods were then used to construct machine learning models with Naive Bayes and Support Vector Machine (SVM) classifiers. Model performance was evaluated using the area under the curve (AUC). Twelve features were selected to form the deep learning signature. The SVM model combining anteroposterior and lateral plain images performed well in both centers, with a high AUC for predicting spine fractures with posterior wall injury (Center 1, AUC: 0.909, 95% CI: 0.763-1.000; Center 2, AUC: 0.837, 95% CI: 0.678-0.996). The combined-image SVM model outperformed both the single-view models and a spine surgeon with 3 years of clinical experience in classification performance. Our study demonstrates that a radiomics model integrating anteroposterior and lateral plain X-ray images of the spine can more effectively predict spine fractures with posterior wall injury, aiding clinicians in making accurate diagnoses and treatment decisions.
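The pipeline's shape is a pretrained ViT as a frozen feature extractor, followed by feature selection and a probabilistic SVM. A minimal sketch with timm and scikit-learn; the model name, the f_classif selector, and the RBF kernel are illustrative assumptions, not the study's choices:

```python
import timm
import torch
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

vit = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=0)
vit.eval()                               # frozen feature extractor

@torch.no_grad()
def extract(batch):                      # batch: (N, 3, 224, 224) float tensor
    return vit(batch).numpy()            # (N, 768) pooled ViT features

# Concatenate anteroposterior + lateral features per patient, select 12,
# then fit a probabilistic SVM: clf.fit(features, labels)
clf = make_pipeline(StandardScaler(),
                    SelectKBest(f_classif, k=12),
                    SVC(kernel="rbf", probability=True))
```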

End-to-end prognostication in pancreatic cancer by multimodal deep learning: a retrospective, multicenter study.

Schuurmans M, Saha A, Alves N, Vendittelli P, Yakar D, Sabroso-Lasa S, Xue N, Malats N, Huisman H, Hermans J, Litjens G

PubMed · May 23, 2025
Pancreatic cancer treatment plans involving surgery and/or chemotherapy are highly dependent on disease stage. However, current staging systems are ineffective and poorly correlated with survival outcomes. We investigate how artificial intelligence (AI) can enhance prognostic accuracy in pancreatic cancer by integrating multiple data sources. Patients with histopathology- and/or radiology/follow-up-confirmed pancreatic ductal adenocarcinoma (PDAC) from a Dutch center (2004-2023) were included in the development cohort. Two additional PDAC cohorts, from a Dutch and a Spanish center, were used for external validation. Prognostic models based on clinical variables, contrast-enhanced CT images, and a combination of both were developed to predict high-risk short-term survival. All models were trained using five-fold cross-validation and assessed by the area under the time-dependent receiver operating characteristic curve (AUC). The models were developed on 401 patients (203 females, 198 males; median overall survival (OS) = 347 days, IQR: 171-585), with 98 (24.4%) short-term survivors (OS < 230 days) and 303 (75.6%) long-term survivors. The external validation cohorts included 361 patients (165 females, 138 males; median OS = 404 days, IQR: 173-736), with 110 (30.5%) short-term and 251 (69.5%) long-term survivors. The best AUC for predicting short- vs. long-term survival was achieved with the multimodal model (AUC = 0.637, 95% CI: 0.500-0.774) in the internal validation set; external validation yielded AUCs of 0.571 (95% CI: 0.453-0.689) and 0.675 (95% CI: 0.593-0.757). Multimodal AI can predict long- vs. short-term survival in PDAC patients, showing potential as a prognostic tool in clinical decision-making. Question: Prognostic tools for pancreatic ductal adenocarcinoma (PDAC) remain limited, with TNM staging offering suboptimal accuracy in predicting patient survival outcomes. Findings: The multimodal AI model demonstrated improved prognostic performance over TNM and unimodal models for predicting short- and long-term survival in PDAC patients. Clinical relevance: Multimodal AI provides enhanced prognostic accuracy compared to current staging systems, potentially improving clinical decision-making and personalized management strategies for PDAC patients.
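The evaluation metric, the time-dependent AUC, is available in scikit-survival. A minimal sketch at the study's 230-day short-term cutoff, using synthetic survival data and risk scores in place of the multimodal model's outputs:

```python
import numpy as np
from sksurv.metrics import cumulative_dynamic_auc
from sksurv.util import Surv

rng = np.random.default_rng(0)
time = rng.integers(30, 1200, size=200).astype(float)  # days of follow-up
event = rng.integers(0, 2, size=200).astype(bool)      # death observed
risk = -time + rng.normal(0, 100, size=200)            # higher = worse prognosis

y = Surv.from_arrays(event=event, time=time)
auc, _ = cumulative_dynamic_auc(y, y, risk, times=[230])
print(f"time-dependent AUC at 230 days: {auc[0]:.3f}")
```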

EnsembleEdgeFusion: advancing semantic segmentation in microvascular decompression imaging with innovative ensemble techniques.

Dhiyanesh B, Vijayalakshmi M, Saranya P, Viji D

PubMed · May 23, 2025
Semantic segmentation plays an important part in the analysis of medical images, particularly in the domain of microvascular decompression, where publicly available datasets are scarce and expert annotation is demanding. In response to this challenge, this study presents a meticulously curated dataset comprising 2003 RGB microvascular decompression images, each paired with an annotated mask. Extensive data preprocessing and augmentation strategies were employed to fortify the training dataset and enhance the robustness of the proposed deep learning model. Several state-of-the-art semantic segmentation approaches, including DeepLabv3+, U-Net, DilatedFastFCN with JPU, DANet, and a custom Vanilla architecture, were trained and evaluated using diverse performance metrics. Among these models, DeepLabv3+ emerged as a strong contender, notably excelling in F1 score. Ensemble techniques such as stacking and bagging were then introduced to further elevate segmentation performance; bagging, notably with the Naïve Bayes approach, exhibited significant improvements, underscoring the potential of ensemble methods in medical image segmentation. The proposed EnsembleEdgeFusion technique exhibited superior loss reduction during training compared to DeepLabv3+ and achieved a maximum Mean Intersection over Union (MIoU) score of 77.73%, surpassing the other models. Category-wise analysis affirmed its superiority in accurately delineating the various categories within the test dataset.
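MIoU, the headline metric above, is the per-class intersection over union averaged across categories, computable from a confusion matrix. A minimal NumPy sketch:

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    # confusion matrix via bincount over (target, pred) index pairs
    idx = target.ravel() * num_classes + pred.ravel()
    cm = np.bincount(idx, minlength=num_classes ** 2)
    cm = cm.reshape(num_classes, num_classes)
    inter = np.diag(cm)
    union = cm.sum(axis=0) + cm.sum(axis=1) - inter
    return (inter / np.maximum(union, 1)).mean()  # average over categories

pred = np.array([[0, 1], [1, 2]])
target = np.array([[0, 1], [2, 2]])
print(mean_iou(pred, target, num_classes=3))  # 0.666...
```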