Page 64 of 2202194 results

Developing ultrasound-based machine learning models for accurate differentiation between sclerosing adenosis and invasive ductal carcinoma.

Liu G, Yang N, Qu Y, Chen G, Wen G, Li G, Deng L, Mai Y

pubmed logopapers · Jun 28, 2025
This study aimed to develop a machine learning model using breast ultrasound images to improve the non-invasive differential diagnosis between sclerosing adenosis (SA) and invasive ductal carcinoma (IDC). A total of 2046 ultrasound images from 772 SA and IDC patients were collected; regions of interest (ROI) were delineated and features extracted. The dataset was split into training and test cohorts, and feature selection was performed using correlation coefficients and Recursive Feature Elimination. Ten classifiers with grid search and 5-fold cross-validation were applied during model training. The receiver operating characteristic (ROC) curve and Youden index were used for model evaluation, and SHapley Additive exPlanations (SHAP) was employed for model interpretation. Another 224 ROIs from 84 patients at other hospitals were used for external validation. For the ROI-level model, XGBoost with 18 features achieved an area under the curve (AUC) of 0.9758 (0.9654-0.9847) in the test cohort and 0.9906 (0.9805-0.9973) in the validation cohort. For the patient-level model, logistic regression with 9 features achieved an AUC of 0.9653 (0.9402-0.9859) in the test cohort and 0.9846 (0.9615-0.9978) in the validation cohort. The feature "Original shape Major Axis Length" was identified as the most important, with higher values associated with a greater likelihood of IDC. Feature contributions for specific ROIs were visualized as well. We developed explainable, ultrasound-based machine learning models with high performance for differentiating SA and IDC, offering a potential non-invasive tool for improved differential diagnosis.
Question: Accurately distinguishing between SA and IDC in a non-invasive manner has been a diagnostic challenge.
Findings: Explainable, ultrasound-based machine learning models with high performance were developed for differentiating SA and IDC, and validated well in an external validation cohort.
Critical relevance: These models provide non-invasive tools to reduce misdiagnoses of SA and improve early detection of IDC.
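The pipeline described above (correlation filtering, Recursive Feature Elimination, then a grid-searched classifier with 5-fold cross-validation) can be sketched as follows. This is a minimal illustration on synthetic data, not the study's code: sklearn's `GradientBoostingClassifier` stands in for XGBoost, and the correlation threshold and split sizes are assumptions.

```python
# Sketch of the reported pipeline: correlation filtering, RFE feature
# selection, then grid-searched classification with 5-fold CV.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import GridSearchCV, train_test_split

# Synthetic stand-in for the radiomics feature matrix.
X, y = make_classification(n_samples=400, n_features=60,
                           n_informative=12, random_state=0)

# Step 1: drop one feature of each highly correlated pair (|r| > 0.9, assumed cut-off).
corr = np.corrcoef(X, rowvar=False)
drop = set()
for i in range(corr.shape[0]):
    for j in range(i + 1, corr.shape[1]):
        if abs(corr[i, j]) > 0.9:
            drop.add(j)
X = X[:, [i for i in range(X.shape[1]) if i not in drop]]

# Step 2: Recursive Feature Elimination down to 18 features, as reported.
rfe = RFE(LogisticRegression(max_iter=1000),
          n_features_to_select=min(18, X.shape[1]))
X_sel = rfe.fit_transform(X, y)

# Step 3: grid search with 5-fold cross-validation, then held-out AUC.
X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, test_size=0.3, random_state=0)
grid = GridSearchCV(GradientBoostingClassifier(random_state=0),
                    {"n_estimators": [50, 100], "max_depth": [2, 3]},
                    cv=5, scoring="roc_auc")
grid.fit(X_tr, y_tr)
auc = roc_auc_score(y_te, grid.predict_proba(X_te)[:, 1])
print(f"test AUC: {auc:.3f}")
```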

Identifying visible tissue in intraoperative ultrasound: a method and application.

Weld A, Dixon L, Dyck M, Anichini G, Ranne A, Camp S, Giannarou S

pubmed logopapers · Jun 28, 2025
Intraoperative ultrasound scanning is a demanding visuotactile task. It requires operators to simultaneously localise the ultrasound perspective and manually make slight adjustments to the pose of the probe, taking care not to apply excessive force or break contact with the tissue, while also characterising the visible tissue. To analyse the probe-tissue contact, an iterative filtering and topological method is proposed to identify the underlying visible tissue, which can be used to detect acoustic shadow and construct confidence maps of perceptual salience. For evaluation, datasets containing both in vivo and medical phantom data are created. A suite of evaluations is performed, including an evaluation of acoustic shadow classification. Compared to an ablation, a deep-learning method, and a statistical method, the proposed approach achieves superior classification on in vivo data, achieving an <math xmlns="http://www.w3.org/1998/Math/MathML"><msub><mi>F</mi> <mi>β</mi></msub> </math> score of 0.864, in comparison with 0.838, 0.808, and 0.808. A novel framework for evaluating the confidence estimation of probe-tissue contact is created. The phantom data are captured specifically for this purpose, and comparison is made against two established methods. The proposed method produced the superior response, achieving an average normalised root-mean-square error of 0.168, in comparison with 1.836 and 4.542. Evaluation is also extended to determine the algorithm's robustness to parameter perturbation, speckle noise, and data distribution shift, and its capability for guiding a robotic scan. The results of this comprehensive set of experiments justify the potential clinical value of the proposed algorithm, which can be used to support clinical training and robotic ultrasound automation.
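The shadow-classification results above are reported as an F-beta score. A minimal sketch of how F-beta combines precision and recall (beta > 1 weights recall more heavily; the values passed below are illustrative, not the paper's):

```python
# F-beta: weighted harmonic mean of precision and recall.
def f_beta(precision: float, recall: float, beta: float = 1.0) -> float:
    if precision == 0.0 and recall == 0.0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Example with invented precision/recall values; beta=2 favours recall.
print(f_beta(0.90, 0.82, beta=2.0))
```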

Prognostic value of body composition out of PSMA-PET/CT in prostate cancer patients undergoing PSMA-therapy.

Roll W, Plagwitz L, Ventura D, Masthoff M, Backhaus C, Varghese J, Rahbar K, Schindler P

pubmed logopapers · Jun 28, 2025
This retrospective study aims to develop a deep learning-based approach to whole-body CT segmentation from standard PSMA-PET/CT to assess body composition in metastatic castration-resistant prostate cancer (mCRPC) patients prior to [<sup>177</sup>Lu]Lu-PSMA radioligand therapy (RLT). Our goal is to go beyond standard PSMA-PET-based pretherapeutic assessment and identify additional body composition metrics from the CT component with potential prognostic value. We used a deep learning segmentation model to perform fully automated segmentation of different tissue compartments, including visceral (VAT), subcutaneous (SAT), and intra-/intermuscular adipose tissue (IMAT), from [<sup>68</sup>Ga]Ga-PSMA-PET/CT scans of n = 86 prostate cancer patients before RLT. The proportions of the different adipose tissue compartments relative to total adipose tissue (TAT), assessed either on a 3D CT volume of the abdomen or on a 2D single slice (centered at the third lumbar vertebra, L3), were compared for their prognostic value. First, univariate and multivariate Cox proportional hazards regression analyses were performed. Subsequently, the subjects were dichotomized at the median tissue composition, and these subgroups were evaluated by Kaplan-Meier analysis with the log-rank test. The automated segmentation model was useful for delineating the different adipose tissue compartments and skeletal muscle across different patient anatomies. Analyses revealed significant correlations between lower SAT and higher IMAT ratios and poorer therapeutic outcomes in Cox regression analysis (SAT/TAT: p = 0.038; IMAT/TAT: p < 0.001) in the 3D model. In the single-slice approach, only IMAT/SAT was significantly associated with survival in Cox regression analysis (p < 0.001; SAT/TAT: p > 0.05). The IMAT ratio remained an independent predictor of survival in multivariate analysis when including PSMA-PET and blood-based prognostic factors.
In this proof-of-principle study the implementation of a deep learning-based whole-body analysis provides a robust and detailed CT-based assessment of body composition in mCRPC patients undergoing RLT. Potential prognostic parameters have to be corroborated in larger prospective datasets.
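The survival-analysis step above, dichotomizing patients at the median tissue-composition value and comparing Kaplan-Meier curves, can be sketched generically. The product-limit estimator below is a textbook implementation on invented data, not the study's code:

```python
import numpy as np

def kaplan_meier(times, events):
    """Product-limit survival estimate; event=1 is death, 0 is censored.
    Returns a list of (time, survival probability) pairs."""
    order = np.argsort(times)
    times = np.asarray(times, dtype=float)[order]
    events = np.asarray(events)[order]
    surv, curve = 1.0, []
    for t in np.unique(times[events == 1]):
        at_risk = np.sum(times >= t)          # patients still under observation
        deaths = np.sum((times == t) & (events == 1))
        surv *= 1.0 - deaths / at_risk        # multiply by conditional survival
        curve.append((float(t), surv))
    return curve

# Dichotomize a (hypothetical) adipose-tissue ratio at its median, as in the study.
ratio = np.array([0.10, 0.30, 0.20, 0.50, 0.40, 0.25])
high_group = ratio > np.median(ratio)
print(high_group.sum(), "patients above the median")
```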

Non-contrast computed tomography radiomics model to predict benign and malignant thyroid nodules with lobe segmentation: A dual-center study.

Wang H, Wang X, Du YS, Wang Y, Bai ZJ, Wu D, Tang WL, Zeng HL, Tao J, He J

pubmed logopapers · Jun 28, 2025
Accurate preoperative differentiation of benign and malignant thyroid nodules is critical for optimal patient management. However, conventional imaging modalities present inherent diagnostic limitations. To develop a non-contrast computed tomography-based machine learning model integrating radiomics and clinical features for preoperative thyroid nodule classification. This multicenter retrospective study enrolled 272 patients with thyroid nodules (376 thyroid lobes) from center A (May 2021-April 2024), using histopathological findings as the reference standard. The dataset was stratified into a training cohort (264 lobes) and an internal validation cohort (112 lobes). Additional prospective temporal (97 lobes, May-August 2024, center A) and external multicenter (81 lobes, center B) test cohorts were incorporated to enhance generalizability. Thyroid lobes were segmented along the isthmus midline, with segmentation reliability confirmed by an intraclass correlation coefficient (≥ 0.80). Radiomics feature selection was performed using Pearson correlation analysis followed by least absolute shrinkage and selection operator regression with 10-fold cross-validation. Seven machine learning algorithms were systematically evaluated, with model performance quantified through the area under the receiver operating characteristic curve (AUC), Brier score, decision curve analysis, and the DeLong test for comparison with radiologists' interpretations. Model interpretability was elucidated using SHapley Additive exPlanations (SHAP). The extreme gradient boosting model demonstrated robust diagnostic performance across all datasets, achieving AUCs of 0.899 [95% confidence interval (CI): 0.845-0.932] in the training cohort, 0.803 (95% CI: 0.715-0.890) in internal validation, 0.855 (95% CI: 0.775-0.935) in temporal testing, and 0.802 (95% CI: 0.664-0.939) in external testing.
These results were significantly superior to radiologists' assessments (AUCs: 0.596, 0.529, 0.558, and 0.538, respectively; <i>P</i> < 0.001 by DeLong test). SHAP analysis identified radiomic score, age, tumor size stratification, calcification status, and cystic components as key predictive features. The model exhibited excellent calibration (Brier scores: 0.125-0.144) and provided significant clinical net benefit at decision thresholds exceeding 20%, as evidenced by decision curve analysis. The non-contrast computed tomography-based radiomics-clinical fusion model enables robust preoperative thyroid nodule classification, with SHAP-driven interpretability enhancing its clinical applicability for personalized decision-making.
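The feature-selection and calibration steps reported above (LASSO with 10-fold CV, then Brier score) can be sketched with sklearn on synthetic data. This is an illustrative stand-in, not the study's pipeline; feature counts and split sizes are assumptions:

```python
# LASSO feature selection with 10-fold CV, then a calibrated classifier
# evaluated with the Brier score. Synthetic data only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LassoCV, LogisticRegression
from sklearn.metrics import brier_score_loss
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=300, n_features=40,
                           n_informative=8, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

# LASSO shrinks uninformative coefficients to zero; keep the survivors.
lasso = LassoCV(cv=10, random_state=1).fit(X_tr, y_tr)
selected = np.flatnonzero(np.abs(lasso.coef_) > 1e-6)

# Fit a classifier on the selected features; Brier score measures calibration
# (mean squared error of predicted probabilities, lower is better).
clf = LogisticRegression(max_iter=1000).fit(X_tr[:, selected], y_tr)
brier = brier_score_loss(y_te, clf.predict_proba(X_te[:, selected])[:, 1])
print(f"{len(selected)} features selected, Brier score {brier:.3f}")
```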

Pulmonary hypertension: diagnostic aspects-what is the role of imaging?

Ali HJ, Guha A

pubmed logopapers · Jun 27, 2025
The role of imaging in the diagnosis of pulmonary hypertension is multifaceted, spanning estimation of pulmonary arterial pressures, understanding of pulmonary artery-right ventricular interaction, and identification of the cause. The purpose of this review is to provide a comprehensive overview of multimodality imaging in the evaluation of pulmonary hypertension, as well as the novel applications of imaging techniques that have improved our detection and understanding of pulmonary hypertension. Diverse imaging modalities are available for comprehensive assessment of pulmonary hypertension, and they are expanding with new tracers (e.g., hyperpolarized xenon gas, 129Xe) and imaging techniques (C-arm cone-beam computed tomography). Artificial intelligence applications may improve the efficiency and accuracy of screening for pulmonary hypertension, and may further characterize pulmonary vasculopathies using computed tomography of the chest. In the face of increasing imaging options, a "value-based imaging" approach should be adopted to reduce unnecessary burden on the patient and the healthcare system without compromising the accuracy and completeness of diagnostic assessment. Future studies are needed to optimize the use of multimodality imaging and artificial intelligence in the comprehensive evaluation of patients with pulmonary hypertension.

Causality-Adjusted Data Augmentation for Domain Continual Medical Image Segmentation.

Zhu Z, Dong Q, Luo G, Wang W, Dong S, Wang K, Tian Y, Wang G, Li S

pubmed logopapers · Jun 27, 2025
In domain continual medical image segmentation, distillation-based methods mitigate catastrophic forgetting by continuously reviewing old knowledge. However, these approaches often exhibit biases towards both new and old knowledge simultaneously due to confounding factors, which can undermine segmentation performance. To address these biases, we propose the Causality-Adjusted Data Augmentation (CauAug) framework, introducing a novel causal intervention strategy called the Texture-Domain Adjustment Hybrid-Scheme (TDAHS) alongside two causality-targeted data augmentation approaches: the Cross Kernel Network (CKNet) and the Fourier Transformer Generator (FTGen). (1) TDAHS establishes a domain-continual causal model that accounts for two types of knowledge biases by identifying irrelevant local textures (L) and domain-specific features (D) as confounders. It introduces a hybrid causal intervention that combines traditional confounder elimination with a proposed replacement approach to better adapt to domain shifts, thereby promoting causal segmentation. (2) CKNet eliminates confounder L to reduce biases in new knowledge absorption. It decreases reliance on local textures in input images, forcing the model to focus on relevant anatomical structures and thus improving generalization. (3) FTGen causally intervenes on confounder D by selectively replacing it to alleviate biases that impact old knowledge retention. It restores domain-specific features in images, aiding in the comprehensive distillation of old knowledge. Our experiments show that CauAug significantly mitigates catastrophic forgetting and surpasses existing methods in various medical image segmentation tasks. The implementation code is publicly available at: https://github.com/PerceptionComputingLab/CauAug_DCMIS.
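FTGen's restoration of domain-specific features through the Fourier transform is in the spirit of low-frequency amplitude swapping used in Fourier-based domain adaptation. The sketch below illustrates that generic idea, not the paper's exact generator; the `beta` fraction controlling the swapped band is an assumed parameter:

```python
# Swap the central (low-frequency) amplitude spectrum of a source image
# with that of a reference image, keeping the source phase. Low frequencies
# carry domain-specific appearance (intensity/contrast style), while phase
# preserves anatomical structure.
import numpy as np

def swap_low_freq_amplitude(src, ref, beta=0.1):
    fs = np.fft.fftshift(np.fft.fft2(src))
    fr = np.fft.fftshift(np.fft.fft2(ref))
    amp_s, pha_s = np.abs(fs), np.angle(fs)
    amp_r = np.abs(fr)
    h, w = src.shape
    bh, bw = int(h * beta), int(w * beta)
    ch, cw = h // 2, w // 2
    # Replace the central low-frequency band of the source amplitude.
    amp_s[ch - bh:ch + bh + 1, cw - bw:cw + bw + 1] = \
        amp_r[ch - bh:ch + bh + 1, cw - bw:cw + bw + 1]
    mixed = amp_s * np.exp(1j * pha_s)
    return np.real(np.fft.ifft2(np.fft.ifftshift(mixed)))

rng = np.random.default_rng(0)
out = swap_low_freq_amplitude(rng.random((64, 64)), rng.random((64, 64)))
print(out.shape)
```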

Quantifying Sagittal Craniosynostosis Severity: A Machine Learning Approach With CranioRate.

Tao W, Somorin TJ, Kueper J, Dixon A, Kass N, Khan N, Iyer K, Wagoner J, Rogers A, Whitaker R, Elhabian S, Goldstein JA

pubmed logopapers · Jun 27, 2025
Objective: To develop and validate machine learning (ML) models for objective and comprehensive quantification of sagittal craniosynostosis (SCS) severity, enhancing clinical assessment, management, and research.
Design: A cross-sectional study that combined the analysis of computed tomography (CT) scans and expert ratings.
Setting: The study was conducted at a children's hospital and a major computer imaging institution. Our survey collected expert ratings from participating surgeons.
Participants: The study included 195 patients with nonsyndromic SCS, 221 patients with nonsyndromic metopic craniosynostosis (CS), and 178 age-matched controls. Fifty-four craniofacial surgeons participated in rating 20 patients' head CT scans.
Interventions: Computed tomography scans for cranial morphology assessment and a radiographic diagnosis of nonsyndromic SCS.
Main Outcomes: Accuracy of the proposed Sagittal Severity Score (SSS) in predicting expert ratings compared to the cephalic index (CI). Secondary outcomes compared Likert ratings with SCS status, the predictive power of skull-based versus skin-based landmarks, and assessments of an unsupervised ML model, the Cranial Morphology Deviation (CMD), as an alternative without ratings.
Results: The SSS achieved significantly higher accuracy in predicting expert responses than the CI (<i>P</i> < .05). Likert ratings outperformed SCS status in supervising ML models to quantify within-group variations. Skin-based landmarks demonstrated predictive power equivalent to skull landmarks (<i>P</i> < .05, threshold 0.02). The CMD demonstrated a strong correlation with the SSS (Pearson coefficient: 0.92, Spearman coefficient: 0.90, <i>P</i> < .01).
Conclusions: The SSS and CMD can provide accurate, consistent, and comprehensive quantification of SCS severity. Implementing these data-driven ML models can significantly advance CS care through standardized assessments, enhanced precision, and informed surgical planning.
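The cephalic index (CI), the baseline metric against which the SSS is compared, is simply maximum head width divided by maximum head length, times 100. The example measurements below are invented for illustration:

```python
# Cephalic index: head width / head length * 100.
def cephalic_index(width_mm: float, length_mm: float) -> float:
    return 100.0 * width_mm / length_mm

# Scaphocephaly from sagittal synostosis elongates the skull,
# typically pushing the CI below the normal range.
print(cephalic_index(130.0, 185.0))
```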

Automation in tibial implant loosening detection using deep-learning segmentation.

Magg C, Ter Wee MA, Buijs GS, Kievit AJ, Schafroth MU, Dobbe JGG, Streekstra GJ, Sánchez CI, Blankevoort L

pubmed logopapers · Jun 27, 2025
Patients with recurrent complaints after total knee arthroplasty may suffer from aseptic implant loosening. Current imaging modalities do not quantify looseness of knee arthroplasty components. A recently developed and validated workflow quantifies the tibial component displacement relative to the bone from CT scans acquired under valgus and varus load. The 3D analysis approach includes segmentation and registration of the tibial component and bone. In the current approach, the semi-automatic segmentation requires user interaction, adding complexity to the analysis. The research question is whether the segmentation step can be fully automated while keeping outcomes unchanged. In this study, different deep-learning (DL) models for fully automatic segmentation are proposed and evaluated. For this, we employ three different datasets for model development (20 cadaveric CT pairs and 10 cadaveric CT scans) and evaluation (72 patient CT pairs). Based on the performance on the development dataset, the final model was selected, and its predictions replaced the semi-automatic segmentation in the current approach. Implant displacement was quantified by the rotation about the screw axis, maximum total point motion, and mean target registration error. The displacement parameters of the proposed approach showed a statistically significant difference between fixed and loose samples in a cadaver dataset, as well as between asymptomatic and loose samples in a patient dataset, similar to the outcomes of the current approach. The methodological error calculated on a reproducibility dataset did not differ significantly between the two approaches. The results of the proposed and current approaches showed excellent reliability for one and three operators on two datasets.
The conclusion is that full automation of knee implant displacement assessment is feasible by utilizing a DL-based segmentation model while maintaining the capability of distinguishing between fixed and loose implants.
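One of the displacement parameters reported above, the mean target registration error (TRE), is the average Euclidean distance between corresponding points after registration. A minimal sketch on invented point sets (the displacement vector is arbitrary):

```python
import numpy as np

def mean_tre(points_a, points_b):
    """Mean Euclidean distance between paired 3-D point sets."""
    a, b = np.asarray(points_a, dtype=float), np.asarray(points_b, dtype=float)
    return float(np.mean(np.linalg.norm(a - b, axis=1)))

def max_total_point_motion(points_a, points_b):
    """Largest single-point displacement between the two poses."""
    a, b = np.asarray(points_a, dtype=float), np.asarray(points_b, dtype=float)
    return float(np.max(np.linalg.norm(a - b, axis=1)))

fixed = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
moved = fixed + np.array([0.3, 0.0, 0.4])  # uniform 0.5 mm translation
print(mean_tre(fixed, moved))  # 0.5
```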

Catheter detection and segmentation in X-ray images via multi-task learning.

Xi L, Ma Y, Koland E, Howell S, Rinaldi A, Rhode KS

pubmed logopapers · Jun 27, 2025
Automated detection and segmentation of surgical devices, such as catheters or wires, in X-ray fluoroscopic images have the potential to enhance image guidance in minimally invasive heart surgeries. In this paper, we present a convolutional neural network model that integrates a ResNet architecture with multiple prediction heads to achieve real-time, accurate localization of electrodes on catheters and catheter segmentation in an end-to-end deep learning framework. We also propose a multi-task learning strategy in which our model is trained to perform both accurate electrode detection and catheter segmentation simultaneously. A key challenge with this approach is achieving optimal performance for both tasks. To address this, we introduce a novel multi-level dynamic resource prioritization method. This method dynamically adjusts sample and task weights during training to effectively prioritize more challenging tasks, where task difficulty is inversely proportional to performance and evolves throughout the training process. The proposed method has been validated on both public and private datasets for single-task catheter segmentation and multi-task catheter segmentation and detection. The performance of our method is also compared with existing state-of-the-art methods, demonstrating significant improvements, with a mean <math xmlns="http://www.w3.org/1998/Math/MathML"><mi>J</mi></math> of 64.37/63.97 and with average precision over all IoU thresholds of 84.15/83.13, respectively, for the multi-task detection and segmentation model on the validation and test sets of the catheter detection and segmentation dataset. Our approach achieves a good balance between accuracy and efficiency, making it well-suited for real-time surgical guidance applications.
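The dynamic-prioritization idea above, weighting tasks inversely to their current performance, can be sketched generically. The softmax-over-difficulty form and the `temperature` parameter below are illustrative assumptions, not the paper's exact scheme:

```python
# Re-weight tasks so that the task with lower recent performance
# (i.e., higher difficulty) gets a larger share of the combined loss.
import numpy as np

def task_weights(perf, temperature=1.0):
    """Softmax over inverse performance: lower score -> higher weight."""
    difficulty = (1.0 - np.asarray(perf, dtype=float)) / temperature
    e = np.exp(difficulty - difficulty.max())  # numerically stable softmax
    return e / e.sum()

# Hypothetical per-task scores: detection is lagging segmentation,
# so detection receives the larger weight.
w = task_weights([0.60, 0.85])  # [detection, segmentation]
print(w)
# The combined loss would then be w[0]*loss_det + w[1]*loss_seg,
# with w recomputed as performance evolves during training.
```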

Hybrid segmentation model and CAViaR -based Xception Maxout network for brain tumor detection using MRI images.

Swapna S, Garapati Y

pubmed logopapers · Jun 27, 2025
A brain tumor (BT) is an abnormal, rapid growth of brain cells. If a BT is not identified and treated at an early stage, it can be fatal. Despite the many methods developed for segmenting and identifying BTs, detection remains complicated because tumors vary in position and size. To address these issues, this paper proposes the Conditional Autoregressive Value-at-Risk_Xception Maxout-Network (Caviar_XM-Net) for BT detection utilizing magnetic resonance imaging (MRI) images. The input MRI image gathered from the dataset is denoised using an adaptive bilateral filter (ABF), and tumor region segmentation is done using BFC-MRFNet-RVSeg. Here, segmentation is performed separately by Bayesian fuzzy clustering (BFC) and a multi-branch residual fusion network (MRF-Net), and the outputs of the two techniques are combined using the RV coefficient. Image augmentation is performed to boost the quantity of images in the training process. Afterwards, feature extraction is done, where features such as the local optimal oriented pattern (LOOP), convolutional neural network (CNN) features, the median binary pattern (MBP) with statistical features, and the local Gabor XOR pattern (LGXP) are extracted. Lastly, BT detection is carried out by employing Caviar_XM-Net, which is obtained by combining the Xception model and a deep Maxout network (DMN) with the CAViaR approach. The effectiveness of Caviar_XM-Net is examined using sensitivity, accuracy, specificity, precision, and F1-score, attaining values of 91.59%, 91.36%, 90.83%, 90.99%, and 91.29%, respectively. Hence, Caviar_XM-Net performs better than the traditional methods, with high efficiency.
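The deep Maxout network (DMN) component mentioned above uses maxout units: each output takes the maximum over k affine "pieces" of the input. A minimal numpy sketch of a single maxout layer (shapes and example values are illustrative):

```python
# Maxout unit: y_j = max_k (W_k x + b_k)_j, i.e. an element-wise maximum
# over k learned affine maps of the same input.
import numpy as np

def maxout(x, W, b):
    """x: (n_in,), W: (k, n_out, n_in), b: (k, n_out) -> (n_out,)."""
    pieces = np.einsum("koi,i->ko", W, x) + b  # k affine maps of x
    return pieces.max(axis=0)                  # element-wise max over pieces

rng = np.random.default_rng(0)
x = rng.standard_normal(4)
W = rng.standard_normal((3, 2, 4))  # k=3 pieces, 2 output units, 4 inputs
b = rng.standard_normal((3, 2))
print(maxout(x, W, b).shape)  # (2,)
```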