Page 61 of 134 (1340 results)

SCISSOR: Mitigating Semantic Bias through Cluster-Aware Siamese Networks for Robust Classification

Shuo Yang, Bardh Prenkaj, Gjergji Kasneci

arXiv preprint, Jun 17 2025
Shortcut learning undermines model generalization to out-of-distribution data. While the literature attributes shortcuts to biases in superficial features, we show that imbalances in the semantic distribution of sample embeddings induce spurious semantic correlations, compromising model robustness. To address this issue, we propose SCISSOR (Semantic Cluster Intervention for Suppressing ShORtcut), a Siamese network-based debiasing approach that remaps the semantic space by discouraging latent clusters exploited as shortcuts. Unlike prior data-debiasing approaches, SCISSOR eliminates the need for data augmentation and rewriting. We evaluate SCISSOR on 6 models across 4 benchmarks: Chest-XRay and Not-MNIST in computer vision, and GYAFC and Yelp in NLP tasks. Compared to several baselines, SCISSOR reports +5.3 absolute points in F1 score on GYAFC, +7.3 on Yelp, +7.7 on Chest-XRay, and +1 on Not-MNIST. SCISSOR is also highly advantageous for lightweight models with ~9.5% improvement on F1 for ViT on computer vision datasets and ~11.9% for BERT on NLP. Our study redefines the landscape of model generalization by addressing overlooked semantic biases, establishing SCISSOR as a foundational framework for mitigating shortcut learning and fostering more robust, bias-resistant AI systems.
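The cluster-aware debiasing idea can be illustrated with a toy penalty: embeddings that fall in the same latent cluster but carry different labels are pushed apart, so the cluster stops being a usable shortcut. The sketch below is a minimal numpy illustration of that intuition, not SCISSOR's actual Siamese loss; the hinge form and margin value are assumptions.

```python
import numpy as np

def contrastive_cluster_penalty(z, clusters, labels, margin=1.0):
    """Toy cluster-aware penalty: for pairs of embeddings in the SAME latent
    cluster but with DIFFERENT labels, penalise small distances, discouraging
    the cluster from acting as a shortcut feature. Illustrative only."""
    n = len(z)
    penalty, pairs = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            if clusters[i] == clusters[j] and labels[i] != labels[j]:
                d = np.linalg.norm(z[i] - z[j])
                penalty += max(0.0, margin - d) ** 2  # hinge on the margin
                pairs += 1
    return penalty / max(pairs, 1)

rng = np.random.default_rng(0)
z_collapsed = np.zeros((6, 4))                  # shortcut case: cluster fully predicts label
z_spread = rng.normal(scale=5.0, size=(6, 4))   # well-separated embeddings
clusters = [0, 0, 0, 1, 1, 1]
labels = [0, 1, 0, 1, 0, 1]
print(contrastive_cluster_penalty(z_collapsed, clusters, labels))  # → 1.0 (maximal)
print(contrastive_cluster_penalty(z_spread, clusters, labels))
```

A training loop would add this penalty to the task loss, so gradient descent remaps the semantic space away from the collapsed configuration.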

Step-by-Step Approach to Design Image Classifiers in AI: An Exemplary Application of the CNN Architecture for Breast Cancer Diagnosis

Lohani, A., Mishra, B. K., Wertheim, K. Y., Fagbola, T. M.

medRxiv preprint, Jun 17 2025
In recent years, various Convolutional Neural Network (CNN) approaches have been applied to image classification in general and to specific problems such as breast cancer diagnosis, but there is no standardised approach to facilitate comparison and synergy. This paper proposes a step-by-step approach to standardise a common image-classification workflow, using the classification of breast ultrasound images for breast cancer diagnosis as an illustrative example. In this study, three distinct datasets, the Breast Ultrasound Image (BUSI), Breast Ultrasound Image (BUI), and Ultrasound Breast Images for Breast Cancer (UBIBC) datasets, were used to build and fine-tune custom and pre-trained CNN models systematically. Custom CNN models were built, and transfer learning (TL) was then applied to deploy a broad range of pre-trained models, optimised through data augmentation techniques and hyperparameter tuning. Models were trained and tested in scenarios involving limited and large datasets to gain insights into their robustness and generality. The results indicated that the custom CNN and VGG19 are the two most suitable architectures for this problem. The experimental results highlight the significance of an effective step-by-step approach in image classification tasks for enhancing the robustness and generalisation capabilities of CNN-based classifiers.
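The transfer-learning step described above (freeze a pre-trained backbone, train only a small classification head) can be sketched as follows. The "backbone" here is a stand-in random ReLU projection rather than a real CNN, and the toy data are synthetic blobs, not ultrasound images; both are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

def frozen_backbone(images, W):
    # fixed (non-trainable) projection standing in for pre-trained conv features
    return np.maximum(images @ W, 0.0)  # ReLU features

def train_head(feats, y, lr=0.1, steps=500):
    """Train only the classification head (logistic regression) by gradient descent."""
    w = np.zeros(feats.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))  # sigmoid probabilities
        g = p - y                                   # log-loss gradient
        w -= lr * feats.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

# toy "images": two separable blobs, 8 raw features each
X = np.vstack([rng.normal(-1.0, 0.3, (50, 8)),   # class 0
               rng.normal(1.0, 0.3, (50, 8))])   # class 1
y = np.array([0] * 50 + [1] * 50)
W_frozen = rng.normal(size=(8, 16))              # backbone weights stay fixed
feats = frozen_backbone(X, W_frozen)
w, b = train_head(feats, y)
acc = ((feats @ w + b > 0).astype(int) == y).mean()
print(acc)
```

Only `w` and `b` were updated; `W_frozen` never changed, which is exactly what makes TL viable on the limited-data scenarios the paper examines.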

Integration of MRI radiomics and germline genetics to predict the IDH mutation status of gliomas.

Nakase T, Henderson GA, Barba T, Bareja R, Guerra G, Zhao Q, Francis SS, Gevaert O, Kachuri L

PubMed paper, Jun 16 2025
The molecular profiling of gliomas for isocitrate dehydrogenase (IDH) mutations currently relies on resected tumor samples, highlighting the need for non-invasive, preoperative biomarkers. We investigated the integration of glioma polygenic risk scores (PRS) and radiographic features for prediction of IDH mutation status. We used 256 radiomic features, a glioma PRS and demographic information in 158 glioma cases within elastic net and neural network models. The integration of glioma PRS with radiomics increased the area under the receiver operating characteristic curve (AUC) for distinguishing IDH-wildtype vs. IDH-mutant glioma from 0.83 to 0.88 (P<sub>ΔAUC</sub> = 6.9 × 10<sup>-5</sup>) in the elastic net model and from 0.91 to 0.92 (P<sub>ΔAUC</sub> = 0.32) in the neural network model. Incorporating age at diagnosis and sex further improved the classifiers (elastic net: AUC = 0.93, neural network: AUC = 0.93). Patients predicted to have IDH-mutant vs. IDH-wildtype tumors had significantly lower mortality risk (hazard ratio (HR) = 0.18, 95% CI: 0.08-0.40, P = 2.1 × 10<sup>-5</sup>), comparable to prognostic trajectories for biopsy-confirmed IDH status. The augmentation of imaging-based classifiers with genetic risk profiles may help delineate molecular subtypes and improve the timely, non-invasive clinical assessment of glioma patients.
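The AUC comparison at the heart of this study can be reproduced in miniature: compute the AUC via the rank (Mann-Whitney U) formulation, then check whether adding an independent genetic score to an imaging score raises it. The effect sizes below are invented for illustration; only the AUC formula itself is standard.

```python
import numpy as np

def auc(scores, labels):
    """AUC as the probability that a random positive outscores a random negative."""
    scores, labels = np.asarray(scores, float), np.asarray(labels)
    pos, neg = scores[labels == 1], scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

rng = np.random.default_rng(1)
y = np.array([0] * 200 + [1] * 200)           # 1 = IDH-mutant (toy labels)
radiomic = rng.normal(0.0, 1.0, 400) + 0.8 * y  # imaging-derived score
prs = rng.normal(0.0, 1.0, 400) + 0.8 * y       # independent polygenic score
auc_imaging = auc(radiomic, y)
auc_combined = auc(radiomic + prs, y)         # naive integration: score sum
print(auc_imaging, auc_combined)
```

Because the two signals are independent, their sum carries more information than either alone, so the combined AUC exceeds the imaging-only AUC, mirroring the 0.83 to 0.88 gain reported for the elastic net model.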

Whole-lesion-aware network based on freehand ultrasound video for breast cancer assessment: a prospective multicenter study.

Han J, Gao Y, Huo L, Wang D, Xie X, Zhang R, Xiao M, Zhang N, Lei M, Wu Q, Ma L, Sun C, Wang X, Liu L, Cheng S, Tang B, Wang L, Zhu Q, Wang Y

PubMed paper, Jun 16 2025
The clinical application of artificial intelligence (AI) models based on static breast ultrasound images has been hindered in real-world workflows by the operator-dependence of standardized image acquisition and the incomplete view of breast lesions in static images. To better exploit the real-time advantages of ultrasound and facilitate clinical application, we proposed a whole-lesion-aware network based on freehand ultrasound video (WAUVE), scanned in an arbitrary direction, for predicting an overall breast cancer risk score. The WAUVE model was developed using 2912 videos (2912 lesions) of 2771 patients retrospectively collected from May 2020 to August 2022 in two hospitals. We compared the diagnostic performance of WAUVE with that of static 2D-ResNet50 and dynamic TimeSformer models in the internal validation set. Subsequently, a dataset comprising 190 videos (190 lesions) from 175 patients, prospectively collected from December 2022 to April 2023 in two other hospitals, was used as an independent external validation set. A reader study was conducted by four experienced radiologists on the external validation set. We compared the diagnostic performance of WAUVE with that of the four experienced radiologists and evaluated the model's auxiliary value for radiologists. WAUVE demonstrated performance superior to the 2D-ResNet50 model and similar to the TimeSformer model. In the external validation set, WAUVE achieved an area under the receiver operating characteristic curve (AUC) of 0.8998 (95% CI = 0.8529-0.9439) and showed diagnostic performance comparable to that of the four experienced radiologists in terms of sensitivity (97.39% vs. 98.48%, p = 0.36), specificity (49.33% vs. 50.00%, p = 0.92), and accuracy (78.42% vs. 79.34%, p = 0.60). With WAUVE model assistance, the average specificity of the four experienced radiologists improved by 6.67%, and higher inter-reader consistency was achieved (from 0.807 to 0.838). The WAUVE model, based on non-standardized ultrasound scanning, demonstrated excellent performance in breast cancer assessment, yielding outcomes similar to those of experienced radiologists and indicating that its clinical application is promising.
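One way to see why a whole-video model can tolerate arbitrary sweep directions: per-frame scores from a freehand sweep can be pooled into a single lesion-level score so that the most suspicious frames dominate, wherever in the sweep they occur. The top-k mean pooling below is an illustrative choice, not necessarily WAUVE's actual aggregation.

```python
import numpy as np

def lesion_score(frame_scores, top_k=3):
    """Pool per-frame malignancy scores from a freehand sweep into one
    lesion-level risk score: average the top-k frames, so the most
    suspicious sections of the lesion dominate. Illustrative pooling only."""
    s = np.sort(np.asarray(frame_scores, float))[::-1]  # descending
    return float(s[:top_k].mean())

benign_sweep = [0.10, 0.15, 0.20, 0.12, 0.18]
malignant_sweep = [0.20, 0.10, 0.90, 0.85, 0.30]  # lesion visible mid-sweep only
print(lesion_score(benign_sweep))
print(lesion_score(malignant_sweep))
```

A simple mean would dilute the malignant sweep's signal with the many off-lesion frames; top-k pooling keeps it, which is the practical advantage of lesion-aware aggregation over frame averaging.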

Imaging-Based AI for Predicting Lymphovascular Space Invasion in Cervical Cancer: Systematic Review and Meta-Analysis.

She L, Li Y, Wang H, Zhang J, Zhao Y, Cui J, Qiu L

PubMed paper, Jun 16 2025
The role of artificial intelligence (AI) in enhancing the accuracy of lymphovascular space invasion (LVSI) detection in cervical cancer remains debated. This meta-analysis aimed to evaluate the diagnostic accuracy of imaging-based AI for predicting LVSI in cervical cancer. We conducted a comprehensive literature search across multiple databases, including PubMed, Embase, and Web of Science, identifying studies published up to November 9, 2024. Studies were included if they evaluated the diagnostic performance of imaging-based AI models in detecting LVSI in cervical cancer. We used a bivariate random-effects model to calculate pooled sensitivity and specificity with corresponding 95% confidence intervals. Study heterogeneity was assessed using the I² statistic. Of 403 studies identified, 16 studies (2514 patients) were included. For the internal validation set, the pooled sensitivity, specificity, and area under the curve (AUC) for detecting LVSI were 0.84 (95% CI 0.79-0.87), 0.78 (95% CI 0.75-0.81), and 0.87 (95% CI 0.84-0.90), respectively. For the external validation set, the pooled sensitivity, specificity, and AUC were 0.79 (95% CI 0.70-0.86), 0.76 (95% CI 0.67-0.83), and 0.84 (95% CI 0.81-0.87), respectively. In subgroup analysis using the likelihood ratio test, deep learning demonstrated significantly higher sensitivity than machine learning (P=.01), and AI models based on positron emission tomography/computed tomography exhibited superior sensitivity relative to those based on magnetic resonance imaging (P=.01). Imaging-based AI, particularly deep learning algorithms, demonstrates promising diagnostic performance in predicting LVSI in cervical cancer. However, the limited external validation datasets and the retrospective nature of the included studies may introduce bias. These findings underscore AI's potential as an auxiliary diagnostic tool, necessitating further large-scale prospective validation.
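As a simplified illustration of how per-study sensitivities are pooled, the sketch below uses fixed-effect inverse-variance weighting on the logit scale. The paper itself uses a bivariate random-effects model, which additionally models between-study heterogeneity and the sensitivity-specificity correlation; the per-study values below are hypothetical.

```python
import numpy as np

def pool_logit(props, ns):
    """Fixed-effect inverse-variance pooling of proportions on the logit
    scale: a deliberately simplified stand-in for the bivariate
    random-effects model used in the meta-analysis."""
    props, ns = np.asarray(props, float), np.asarray(ns, float)
    logits = np.log(props / (1.0 - props))
    var = 1.0 / (ns * props * (1.0 - props))  # delta-method variance of the logit
    w = 1.0 / var                             # inverse-variance weights
    pooled_logit = (w * logits).sum() / w.sum()
    return 1.0 / (1.0 + np.exp(-pooled_logit))  # back-transform to a proportion

sens = [0.80, 0.86, 0.83]   # hypothetical per-study sensitivities
n_pos = [120, 200, 90]      # hypothetical positives per study
print(round(pool_logit(sens, n_pos), 3))
```

The pooled value always lies between the study extremes and is pulled toward the larger studies, which is the basic behaviour any pooled sensitivity or specificity in the abstract reflects.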

Interpretable deep fuzzy network-aided detection of central lymph node metastasis status in papillary thyroid carcinoma.

Wang W, Ning Z, Zhang J, Zhang Y, Wang W

PubMed paper, Jun 16 2025
The non-invasive assessment of central lymph node metastasis (CLNM) in patients with papillary thyroid carcinoma (PTC) plays a crucial role in guiding treatment decisions and prognosis planning. This study aims to use an interpretable deep fuzzy network guided by expert knowledge to predict the CLNM status of patients with PTC from ultrasound images. A total of 1019 PTC patients were enrolled, comprising 465 CLNM patients and 554 non-CLNM patients. Pathological diagnosis served as the gold standard for metastasis status. Clinical and morphological features of the thyroid were collected as expert knowledge to guide the deep fuzzy network in predicting CLNM status. The network consisted of a region of interest (ROI) segmentation module, a knowledge-aware feature extraction module, and a fuzzy prediction module. It was trained on 652 patients, validated on 163 patients and tested on 204 patients. The model exhibited promising performance, achieving an area under the receiver operating characteristic curve (AUC), accuracy, precision, sensitivity and specificity of 0.786 (95% CI 0.720-0.846), 0.745 (95% CI 0.681-0.799), 0.727 (95% CI 0.636-0.819), 0.696 (95% CI 0.594-0.789), and 0.786 (95% CI 0.712-0.864), respectively. In addition, the rules of the fuzzy system are easy to understand and explain, giving the model good interpretability. The deep fuzzy network guided by expert knowledge predicted the CLNM status of PTC patients with high accuracy and good interpretability, and may be considered an effective tool to guide preoperative clinical decision-making.
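The interpretability claim rests on the fuzzy rule layer: predictions are composed from human-readable IF-THEN rules over membership degrees. The sketch below shows that mechanism in miniature; the features, membership breakpoints, rule set, and weights are all invented for illustration and are not taken from the paper.

```python
def mu_large(size_mm):
    """Membership degree of 'nodule is large' (hypothetical breakpoints 5-15 mm)."""
    return min(1.0, max(0.0, (size_mm - 5.0) / 10.0))

def mu_irregular(margin_score):
    """Membership degree of 'margin is irregular' (score already in [0, 1])."""
    return min(1.0, max(0.0, margin_score))

def clnm_risk(size_mm, margin_score):
    # Rule 1: IF large AND irregular THEN high risk     (fuzzy AND = min)
    r1 = min(mu_large(size_mm), mu_irregular(margin_score))
    # Rule 2: IF large OR irregular THEN moderate risk  (fuzzy OR = max)
    r2 = max(mu_large(size_mm), mu_irregular(margin_score))
    # weighted defuzzification into a single risk score (weights hypothetical)
    return 0.7 * r1 + 0.3 * r2

print(clnm_risk(15.0, 0.9))  # large, irregular nodule → high risk
print(clnm_risk(4.0, 0.1))   # small, regular nodule → low risk
```

Every intermediate value here can be read off and explained to a clinician ("the nodule is fully 'large', 0.9 'irregular', so rule 1 fires at 0.9"), which is exactly the property that distinguishes a fuzzy prediction module from an opaque dense head.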

Predicting mucosal healing in Crohn's disease: development of a deep-learning model based on intestinal ultrasound images.

Ma L, Chen Y, Fu X, Qin J, Luo Y, Gao Y, Li W, Xiao M, Cao Z, Shi J, Zhu Q, Guo C, Wu J

PubMed paper, Jun 16 2025
Predicting treatment response in Crohn's disease (CD) is essential for devising an optimal therapeutic regimen, but relevant models are lacking. This study aimed to develop a deep learning model based on baseline intestinal ultrasound (IUS) images and clinical information to predict mucosal healing. Consecutive CD patients who underwent pretreatment IUS were retrospectively recruited at a tertiary hospital. A total of 1548 IUS images of longitudinal diseased bowel segments were collected and divided into a training cohort and a test cohort. A convolutional neural network model was developed to predict mucosal healing after one year of standardized treatment. The model's efficacy was validated using five-fold internal cross-validation and further tested in the test cohort. A total of 190 patients (68.9% men, mean age 32.3 ± 14.1 years) were enrolled, contributing 1038 IUS images showing mucosal healing and 510 showing no mucosal healing. The mean area under the curve in the test cohort was 0.73 (95% CI: 0.68-0.78), with a mean sensitivity of 68.1% (95% CI: 60.5-77.4%), specificity of 69.5% (95% CI: 60.1-77.2%), positive predictive value of 80.0% (95% CI: 74.5-84.9%), and negative predictive value of 54.8% (95% CI: 48.0-63.7%). Heat maps visualizing the deep-learning decision-making process revealed that the model mainly drew on information from the bowel wall, serous surface, and surrounding mesentery. We developed a deep learning model based on IUS images that predicts mucosal healing in CD with notable accuracy; further validation and improvement with more multi-center, real-world data are needed. Response to medication is highly variable among patients with CD, and high-resolution IUS images of the intestinal wall may conceal characteristics significant for treatment response; the model presented here predicts mucosal healing from pretreatment IUS images and clinical information with an AUC of 0.73.
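The four reported metrics all derive from a single 2x2 confusion table, and a small helper makes their relationships explicit. The counts below are hypothetical, chosen only to show how a high prevalence of healing (as in this cohort, 1038 vs. 510 images) inflates PPV relative to NPV.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),  # recall on true positives
        "specificity": tn / (tn + fp),  # recall on true negatives
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# hypothetical counts with positives (healing) outnumbering negatives 2:1
m = diagnostic_metrics(tp=68, fp=15, fn=32, tn=35)
print(m)
```

With balanced sensitivity and specificity near 0.7, the 2:1 prevalence alone pushes PPV above 0.8 while NPV sits near 0.5, the same asymmetric pattern the study reports.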

Ultrasound for breast cancer detection: A bibliometric analysis of global trends between 2004 and 2024.

Sun YY, Shi XT, Xu LL

PubMed paper, Jun 16 2025
With the advancement of computer technology and imaging equipment, ultrasound has emerged as a crucial tool in breast cancer diagnosis. To gain deeper insight into the research landscape of ultrasound in breast cancer diagnosis, this study employed bibliometric methods for a comprehensive analysis spanning 2004 to 2024, covering 3523 articles from 2176 institutions in 82 countries/regions. Over this period, publications on ultrasound diagnosis of breast cancer showed a fluctuating growth trend. Notably, China, Seoul National University and Kim EK emerged as leading contributors to ultrasound for breast cancer detection, with the most published and most cited journals being Ultrasound Med Biol and Radiology. Research hotspots in this area included "breast lesion", "dense breast" and "breast-conserving surgery", while "machine learning", "ultrasonic imaging", "convolutional neural network", "case report", "pathological complete response", "deep learning", "artificial intelligence" and "classification" are anticipated to become future research frontiers. This bibliometric analysis and visualization of publications on ultrasonic breast cancer diagnosis offers clinical professionals a reliable research focus and direction.

MultiViT2: A Data-augmented Multimodal Neuroimaging Prediction Framework via Latent Diffusion Model

Bi Yuda, Jia Sihan, Gao Yutong, Abrol Anees, Fu Zening, Calhoun Vince

arXiv preprint, Jun 16 2025
Multimodal medical imaging integrates diverse data types, such as structural and functional neuroimaging, to provide complementary insights that enhance deep learning predictions and improve outcomes. This study focuses on a neuroimaging prediction framework based on both structural and functional neuroimaging data. We propose a next-generation prediction model, MultiViT2, which combines a pretrained representation-learning base model with a vision transformer backbone for prediction output. Additionally, we developed a data augmentation module based on a latent diffusion model that enriches the input data by generating augmented neuroimaging samples, thereby enhancing predictive performance through reduced overfitting and improved generalizability. We show that MultiViT2 significantly outperforms the first-generation model in schizophrenia classification accuracy and demonstrates strong scalability and portability.
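The augmentation module rests on a latent diffusion model, whose defining component is the closed-form forward noising step q(x_t | x_0) = N(sqrt(abar_t) x_0, (1 - abar_t) I). The sketch below shows only that step on a toy latent vector with a standard linear beta schedule; it is not MultiViT2's actual module, and the schedule values are the common defaults, assumed here.

```python
import numpy as np

def forward_diffuse(x0, t, betas, rng):
    """Closed-form forward noising: x_t = sqrt(abar_t)*x0 + sqrt(1-abar_t)*eps."""
    abar = np.cumprod(1.0 - betas)[t]   # cumulative fraction of signal retained
    eps = rng.normal(size=x0.shape)     # Gaussian noise
    return np.sqrt(abar) * x0 + np.sqrt(1.0 - abar) * eps, abar

rng = np.random.default_rng(0)
betas = np.linspace(1e-4, 0.02, 1000)   # common linear noise schedule
x0 = np.ones(64)                        # toy latent vector standing in for an encoded scan
x_early, abar_early = forward_diffuse(x0, 10, betas, rng)
x_late, abar_late = forward_diffuse(x0, 999, betas, rng)
print(abar_early, abar_late)            # signal fraction shrinks as t grows
```

A generative module learns to invert this process; sampling the reverse chain from pure noise is what yields the augmented neuroimaging samples that the abstract credits with reducing overfitting.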
