
Adaptive Contrast Adjustment Module: A Clinically-Inspired Plug-and-Play Approach for Enhanced Fetal Plane Classification

Yang Chen, Sanglin Zhao, Baoyu Chen, Mans Gustaf

arXiv preprint, Aug 31 2025
Fetal ultrasound standard plane classification is essential for reliable prenatal diagnosis but faces inherent challenges, including low tissue contrast, boundary ambiguity, and operator-dependent variations in image quality. To overcome these limitations, we propose a plug-and-play adaptive contrast adjustment module (ACAM), whose core design is inspired by the clinical practice of doctors adjusting image contrast to obtain clearer and more discriminative structural information. The module employs a shallow texture-sensitive network to predict clinically plausible contrast parameters, transforms input images into multiple contrast-enhanced views through differentiable mapping, and fuses them within downstream classifiers. Validated on a multi-center dataset of 12,400 images across six anatomical categories, the module consistently improves performance across diverse models, increasing the accuracy of lightweight models by 2.02 percent, of traditional models by 1.29 percent, and of state-of-the-art models by 1.15 percent. The innovation of the module lies in its content-aware adaptation capability: it replaces random preprocessing with physics-informed transformations that align with sonographer workflows while improving robustness to imaging heterogeneity through multi-view fusion. This approach effectively bridges low-level image features with high-level semantics, establishing a new paradigm for medical image analysis under real-world image quality variations.
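The abstract does not include code, but the core mechanism (a shallow network predicting contrast parameters, a differentiable contrast curve, and fusion of the resulting views) can be sketched in PyTorch. The gamma/gain parameterisation, view count, and weighted-average fusion below are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class AdaptiveContrastModule(nn.Module):
    """Sketch of a plug-and-play contrast-adjustment front end (assumptions:
    a shallow CNN predicts one gamma and one gain per view, and the views
    are fused by learned weights before the downstream classifier)."""

    def __init__(self, num_views: int = 3):
        super().__init__()
        self.num_views = num_views
        # Shallow texture-sensitive predictor: grayscale image -> 2 params per view
        self.param_net = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, 2 * num_views),
        )
        self.view_weights = nn.Parameter(torch.zeros(num_views))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, 1, H, W) in [0, 1]
        params = self.param_net(x).view(-1, self.num_views, 2)
        gamma = torch.nn.functional.softplus(params[..., 0]) + 0.5   # keep gamma in a plausible range
        gain = torch.sigmoid(params[..., 1]) * 1.5 + 0.5             # gain in (0.5, 2.0)
        views = []
        for v in range(self.num_views):
            g = gamma[:, v].view(-1, 1, 1, 1)
            a = gain[:, v].view(-1, 1, 1, 1)
            views.append((a * x.clamp(min=1e-6) ** g).clamp(0, 1))   # differentiable contrast curve
        stack = torch.stack(views, dim=1)                            # (B, V, 1, H, W)
        w = torch.softmax(self.view_weights, dim=0).view(1, -1, 1, 1, 1)
        return (stack * w).sum(dim=1)                                # fused view fed to any classifier

if __name__ == "__main__":
    x = torch.rand(4, 1, 224, 224)
    print(AdaptiveContrastModule()(x).shape)  # torch.Size([4, 1, 224, 224])
```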

Synthesize contrast-enhanced ultrasound image of thyroid nodules via generative adversarial networks.

Lai M, Yao J, Zhou Y, Zhou L, Jiang T, Sui L, Tang J, Zhu X, Huang J, Wang Y, Liu J, Xu D

PubMed, Aug 30 2025
This study aims to explore the feasibility of employing generative adversarial networks (GAN) to generate synthetic contrast-enhanced ultrasound (CEUS) from grayscale ultrasound images of patients with thyroid nodules, dispensing with the need for ultrasound contrast agent injection. Patients who underwent preoperative thyroid CEUS examinations between January 2020 and July 2022 were collected retrospectively. A cycle-GAN framework integrating paired and unpaired learning modules was employed to develop the non-invasive image generation process. Synthetic CEUS images were generated in three phases: pre-arterial, plateau, and venous. The evaluation included quantitative similarity metrics, classification performance, and qualitative assessment by radiologists. CEUS videos of 360 thyroid nodules from 314 patients (45 years ± 12 [SD]; 272 women) in the internal dataset and 202 thyroid nodules from 183 patients (46 years ± 13 [SD]; 148 women) in the external dataset were included. In the external testing dataset, quantitative analysis revealed a significant degree of similarity between real and synthetic CEUS images (structural similarity index, 0.89 ± 0.04; peak signal-to-noise ratio, 28.17 ± 2.42). Radiologists deemed 126 of 132 (95%) synthetic CEUS images diagnostically useful. The accuracy of radiologists in distinguishing between real and synthetic images was 55.6% (95% CI: 0.49, 0.63), with an AUC of 61.0% (95% CI: 0.65, 0.68). No statistically significant difference (p > 0.05) was observed when radiologists assessed peak intensity and enhancement patterns using real versus synthetic CEUS. Both the quantitative analysis and the radiologist evaluations showed that synthetic CEUS images generated by generative adversarial networks were similar to real CEUS images. Question: Is it feasible to generate synthetic thyroid contrast-enhanced ultrasound images using generative adversarial networks without ultrasound contrast agent injection? Findings: Compared with real contrast-enhanced ultrasound images, synthetic contrast-enhanced ultrasound images exhibited high similarity and image quality. Clinical relevance: This non-invasive, intelligent transformation may reduce the requirement for ultrasound contrast agents in certain cases, particularly in scenarios where contrast agent administration is contraindicated, such as in patients with allergies, poor tolerance, or limited access to resources.
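For readers unfamiliar with the cycle-GAN setup referenced here, a minimal sketch of the cycle-consistency objective for grayscale-to-CEUS translation follows; the tiny generators, loss weights, and the paired supervision term are placeholders rather than the paper's actual networks.

```python
import torch
import torch.nn as nn

# Minimal sketch of the cycle-consistency idea behind CycleGAN-style
# grayscale-to-CEUS translation (discriminators and the adversarial terms
# are omitted; the generators below are toy stand-ins).

def tiny_generator(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, out_ch, 3, padding=1), nn.Sigmoid(),
    )

G_ab = tiny_generator(1, 1)   # grayscale ultrasound -> synthetic CEUS
G_ba = tiny_generator(1, 1)   # CEUS -> reconstructed grayscale
l1 = nn.L1Loss()

def cycle_loss(real_gray, real_ceus, lambda_cyc=10.0, lambda_paired=1.0):
    fake_ceus = G_ab(real_gray)
    rec_gray = G_ba(fake_ceus)
    fake_gray = G_ba(real_ceus)
    rec_ceus = G_ab(fake_gray)
    # Unpaired cycle-consistency terms
    loss = lambda_cyc * (l1(rec_gray, real_gray) + l1(rec_ceus, real_ceus))
    # Paired module (assumption): when grayscale/CEUS frames are registered,
    # a direct supervised term can be added on top of the cycle term.
    loss = loss + lambda_paired * l1(fake_ceus, real_ceus)
    return loss

if __name__ == "__main__":
    gray = torch.rand(2, 1, 128, 128)
    ceus = torch.rand(2, 1, 128, 128)
    print(float(cycle_loss(gray, ceus)))
```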

MSFE-GallNet-X: a multi-scale feature extraction-based CNN Model for gallbladder disease analysis with enhanced explainability.

Nabil HR, Ahmed I, Das A, Mridha MF, Kabir MM, Aung Z

PubMed, Aug 30 2025
This study introduces MSFE-GallNet-X, a domain-adaptive deep learning model utilizing multi-scale feature extraction (MSFE) to improve the classification accuracy of gallbladder diseases from grayscale ultrasound images, while integrating explainable artificial intelligence (XAI) methods to enhance clinical interpretability. We developed a convolutional neural network-based architecture that automatically learns multi-scale features from a dataset comprising 10,692 high-resolution ultrasound images from 1,782 patients, covering nine gallbladder disease classes, including gallstones, cholecystitis, and carcinoma. The model incorporated Gradient-Weighted Class Activation Mapping (Grad-CAM) and Local Interpretable Model-Agnostic Explanations (LIME) to provide visual interpretability of diagnostic predictions. Model performance was evaluated using standard metrics, including accuracy and F1 score. MSFE-GallNet-X achieved a classification accuracy of 99.63% and an F1 score of 99.50%, outperforming state-of-the-art models including VGG-19 (98.89%) and DenseNet121 (91.81%), while maintaining greater parameter efficiency (only 1.91 M parameters) for gallbladder disease classification. Visualization through Grad-CAM and LIME highlighted the critical image regions influencing model predictions, supporting explainability for clinical use. MSFE-GallNet-X demonstrates strong performance on a controlled and balanced dataset, suggesting its potential as an AI-assisted tool for clinical decision-making in gallbladder disease management.
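Grad-CAM, one of the two XAI methods used here, can be illustrated in a few lines of PyTorch: the last convolutional feature maps are weighted by their spatially pooled gradients with respect to the target class. The stand-in CNN below is purely illustrative and is not the MSFE-GallNet-X architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyCNN(nn.Module):
    """Toy classifier exposing its last convolutional feature maps."""
    def __init__(self, num_classes=9):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes))

    def forward(self, x):
        fmap = self.features(x)
        return self.head(fmap), fmap

def grad_cam(model, image, target_class):
    model.eval()
    logits, fmap = model(image)
    fmap.retain_grad()                                        # keep gradients of the feature maps
    logits[0, target_class].backward()
    weights = fmap.grad.mean(dim=(2, 3), keepdim=True)        # global-average-pool the gradients
    cam = F.relu((weights * fmap).sum(dim=1, keepdim=True))   # weighted sum of feature maps
    cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear", align_corners=False)
    return (cam / (cam.max() + 1e-8)).squeeze().detach()      # normalised heatmap

if __name__ == "__main__":
    heatmap = grad_cam(TinyCNN(), torch.rand(1, 1, 224, 224), target_class=3)
    print(heatmap.shape)  # torch.Size([224, 224])
```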

Clinical Radiomics Nomogram Based on Ultrasound: A Tool for Preoperative Prediction of Uterine Sarcoma.

Zheng W, Lu A, Tang X, Chen L

PubMed, Aug 30 2025
This study aims to develop a noninvasive preoperative predictive model utilizing ultrasound radiomics combined with clinical characteristics to differentiate uterine sarcoma from leiomyoma. This study included 212 patients with uterine mesenchymal lesions (102 sarcomas and 110 leiomyomas). Clinical characteristics were systematically selected through both univariate and multivariate logistic regression analyses. A clinical model was constructed using the selected clinical characteristics. Radiomics features were extracted from transvaginal ultrasound images, and 6 machine learning algorithms were used to construct radiomics models. Then, a clinical radiomics nomogram was developed integrating clinical characteristics with radiomics signature. The effectiveness of these models in predicting uterine sarcoma was thoroughly evaluated. The area under the curve (AUC) was used to compare the predictive efficacy of the different models. The AUC of the clinical model was 0.835 (95% confidence interval [CI]: 0.761-0.883) and 0.791 (95% CI: 0.652-0.869) in the training and testing sets, respectively. The logistic regression model performed best in the radiomics model construction, with AUC values of 0.878 (95% CI: 0.811-0.918) and 0.818 (95% CI: 0.681-0.895) in the training and testing sets, respectively. The clinical radiomics nomogram performed well in differentiation, with AUC values of 0.955 (95% CI: 0.911-0.973) and 0.882 (95% CI: 0.767-0.936) in the training and testing sets, respectively. The clinical radiomics nomogram can provide more comprehensive and personalized diagnostic information, which is highly important for selecting treatment strategies and ultimately improving patient outcomes in the management of uterine mesenchymal tumors.
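A generic clinical-radiomics nomogram pipeline of the kind described (a radiomics signature from a penalised logistic model, then combined with clinical variables in a second logistic model) can be sketched with scikit-learn; the synthetic data, feature counts, and LASSO-style selection below are assumptions for illustration only, not the study's actual features or algorithms.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 212
radiomics = rng.normal(size=(n, 50))     # stand-in radiomics features from ultrasound images
clinical = rng.normal(size=(n, 3))       # stand-in clinical characteristics (e.g. age, lesion size)
y = (radiomics[:, 0] + clinical[:, 1] + rng.normal(size=n) > 0).astype(int)  # synthetic labels

train_idx, test_idx = train_test_split(np.arange(n), test_size=0.3, random_state=0, stratify=y)

# Radiomics signature (Rad-score): linear predictor of an L1-penalised logistic model
rad_model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
rad_model.fit(radiomics[train_idx], y[train_idx])
signature = rad_model.decision_function(radiomics)

# Nomogram: clinical characteristics + radiomics signature in a second logistic model
nomogram_X = np.column_stack([clinical, signature])
nomogram = LogisticRegression().fit(nomogram_X[train_idx], y[train_idx])

auc = roc_auc_score(y[test_idx], nomogram.predict_proba(nomogram_X[test_idx])[:, 1])
print(f"test AUC of combined nomogram: {auc:.3f}")
```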

Proteogenomic Biomarker Profiling for Predicting Radiolabeled Immunotherapy Response in Resistant Prostate Cancer.

Yan B, Gao Y, Zou Y, Zhao L, Li Z

PubMed, Aug 29 2025
Treatment resistance prevents patients receiving preoperative chemoradiotherapy or targeted radiolabeled immunotherapy from achieving good outcomes and remains a major challenge in prostate cancer (PCa). A novel integrative framework combining a machine learning workflow with proteogenomic profiling was used to identify predictive ultrasound biomarkers and classify patient response to radiolabeled immunotherapy in treatment-resistant, high-risk PCa patients. A deep stacked autoencoder (DSAE) model, combined with Extreme Gradient Boosting, was designed for feature refinement and classification. Multiomics data were collected from The Cancer Genome Atlas and an independent radiotherapy-treated cohort. In addition to genetic mutations (whole-exome sequencing), these data included proteomic (mass spectrometry) and transcriptomic (RNA sequencing) profiles. The DSAE architecture reduces the dimensionality of the data while preserving biological variability across omics layers. Resistance phenotypes showed a notable relationship with proteogenomic profiles, including DNA repair pathways (Breast Cancer gene 2 [BRCA2], ataxia-telangiectasia mutated [ATM]), androgen receptor (AR) signaling regulators, and metabolic enzymes (ATP citrate lyase [ACLY], isocitrate dehydrogenase 1 [IDH1]). A specific panel of ultrasound biomarkers was confirmed preclinically using patient-derived xenografts. To support clinical translation, real-time phenotypic features from ultrasound imaging (e.g., perfusion, stiffness) were also considered, providing complementary insights into the tumor microenvironment and treatment responsiveness. This approach provides an integrated platform offering a clinically actionable foundation for the development of radiolabeled immunotherapy drugs before surgical operations.
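The "deep stacked autoencoder for feature refinement, then gradient boosting for classification" pattern can be sketched as follows; GradientBoostingClassifier stands in for Extreme Gradient Boosting so the example runs without extra dependencies, and the synthetic multi-omics vectors and layer sizes are illustrative assumptions, not the study's configuration.

```python
import torch
import torch.nn as nn
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

torch.manual_seed(0)
X = torch.randn(300, 500)                       # stand-in concatenated RNA-seq / proteomic / mutation features
y = (X[:, 0] - X[:, 1] > 0).long().numpy()      # synthetic response labels

class StackedAE(nn.Module):
    """Two-layer encoder/decoder standing in for the DSAE."""
    def __init__(self, d_in=500, d_code=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(d_in, 128), nn.ReLU(), nn.Linear(128, d_code))
        self.decoder = nn.Sequential(nn.Linear(d_code, 128), nn.ReLU(), nn.Linear(128, d_in))

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

ae = StackedAE()
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
for _ in range(200):                            # unsupervised reconstruction pre-training
    recon, _ = ae(X)
    loss = nn.functional.mse_loss(recon, X)
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():
    _, codes = ae(X)                            # refined low-dimensional features

clf = GradientBoostingClassifier().fit(codes[:200].numpy(), y[:200])
print("held-out AUC:", roc_auc_score(y[200:], clf.predict_proba(codes[200:].numpy())[:, 1]))
```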

Multimodal Deep Learning for Phyllodes Tumor Classification from Ultrasound and Clinical Data

Farhan Fuad Abir, Abigail Elliott Daly, Kyle Anderman, Tolga Ozmen, Laura J. Brattain

arXiv preprint, Aug 29 2025
Phyllodes tumors (PTs) are rare fibroepithelial breast lesions that are difficult to classify preoperatively due to their radiological similarity to benign fibroadenomas. This often leads to unnecessary surgical excisions. To address this, we propose a multimodal deep learning framework that integrates breast ultrasound (BUS) images with structured clinical data to improve diagnostic accuracy. We developed a dual-branch neural network that extracts and fuses features from ultrasound images and patient metadata from 81 subjects with confirmed PTs. Class-aware sampling and subject-stratified 5-fold cross-validation were applied to prevent class imbalance and data leakage. The results show that our proposed multimodal method outperforms unimodal baselines in classifying benign versus borderline/malignant PTs. Among six image encoders, ConvNeXt and ResNet18 achieved the best performance in the multimodal setting, with AUC-ROC scores of 0.9427 and 0.9349, and F1-scores of 0.6720 and 0.7294, respectively. This study demonstrates the potential of multimodal AI to serve as a non-invasive diagnostic tool, reducing unnecessary biopsies and improving clinical decision-making in breast tumor management.
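A dual-branch image-plus-metadata classifier of the kind described can be sketched in PyTorch; the ResNet18 backbone (one of the encoders reported), the MLP width for the tabular branch, and plain feature concatenation are assumptions about details the abstract does not spell out.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class MultimodalPTClassifier(nn.Module):
    """Sketch of a dual-branch ultrasound + clinical-metadata classifier."""
    def __init__(self, num_clinical: int = 8, num_classes: int = 2):
        super().__init__()
        backbone = resnet18(weights=None)
        backbone.fc = nn.Identity()                  # 512-d image embedding
        self.image_branch = backbone
        self.clinical_branch = nn.Sequential(
            nn.Linear(num_clinical, 32), nn.ReLU(), nn.Linear(32, 32), nn.ReLU(),
        )
        self.head = nn.Linear(512 + 32, num_classes)

    def forward(self, image, clinical):
        img_feat = self.image_branch(image)          # (B, 512)
        clin_feat = self.clinical_branch(clinical)   # (B, 32)
        return self.head(torch.cat([img_feat, clin_feat], dim=1))

if __name__ == "__main__":
    model = MultimodalPTClassifier()
    bus = torch.rand(4, 1, 224, 224).repeat(1, 3, 1, 1)   # grayscale BUS replicated to 3 channels
    meta = torch.rand(4, 8)                                # stand-in clinical variables
    print(model(bus, meta).shape)                          # torch.Size([4, 2])
```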

Experimental Assessment of Conventional Features, CNN-Based Features and Ensemble Schemes for Discriminating Benign Versus Malignant Lesions on Breast Ultrasound Images.

Bianconi F, Khan MU, Du H, Jassim S

PubMed, Aug 28 2025
Breast ultrasound images play a pivotal role in assessing the nature of suspicious breast lesions, particularly in patients with dense tissue. Computerized analysis of breast ultrasound images has the potential to assist the physician in clinical decision-making and to improve on subjective interpretation. We assess the performance of conventional features, deep learning features, and ensemble schemes for discriminating benign versus malignant breast lesions on ultrasound images. A total of 19 individual feature sets (1 morphological, 2 first-order, 10 texture-based, and 6 CNN-based) were included in the analysis. Furthermore, four combined feature sets (Best by class; Top 3, 5, and 7) and four fusion schemes (feature concatenation, majority voting, sum rule, and product rule) were considered to generate ensemble models. The experiments were carried out on three independent open-access datasets containing 252 (154 benign, 98 malignant), 232 (109 benign, 123 malignant), and 281 (187 benign, 94 malignant) lesions, respectively. CNN-based features outperformed the other individual descriptors, achieving accuracies between 77.4% and 83.6%, followed by morphological features (71.6%-80.8%) and histograms of oriented gradients (71.4%-77.6%). Ensemble models further improved accuracy to between 80.2% and 87.5%. Fusion schemes based on the product and sum rules were generally superior to feature concatenation and majority voting. Combining individual feature sets through ensemble schemes demonstrates advantages for discriminating benign versus malignant breast lesions on ultrasound images.
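The decision-level fusion rules compared here (sum rule, product rule, majority voting) reduce to a few lines of NumPy once each feature set's classifier outputs a posterior probability; the numbers below are made up for illustration.

```python
import numpy as np

# Each row holds one classifier's predicted probability of malignancy for
# the same three lesions (illustrative values only).
probs = np.array([
    [0.62, 0.55, 0.71],   # classifier on morphological features
    [0.48, 0.60, 0.80],   # classifier on texture features
    [0.70, 0.52, 0.66],   # classifier on CNN-based features
])

sum_rule = probs.mean(axis=0)                                # average of posteriors
product_rule = np.prod(probs, axis=0) ** (1 / len(probs))    # geometric mean of posteriors
majority = (probs > 0.5).sum(axis=0) > len(probs) / 2        # hard majority vote

for name, score in [("sum rule", sum_rule), ("product rule", product_rule)]:
    print(name, np.round(score, 3), "->", (score > 0.5).astype(int))
print("majority vote", majority.astype(int))
```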

Self-Composing Neural Operators with Depth and Accuracy Scaling via Adaptive Train-and-Unroll Approach

Juncai He, Xinliang Liu, Jinchao Xu

arXiv preprint, Aug 28 2025
In this work, we propose a novel framework to enhance the efficiency and accuracy of neural operators through self-composition, offering both theoretical guarantees and practical benefits. Inspired by iterative methods for solving numerical partial differential equations (PDEs), we design a specific neural operator by repeatedly applying a single neural operator block; this progressively deepens the model without explicitly adding new blocks, improving its capacity. To train these models efficiently, we introduce an adaptive train-and-unroll approach, in which the depth of the neural operator is gradually increased during training. This approach reveals an accuracy scaling law with model depth and offers significant computational savings through our adaptive training strategy. Our architecture achieves state-of-the-art (SOTA) performance on standard benchmarks. We further demonstrate its efficacy on a challenging high-frequency ultrasound computed tomography (USCT) problem, where a multigrid-inspired backbone enables superior performance in resolving complex wave phenomena. The proposed framework provides a computationally tractable, accurate, and scalable solution for large-scale data-driven scientific machine learning applications.
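The self-composition idea, applying one shared operator block k times and growing k during training, can be sketched as follows; the toy 1-D convolutional block and the depth schedule are illustrative assumptions, not the paper's multigrid-inspired backbone.

```python
import torch
import torch.nn as nn

class SelfComposingOperator(nn.Module):
    """One shared block applied `depth` times between lift and projection layers."""
    def __init__(self, channels: int = 16):
        super().__init__()
        self.lift = nn.Conv1d(1, channels, 1)
        self.block = nn.Sequential(
            nn.Conv1d(channels, channels, 5, padding=2), nn.GELU(),
            nn.Conv1d(channels, channels, 5, padding=2),
        )
        self.proj = nn.Conv1d(channels, 1, 1)

    def forward(self, x: torch.Tensor, depth: int) -> torch.Tensor:
        h = self.lift(x)
        for _ in range(depth):            # repeated application of the *same* block
            h = h + self.block(h)         # residual composition keeps deep unrolls stable
        return self.proj(h)

model = SelfComposingOperator()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(8, 1, 64)
target = torch.sin(torch.cumsum(x, dim=-1))   # toy operator-learning target

for step in range(300):
    depth = 1 + step // 100               # train-and-unroll: grow the unroll depth on a schedule
    loss = nn.functional.mse_loss(model(x, depth), target)
    opt.zero_grad(); loss.backward(); opt.step()
print("final depth:", depth, "loss:", float(loss))
```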

ProMUS-NET: Artificial intelligence detects more prostate cancer than urologists on micro-ultrasonography.

Zhou SR, Zhang L, Choi MH, Vesal S, Kinnaird A, Brisbane WG, Lughezzani G, Maffei D, Fasulo V, Albers P, Fan RE, Shao W, Sonn GA, Rusu M

PubMed, Aug 27 2025
To improve sensitivity and inter-reader consistency of prostate cancer localisation on micro-ultrasonography (MUS) by developing a deep learning model for automatic cancer segmentation, and to compare model performance with that of expert urologists. We performed an institutional review board-approved prospective collection of MUS images from patients undergoing magnetic resonance imaging (MRI)-ultrasonography fusion guided biopsy at a single institution. Patients underwent 14-core systematic biopsy and additional targeted sampling of suspicious MRI lesions. Biopsy pathology and MRI information were cross-referenced to annotate the locations of International Society of Urological Pathology Grade Group (GG) ≥2 clinically significant cancer on MUS images. We trained a no-new U-Net model - the Prostate Micro-Ultrasound Network (ProMUS-NET) - to localise GG ≥2 cancer on these image stacks in a fivefold cross-validation. Performance was compared with that of six expert urologists in a matched sub-cohort. The artificial intelligence (AI) model achieved an area under the receiver-operating characteristic curve of 0.92 and detected more cancers than urologists (lesion-level sensitivity 73% vs 58%; patient-level sensitivity 77% vs 66%). AI lesion-level sensitivity for peripheral zone lesions was 86.2%. Our AI model identified prostate cancer lesions on MUS with high sensitivity and specificity. Further work is ongoing to improve margin overlap, to reduce false positives, and to perform external validation. AI-assisted prostate cancer detection on MUS has great potential to improve biopsy diagnosis by urologists.
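As background on the lesion-level sensitivity reported above, one common way to score it from segmentation masks is to count a ground-truth lesion as detected when it overlaps any predicted region; the sketch below uses that convention, which the abstract does not confirm is the authors' exact matching criterion.

```python
import numpy as np
from scipy import ndimage

def lesion_sensitivity(pred_mask: np.ndarray, gt_mask: np.ndarray) -> float:
    """Fraction of ground-truth lesions that overlap the predicted mask."""
    gt_labels, n_lesions = ndimage.label(gt_mask)        # connected components = lesions
    if n_lesions == 0:
        return float("nan")
    detected = sum(
        pred_mask[gt_labels == lesion_id].any()
        for lesion_id in range(1, n_lesions + 1)
    )
    return detected / n_lesions

if __name__ == "__main__":
    gt = np.zeros((64, 64), dtype=bool)
    gt[10:20, 10:20] = True          # lesion 1
    gt[40:50, 40:50] = True          # lesion 2
    pred = np.zeros_like(gt)
    pred[12:18, 12:18] = True        # only lesion 1 found
    print(lesion_sensitivity(pred, gt))   # 0.5
```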

A Hybrid CNN-Transformer Deep Learning Model for Differentiating Benign and Malignant Breast Tumors Using Multi-View Ultrasound Images

Qi, Z., Jianxing, Z., Pan, T., Miao, C.

medRxiv preprint, Aug 27 2025
Breast cancer is a leading malignancy threatening women's health globally, making early and accurate diagnosis crucial. Ultrasound is a key screening and diagnostic tool due to its non-invasive, real-time, and cost-effective nature. However, its diagnostic accuracy is highly dependent on operator experience, and conventional single-image analysis often fails to capture the comprehensive features of a lesion. This study introduces a computer-aided diagnosis (CAD) system that emulates a clinician's multi-view diagnostic process. We developed a novel hybrid deep learning model that integrates a Convolutional Neural Network (CNN) with a Transformer architecture. The model uses a pretrained EfficientNetV2 to extract spatial features from multiple, unordered ultrasound images of a single lesion. These features are then processed by a Transformer encoder, whose self-attention mechanism globally models and fuses their intrinsic correlations. A strict lesion-level data partitioning strategy ensured a rigorous evaluation. On an internal test set, our CNN-Transformer model achieved an accuracy of 0.93, a sensitivity of 0.92, a specificity of 0.94, and an Area Under the Curve (AUC) of 0.98. On an external test set, it demonstrated an accuracy of 0.93, a sensitivity of 0.94, a specificity of 0.91, and an AUC of 0.97. These results significantly outperform those of a baseline single-image model, which achieved accuracies of 0.88 and 0.89 and AUCs of 0.95 and 0.94 on the internal and external test sets, respectively. This study confirms that combining CNNs with Transformers yields a highly accurate and robust diagnostic system for breast ultrasound. By effectively fusing multi-view information, our model aligns with clinical logic and shows immense potential for improving diagnostic reliability.
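A multi-view CNN-Transformer of the kind described can be sketched in PyTorch: per-view features from an EfficientNetV2 backbone are fused by a Transformer encoder without positional encoding (the views are unordered) and pooled for classification. The backbone variant, layer counts, and mean pooling are assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn
from torchvision.models import efficientnet_v2_s

class MultiViewBreastClassifier(nn.Module):
    """Sketch: CNN per-view features fused by a Transformer encoder over the view set."""
    def __init__(self, num_classes: int = 2, d_model: int = 1280):
        super().__init__()
        backbone = efficientnet_v2_s(weights=None)
        backbone.classifier = nn.Identity()          # 1280-d feature per view
        self.backbone = backbone
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=8, batch_first=True)
        self.fusion = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, num_classes)

    def forward(self, views: torch.Tensor) -> torch.Tensor:
        # views: (B, V, 3, H, W) -- multiple unordered images of one lesion
        b, v = views.shape[:2]
        feats = self.backbone(views.flatten(0, 1)).view(b, v, -1)
        fused = self.fusion(feats)                   # self-attention mixes the views
        return self.head(fused.mean(dim=1))          # pool over views, then classify

if __name__ == "__main__":
    model = MultiViewBreastClassifier()
    x = torch.rand(2, 4, 3, 224, 224)                # 2 lesions, 4 views each
    print(model(x).shape)                            # torch.Size([2, 2])
```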