
Experimental Assessment of Conventional Features, CNN-Based Features and Ensemble Schemes for Discriminating Benign Versus Malignant Lesions on Breast Ultrasound Images.

Bianconi F, Khan MU, Du H, Jassim S

pubmed · Aug 28 2025
Breast ultrasound images play a pivotal role in assessing the nature of suspicious breast lesions, particularly in patients with dense tissue. Computerized analysis of breast ultrasound images has the potential to assist the physician in clinical decision-making and to reduce the subjectivity of interpretation. We assess the performance of conventional features, deep-learning features, and ensemble schemes for discriminating benign versus malignant breast lesions on ultrasound images. A total of 19 individual feature sets (1 morphological, 2 first-order, 10 texture-based, and 6 CNN-based) were included in the analysis. Furthermore, four combined feature sets (Best by class; Top 3, 5, and 7) and four fusion schemes (feature concatenation, majority voting, sum rule, and product rule) were considered to generate ensemble models. The experiments were carried out on three independent open-access datasets containing 252 (154 benign, 98 malignant), 232 (109 benign, 123 malignant), and 281 (187 benign, 94 malignant) lesions, respectively. CNN-based features outperformed the other individual descriptors, achieving accuracies between 77.4% and 83.6%, followed by morphological features (71.6%-80.8%) and histograms of oriented gradients (71.4%-77.6%). Ensemble models further improved accuracy to 80.2%-87.5%. Fusion schemes based on the product and sum rules were generally superior to feature concatenation and majority voting. Combining individual feature sets through ensemble schemes demonstrates clear advantages for discriminating benign versus malignant breast lesions on ultrasound images.
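The sum-rule, product-rule, and majority-voting fusion schemes compared in the abstract can be sketched on made-up posterior probabilities (all numbers below are illustrative, not from the paper):

```python
import numpy as np

# Hypothetical per-classifier probability outputs for 4 lesions
# (rows: lesions, columns: [benign, malignant]); names are illustrative.
probs_morph = np.array([[0.7, 0.3], [0.4, 0.6], [0.9, 0.1], [0.2, 0.8]])
probs_cnn   = np.array([[0.6, 0.4], [0.3, 0.7], [0.8, 0.2], [0.1, 0.9]])
probs_hog   = np.array([[0.8, 0.2], [0.5, 0.5], [0.7, 0.3], [0.3, 0.7]])

stack = np.stack([probs_morph, probs_cnn, probs_hog])  # (3, 4, 2)

# Sum rule: average the posteriors across classifiers, take argmax class.
sum_pred = stack.sum(axis=0).argmax(axis=1)

# Product rule: multiply the posteriors, take argmax class.
prod_pred = stack.prod(axis=0).argmax(axis=1)

# Majority voting: each classifier votes with its own argmax.
votes = stack.argmax(axis=2)                  # (3, 4)
maj_pred = (votes.sum(axis=0) >= 2).astype(int)

print(sum_pred, prod_pred, maj_pred)
```

On these toy numbers all three schemes agree; in practice the sum and product rules retain the classifiers' confidence, which is why they can outperform hard voting.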

Hybrid quantum-classical-quantum convolutional neural networks.

Long C, Huang M, Ye X, Futamura Y, Sakurai T

pubmed · Aug 28 2025
Deep learning has achieved significant success in pattern recognition, with convolutional neural networks (CNNs) serving as a foundational architecture for extracting spatial features from images. Quantum computing provides an alternative computational framework: hybrid quantum-classical convolutional neural networks (QCCNNs) leverage high-dimensional Hilbert spaces and entanglement to surpass classical CNNs in image-classification accuracy under comparable architectures. Despite these performance improvements, QCCNNs typically use fixed quantum layers without trainable quantum parameters, which limits their ability to capture non-linear quantum representations and cuts the model off from the potential advantages of expressive quantum learning. In this work, we present a hybrid quantum-classical-quantum convolutional neural network (QCQ-CNN) that incorporates a quantum convolutional filter, a shallow classical CNN, and a trainable variational quantum classifier. This architecture aims to enhance the expressivity of decision boundaries in image-classification tasks by introducing tunable quantum parameters into the end-to-end learning process. Through a series of small-sample experiments on MNIST, F-MNIST, and MRI tumor datasets, QCQ-CNN demonstrates competitive accuracy and convergence behavior compared with classical and hybrid baselines. We further analyze the effect of ansatz depth and find that moderate-depth quantum circuits can improve learning stability without introducing excessive complexity. Additionally, simulations incorporating depolarizing noise and finite sampling shots suggest that QCQ-CNN maintains a degree of robustness under realistic quantum noise conditions. While our results are currently limited to simulations with small-scale quantum circuits, the proposed approach offers a promising direction for hybrid quantum learning in near-term applications.
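A trainable variational quantum classifier of the kind the QCQ-CNN appends can be illustrated with a minimal two-qubit statevector simulation in plain NumPy (a generic hardware-efficient sketch, not the paper's circuit):

```python
import numpy as np

def ry(theta):
    """Single-qubit RY rotation gate."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

# CNOT with qubit 0 as control, basis order |00>, |01>, |10>, |11>
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
Z = np.array([[1, 0], [0, -1]], dtype=float)

def vqc_expectation(features, params):
    """Encode two features as RY angles, entangle with a CNOT, apply a
    trainable RY layer, and return <Z> on qubit 0 as the classifier logit."""
    state = np.zeros(4); state[0] = 1.0                         # |00>
    state = np.kron(ry(features[0]), ry(features[1])) @ state   # data encoding
    state = CNOT @ state                                        # entanglement
    state = np.kron(ry(params[0]), ry(params[1])) @ state       # trainable layer
    obs = np.kron(Z, np.eye(2))
    return float(state @ obs @ state)

out = vqc_expectation([0.3, 1.1], [0.5, -0.2])
print(out)  # an expectation value in [-1, 1]
```

In a full pipeline, `params` would be updated by gradient descent alongside the classical CNN weights, which is the trainability the abstract contrasts with fixed quantum layers.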

Mitigating MRI Domain Shift in Sex Classification: A Deep Learning Approach with ComBat Harmonization

Peyman Sharifian, Mohammad Saber Azimi, AliReza Karimian, Hossein Arabi

arxiv preprint · Aug 27 2025
Deep learning models for medical image analysis often suffer from performance degradation when applied to data from different scanners or protocols, a phenomenon known as domain shift. This study investigates this challenge in the context of sex classification from 3D T1-weighted brain magnetic resonance imaging (MRI) scans using the IXI and OASIS3 datasets. While models achieved high within-domain accuracy (around 0.95) when trained and tested on a single dataset (IXI or OASIS3), we demonstrate a significant performance drop to chance level (about 0.50) when models trained on one dataset are tested on the other, highlighting the presence of a strong domain shift. To address this, we employed the ComBat harmonization technique to align the feature distributions of the two datasets. We evaluated three state-of-the-art 3D deep learning architectures (3D ResNet18, 3D DenseNet, and 3D EfficientNet) across multiple training strategies. Our results show that ComBat harmonization effectively reduces the domain shift, leading to a substantial improvement in cross-domain classification performance. For instance, the cross-domain balanced accuracy of our best model (ResNet18 3D with Attention) improved from approximately 0.50 (chance level) to 0.61 after harmonization. t-SNE visualization of extracted features provides clear qualitative evidence of the reduced domain discrepancy post-harmonization. This work underscores the critical importance of domain adaptation techniques for building robust and generalizable neuroimaging AI models.
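ComBat-style harmonization can be sketched as a per-site location-scale correction (a simplified illustration without the empirical-Bayes shrinkage of full ComBat, on synthetic two-scanner data):

```python
import numpy as np

def harmonize(features, sites):
    """Simplified location-scale harmonization in the spirit of ComBat:
    remove each site's mean/scale per feature, then restore the pooled
    mean/scale. Full ComBat additionally shrinks the site estimates
    with an empirical-Bayes prior."""
    features = np.asarray(features, dtype=float)
    pooled_mu, pooled_sd = features.mean(axis=0), features.std(axis=0)
    out = np.empty_like(features)
    for site in np.unique(sites):
        mask = sites == site
        mu, sd = features[mask].mean(axis=0), features[mask].std(axis=0)
        out[mask] = (features[mask] - mu) / sd * pooled_sd + pooled_mu
    return out

rng = np.random.default_rng(0)
# Two "scanners" with different offsets and scales for the same 3 features
site_a = rng.normal(0.0, 1.0, size=(50, 3))
site_b = rng.normal(2.0, 3.0, size=(50, 3))
x = np.vstack([site_a, site_b])
sites = np.array(["A"] * 50 + ["B"] * 50)

h = harmonize(x, sites)
# After harmonization the per-site feature means agree (up to float error).
print(np.allclose(h[sites == "A"].mean(axis=0), h[sites == "B"].mean(axis=0)))
```

This is the mechanism by which the feature distributions of IXI and OASIS3 are aligned before cross-domain evaluation; note that in a classification setting the correction must be estimated without leaking label information.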

A Hybrid CNN-Transformer Deep Learning Model for Differentiating Benign and Malignant Breast Tumors Using Multi-View Ultrasound Images

Qi, Z., Jianxing, Z., Pan, T., Miao, C.

medrxiv preprint · Aug 27 2025
Breast cancer is a leading malignancy threatening women's health globally, making early and accurate diagnosis crucial. Ultrasound is a key screening and diagnostic tool due to its non-invasive, real-time, and cost-effective nature. However, its diagnostic accuracy is highly dependent on operator experience, and conventional single-image analysis often fails to capture the comprehensive features of a lesion. This study introduces a computer-aided diagnosis (CAD) system that emulates a clinician's multi-view diagnostic process. We developed a novel hybrid deep learning model that integrates a Convolutional Neural Network (CNN) with a Transformer architecture. The model uses a pretrained EfficientNetV2 to extract spatial features from multiple, unordered ultrasound images of a single lesion. These features are then processed by a Transformer encoder, whose self-attention mechanism globally models and fuses their intrinsic correlations. A strict lesion-level data partitioning strategy ensured a rigorous evaluation. On an internal test set, our CNN-Transformer model achieved an accuracy of 0.93, a sensitivity of 0.92, a specificity of 0.94, and an Area Under the Curve (AUC) of 0.98. On an external test set, it demonstrated an accuracy of 0.93, a sensitivity of 0.94, a specificity of 0.91, and an AUC of 0.97. These results significantly outperform those of a baseline single-image model, which achieved accuracies of 0.88 and 0.89 and AUCs of 0.95 and 0.94 on the internal and external test sets, respectively. This study confirms that combining CNNs with Transformers yields a highly accurate and robust diagnostic system for breast ultrasound. By effectively fusing multi-view information, our model aligns with clinical logic and shows immense potential for improving diagnostic reliability.
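The order-invariant fusion of unordered views by a Transformer encoder can be illustrated with a single self-attention layer plus mean pooling in NumPy (a generic sketch with random weights, not the paper's trained model):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_fuse(views, Wq, Wk, Wv):
    """Self-attention over a set of per-view feature vectors, followed by
    mean pooling: with no positional encoding, the result is invariant
    to the order of the views, matching the 'unordered images' setting."""
    Q, K, V = views @ Wq, views @ Wk, views @ Wv
    attn = softmax(Q @ K.T / np.sqrt(K.shape[1]), axis=-1)
    return (attn @ V).mean(axis=0)          # pooled lesion-level embedding

rng = np.random.default_rng(1)
d = 8
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
views = rng.normal(size=(4, d))             # 4 ultrasound views of one lesion

a = attention_fuse(views, Wq, Wk, Wv)
b = attention_fuse(views[[2, 0, 3, 1]], Wq, Wk, Wv)  # shuffled view order
print(np.allclose(a, b))  # True: the order of views does not matter
```

Permuting the input rows permutes both the attention matrix and the value rows consistently, so the mean-pooled embedding is unchanged, which is why such a fusion head suits sets of views rather than sequences.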

Deep learning-based prediction of axillary pathological complete response in patients with breast cancer using longitudinal multiregional ultrasound.

Liu Y, Wang Y, Huang J, Pei S, Wang Y, Cui Y, Yan L, Yao M, Wang Y, Zhu Z, Huang C, Liu Z, Liang C, Shi J, Li Z, Pei X, Wu L

pubmed · Aug 27 2025
Noninvasive biomarkers that capture the longitudinal multiregional tumour burden in patients with breast cancer may improve the assessment of residual nodal disease and guide axillary surgery. Additionally, a significant barrier to the clinical translation of the current data-driven deep learning model is the lack of interpretability. This study aims to develop and validate an information shared-private (iShape) model to predict axillary pathological complete response in patients with axillary lymph node (ALN)-positive breast cancer receiving neoadjuvant therapy (NAT) by learning common and specific image representations from longitudinal primary tumour and ALN ultrasound images. A total of 1135 patients with biopsy-proven ALN-positive breast cancer who received NAT were included in this multicentre, retrospective study. The iShape was trained on a dataset of 371 patients and validated on three external validation sets (EVS1-3), with 295, 244, and 225 patients, respectively. Model performance was evaluated using the area under the receiver operating characteristic curve (AUC). The false-negative rates (FNRs) of iShape alone and in combination with sentinel lymph node biopsy (SLNB) were also evaluated. Imaging feature visualisation and RNA sequencing analysis were performed to explore the underlying basis of iShape. The iShape achieved AUCs of 0.950-0.971 for EVS 1-3, which were better than those of the clinical model and the image signatures derived from the primary tumour, longitudinal primary tumour, or ALN (P < 0.05, as per the DeLong test). The performance of iShape remained satisfactory in subgroup analyses stratified by age, menstrual status, T stage, molecular subtype, treatment regimens, and machine type (AUCs of 0.812-1.000). More importantly, the FNR of iShape was 7.7%-8.1% in the EVSs, and the FNR of SLNB decreased from 13.4% to 3.6% with the aid of iShape in patients receiving SLNB and ALN dissection. 
The decision-making process of iShape was explained by feature visualisation. Additionally, RNA sequencing analysis revealed that a lower deep learning score was associated with immune infiltration and tumour proliferation pathways. The iShape model demonstrated good performance for the precise quantification of ALN status in patients with ALN-positive breast cancer receiving NAT, potentially benefiting individualised decision-making, and avoiding unnecessary axillary lymph node dissection. This study was supported by (1) Noncommunicable Chronic Diseases-National Science and Technology Major Project (No. 2024ZD0531100); (2) Key-Area Research and Development Program of Guangdong Province (No. 2021B0101420006); (3) National Natural Science Foundation of China (No. 82472051, 82471947, 82271941, 82272088); (4) National Science Foundation for Young Scientists of China (No. 82402270, 82202095, 82302190); (5) Guangzhou Municipal Science and Technology Planning Project (No. 2025A04J4773, 2025A04J4774); (6) the Natural Science Foundation of Guangdong Province of China (No. 2025A1515011607); (7) Medical Scientific Research Foundation of Guangdong Province of China (No. A2024403); (8) Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application (No. 2022B1212010011); (9) Outstanding Youth Science Foundation of Yunnan Basic Research Project (No. 202401AY070001-316); (10) Innovative Research Team of Yunnan Province (No. 202505AS350013).
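Why pairing a model with SLNB can lower the false-negative rate is illustrated by a toy simulation below (the numbers are hypothetical, and it assumes the two tests err independently, which real data need not satisfy):

```python
import numpy as np

# A residual-disease case is missed only if BOTH tests miss it, so the
# combined FNR is (at most, under independence) the product of the two.
rng = np.random.default_rng(2)
n = 10_000                           # simulated node-positive cases
miss_slnb  = rng.random(n) < 0.134   # SLNB misses at its standalone FNR
miss_model = rng.random(n) < 0.08    # model misses at its standalone FNR

fnr_slnb = miss_slnb.mean()
fnr_combined = (miss_slnb & miss_model).mean()  # missed only if both miss
print(f"SLNB alone: {fnr_slnb:.3f}, SLNB + model: {fnr_combined:.3f}")
```

Under independence the combined miss rate would be roughly 0.134 × 0.08 ≈ 1%; the abstract's observed drop from 13.4% to 3.6% is consistent with partially correlated errors.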

Automatic opportunistic osteoporosis screening using chest X-ray images via deep neural networks.

Tang J, Yin X, Lai J, Luo K, Wu D

pubmed · Aug 27 2025
Osteoporosis is a bone disease characterized by reduced bone mineral density and quality, which increases the risk of fragility fractures. The current diagnostic gold standard, dual-energy X-ray absorptiometry (DXA), faces limitations such as low equipment penetration, high testing costs, and radiation exposure, restricting its feasibility as a screening tool. To address these limitations, we retrospectively collected data from 1995 patients who visited Daping Hospital in Chongqing from January 2019 to August 2024 and developed an opportunistic screening method using chest X-rays. We trained three deep neural network models via transfer learning: Inception v3, VGG16, and ResNet50. These models were evaluated on their classification performance for osteoporosis from chest X-ray images, with external validation on multi-center data. The ResNet50 model demonstrated superior performance, achieving average accuracies of 87.85% and 90.38% on the internal test dataset across two experiments, with AUC values of 0.945 and 0.957, respectively. These results outperformed traditional convolutional neural networks. In external validation, the ResNet50 model achieved an AUC of 0.904, accuracy of 89%, sensitivity of 90%, and specificity of 88.57%, demonstrating strong generalization ability. The model also remained robust in the presence of concurrent pulmonary pathologies. This study provides an automatic screening method for osteoporosis using chest X-rays, without additional radiation exposure or cost. The ResNet50 model's high performance supports clinicians in the early identification and treatment of osteoporosis patients.
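The reported sensitivity and specificity follow directly from the confusion matrix; a minimal sketch on made-up labels:

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Accuracy, sensitivity, and specificity from binary labels
    (1 = osteoporosis, 0 = normal in this illustration)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return {
        "accuracy": (tp + tn) / len(y_true),
        "sensitivity": tp / (tp + fn),   # recall on the diseased class
        "specificity": tn / (tn + fp),   # recall on the healthy class
    }

# Toy example with made-up predictions
m = binary_metrics([1, 1, 1, 1, 0, 0, 0, 0, 0, 0],
                   [1, 1, 1, 0, 0, 0, 0, 0, 0, 1])
print(m)
```

Reporting sensitivity and specificity separately matters for screening: on an imbalanced cohort a high accuracy alone could hide a poor detection rate in the diseased group.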

MRI-based machine-learning radiomics of the liver to predict liver-related events in hepatitis B virus-associated fibrosis.

Luo Y, Luo Q, Wu Y, Zhang S, Ren H, Wang X, Liu X, Yang Q, Xu W, Wu Q, Li Y

pubmed · Aug 27 2025
The onset of liver-related events (LREs) in fibrosis indicates a poor prognosis and worsens patients' quality of life, making the prediction and early detection of LREs crucial. The aim of this study was to develop a radiomics model using liver magnetic resonance imaging (MRI) to predict LRE risk in patients undergoing antiviral treatment for chronic fibrosis caused by hepatitis B virus (HBV). Patients with HBV-associated liver fibrosis and liver stiffness measurements ≥ 10 kPa were included. Feature selection and dimensionality reduction techniques identified discriminative features from three MRI sequences. Radiomics models were built using eight machine learning techniques and evaluated for performance. Shapley additive explanation and permutation importance techniques were applied to interpret the model output. A total of 222 patients aged 49 ± 10 years (mean ± standard deviation), 175 males, were evaluated, with 41 experiencing LREs. The radiomics model, incorporating 58 selected features, outperformed traditional clinical tools in prediction accuracy. Developed using a support vector machine classifier, the model achieved optimal areas under the receiver operating characteristic curves of 0.94 and 0.93 in the training and test sets, respectively, demonstrating good calibration. Machine learning techniques effectively predicted LREs in patients with fibrosis and HBV, offering comparable accuracy across algorithms and supporting personalized care decisions for HBV-related liver disease. Radiomics models based on liver multisequence MRI can improve risk prediction and management of patients with HBV-associated chronic fibrosis. In addition, it offers valuable prognostic insights and aids in making informed clinical decisions. Liver-related events (LREs) are associated with poor prognosis in chronic fibrosis. Radiomics models could predict LREs in patients with hepatitis B-associated chronic fibrosis. 
Radiomics contributes to personalized care choices for patients with hepatitis B-associated fibrosis.
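The permutation-importance technique used above to interpret the model output can be sketched generically (with a toy scoring function standing in for the paper's support vector machine):

```python
import numpy as np

def permutation_importance(score_fn, X, y, n_repeats=10, seed=0):
    """Model-agnostic permutation importance: how much the score drops
    when one feature column is shuffled (larger drop = more important)."""
    rng = np.random.default_rng(seed)
    base = score_fn(X, y)
    drops = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])          # break the feature-label link
            drops[j] += base - score_fn(Xp, y)
    return drops / n_repeats

# Toy 'model': accuracy of thresholding feature 0 (feature 1 is pure noise)
rng = np.random.default_rng(3)
X = rng.normal(size=(200, 2))
y = (X[:, 0] > 0).astype(int)
score = lambda X_, y_: np.mean((X_[:, 0] > 0).astype(int) == y_)

imp = permutation_importance(score, X, y)
print(imp)  # feature 0 shows a large drop, feature 1 shows none
```

Because the procedure only needs a scoring function, it applies unchanged to any of the eight machine-learning techniques the study compared.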

HONeYBEE: Enabling Scalable Multimodal AI in Oncology Through Foundation Model-Driven Embeddings

Tripathi, A. G., Waqas, A., Schabath, M. B., Yilmaz, Y., Rasool, G.

medrxiv preprint · Aug 27 2025
HONeYBEE (Harmonized ONcologY Biomedical Embedding Encoder) is an open-source framework that integrates multimodal biomedical data for oncology applications. It processes clinical data (structured and unstructured), whole-slide images, radiology scans, and molecular profiles to generate unified patient-level embeddings using domain-specific foundation models and fusion strategies. These embeddings enable survival prediction, cancer-type classification, patient similarity retrieval, and cohort clustering. Evaluated on 11,400+ patients across 33 cancer types from The Cancer Genome Atlas (TCGA), clinical embeddings showed the strongest single-modality performance with 98.5% classification accuracy and 96.4% precision@10 in patient retrieval. They also achieved the highest survival prediction concordance indices across most cancer types. Multimodal fusion provided complementary benefits for specific cancers, improving overall survival prediction beyond clinical features alone. Comparative evaluation of four large language models revealed that general-purpose models like Qwen3 outperformed specialized medical models for clinical text representation, though task-specific fine-tuning improved performance on heterogeneous data such as pathology reports.
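The precision@10 retrieval metric reported above can be sketched with cosine similarity on a synthetic two-class cohort (illustrative data, not TCGA):

```python
import numpy as np

def precision_at_k(embeddings, labels, k=10):
    """Mean precision@k for patient retrieval: for each patient, retrieve
    the k nearest neighbours by cosine similarity and count how many
    share the query's cancer-type label."""
    E = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = E @ E.T
    np.fill_diagonal(sim, -np.inf)            # exclude the query itself
    topk = np.argsort(-sim, axis=1)[:, :k]    # indices of k nearest neighbours
    return np.mean(labels[topk] == labels[:, None])

# Toy cohort: two well-separated "cancer types" in embedding space
rng = np.random.default_rng(4)
e0 = rng.normal(loc=+3.0, size=(30, 16))
e1 = rng.normal(loc=-3.0, size=(30, 16))
emb = np.vstack([e0, e1])
lab = np.array([0] * 30 + [1] * 30)

p = precision_at_k(emb, lab, k=10)
print(p)  # close to 1.0 when the classes form separated clusters
```

The 96.4% precision@10 reported for clinical embeddings means that, on average, more than 9 of the 10 nearest neighbours of a patient share that patient's cancer type.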

Development of Privacy-preserving Deep Learning Model with Homomorphic Encryption: A Technical Feasibility Study in Kidney CT Imaging.

Lee SW, Choi J, Park MJ, Kim H, Eo SH, Lee G, Kim S, Suh J

pubmed · Aug 27 2025
"Just Accepted" papers have undergone full peer review and have been accepted for publication in Radiology: Artificial Intelligence. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content. Purpose To evaluate the technical feasibility of implementing homomorphic encryption in deep learning models for privacy-preserving CT image analysis of renal masses. Materials and Methods A privacy-preserving deep learning system was developed through three sequential technical phases: a reference CNN model (Ref-CNN) based on the ResNet architecture; modification for encryption compatibility (Approx-CNN), replacing ReLU with a polynomial approximation and max pooling with average pooling; and implementation of fully homomorphic encryption (HE-CNN). The CKKS encryption scheme was used for its capability to perform arithmetic operations on encrypted real numbers. Using 12,446 CT images from a public dataset (3,709 renal cysts, 5,077 normal kidneys, and 2,283 kidney tumors), we evaluated model performance using the area under the receiver operating characteristic curve (AUC) and the area under the precision-recall curve (AUPRC). Results All models demonstrated high diagnostic accuracy, with AUC ranging from 0.89 to 0.99 and AUPRC from 0.67 to 0.99. The diagnostic performance trade-off from Ref-CNN to Approx-CNN was minimal (AUC: 0.99 to 0.97 for the normal category), with no evidence of differences between models. However, encryption significantly increased storage and computational demands: a 256 × 256-pixel image expanded from 65 KB to 32 MB, requiring 50 minutes for CPU inference but only 90 seconds with GPU acceleration.
Conclusion This technical development demonstrates that privacy-preserving deep learning inference using homomorphic encryption is feasible for renal mass classification on CT images, achieving comparable diagnostic performance while maintaining data privacy through end-to-end encryption. ©RSNA, 2025.
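The ReLU-to-polynomial substitution in the Approx-CNN step exists because CKKS can only evaluate additions and multiplications on ciphertexts. A minimal sketch of such an approximation follows (a least-squares fit on [-1, 1]; the authors' exact polynomial is not given in the abstract):

```python
import numpy as np

# HE schemes such as CKKS evaluate only polynomial operations, so a
# non-polynomial activation like ReLU must be replaced by a low-degree
# polynomial before the network can run under encryption.
x = np.linspace(-1, 1, 201)
relu = np.maximum(x, 0)

# Least-squares fit of a degree-2 polynomial to ReLU on [-1, 1]
coeffs = np.polyfit(x, relu, deg=2)
approx = np.polyval(coeffs, x)

max_err = np.max(np.abs(approx - relu))
print(f"max |ReLU - poly2| on [-1, 1]: {max_err:.3f}")
```

The fit is only valid on the interval it was computed for, which is why HE-friendly networks typically normalize activations into a known range before each approximated nonlinearity; the small AUC drop from Ref-CNN to Approx-CNN reflects this approximation error.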

Quantum integration in swin transformer mitigates overfitting in breast cancer screening.

Xie Z, Yang X, Zhang S, Yang J, Zhu Y, Zhang A, Sun H, Dai Q, Li L, Liu H, Ming W, Dou M

pubmed · Aug 27 2025
To explore the potential of quantum computing in advancing transformer-based deep learning models for breast cancer screening, this study introduces the Quantum-Enhanced Swin Transformer (QEST). The model integrates a Variational Quantum Circuit (VQC) that replaces the fully connected classification layer of the Swin Transformer architecture. In simulations, QEST exhibited competitive accuracy and generalization compared with the original Swin Transformer, while also mitigating overfitting. Specifically, in 16-qubit simulations, the VQC reduced the parameter count by 62.5% compared with the replaced fully connected layer and improved the Balanced Accuracy (BACC) by 3.62% in external validation. Furthermore, validation experiments conducted on an actual quantum computer corroborated the effectiveness of QEST.
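Where a parameter reduction of this kind comes from is simple counting: a hardware-efficient VQC trains one rotation angle per qubit per layer, while a fully connected layer trains a dense weight matrix. The layer sizes below are hypothetical, chosen only to illustrate the counting, and do not reproduce the paper's exact 62.5% figure:

```python
def fc_params(n_in, n_out, bias=True):
    """Trainable parameters in a fully connected layer."""
    return n_in * n_out + (n_out if bias else 0)

def vqc_params(n_qubits, n_layers, rotations_per_qubit=1):
    """Trainable rotation angles in a layered hardware-efficient ansatz
    (entangling gates such as CNOTs carry no parameters)."""
    return n_qubits * n_layers * rotations_per_qubit

# Hypothetical sizes for illustration only.
fc = fc_params(16, 2)      # 16*2 + 2 = 34 parameters
vqc = vqc_params(16, 1)    # 16 angles
print(fc, vqc, 1 - vqc / fc)
```

Because the VQC's parameter count scales with qubits × layers rather than with the product of the input and output widths, the saving grows with the width of the layer it replaces.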