Page 10 of 4494481 results

3DSN-net: dual-tandem attention mechanism interaction network for breast tumor classification.

Li L, Wang M, Li D, Yang T

PubMed | Sep 29 2025
Breast cancer is one of the most prevalent malignancies among women worldwide and remains a major public health concern. Accurate classification of breast tumor subtypes is essential for guiding treatment decisions and improving patient outcomes. However, existing deep learning methods for histopathological image analysis often struggle to balance classification accuracy with computational efficiency, and fail to fully exploit the deep semantic features in complex tumor images. We developed 3DSN-net, a dual-attention interaction network for multiclass breast tumor classification. The model combines two complementary strategies: (i) spatial–channel attention mechanisms to strengthen the representation of discriminative features, and (ii) deformable convolutional layers to capture fine-grained structural variations in histopathological images. To further improve efficiency, a lightweight attention component was introduced to support stable gradient propagation and multi-scale feature fusion. The model was trained and evaluated on two histopathological datasets, BreakHis and BCPSD, and benchmarked against several state-of-the-art CNN and Transformer-based approaches under identical experimental conditions. Experimental results show that 3DSN-net consistently outperforms baseline CNN and Transformer models in both accuracy and robustness, achieving 92%–100% accuracy for benign tumors and 86%–99% for malignant tumors, with error rates below 8%. On average, it improves classification accuracy by 3%–5% and ROC-AUC by 0.02–0.04 compared with state-of-the-art methods, while maintaining competitive computational efficiency. The model effectively distinguishes benign and malignant tumors as well as multiple subtypes, highlighting the advantages of combining spatial–channel attention with deformable feature modeling.
By enhancing the interaction between spatial and channel attention mechanisms, the model effectively distinguishes breast cancer subtypes, with only a slight reduction in classification speed on larger datasets due to increased data complexity. This study presents 3DSN-net as a reliable and effective framework for breast tumor classification from histopathological images. Beyond methodological improvements, the enhanced diagnostic performance has direct clinical implications, offering potential to reduce misclassification, assist pathologists in decision-making, and improve patient outcomes. The approach can also be extended to other medical imaging tasks. Future work will focus on optimizing computational efficiency and validating generalizability across larger, multi-center datasets. The online version contains supplementary material available at 10.1186/s12880-025-01936-2.
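The abstract describes 3DSN-net's spatial–channel attention only at a high level. As an illustration of the channel half of such a mechanism, here is a minimal squeeze-and-excitation-style gate in plain Python; the per-channel weights and biases are illustrative stand-ins for learned parameters, not the paper's actual architecture:

```python
import math

def channel_attention(feature_maps, weights, biases):
    """Squeeze-and-excitation-style channel attention: pool each channel
    to a scalar, pass it through a sigmoid gate, and rescale the channel
    by the resulting attention weight."""
    # Squeeze: global average pool per channel.
    pooled = [sum(ch) / len(ch) for ch in feature_maps]
    # Excite: per-channel affine + sigmoid gate (a one-layer stand-in
    # for the usual two-layer bottleneck MLP).
    gates = [1.0 / (1.0 + math.exp(-(w * p + b)))
             for p, w, b in zip(pooled, weights, biases)]
    # Rescale each channel by its gate.
    scaled = [[v * g for v in ch] for ch, g in zip(feature_maps, gates)]
    return scaled, gates
```

A channel whose pooled activation is strong receives a gate close to 1 and is passed through nearly unchanged, while weakly activated channels are suppressed; a spatial attention branch would apply the same idea per location rather than per channel.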

Diagnostic accuracy of a machine learning model using radiomics features from breast synthetic MRI.

Matsuda T, Matsuda M, Haque H, Fuchibe S, Matsumoto M, Shiraishi Y, Nobe Y, Kuwabara K, Toshimori W, Okada K, Kawaguchi N, Kurata M, Kamei Y, Kitazawa R, Kido T

PubMed | Sep 29 2025
In breast magnetic resonance imaging (MRI), the differentiation between benign and malignant breast masses relies on the Breast Imaging Reporting and Data System Magnetic Resonance Imaging (BI-RADS-MRI) lexicon. While BI-RADS-MRI classification demonstrates high sensitivity, specificities vary. This study aimed to evaluate the feasibility of machine learning models utilizing radiomics features derived from synthetic MRI to distinguish benign from malignant breast masses. Patients who underwent breast MRI, including a multi-dynamic multi-echo (MDME) sequence using 3.0 T MRI, and had histopathologically diagnosed enhanced breast mass lesions were retrospectively included. Clinical features, lesion shape features, texture features, and textural evaluation metrics were extracted. Machine learning models were trained and evaluated, and an ensemble model integrating BI-RADS and the machine learning model was also assessed. A total of 199 lesions (48 benign, 151 malignant) in 199 patients were included in the cross-validation dataset, while 43 lesions (15 benign, 28 malignant) in 40 new patients were included in the test dataset. For the test dataset, the sensitivity, specificity, accuracy, and area under the curve (AUC) of the receiver operating characteristic for BI-RADS were 100%, 33.3%, 76.7%, and 0.667, respectively. The logistic regression model yielded 64.3% sensitivity, 80.0% specificity, 69.8% accuracy, and an AUC of 0.707. The ensemble model achieved 82.1% sensitivity, 86.7% specificity, 83.7% accuracy, and an AUC of 0.883. The AUC of the ensemble model was significantly larger than that of both BI-RADS and the machine learning model. The ensemble model integrating BI-RADS and machine learning improved lesion classification. The online version contains supplementary material available at 10.1186/s12880-025-01930-8.
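The reported sensitivity, specificity, and accuracy figures are standard confusion-matrix quantities. A minimal sketch of how they are derived, together with a toy weighted-average ensemble of a BI-RADS-derived probability and a model probability (the 50/50 weighting is an assumption for illustration, not the paper's method):

```python
def diagnostic_metrics(y_true, y_pred):
    """Sensitivity, specificity, and accuracy from binary labels
    (1 = malignant, 0 = benign)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / len(y_true)
    return sensitivity, specificity, accuracy

def ensemble_score(birads_prob, model_prob, w=0.5):
    """Toy ensemble: weighted average of a BI-RADS-derived probability
    and the machine learning model's probability."""
    return w * birads_prob + (1 - w) * model_prob
```

The pattern in the abstract (BI-RADS alone: high sensitivity, low specificity; radiomics model alone: the reverse; ensemble: best of both) is exactly what such a combination aims to achieve.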

Hepatocellular carcinoma (HCC) and focal nodular hyperplasia (FNH) showing iso- or hyperintensity in the hepatobiliary phase: differentiation using Gd-EOB-DTPA enhanced MRI radiomics and deep learning features.

Mao HY, Hu JC, Zhang T, Fan YF, Wang XM, Hu CH, Yu YX

PubMed | Sep 29 2025
To develop and validate radiomics and deep learning models based on Gd-EOB-DTPA enhanced MRI for differentiating hepatocellular carcinoma (HCC) from focal nodular hyperplasia (FNH) showing iso- or hyperintensity in the hepatobiliary phase (HBP). A total of 112 patients from three hospitals were included. Eighty-four patients from hospitals A and B (54 HCCs and 30 FNHs) were randomly divided into a training cohort (<i>n</i> = 59: 38 HCC; 21 FNH) and an internal validation cohort (<i>n</i> = 25: 16 HCC; 9 FNH); 28 patients from hospital C (20 HCC; 8 FNH) served as an external test cohort. A total of 1781 radiomics features were extracted from tumor volumes of interest (VOIs) in the pre-contrast phase (Pre), arterial phase (AP), portal venous phase (PP) and HBP images, and 512 deep learning features were extracted from VOIs in the AP, PP and HBP images. The Pearson correlation coefficient (PCC) and analysis of variance (ANOVA) were used to select useful features. Conventional radiomics, delta radiomics and deep learning models were established using machine learning algorithms (support vector machine [SVM] and logistic regression [LR]), and their discriminatory efficacy was assessed and compared. The combined deep learning models demonstrated the highest diagnostic performance in both the internal validation and external test cohorts, with area under the curve (AUC) values of 0.965 (95% confidence interval [CI]: 0.906, 1.000) and 0.851 (95% CI: 0.620, 1.000), respectively. The conventional and delta radiomics models achieved AUCs of 0.944 (95% CI: 0.779–0.979) and 0.938 (95% CI: 0.836–1.000), respectively, which were not significantly different from the deep learning models or each other (<i>P</i> = 0.559, 0.256, and 0.137). The combined deep learning models based on Gd-EOB-DTPA enhanced MRI may be useful for discriminating HCC from FNH showing iso- or hyperintensity in the HBP. The online version contains supplementary material available at 10.1186/s12880-025-01927-3.
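PCC-based feature selection of the kind described above typically drops one feature of each highly correlated pair. A minimal sketch, where the 0.9 threshold and the greedy keep-first rule are illustrative assumptions rather than the study's exact settings:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def drop_redundant(features, threshold=0.9):
    """Greedily keep a feature only if its |r| with every already-kept
    feature is at or below the threshold; `features` maps name -> values."""
    kept = []
    for name in features:
        if all(abs(pearson(features[name], features[k])) <= threshold
               for k in kept):
            kept.append(name)
    return kept
```

In practice this pruning step is followed by a univariate test such as ANOVA (as in the study) before the surviving features are fed to the SVM or LR classifier.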

Artificial Intelligence to Detect Developmental Dysplasia of Hip: A Systematic Review.

Bhavsar S, Gowda BB, Bhavsar M, Patole S, Rao S, Rath C

PubMed | Sep 28 2025
Deep learning (DL), a branch of artificial intelligence (AI), has been applied to diagnose developmental dysplasia of the hip (DDH) on pelvic radiographs and ultrasound (US) images. This technology can potentially assist in early screening, enable timely intervention and improve cost-effectiveness. We conducted a systematic review to evaluate the diagnostic accuracy of DL algorithms in detecting DDH. PubMed, Medline, EMBASE, EMCARE, ClinicalTrials.gov (clinical trial registry), IEEE Xplore and Cochrane Library databases were searched in October 2024. Prospective and retrospective cohort studies that included children (< 16 years) at risk of or suspected to have DDH and reported hip ultrasonography (US) or X-ray images using AI were included. The review was conducted using the guidelines of the Cochrane Collaboration Diagnostic Test Accuracy Working Group. Risk of bias was assessed using the QUADAS-2 tool. Twenty-three studies met inclusion criteria, with 15 (n = 8315) evaluating DDH on US images and eight (n = 7091) on pelvic radiographs. The area under the curve of the included studies ranged from 0.80 to 0.99 for pelvic radiographs and from 0.90 to 0.99 for US images. Sensitivity and specificity for detecting DDH on radiographs ranged from 92.86% to 100% and from 95.65% to 99.82%, respectively. For US images, sensitivity ranged from 86.54% to 100% and specificity from 62.5% to 100%. AI demonstrated effectiveness comparable to that of physicians in detecting DDH. However, limited evaluation on external datasets restricts its generalisability. Further research incorporating diverse datasets and real-world applications is needed to assess its broader clinical impact on DDH diagnosis.

Artificial intelligence in carotid computed tomography angiography plaque detection: Decade of progress and future perspectives.

Wang DY, Yang T, Zhang CT, Zhan PC, Miao ZX, Li BL, Yang H

PubMed | Sep 28 2025
The application of artificial intelligence (AI) in carotid atherosclerotic plaque detection <i>via</i> computed tomography angiography (CTA) has significantly advanced over the past decade. This mini-review consolidates recent innovations in deep learning architectures, domain adaptation techniques, and automated plaque characterization methodologies. Hybrid models, such as residual U-Net-Pyramid Scene Parsing Network, exhibit a remarkable precision of 80.49% in plaque segmentation, outperforming radiologists in diagnostic efficiency by reducing analysis time from minutes to mere seconds. Domain-adaptive frameworks, such as Lesion Assessment through Tracklet Evaluation, demonstrate robust performance across heterogeneous imaging datasets, achieving an area under the curve (AUC) greater than 0.88. Furthermore, novel approaches integrating U-Net and Efficient-Net architectures, enhanced by Bayesian optimization, have achieved impressive correlation coefficients (0.89) for plaque quantification. AI-powered CTA also enables high-precision three-dimensional vascular segmentation, with a Dice coefficient of 0.9119, and offers superior cardiovascular risk stratification compared to traditional Agatston scoring, yielding AUC values of 0.816 <i>vs</i> 0.729 at a 15-year follow-up. These breakthroughs address key challenges in plaque motion analysis, with systolic retractive motion biomarkers successfully identifying 80% of vulnerable plaques. Looking ahead, future directions focus on enhancing the interpretability of AI models through explainable AI and leveraging federated learning to mitigate data heterogeneity. This mini-review underscores the transformative potential of AI in carotid plaque assessment, offering substantial implications for stroke prevention and personalized cerebrovascular management strategies.
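The segmentation result above is summarized with a Dice coefficient (0.9119). As a reminder of what that figure measures, here is the standard definition on flat binary masks; this is the generic metric, not code from any of the reviewed systems:

```python
def dice_coefficient(mask_a, mask_b):
    """Dice similarity between two binary segmentation masks
    (flat lists of 0/1): 2*|A ∩ B| / (|A| + |B|)."""
    inter = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    size = sum(mask_a) + sum(mask_b)
    # Two empty masks are conventionally treated as a perfect match.
    return 2 * inter / size if size else 1.0
```

A Dice score of 0.91 thus means the predicted and reference vessel masks overlap on roughly 91% of their combined (averaged) extent, which is why it is the dominant headline metric for segmentation work.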

Predicting pathological complete response to chemoradiotherapy using artificial intelligence-based magnetic resonance imaging radiomics in esophageal squamous cell carcinoma.

Hirata A, Hayano K, Tochigi T, Kurata Y, Shiraishi T, Sekino N, Nakano A, Matsumoto Y, Toyozumi T, Uesato M, Ohira G

PubMed | Sep 28 2025
Advanced esophageal squamous cell carcinoma (ESCC) has an extremely poor prognosis. Preoperative chemoradiotherapy (CRT) can significantly prolong survival, especially in those who achieve pathological complete response (pCR). However, the pretherapeutic prediction of pCR remains challenging. To predict pCR and survival in ESCC patients undergoing CRT using an artificial intelligence (AI)-based diffusion-weighted magnetic resonance imaging (DWI-MRI) radiomics model. We retrospectively analyzed 70 patients with ESCC who underwent curative surgery following CRT. For each patient, pre-treatment tumors were semi-automatically segmented in three dimensions from DWI-MRI images (<i>b</i> = 0 and 1000 s/mm²), and a total of 76 radiomics features were extracted from each segmented tumor. Using these features as explanatory variables and pCR as the objective variable, machine learning models for predicting pCR were developed using AutoGluon, an automated machine learning library, and validated by stratified double cross-validation. pCR was achieved in 15 patients (21.4%). Apparent diffusion coefficient skewness demonstrated the highest predictive performance among individual features [area under the curve (AUC) = 0.77]. Gray-level co-occurrence matrix (GLCM) entropy (<i>b</i> = 1000 s/mm²) was an independent prognostic factor for relapse-free survival (RFS) (hazard ratio = 0.32, <i>P</i> = 0.009). In Kaplan-Meier analysis, patients with high GLCM entropy showed significantly better RFS (<i>P</i> < 0.001, log-rank). The best-performing machine learning model achieved an AUC of 0.85. The predicted pCR-positive group showed significantly better RFS than the predicted pCR-negative group (<i>P</i> = 0.007, log-rank). AI-based radiomics analysis of DWI-MRI images in ESCC has the potential to accurately predict the effect of CRT before treatment and contribute to constructing optimal treatment strategies.
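GLCM entropy, the prognostic feature highlighted above, quantifies the randomness of gray-level co-occurrences. Radiomics toolkits compute it over multiple offsets and directions in 3-D; this single-offset, horizontal-neighbor sketch shows only the core calculation:

```python
import math
from collections import Counter

def glcm_entropy(image):
    """Shannon entropy of the gray-level co-occurrence distribution
    over horizontal neighbor pairs of a 2-D image (list of rows).
    A simplified stand-in for the multi-offset GLCM used in radiomics."""
    pairs = Counter(
        (row[i], row[i + 1])
        for row in image for i in range(len(row) - 1)
    )
    total = sum(pairs.values())
    return -sum((c / total) * math.log2(c / total)
                for c in pairs.values())
```

Homogeneous tumors concentrate the co-occurrence mass in a few bins (low entropy), while heterogeneous texture spreads it out (high entropy), which is the property being tested as a prognostic marker here.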

Advances in ultrasound-based imaging for diagnosis of endometrial cancer.

Tlais M, Hamze H, Hteit A, Haddad K, El Fassih I, Zalzali I, Mahmoud S, Karaki S, Jabbour D

PubMed | Sep 28 2025
Endometrial cancer (EC) is the most common gynecological malignancy in high-income countries, with incidence rates rising globally. Early and accurate diagnosis is essential for improving outcomes. Transvaginal ultrasound (TVUS) remains a cost-effective first-line tool, and emerging techniques such as three-dimensional (3D) ultrasound (US), contrast-enhanced US (CEUS), elastography, and artificial intelligence (AI)-enhanced imaging may further improve diagnostic performance. To systematically review recent advances in US-based imaging techniques for the diagnosis and staging of EC, and to compare their performance with magnetic resonance imaging (MRI). A systematic search of PubMed, Scopus, Web of Science, and Google Scholar was performed to identify studies published between January 2010 and March 2025. Eligible studies evaluated TVUS, 3D-US, CEUS, elastography, or AI-enhanced US in EC diagnosis and staging. Methodological quality was assessed using the QUADAS-2 tool. Sensitivity, specificity, and area under the curve (AUC) were extracted where available, with narrative synthesis due to heterogeneity. Forty-one studies met the inclusion criteria. TVUS demonstrated high sensitivity (76%-96%) but moderate specificity (61%-86%), while MRI achieved higher specificity (84%-95%) and superior staging accuracy. 3D-US yielded accuracy comparable to MRI in selected early-stage cases. CEUS and elastography enhanced tissue characterization, and AI-enhanced US achieved pooled AUCs up to 0.91 for risk prediction and lesion segmentation. Variability in performance was noted across modalities due to patient demographics, equipment differences, and operator experience. TVUS remains a highly sensitive initial screening tool, with MRI preferred for definitive staging. 3D-US, CEUS, elastography, and AI-enhanced techniques show promise as complementary or alternative approaches, particularly in low-resource settings. 
Standardization, multicenter validation, and integration of multi-modal imaging are needed to optimize diagnostic pathways for EC.

Dementia-related volumetric assessments in neuroradiology reports: a natural language processing-based study.

Mayers AJ, Roberts A, Venkataraman AV, Booth C, Stewart R

PubMed | Sep 28 2025
Structural MRI of the brain is routinely performed on patients referred to memory clinics; however, the resulting radiology reports, including volumetric assessments, are conventionally stored as unstructured free text. We sought to use natural language processing (NLP) to extract text relating to intracranial volumetric assessment from brain MRI text reports to enhance routine data availability for research purposes. Electronic records were drawn from a large mental healthcare provider serving a geographic catchment of 1.3 million residents in four boroughs of south London, UK, yielding a corpus of 4007 de-identified brain MRI reports from patients referred to memory assessment services. An NLP algorithm was developed, using a span categorisation approach, to extract six binary (presence/absence) categories from the text reports, covering: (i) global volume loss, (ii) hippocampal/medial temporal lobe volume loss and (iii) other lobar/regional volume loss. Distributions of these categories were evaluated. The overall F1 score across the six categories was 0.89 (precision 0.92, recall 0.86), with the following precision/recall for each category: presence of global volume loss 0.95/0.95, absence of global volume loss 0.94/0.77, presence of regional volume loss 0.80/0.58, absence of regional volume loss 0.91/0.93, presence of hippocampal volume loss 0.90/0.88, and absence of hippocampal volume loss 0.94/0.92. These results support the feasibility and accuracy of using NLP techniques to extract volumetric assessments from radiology reports, and the potential for automated generation of novel meta-data from dementia assessments in electronic health records.
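Span-categorisation systems like this one are scored with precision, recall, and F1 computed from span-level counts. A minimal sketch of the standard formulas (note that the reported precision of 0.92 and recall of 0.86 do yield F1 ≈ 0.889, consistent with the stated 0.89):

```python
def prf(tp, fp, fn):
    """Precision, recall, and F1 from true-positive, false-positive,
    and false-negative span counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

Per-category scores (such as the six precision/recall pairs listed above) come from applying this to each category's counts separately; the overall figure pools or averages them.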

A Novel Hybrid Deep Learning and Chaotic Dynamics Approach for Thyroid Cancer Classification

Nada Bouchekout, Abdelkrim Boukabou, Morad Grimes, Yassine Habchi, Yassine Himeur, Hamzah Ali Alkhazaleh, Shadi Atalla, Wathiq Mansoor

arXiv preprint | Sep 28 2025
Timely and accurate diagnosis is crucial in addressing the global rise in thyroid cancer, ensuring effective treatment strategies and improved patient outcomes. We present an intelligent classification method that couples an Adaptive Convolutional Neural Network (CNN) with Cohen-Daubechies-Feauveau (CDF9/7) wavelets whose detail coefficients are modulated by an n-scroll chaotic system to enrich discriminative features. We evaluate on the public DDTI thyroid ultrasound dataset (n = 1,638 images; 819 malignant / 819 benign) using 5-fold cross-validation, where the proposed method attains 98.17% accuracy, 98.76% sensitivity, 97.58% specificity, 97.55% F1-score, and an AUC of 0.9912. A controlled ablation shows that adding chaotic modulation to CDF9/7 improves accuracy by +8.79 percentage points over a CDF9/7-only CNN (from 89.38% to 98.17%). To objectively position our approach, we trained state-of-the-art backbones on the same data and splits: EfficientNetV2-S (96.58% accuracy; AUC 0.987), Swin-T (96.41%; 0.986), ViT-B/16 (95.72%; 0.983), and ConvNeXt-T (96.94%; 0.987). Our method outperforms the best of these by +1.23 points in accuracy and +0.0042 in AUC, while remaining computationally efficient (28.7 ms per image; 1,125 MB peak VRAM). Robustness is further supported by cross-dataset testing on TCIA (accuracy 95.82%) and transfer to an ISIC skin-lesion subset (n = 28 unique images, augmented to 2,048; accuracy 97.31%). Explainability analyses (Grad-CAM, SHAP, LIME) highlight clinically relevant regions. Altogether, the wavelet-chaos-CNN pipeline delivers state-of-the-art thyroid ultrasound classification with strong generalization and practical runtime characteristics suitable for clinical integration.
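The core idea above is to modulate wavelet detail coefficients with a chaotic sequence before they reach the CNN. A toy sketch of that pipeline, with two clearly labeled substitutions: a one-level Haar transform stands in for CDF9/7, and the logistic map stands in for the n-scroll chaotic system:

```python
def logistic_map(x0, n, r=3.99):
    """Chaotic sequence from the logistic map -- a simple stand-in
    for the n-scroll system used in the paper."""
    seq, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        seq.append(x)
    return seq

def haar_modulated(signal, x0=0.37, gain=0.1):
    """One-level Haar transform (pairwise average/difference) whose
    detail coefficients are scaled by a chaotic gain, echoing the
    CDF9/7-plus-chaos feature enrichment idea."""
    approx = [(signal[i] + signal[i + 1]) / 2
              for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / 2
              for i in range(0, len(signal) - 1, 2)]
    chaos = logistic_map(x0, len(detail))
    detail = [d * (1 + gain * c) for d, c in zip(detail, chaos)]
    return approx, detail
```

The smooth (approximation) band is left untouched while edge-like detail is perturbed deterministically, which is one plausible way such modulation can enrich discriminative texture features for the downstream CNN.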

Adversarial Versus Federated: An Adversarial Learning based Multi-Modality Cross-Domain Federated Medical Segmentation

You Zhou, Lijiang Chen, Shuchang Lyu, Guangxia Cui, Wenpei Bai, Zheng Zhou, Meng Li, Guangliang Cheng, Huiyu Zhou, Qi Zhao

arXiv preprint | Sep 28 2025
Federated learning enables collaborative training of machine learning models among different clients while ensuring data privacy, emerging as the mainstream approach for breaking data silos in the healthcare domain. However, the imbalance of medical resources, data corruption or improper data preservation may lead to situations where different clients possess medical images of different modalities. This heterogeneity poses a significant challenge for cross-domain medical image segmentation within the federated learning framework. To address this challenge, we propose a new Federated Domain Adaptation (FedDA) segmentation training framework. Specifically, we propose feature-level adversarial learning among clients, aligning feature maps across clients by embedding an adversarial training mechanism. This design enhances the model's generalization across multiple domains and alleviates the negative impact of domain shift. Comprehensive experiments on three medical image datasets demonstrate that our proposed FedDA achieves cross-domain federated aggregation, endowing single-modality clients with cross-modality processing capabilities, and consistently delivers robust performance compared to state-of-the-art federated aggregation algorithms in both objective and subjective assessments. Our code is available at https://github.com/GGbond-study/FedDA.
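FedDA layers adversarial feature alignment on top of a standard federated aggregation step. Only that underlying aggregation, size-weighted FedAvg, is sketched below; the adversarial discriminator and feature-map alignment are not reproduced here:

```python
def federated_average(client_weights, client_sizes):
    """FedAvg: size-weighted average of client parameter vectors.
    Each client trains locally, then the server combines the resulting
    parameters in proportion to each client's dataset size."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]
```

In the cross-modality setting the paper targets, this plain average is exactly what breaks down under domain shift, motivating the adversarial alignment of intermediate feature maps before aggregation.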
