Improved Brain Tumor Detection in MRI: Fuzzy Sigmoid Convolution in Deep Learning

Muhammad Irfan, Anum Nawaz, Riku Klen, Abdulhamit Subasi, Tomi Westerlund, Wei Chen

arXiv preprint · May 8, 2025
Early detection and accurate diagnosis are essential to improving patient outcomes. The use of convolutional neural networks (CNNs) for tumor detection has shown promise, but existing models often suffer from overparameterization, which limits their performance gains. In this study, fuzzy sigmoid convolution (FSC) is introduced along with two additional modules: top-of-the-funnel and middle-of-the-funnel. The proposed methodology significantly reduces the number of trainable parameters without compromising classification accuracy. A novel convolutional operator is central to this approach, effectively dilating the receptive field while preserving input data integrity. This enables efficient feature map reduction and enhances the model's tumor detection capability. In the FSC-based model, fuzzy sigmoid activation functions are incorporated within convolutional layers to improve feature extraction and classification. The inclusion of fuzzy logic into the architecture improves its adaptability and robustness. Extensive experiments on three benchmark datasets demonstrate the superior performance and efficiency of the proposed model. The FSC-based architecture achieved classification accuracies of 99.17%, 99.75%, and 99.89% on three different datasets. The model employs 100 times fewer parameters than large-scale transfer learning architectures, highlighting its computational efficiency and suitability for detecting brain tumors early. This research offers lightweight, high-performance deep-learning models for medical imaging applications.
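
The abstract describes a dilated convolutional operator with fuzzy sigmoid activations embedded in the convolutional layers, but not the operator's exact form. Below is a minimal PyTorch sketch of one plausible reading; the learnable slope/center membership parameters, the dilation setting, and the class name `FuzzySigmoidConv2d` are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of a "fuzzy sigmoid" convolution block: a dilated
# convolution whose responses are squashed by a parameterized sigmoid
# membership function. The exact FSC operator is not given in the abstract;
# the slope and center parameters here are assumptions.
import torch
import torch.nn as nn

class FuzzySigmoidConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3, dilation=2):
        super().__init__()
        # Dilation widens the receptive field without adding parameters.
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size,
                              padding=dilation * (kernel_size // 2),
                              dilation=dilation)
        # Learnable slope and center of the fuzzy sigmoid membership function.
        self.slope = nn.Parameter(torch.ones(out_ch))
        self.center = nn.Parameter(torch.zeros(out_ch))

    def forward(self, x):
        z = self.conv(x)
        a = self.slope.view(1, -1, 1, 1)
        c = self.center.view(1, -1, 1, 1)
        # Sigmoid membership: maps responses to a fuzzy degree in (0, 1).
        return torch.sigmoid(a * (z - c))

# Example: one block applied to a batch of single-channel MRI slices.
x = torch.randn(4, 1, 224, 224)
y = FuzzySigmoidConv2d(1, 16)(x)
print(y.shape)  # torch.Size([4, 16, 224, 224])
```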

Radiomics-based machine learning in prediction of response to neoadjuvant chemotherapy in osteosarcoma: A systematic review and meta-analysis.

Salimi M, Houshi S, Gholamrezanezhad A, Vadipour P, Seifi S

PubMed · May 8, 2025
Osteosarcoma (OS) is the most common primary bone malignancy, and neoadjuvant chemotherapy (NAC) improves survival rates. However, OS heterogeneity results in variable treatment responses, highlighting the need for reliable, non-invasive tools to predict NAC response. Radiomics-based machine learning (ML) offers potential for identifying imaging biomarkers to predict treatment outcomes. This systematic review and meta-analysis evaluated the accuracy and reliability of radiomics models for predicting NAC response in OS. A systematic search was conducted in PubMed, Embase, Scopus, and Web of Science up to November 2024. Studies using radiomics-based ML for NAC response prediction in OS were included. Pooled sensitivity, specificity, and AUC for training and validation cohorts were calculated using bivariate random-effects modeling, with clinical-combined models analyzed separately. Quality assessment was performed using the QUADAS-2 tool, radiomics quality score (RQS), and METRICS scores. Sixteen studies were included, with 63% using MRI and 37% using CT. Twelve studies, comprising 1639 participants, were included in the meta-analysis. Pooled metrics for training cohorts showed an AUC of 0.93, sensitivity of 0.89, and specificity of 0.85. Validation cohorts achieved an AUC of 0.87, sensitivity of 0.81, and specificity of 0.82. Clinical-combined models outperformed radiomics-only models. The mean RQS was 9.44 ± 3.41, and the mean METRICS score was 60.8% ± 17.4%. Radiomics-based ML shows promise for predicting NAC response in OS, especially when combined with clinical indicators. However, limitations in external validation and methodological consistency must be addressed.
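
For intuition on the pooling step, the sketch below shows a simplified univariate random-effects pooling (DerSimonian-Laird) of study sensitivities on the logit scale, using made-up counts; the review itself uses a bivariate random-effects model that pools sensitivity and specificity jointly, which this toy example does not reproduce.

```python
# Simplified illustration of random-effects pooling of study sensitivities on
# the logit scale (DerSimonian-Laird), with invented example data.
import numpy as np

def dersimonian_laird_sensitivity(tp, fn):
    tp, fn = np.asarray(tp, float), np.asarray(fn, float)
    # Per-study logit sensitivity and its approximate within-study variance
    # (0.5 continuity correction avoids division by zero).
    y = np.log((tp + 0.5) / (fn + 0.5))
    var = 1.0 / (tp + 0.5) + 1.0 / (fn + 0.5)
    w = 1.0 / var
    fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - fixed) ** 2)
    df = len(tp) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)              # between-study variance
    w_star = 1.0 / (var + tau2)
    pooled_logit = np.sum(w_star * y) / np.sum(w_star)
    return 1.0 / (1.0 + np.exp(-pooled_logit))

# Hypothetical per-study counts of NAC responders correctly / incorrectly flagged.
tp = [40, 25, 60, 18, 33]
fn = [6, 5, 10, 4, 7]
print("pooled sensitivity:", round(dersimonian_laird_sensitivity(tp, fn), 2))
```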

Machine learning-based approaches for distinguishing viral and bacterial pneumonia in paediatrics: A scoping review.

Rickard D, Kabir MA, Homaira N

PubMed · May 8, 2025
Pneumonia is the leading cause of hospitalisation and mortality among children under five, particularly in low-resource settings. Accurate differentiation between viral and bacterial pneumonia is essential for guiding appropriate treatment, yet it remains challenging due to overlapping clinical and radiographic features. Advances in machine learning (ML), particularly deep learning (DL), have shown promise in classifying pneumonia using chest X-ray (CXR) images. This scoping review summarises the evidence on ML techniques for classifying viral and bacterial pneumonia using CXR images in paediatric patients. This scoping review was conducted following the Joanna Briggs Institute methodology and the PRISMA-ScR guidelines. A comprehensive search was performed in PubMed, Embase, and Scopus to identify studies involving children (0-18 years) with pneumonia diagnosed through CXR, using ML models for binary or multiclass classification. Data extraction included ML models, dataset characteristics, and performance metrics. A total of 35 studies, published between 2018 and 2025, were included in this review. Of these, 31 studies used the publicly available Kermany dataset, raising concerns about overfitting and limited generalisability to broader, real-world clinical populations. Most studies (n=33) used convolutional neural networks (CNNs) for pneumonia classification. While many models demonstrated promising performance, significant variability was observed due to differences in methodologies, dataset sizes, and validation strategies, complicating direct comparisons. For binary classification (viral vs bacterial pneumonia), a median accuracy of 92.3% (range: 80.8% to 97.9%) was reported. For multiclass classification (healthy, viral pneumonia, and bacterial pneumonia), the median accuracy was 91.8% (range: 76.8% to 99.7%). Current evidence is constrained by a predominant reliance on a single dataset and variability in methodologies, which limit the generalisability and clinical applicability of findings. To address these limitations, future research should focus on developing diverse and representative datasets while adhering to standardised reporting guidelines. Such efforts are essential to improve the reliability, reproducibility, and translational potential of machine learning models in clinical settings.
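
As a concrete reference for the kind of model most reviewed studies used, here is a minimal CNN classifier sketch in PyTorch; switching `num_classes` between 2 and 3 covers the binary (viral vs bacterial) and multiclass (healthy/viral/bacterial) settings. The architecture and layer sizes are illustrative and not taken from any specific study.

```python
# Minimal CNN classifier for chest X-rays; the same backbone serves the
# binary and the three-class task by changing the output dimension.
import torch
import torch.nn as nn

def cxr_cnn(num_classes: int) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(64, num_classes),
    )

binary_model = cxr_cnn(num_classes=2)      # viral vs bacterial
multiclass_model = cxr_cnn(num_classes=3)  # healthy / viral / bacterial

x = torch.randn(8, 1, 224, 224)            # batch of grayscale CXRs
print(binary_model(x).shape, multiclass_model(x).shape)
```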

MRI-based machine learning reveals proteasome subunit PSMB8-mediated malignant glioma phenotypes through activating TGFBR1/2-SMAD2/3 axis.

Pei D, Ma Z, Qiu Y, Wang M, Wang Z, Liu X, Zhang L, Zhang Z, Li R, Yan D

PubMed · May 8, 2025
Gliomas are the most prevalent and aggressive neoplasms of the central nervous system, representing a major challenge for effective treatment and patient prognosis. This study identifies the proteasome subunit beta type-8 (PSMB8/LMP7) as a promising prognostic biomarker for glioma. Using a multiparametric radiomic model derived from preoperative magnetic resonance imaging (MRI), we accurately predicted PSMB8 expression levels. Notably, radiomic prediction of poor prognosis was highly consistent with elevated PSMB8 expression. Our findings demonstrate that PSMB8 depletion not only suppressed glioma cell proliferation and migration but also induced apoptosis via activation of the transforming growth factor beta (TGF-β) signaling pathway. This was supported by downregulation of key receptors (TGFBR1 and TGFBR2). Furthermore, interference with PSMB8 expression impaired phosphorylation and nuclear translocation of SMAD2/3, critical mediators of TGF-β signaling. Consequently, these molecular alterations resulted in reduced tumor progression and enhanced sensitivity to temozolomide (TMZ), a standard chemotherapeutic agent. Overall, our findings highlight PSMB8's pivotal role in glioma pathophysiology and its potential as a prognostic marker. This study also demonstrates the clinical utility of MRI radiomics for preoperative risk stratification and pre-diagnosis. Targeted inhibition of PSMB8 may represent a therapeutic strategy to overcome TMZ resistance and improve glioma patient outcomes.

Effective data selection via deep learning processes and corresponding learning strategies in ultrasound image classification.

Lee H, Kwak JY, Lee E

PubMed · May 8, 2025
In this study, we propose a novel approach to enhancing transfer learning by optimizing data selection through deep learning techniques and corresponding innovative learning strategies. This method is particularly beneficial when the available dataset has reached its limit and cannot be further expanded. Our approach focuses on maximizing the use of existing data to improve learning outcomes, which offers an effective solution for data-limited applications in medical imaging classification. The proposed method consists of two stages. In the first stage, an original network performs the initial classification. When the original network exhibits low confidence in its predictions, ambiguous classifications are passed to a secondary decision-making step involving a newly trained network, referred to as the True network. The True network shares the same architecture as the original network but is trained on a subset of the original dataset that is selected based on consensus among multiple independent networks. It is then used to verify the classification results of the original network, identifying and correcting any misclassified images. To evaluate the effectiveness of our approach, we conducted experiments using thyroid nodule ultrasound images with the ResNet101 and Vision Transformer architectures, along with eleven other pre-trained neural networks. The proposed method led to performance improvements across all five key metrics (accuracy, sensitivity, specificity, F1-score, and AUC) compared to using only the original or True networks in ResNet101. Additionally, the True network showed strong performance when applied to the Vision Transformer, and similar enhancements were observed across multiple convolutional neural network architectures. Furthermore, to assess the robustness and adaptability of our method across different medical imaging modalities, we applied it to dermoscopic images and observed similar performance enhancements. These results provide evidence of the effectiveness of our approach in improving transfer learning-based medical image classification without requiring additional training data.
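
A minimal sketch of the two-stage inference described above: high-confidence predictions from the original network are accepted, and low-confidence cases are re-classified by the True network. The softmax-confidence criterion and the 0.8 threshold are assumptions; the consensus-based selection of the True network's training subset is not shown.

```python
# Two-stage prediction: route only the original network's uncertain cases
# to the secondary "True" network for verification/correction.
import torch
import torch.nn.functional as F

def two_stage_predict(original_net, true_net, images, threshold=0.8):
    original_net.eval()
    true_net.eval()
    with torch.no_grad():
        probs = F.softmax(original_net(images), dim=1)
        conf, preds = probs.max(dim=1)
        uncertain = conf < threshold            # ambiguous classifications
        if uncertain.any():
            # Secondary decision step: the True network re-examines only
            # the images the original network was unsure about.
            second = true_net(images[uncertain]).argmax(dim=1)
            preds[uncertain] = second
    return preds

# Example with toy stand-in networks (in the study these were ResNet101 /
# Vision Transformer classifiers of thyroid nodule ultrasound images).
net_a = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 64 * 64, 2))
net_b = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 64 * 64, 2))
print(two_stage_predict(net_a, net_b, torch.randn(5, 3, 64, 64)))
```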

Ultrasound-based deep learning radiomics for enhanced axillary lymph node metastasis assessment: a multicenter study.

Zhang D, Zhou W, Lu WW, Qin XC, Zhang XY, Luo YH, Wu J, Wang JL, Zhao JJ, Zhang CX

PubMed · May 8, 2025
Accurate preoperative assessment of axillary lymph node metastasis (ALNM) in breast cancer is crucial for guiding treatment decisions. This study aimed to develop a deep-learning radiomics model for assessing ALNM and to evaluate its impact on radiologists' diagnostic accuracy. This multicenter study included 866 breast cancer patients from 6 hospitals. The data were categorized into training, internal test, external test, and prospective test sets. Deep learning and handcrafted radiomics features were extracted from ultrasound images of primary tumors and lymph nodes. The tumor score and LN score were calculated following feature selection, and a clinical-radiomics model was constructed based on these scores along with clinical-ultrasonic risk factors. The model's performance was validated across the 3 test sets. Additionally, the diagnostic performance of radiologists, with and without model assistance, was evaluated. The clinical-radiomics model demonstrated robust discrimination with AUCs of 0.94, 0.92, 0.91, and 0.95 in the training, internal test, external test, and prospective test sets, respectively. It surpassed the clinical model and single score in all sets (P < .05). Decision curve analysis and clinical impact curves validated the clinical utility of the clinical-radiomics model. Moreover, the model significantly improved radiologists' diagnostic accuracy, with AUCs increasing from 0.71 to 0.82 for the junior radiologist and from 0.75 to 0.85 for the senior radiologist. The clinical-radiomics model effectively predicts ALNM in breast cancer patients using noninvasive ultrasound features. Additionally, it enhances radiologists' diagnostic accuracy, potentially optimizing resource allocation in breast cancer management.
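
The final modeling step, combining the tumor score, LN score, and clinical-ultrasonic risk factors, might look like the sketch below; the logistic-regression combiner, the placeholder scores, and the single clinical covariate (age) are assumptions used only to illustrate the structure.

```python
# Illustrative clinical-radiomics combiner for ALNM prediction. The tumor and
# LN scores would come from upstream deep-learning / handcrafted radiomics
# pipelines; here they are random placeholders with a synthetic label.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 300
tumor_score = rng.normal(size=n)          # from primary-tumor image features
ln_score = rng.normal(size=n)             # from lymph-node image features
age = rng.normal(55, 10, size=n)          # example clinical-ultrasonic factor
X = np.column_stack([tumor_score, ln_score, age])
y = (tumor_score + ln_score + rng.normal(scale=1.0, size=n) > 0).astype(int)

model = LogisticRegression().fit(X, y)
print("training AUC:", round(roc_auc_score(y, model.predict_proba(X)[:, 1]), 2))
```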

Artificial intelligence applied to ultrasound diagnosis of pelvic gynecological tumors: a systematic review and meta-analysis.

Geysels A, Garofalo G, Timmerman S, Barreñada L, De Moor B, Timmerman D, Froyman W, Van Calster B

PubMed · May 8, 2025
To perform a systematic review on artificial intelligence (AI) studies focused on identifying and differentiating pelvic gynecological tumors on ultrasound scans. Studies developing or validating AI models for diagnosing gynecological pelvic tumors on ultrasound scans were eligible for inclusion. We systematically searched PubMed, Embase, Web of Science, and Cochrane Central from their database inception until April 30th, 2024. To assess the quality of the included studies, we adapted the QUADAS-2 risk of bias tool to address the unique challenges of AI in medical imaging. Using multi-level random effects models, we performed a meta-analysis to generate summary estimates of the area under the receiver operating characteristic curve (AUC), sensitivity, and specificity. To provide a reference point for current diagnostic support tools for ultrasound examiners, we descriptively compared the pooled performance to that of the well-recognized ADNEX model on external validation. Subgroup analyses were performed to explore sources of heterogeneity. From 9151 records retrieved, 44 studies were eligible: 40 on ovarian, three on endometrial, and one on myometrial pathology. Overall, 95% were at high risk of bias, primarily due to inappropriate study inclusion criteria, the absence of a patient-level split of training and testing image sets, and no calibration assessment. For ovarian tumors, the summary AUC for AI models distinguishing benign from malignant tumors was 0.89 (95% CI: 0.85-0.92). In lower-risk studies (at least three low-risk domains), the summary AUC dropped to 0.87 (0.83-0.90), with deep learning models outperforming radiomics-based machine learning approaches in this subset. Only five studies included an external validation, and six evaluated calibration performance. In a recent systematic review of external validation studies, the ADNEX model had a pooled AUC of 0.93 (0.91-0.94) in studies at low risk of bias. Studies on endometrial and myometrial pathologies were reported individually. Although AI models show promising discriminative performances for diagnosing gynecological tumors on ultrasound, most studies have methodological shortcomings that result in a high risk of bias. In addition, the ADNEX model appears to outperform most AI approaches for ovarian tumors. Future research should emphasize robust study designs (ideally large, multicenter, and prospective cohorts that mirror real-world populations), along with external validation, proper calibration, and standardized reporting. This study was pre-registered with Open Science Framework (OSF): https://doi.org/10.17605/osf.io/bhkst.

Cross-scale prediction of glioblastoma MGMT methylation status based on deep learning combined with magnetic resonance images and pathology images

Wu, X., Wei, W., Li, Y., Ma, M., Hu, Z., Xu, Y., Hu, W., Chen, G., Zhao, R., Kang, X., Yin, H., Xi, Y.

medRxiv preprint · May 8, 2025
Background: In glioblastoma (GBM), promoter methylation of O6-methylguanine-DNA methyltransferase (MGMT) is associated with benefit from chemotherapy but has not been accurately evaluated from radiological and pathological images. This study aimed to develop and validate an MRI and pathology image-based deep learning radiopathomics model for predicting MGMT promoter methylation in patients with GBM. Methods: Pathologically confirmed isocitrate dehydrogenase (IDH) wild-type GBM patients (n=207) were retrospectively collected from three centers; all underwent MRI scanning within 2 weeks prior to surgery. A pre-trained ResNet50 was used as the feature extractor. Features of 1024 dimensions were extracted separately from the MRI and pathology images and screened for modeling. Feature fusion was then performed by combining the normalized multimodal MRI features with the pathology features, and prediction models of MGMT status based on deep learning radiomics, pathomics, and radiopathomics (DLRM, DLPM, DLRPM) were constructed and applied to internal and external validation cohorts. Results: In the training, internal validation, and external validation cohorts, the DLRPM achieved significantly better predictive performance than the DLRM and DLPM, with AUCs of 0.920 (95% CI 0.870-0.968), 0.854 (95% CI 0.702-1), and 0.840 (95% CI 0.625-1), respectively. Conclusion: We developed and validated cross-scale radiology and pathology models for predicting MGMT methylation status, with the DLRPM performing best; this cross-scale approach paves the way for further research and clinical applications.
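
A rough sketch of the feature-extraction and fusion stage is given below. torchvision's ResNet50 produces 2048-dimensional pooled features, so the linear projection to 1024 dimensions and the L2 normalization before concatenation are assumptions; the paper's exact layer choice, feature screening, and fusion weighting are not specified in the abstract.

```python
# Sketch of per-modality feature extraction with a ResNet50 backbone and
# simple normalized concatenation of MRI and pathology features.
import torch
import torch.nn as nn
from torchvision.models import resnet50

backbone = resnet50()            # in practice, load pretrained weights
backbone.fc = nn.Identity()      # expose 2048-d pooled features
project = nn.Linear(2048, 1024)  # assumed projection to 1024-d features

def extract(images: torch.Tensor) -> torch.Tensor:
    with torch.no_grad():
        return project(backbone(images))

mri = torch.randn(2, 3, 224, 224)      # MRI slices (tiled to 3 channels)
patho = torch.randn(2, 3, 224, 224)    # pathology image patches
f_mri, f_patho = extract(mri), extract(patho)

# L2-normalize each modality before concatenation (one plausible reading of
# "normalized ... fusion features"); the fused vector feeds the DLRPM classifier.
fused = torch.cat([nn.functional.normalize(f_mri, dim=1),
                   nn.functional.normalize(f_patho, dim=1)], dim=1)
print(fused.shape)  # torch.Size([2, 2048])
```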

False Promises in Medical Imaging AI? Assessing Validity of Outperformance Claims

Evangelia Christodoulou, Annika Reinke, Pascaline Andrè, Patrick Godau, Piotr Kalinowski, Rola Houhou, Selen Erkan, Carole H. Sudre, Ninon Burgos, Sofiène Boutaj, Sophie Loizillon, Maëlys Solal, Veronika Cheplygina, Charles Heitz, Michal Kozubek, Michela Antonelli, Nicola Rieke, Antoine Gilson, Leon D. Mayer, Minu D. Tizabi, M. Jorge Cardoso, Amber Simpson, Annette Kopp-Schneider, Gaël Varoquaux, Olivier Colliot, Lena Maier-Hein

arXiv preprint · May 7, 2025
Performance comparisons are fundamental in medical imaging Artificial Intelligence (AI) research, often driving claims of superiority based on relative improvements in common performance metrics. However, such claims frequently rely solely on empirical mean performance. In this paper, we investigate whether newly proposed methods genuinely outperform the state of the art by analyzing a representative cohort of medical imaging papers. We quantify the probability of false claims based on a Bayesian approach that leverages reported results alongside empirically estimated model congruence to estimate whether the relative ranking of methods is likely to have occurred by chance. According to our results, the majority (>80%) of papers claim outperformance when introducing a new method. Our analysis further revealed a high probability (>5%) of false outperformance claims in 86% of classification papers and 53% of segmentation papers. These findings highlight a critical flaw in current benchmarking practices: claims of outperformance in medical imaging AI are frequently unsubstantiated, posing a risk of misdirecting future research efforts.
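
As a toy illustration of the underlying question (not the paper's actual Bayesian model), the sketch below treats the reported scores of a new method and a baseline as correlated normal draws, with the correlation standing in for model congruence, and estimates how often the baseline would in fact rank at least as high.

```python
# Monte Carlo toy: given a reported mean gain, an assumed per-method spread,
# and a congruence value treated as correlation, estimate the chance that the
# observed ranking (new > baseline) would not hold.
import numpy as np

def prob_ranking_by_chance(mean_new, mean_base, sd=0.03, congruence=0.8,
                           n_draws=100_000, seed=0):
    rng = np.random.default_rng(seed)
    cov = sd ** 2 * np.array([[1.0, congruence], [congruence, 1.0]])
    draws = rng.multivariate_normal([mean_new, mean_base], cov, size=n_draws)
    return float(np.mean(draws[:, 0] <= draws[:, 1]))  # ranking flips

# Example: a claimed 1-point gain in Dice with highly congruent methods.
print(prob_ranking_by_chance(mean_new=0.86, mean_base=0.85))
```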

MRI-based multimodal AI model enables prediction of recurrence risk and adjuvant therapy in breast cancer.

Yu Y, Ren W, Mao L, Ouyang W, Hu Q, Yao Q, Tan Y, He Z, Ban X, Hu H, Lin R, Wang Z, Chen Y, Wu Z, Chen K, Ouyang J, Li T, Zhang Z, Liu G, Chen X, Li Z, Duan X, Wang J, Yao H

PubMed · May 7, 2025
Timely intervention and improved prognosis for breast cancer patients rely on early metastasis risk detection and accurate treatment predictions. This study introduces an advanced multimodal MRI and AI-driven 3D deep learning model, termed the 3D-MMR-model, designed to predict recurrence risk in non-metastatic breast cancer patients. We conducted a multicenter study involving 1199 non-metastatic breast cancer patients from four institutions in China, with comprehensive MRI and clinical data retrospectively collected. Our model employed multimodal-data fusion, utilizing contrast-enhanced T1-weighted imaging (T1 + C) and T2-weighted imaging (T2WI) volumes, processed through a modified 3D-UNet for tumor segmentation and a DenseNet121-based architecture for disease-free survival (DFS) prediction. Additionally, we performed RNA-seq analysis to delve further into the relationship between concentrated hotspots within the tumor region and the tumor microenvironment. The 3D-MMR-model demonstrated superior predictive performance, with time-dependent ROC analysis yielding AUC values of 0.90, 0.89, and 0.88 for 2-, 3-, and 4-year DFS predictions, respectively, in the training cohort. External validation cohorts corroborated these findings, highlighting the model's robustness across diverse clinical settings. Integration of clinicopathological features further enhanced the model's accuracy, with a multimodal approach significantly improving risk stratification and decision-making in clinical practice. Visualization techniques provided insights into the decision-making process, correlating predictions with tumor microenvironment characteristics. In summary, the 3D-MMR-model represents a significant advancement in breast cancer prognosis, combining cutting-edge AI technology with multimodal imaging to deliver precise and clinically relevant predictions of recurrence risk. This innovative approach holds promise for enhancing patient outcomes and guiding individualized treatment plans in breast cancer care.
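
A much-simplified stand-in for the multimodal fusion idea is sketched below: two small 3D convolutional encoders, one per MRI sequence (T1 + C and T2WI), with pooled features concatenated into a recurrence-risk head. The actual 3D-MMR-model uses a modified 3D-UNet for segmentation and a DenseNet121-based network for DFS prediction, which this sketch does not reproduce.

```python
# Toy two-branch 3D model: per-sequence encoders whose pooled features are
# fused to produce a single recurrence-risk score.
import torch
import torch.nn as nn

class TinyEncoder3D(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(8, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        )

    def forward(self, x):
        return self.net(x)

class MultimodalRisk(nn.Module):
    def __init__(self):
        super().__init__()
        self.t1c_enc, self.t2_enc = TinyEncoder3D(), TinyEncoder3D()
        self.head = nn.Linear(32, 1)   # fused 16 + 16 features -> risk score

    def forward(self, t1c, t2):
        fused = torch.cat([self.t1c_enc(t1c), self.t2_enc(t2)], dim=1)
        return self.head(fused)

t1c = torch.randn(2, 1, 32, 64, 64)     # toy T1 + C volumes
t2 = torch.randn(2, 1, 32, 64, 64)      # toy T2WI volumes
print(MultimodalRisk()(t1c, t2).shape)  # torch.Size([2, 1])
```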