
Enhancing InceptionResNet to Diagnose COVID-19 from Medical Images.

Aljawarneh S, Ray I

PubMed · Jul 24 2025
This investigation delves into the diagnosis of COVID-19 from X-ray images by means of an effective deep learning model. In terms of assessing COVID-19 diagnosis models, the methods currently employed tend to focus on the accuracy rate, while neglecting several significant assessment parameters. These parameters, which include precision, sensitivity, specificity, F1-score, and ROC-AUC, significantly influence the performance level of the model. In this paper, we have improved InceptionResNet with restructured parameters, termed the "Enhanced InceptionResNet," which incorporates depth-wise separable convolutions to enhance the efficiency of feature extraction and minimize the consumption of computational resources. For this investigation, three residual network (ResNet) models, namely ResNet, the InceptionResNet model, and the Enhanced InceptionResNet with restructured parameters, were employed for a medical image classification assignment. The performance of each model was evaluated on a balanced dataset of 2600 X-ray images. The models were subsequently assessed for accuracy and loss, as well as subjected to a confusion matrix analysis. The Enhanced InceptionResNet consistently outperformed ResNet and InceptionResNet in terms of validation and testing accuracy, recall, precision, F1-score, and ROC-AUC, demonstrating its superior capacity for identifying pertinent information in the data. In the context of validation and testing accuracy, our Enhanced InceptionResNet repeatedly proved to be more reliable than ResNet (99.0% and 98.35%, respectively), an indication of the former's capacity for the efficient identification of pertinent information in the data and suggesting enhanced feature extraction capabilities. The Enhanced InceptionResNet excelled in COVID-19 diagnosis from chest X-rays, surpassing ResNet and the default InceptionResNet in accuracy, precision, and sensitivity. Despite its computational demands, it shows promise for medical image classification. Future work should leverage larger datasets, cloud platforms, and hyperparameter optimisation to improve performance, especially for distinguishing normal and pneumonia cases.
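The abstract names depth-wise separable convolutions as the key efficiency change. A minimal PyTorch sketch of that building block is shown below; the `DepthwiseSeparableConv` name and the layer sizes are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depth-wise separable convolution: a per-channel (depth-wise) 3x3 conv
    followed by a 1x1 point-wise conv, reducing parameters and FLOPs compared
    with a standard convolution of the same output shape."""
    def __init__(self, in_channels: int, out_channels: int, stride: int = 1):
        super().__init__()
        # groups=in_channels makes the 3x3 conv operate on each channel separately
        self.depthwise = nn.Conv2d(in_channels, in_channels, kernel_size=3,
                                   stride=stride, padding=1, groups=in_channels,
                                   bias=False)
        self.pointwise = nn.Conv2d(in_channels, out_channels, kernel_size=1,
                                   bias=False)
        self.bn = nn.BatchNorm2d(out_channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

# Example feature map passing through the block
x = torch.randn(8, 64, 56, 56)
print(DepthwiseSeparableConv(64, 128)(x).shape)  # torch.Size([8, 128, 56, 56])
```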

Interpretable AI Framework for Secure and Reliable Medical Image Analysis in IoMT Systems.

Matthew UO, Rosa RL, Saadi M, Rodriguez DZ

PubMed · Jul 23 2025
The integration of artificial intelligence (AI) into medical image analysis has transformed healthcare, offering unprecedented precision in diagnosis, treatment planning, and disease monitoring. However, its adoption within the Internet of Medical Things (IoMT) raises significant challenges related to transparency, trustworthiness, and security. This paper introduces a novel Explainable AI (XAI) framework tailored for Medical Cyber-Physical Systems (MCPS), addressing these challenges by combining deep neural networks with symbolic knowledge reasoning to deliver clinically interpretable insights. The framework incorporates an Enhanced Dynamic Confidence-Weighted Attention (Enhanced DCWA) mechanism, which improves interpretability and robustness by dynamically refining attention maps through adaptive normalization and multi-level confidence weighting. Additionally, a Resilient Observability and Detection Engine (RODE) leverages sparse observability principles to detect and mitigate adversarial threats, ensuring reliable performance in dynamic IoMT environments. Evaluations conducted on benchmark datasets, including CheXpert, RSNA Pneumonia Detection Challenge, and NIH Chest X-ray Dataset, demonstrate significant advancements in classification accuracy, adversarial robustness, and explainability. The framework achieves a 15% increase in lesion classification accuracy, a 30% reduction in robustness loss, and a 20% improvement in the Explainability Index compared to state-of-the-art methods.
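The abstract does not specify how the Enhanced DCWA mechanism is computed. Purely as a hedged illustration of confidence-weighted attention fusion in general, one could imagine something like the sketch below; all names, tensor shapes, and the min-max normalization are assumptions, not the paper's design.

```python
import torch
import torch.nn.functional as F

def confidence_weighted_attention(attn_maps: torch.Tensor,
                                  confidences: torch.Tensor) -> torch.Tensor:
    """Hypothetical sketch: fuse L attention maps (B, L, H, W) using per-map
    confidence scores (B, L). Each map is min-max normalized, then the maps are
    combined with softmax-weighted confidences."""
    b, l, h, w = attn_maps.shape
    flat = attn_maps.view(b, l, -1)
    # adaptive (per-map) min-max normalization
    mins = flat.min(dim=-1, keepdim=True).values
    maxs = flat.max(dim=-1, keepdim=True).values
    norm = (flat - mins) / (maxs - mins + 1e-8)
    # confidence weighting across the L maps
    weights = F.softmax(confidences, dim=-1).unsqueeze(-1)   # (B, L, 1)
    fused = (weights * norm).sum(dim=1)                       # (B, H*W)
    return fused.view(b, h, w)

fused = confidence_weighted_attention(torch.rand(2, 4, 14, 14), torch.rand(2, 4))
print(fused.shape)  # torch.Size([2, 14, 14])
```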

Back to the Future-Cardiovascular Imaging From 1966 to Today and Tomorrow.

Wintersperger BJ, Alkadhi H, Wildberger JE

PubMed · Jul 23 2025
This article, on the 60th anniversary of the journal Investigative Radiology, a journal dedicated to cutting-edge imaging technology, discusses key historical milestones in CT and MRI technology, as well as the ongoing advancement of contrast agent development for cardiovascular imaging over the past decades. It specifically highlights recent developments and the current state-of-the-art technology, including photon-counting detector CT and artificial intelligence, which will further push the boundaries of cardiovascular imaging. What were once ideas and visions have become today's clinical reality for the benefit of patients, and imaging technology will continue to evolve and transform modern medicine.

Development of a deep learning model for T1N0 gastric cancer diagnosis using 2.5D radiomic data in preoperative CT images.

He J, Xu J, Chen W, Cao M, Zhang J, Yang Q, Li E, Zhang R, Tong Y, Zhang Y, Gao C, Zhao Q, Xu Z, Wang L, Cheng X, Zheng G, Pan S, Hu C

PubMed · Jul 23 2025
Early detection and precise preoperative staging of early gastric cancer (EGC) are critical. Therefore, this study aims to develop a deep learning model using portal venous phase CT images to accurately distinguish EGC without lymph node metastasis. This study included 3164 patients with gastric cancer (GC) who underwent radical surgery at two medical centers in China from 2006 to 2019. Moreover, 2.5D radiomic data and multi-instance learning (MIL) were the novel approaches applied in this study. With feature selection based on the 2.5D radiomic data and MIL, the ResNet101 model combined with the XGBoost model delivered satisfactory performance for diagnosing pT1N0 GC. Furthermore, the 2.5D MIL-based model demonstrated markedly superior predictive performance compared to traditional radiomics models and clinical models. We first constructed a deep learning prediction model based on 2.5D radiomics and MIL for effectively diagnosing pT1N0 GC patients, which provides valuable information for individualized treatment selection.
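As a rough illustration of the pipeline the abstract describes (ResNet101 features from 2.5D slices, MIL-style bag aggregation, then XGBoost), a hedged sketch might look like the following; the mean-pooling aggregation, image sizes, and variable names are assumptions rather than the study's actual design.

```python
import numpy as np
import torch
import torchvision.models as models
from xgboost import XGBClassifier

# Hypothetical sketch: pool ResNet101 features over a patient's bag of 2.5D CT
# slices (mean pooling as a simple MIL aggregation), then classify with XGBoost.
backbone = models.resnet101(weights=None)
backbone.fc = torch.nn.Identity()            # expose 2048-d features
backbone.eval()

def bag_features(slices: torch.Tensor) -> np.ndarray:
    """slices: (n_slices, 3, 224, 224) CT crops around the tumor."""
    with torch.no_grad():
        feats = backbone(slices)             # (n_slices, 2048)
    return feats.mean(dim=0).numpy()         # mean-pooled bag representation

# X: one pooled feature vector per patient; y: pT1N0 label (random placeholders)
X = np.stack([bag_features(torch.randn(5, 3, 224, 224)) for _ in range(16)])
y = np.random.randint(0, 2, size=16)
clf = XGBClassifier(n_estimators=200, max_depth=3, eval_metric="logloss")
clf.fit(X, y)
print(clf.predict_proba(X)[:3, 1])
```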

Fetal neurobehavior and consciousness: a systematic review of 4D ultrasound evidence and ethical challenges.

Pramono MBA, Andonotopo W, Bachnas MA, Dewantiningrum J, Sanjaya INH, Sulistyowati S, Stanojevic M, Kurjak A

PubMed · Jul 23 2025
Recent advancements in four-dimensional (4D) ultrasonography have enabled detailed observation of fetal behavior <i>in utero</i>, including facial movements, limb gestures, and stimulus responses. These developments have prompted renewed inquiry into whether such behaviors are merely reflexive or represent early signs of integrated neural function. However, the relationship between fetal movement patterns and conscious awareness remains scientifically uncertain and ethically contested. A systematic review was conducted in accordance with PRISMA 2020 guidelines. Four databases (PubMed, Scopus, Embase, Web of Science) were searched for English-language articles published from 2000 to 2025, using keywords including "fetal behavior," "4D ultrasound," "neurodevelopment," and "consciousness." Studies were included if they involved human fetuses, used 4D ultrasound or functional imaging modalities, and offered interpretation relevant to neurobehavioral or ethical analysis. A structured appraisal using AMSTAR-2 was applied to assess study quality. Data were synthesized narratively to map fetal behaviors onto developmental milestones and evaluate their interpretive limits. Seventy-four studies met inclusion criteria, with 23 rated as high-quality. Fetal behaviors such as yawning, hand-to-face movement, and startle responses increased in complexity between 24-34 weeks gestation. These patterns aligned with known neurodevelopmental events, including thalamocortical connectivity and cortical folding. However, no study provided definitive evidence linking observed behaviors to conscious experience. Emerging applications of artificial intelligence in ultrasound analysis were found to enhance pattern recognition but lack external validation. Fetal behavior observed via 4D ultrasound may reflect increasing neural integration but should not be equated with awareness. Interpretations must remain cautious, avoiding anthropomorphic assumptions. Ethical engagement requires attention to scientific limits, sociocultural diversity, and respect for maternal autonomy as imaging technologies continue to evolve.

Mammo-Mamba: A Hybrid State-Space and Transformer Architecture with Sequential Mixture of Experts for Multi-View Mammography

Farnoush Bayatmakou, Reza Taleei, Nicole Simone, Arash Mohammadi

arXiv preprint · Jul 23 2025
Breast cancer (BC) remains one of the leading causes of cancer-related mortality among women, despite recent advances in Computer-Aided Diagnosis (CAD) systems. Accurate and efficient interpretation of multi-view mammograms is essential for early detection, driving a surge of interest in Artificial Intelligence (AI)-powered CAD models. While state-of-the-art multi-view mammogram classification models are largely based on Transformer architectures, their computational complexity scales quadratically with the number of image patches, highlighting the need for more efficient alternatives. To address this challenge, we propose Mammo-Mamba, a novel framework that integrates Selective State-Space Models (SSMs), transformer-based attention, and expert-driven feature refinement into a unified architecture. Mammo-Mamba extends the MambaVision backbone by introducing the Sequential Mixture of Experts (SeqMoE) mechanism through its customized SecMamba block. The SecMamba is a modified MambaVision block that enhances representation learning in high-resolution mammographic images by enabling content-adaptive feature refinement. These blocks are integrated into the deeper stages of MambaVision, allowing the model to progressively adjust feature emphasis through dynamic expert gating, effectively mitigating the limitations of traditional Transformer models. Evaluated on the CBIS-DDSM benchmark dataset, Mammo-Mamba achieves superior classification performance across all key metrics while maintaining computational efficiency.
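The abstract does not detail the SeqMoE gating inside the SecMamba block, but a generic sequential mixture-of-experts block conveys the idea of content-adaptive expert weighting with dynamic gating. The sketch below is an assumption-laden illustration, not the SecMamba block itself.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SeqMoEBlock(nn.Module):
    """Generic mixture-of-experts sketch: a gating network produces per-token
    weights over a small set of expert MLPs, and the weighted mixture refines
    the input features through a residual connection."""
    def __init__(self, dim: int, num_experts: int = 4, hidden: int = 256):
        super().__init__()
        self.gate = nn.Linear(dim, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:          # x: (B, N, dim)
        weights = F.softmax(self.gate(x), dim=-1)                  # (B, N, E)
        expert_out = torch.stack([e(x) for e in self.experts], dim=-1)  # (B, N, dim, E)
        mixed = (expert_out * weights.unsqueeze(2)).sum(dim=-1)    # (B, N, dim)
        return x + mixed                                           # residual refinement

tokens = torch.randn(2, 196, 512)
print(SeqMoEBlock(512)(tokens).shape)  # torch.Size([2, 196, 512])
```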

MaskedCLIP: Bridging the Masked and CLIP Space for Semi-Supervised Medical Vision-Language Pre-training

Lei Zhu, Jun Zhou, Rick Siow Mong Goh, Yong Liu

arXiv preprint · Jul 23 2025
Foundation models have recently gained tremendous popularity in medical image analysis. State-of-the-art methods leverage either paired image-text data via vision-language pre-training or unpaired image data via self-supervised pre-training to learn foundation models with generalizable image features to boost downstream task performance. However, learning foundation models exclusively on either paired or unpaired image data limits their ability to learn richer and more comprehensive image features. In this paper, we investigate a novel task termed semi-supervised vision-language pre-training, aiming to fully harness the potential of both paired and unpaired image data for foundation model learning. To this end, we propose MaskedCLIP, a synergistic masked image modeling and contrastive language-image pre-training framework for semi-supervised vision-language pre-training. The key challenge in combining paired and unpaired image data for learning a foundation model lies in the incompatible feature spaces derived from these two types of data. To address this issue, we propose to connect the masked feature space with the CLIP feature space with a bridge transformer. In this way, the more semantically specific CLIP features can benefit from the more general masked features for semantic feature extraction. We further propose a masked knowledge distillation loss to distill semantic knowledge of original image features in the CLIP feature space back to the predicted masked image features in the masked feature space. With this mutually interactive design, our framework effectively leverages both paired and unpaired image data to learn more generalizable image features for downstream tasks. Extensive experiments on retinal image analysis demonstrate the effectiveness and data efficiency of our method.
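As a hedged sketch of the bridging idea (a small transformer mapping masked-image-model features into the CLIP space, trained with a distillation objective), one might write something like the following; the dimensions, depth, and cosine-style loss are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BridgeDistill(nn.Module):
    """Hypothetical sketch: project masked-image-model (MIM) token features into
    the CLIP dimension, pass them through a small transformer 'bridge', and pull
    the bridged features toward frozen CLIP image features with a distillation loss."""
    def __init__(self, mim_dim: int = 768, clip_dim: int = 512):
        super().__init__()
        self.proj = nn.Linear(mim_dim, clip_dim)
        layer = nn.TransformerEncoderLayer(d_model=clip_dim, nhead=8, batch_first=True)
        self.bridge = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, mim_feats: torch.Tensor, clip_feats: torch.Tensor) -> torch.Tensor:
        bridged = self.bridge(self.proj(mim_feats))                # (B, N, clip_dim)
        # cosine-style distillation: align bridged tokens with CLIP targets
        return 1 - F.cosine_similarity(bridged, clip_feats, dim=-1).mean()

loss = BridgeDistill()(torch.randn(4, 196, 768), torch.randn(4, 196, 512))
print(loss.item())
```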

Preoperative MRI-based radiomics analysis of intra- and peritumoral regions for predicting CD3 expression in early cervical cancer.

Zhang R, Jiang C, Li F, Li L, Qin X, Yang J, Lv H, Ai T, Deng L, Huang C, Xing H, Wu F

PubMed · Jul 23 2025
The study investigates the correlation between CD3 T-cell expression levels and cervical cancer (CC) while developing a magnetic resonance (MR) imaging-based radiomics model for preoperative prediction of CD3 T-cell expression levels. Prognostic correlations between CD3D, CD3E, and CD3G gene expressions and various cancers were analyzed using the Cancer Genome Atlas (TCGA) database. Protein-protein interaction (PPI) analysis via the STRING database identified associations between these genes and T lymphocyte activity. Gene Set Enrichment Analysis (GSEA) revealed immune pathway enrichment by categorizing genes based on CD3D expression levels. Correlations between immune checkpoint molecules and CD3 complex genes were also assessed. The study retrospectively included 202 patients with pathologically confirmed early-stage CC who underwent preoperative MRI, divided into training and test groups. Radiomic features were extracted from the whole-lesion tumor region of interest (ROI_tumor) and from peritumoral regions with 3 mm and 5 mm margins (ROI_3mm and ROI_5mm, respectively). Various machine learning algorithms, including Support Vector Machine (SVM), Logistic Regression, Random Forest, AdaBoost, and Decision Tree, were used to construct radiomics models based on different ROIs, and diagnostic performances were compared to identify the optimal approach. The best-performing algorithm was combined with intra- and peritumoral features and clinically relevant independent risk factors to develop a comprehensive predictive model. Analysis of the TCGA database demonstrated significant associations between CD3D, CD3E, and CD3G expressions and several cancers, including CC (p < 0.05). PPI analysis highlighted connections between these genes and T lymphocyte function, while GSEA indicated enrichment of immune-related pathways linked to CD3D. Immune checkpoint correlations showed positive associations with CD3 complex genes. Radiomics analysis selected 18 features from ROI_tumor and ROI_3mm across MRI sequences. The SVM algorithm achieved the highest predictive performance for CD3 T-cell expression status, with an area under the curve (AUC) of 0.93 in the training group and 0.92 in the test group. This MR-based radiomics model effectively predicts CD3 expression status in patients with early-stage CC, offering a non-invasive tool for preoperative assessment of CD3 expression, but its clinical utility needs further prospective validation.
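The modeling step itself, an SVM over the selected radiomic features, is standard. A minimal scikit-learn sketch is shown below; the random feature values simply stand in for the 18 selected ROI_tumor and ROI_3mm features and are not the study's data.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

# Placeholder feature matrix: 202 patients x 18 selected radiomic features
rng = np.random.default_rng(0)
X = rng.normal(size=(202, 18))
y = rng.integers(0, 2, size=202)          # CD3 expression status (high/low)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True, random_state=0))
model.fit(X_tr, y_tr)
print("test AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```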

Non-invasive meningitis screening in neonates and infants: multicentre international study.

Ajanovic S, Jobst B, Jiménez J, Quesada R, Santos F, Carandell F, Lopez-Azorín M, Valverde E, Ybarra M, Bravo MC, Petrone P, Sial H, Muñoz D, Agut T, Salas B, Carreras N, Alarcón A, Iriondo M, Luaces C, Sidat M, Zandamela M, Rodrigues P, Graça D, Ngovene S, Bramugy J, Cossa A, Mucasse C, Buck WC, Arias S, El Abbass C, Tligi H, Barkat A, Ibáñez A, Parrilla M, Elvira L, Calvo C, Pellicer A, Cabañas F, Bassat Q

PubMed · Jul 23 2025
Meningitis diagnosis requires a lumbar puncture (LP) to obtain cerebrospinal fluid (CSF) for laboratory-based analysis. In high-income settings, LPs are part of the systematic approach to screen for meningitis, and most yield negative results. In low- and middle-income settings, LPs are seldom performed, and suspected cases are often treated empirically. The aim of this study was to validate a non-invasive transfontanellar white blood cell (WBC) counter in CSF to screen for meningitis. We conducted a prospective study across three Spanish hospitals, one Mozambican and one Moroccan hospital (2020-2023). We included patients under 24 months with suspected meningitis, an open fontanelle, and an LP performed within 24 h of recruitment. High-resolution ultrasound (HRUS) images of the CSF were obtained using a customized probe. A deep-learning model was trained to classify CSF patterns based on LP WBC counts, using a 30 cells/mm³ threshold. The algorithm was applied to 3782 images from 76 patients. It correctly classified 17/18 CSF samples with ≥30 WBC/mm³ and 55/58 controls (sensitivity 94.4%, specificity 94.8%). The only false negative was paired with a traumatic LP with 40 corrected WBC/mm³. This non-invasive device could be an accurate tool for screening meningitis in neonates and young infants, modulating LP indications. Our non-invasive, high-resolution ultrasound device achieved 94% accuracy in detecting elevated leukocyte counts in neonates and infants with suspected meningitis, compared to the gold standard (lumbar puncture and laboratory analysis). This first-in-class screening device introduces the first non-invasive method for neonatal and infant meningitis screening, potentially modulating lumbar puncture indications. This technology could substantially reduce lumbar punctures in low-suspicion cases and provides a viable alternative for critically ill patients worldwide or in settings where lumbar punctures are unfeasible, especially in low-income countries.
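The reported sensitivity and specificity follow directly from the confusion counts given in the abstract, as the short check below shows.

```python
# Sensitivity and specificity recomputed from the counts in the abstract:
# 17/18 CSF samples with >=30 WBC/mm^3 detected, 55/58 controls correctly classified.
true_positives, positives = 17, 18
true_negatives, negatives = 55, 58

sensitivity = true_positives / positives      # 0.944...
specificity = true_negatives / negatives      # 0.948...
print(f"sensitivity: {sensitivity:.1%}, specificity: {specificity:.1%}")
# sensitivity: 94.4%, specificity: 94.8%
```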

CT-based intratumoral and peritumoral radiomics to predict the treatment response to hepatic arterial infusion chemotherapy plus lenvatinib and PD-1 in high-risk hepatocellular carcinoma cases: a multi-center study.

Liu Z, Li X, Huang Y, Chang X, Zhang H, Wu X, Diao Y, He F, Sun J, Feng B, Liang H

PubMed · Jul 23 2025
Noninvasive and precise tools for treatment response estimation in patients with high-risk hepatocellular carcinoma (HCC) who could benefit from hepatic arterial infusion chemotherapy (HAIC) plus lenvatinib and humanized programmed death receptor-1 inhibitors (PD-1) (HAIC-LEN-PD1) are lacking. This study aimed to evaluate the predictive potential of intratumoral and peritumoral radiomics for preoperative treatment response assessment to HAIC-LEN-PD1 in high-risk HCC cases. In total, 630 high-risk HCC cases administered HAIC-LEN-PD1 at three institutions were retrospectively identified and assigned to training, validation, and external test sets. A total of 1834 radiomic features were obtained from the intratumoral and peritumoral regions, respectively, and radiomics models were established using five classifiers. Based on the optimal model, a nomogram was developed and evaluated using areas under the curves (AUCs), calibration curves, and decision curve analysis (DCA). Overall survival (OS) and progression-free survival (PFS) were assessed by Kaplan-Meier curves. The Intratumoral + Peritumoral 10 mm (Intra + Peri10) radiomics models were superior to the intratumoral and peritumoral models, with AUCs of 0.919 (95%CI 0.889-0.949) in the training set, 0.874 (95%CI 0.812-0.936) in the validation set, and 0.893 (95%CI 0.839-0.948) in the external test set. The nomogram had good calibration ability and clinical value, with AUCs of 0.936 (95%CI 0.907-0.965) in the training set, 0.878 (95%CI 0.916-0.940) in the validation set, and 0.902 (95%CI 0.848-0.957) in the external test set. The Kaplan-Meier analysis showed that high-score patients had significantly shorter OS and PFS than low-score patients (median OS: 11.7 vs. 29.6 months, whole set, p < 0.001; median PFS: 6.0 vs. 12.0 months, whole set, p < 0.001). The Intra + Peri10 model can effectively predict the treatment response of high-risk HCC cases administered HAIC-LEN-PD1. The nomogram could provide an effective tool for treatment response evaluation and risk stratification.
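The survival comparison reported at the end can be reproduced in outline with a standard Kaplan-Meier and log-rank analysis; the sketch below uses synthetic placeholder data and illustrative group labels, not the study's cohort.

```python
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Stratify patients by radiomics score (high vs. low) and compare OS curves.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "months": np.concatenate([rng.exponential(12, 100), rng.exponential(30, 100)]),
    "event":  rng.integers(0, 2, 200),          # 1 = death observed, 0 = censored
    "group":  ["high-score"] * 100 + ["low-score"] * 100,
})

kmf = KaplanMeierFitter()
for name, sub in df.groupby("group"):
    kmf.fit(sub["months"], event_observed=sub["event"], label=name)
    print(name, "median OS (months):", kmf.median_survival_time_)

high, low = df[df.group == "high-score"], df[df.group == "low-score"]
res = logrank_test(high["months"], low["months"], high["event"], low["event"])
print("log-rank p-value:", res.p_value)
```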