
The Role of Artificial Intelligence, Including Endoscopic Diagnosis, in the Prediction of Presence, Bleeding, and Mortality of Esophageal Varices.

Furuichi Y, Nishiguchi R, Furuichi Y, Kobayashi S, Fujiwara T, Sato K

PubMed · Sep 18 2025
Esophagogastric varices (EGVs) are a complication of progressive liver cirrhosis, and because variceal bleeding can be fatal, regular endoscopic surveillance is necessary. With the recent development of artificial intelligence (AI), it is beginning to be applied to predicting the presence of EGVs, predicting bleeding, and supporting diagnosis and prognosis. Based on previous reports, application methods of AI can be classified into the following four categories: (1) noninvasive prediction using clinical data obtained from clinical records, such as laboratory data, past history, and present illness; (2) invasive detection and prediction using endoscopy and computed tomography (CT); (3) invasive prediction using multimodal AI (clinical data and endoscopy); (4) invasive virtual measurement on endoscopic and CT images. These methods currently allow AI to be used in the following ways: (1) prediction of EGV presence, variceal grade, bleeding risk, and survival rate; (2) detection and diagnosis of esophageal varices (EVs); (3) prediction of bleeding within 1 year; (4) prediction of variceal diameter and portal pressure gradient. This review explores current studies on AI applications in assessing EGVs, highlighting their benefits, limitations, and future directions.

Deep Learning for Automated Measures of SUV and Molecular Tumor Volume in [⁶⁸Ga]PSMA-11 or [¹⁸F]DCFPyL, [¹⁸F]FDG, and [¹⁷⁷Lu]Lu-PSMA-617 Imaging with Global Threshold Regional Consensus Network.

Jackson P, Buteau JP, McIntosh L, Sun Y, Kashyap R, Casanueva S, Ravi Kumar AS, Sandhu S, Azad AA, Alipour R, Saghebi J, Kong G, Jewell K, Eifer M, Bollampally N, Hofman MS

PubMed · Sep 18 2025
Metastatic castration-resistant prostate cancer has a high rate of mortality with a limited number of effective treatments after hormone therapy. Radiopharmaceutical therapy with [¹⁷⁷Lu]Lu-prostate-specific membrane antigen-617 (LuPSMA) is one treatment option; however, response varies and is partly predicted by PSMA expression and metabolic activity, assessed on [⁶⁸Ga]PSMA-11 or [¹⁸F]DCFPyL and [¹⁸F]FDG PET, respectively. Automated methods to measure these on PET imaging have previously yielded modest accuracy. Refining computational workflows and standardizing approaches may improve patient selection and prognostication for LuPSMA therapy. Methods: PET/CT and quantitative SPECT/CT images from an institutional cohort of patients staged for LuPSMA therapy were annotated for total disease burden. In total, 676 [⁶⁸Ga]PSMA-11 or [¹⁸F]DCFPyL PET, 390 [¹⁸F]FDG PET, and 477 LuPSMA SPECT images were used to develop the automated workflow, which was tested on 56 cases with externally referred PET/CT staging. A segmentation framework, the Global Threshold Regional Consensus Network, was developed based on nnU-Net, with processing refinements to improve boundary definition and overall label accuracy. Results: Using the model to contour disease extent, the mean volumetric Dice similarity coefficient was 0.94 for [⁶⁸Ga]PSMA-11 or [¹⁸F]DCFPyL PET, 0.84 for [¹⁸F]FDG PET, and 0.97 for LuPSMA SPECT. On external test cases, Dice accuracy was 0.95 and 0.84 on PSMA and FDG PET, respectively. The refined models yielded consistent improvements compared with nnU-Net, with an increase of 3%-5% in Dice accuracy and 10%-17% in surface agreement. Quantitative biomarkers were compared with a human-defined ground truth using the Pearson coefficient, with scores for [⁶⁸Ga]PSMA-11 or [¹⁸F]DCFPyL, [¹⁸F]FDG, and LuPSMA, respectively, of 0.98, 0.94, and 0.99 for disease volume; 0.98, 0.88, and 0.99 for SUVmean; 0.96, 0.91, and 0.99 for SUVmax; and 0.97, 0.96, and 0.99 for volume-intensity product. Conclusion: Delineation of disease extent and tracer avidity can be performed with a high degree of accuracy using automated deep learning methods. By incorporating threshold-based postprocessing, the tools can closely match the output of manual workflows. Pretrained models and scripts to adapt to institutional data are provided for open use.
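As a rough illustration of the quantitative outputs reported above, the snippet below computes a volumetric Dice coefficient (2·|A∩B| / (|A|+|B|)) and mask-based uptake biomarkers (SUVmean, SUVmax, segmented volume, and their product) from a binary segmentation and an SUV-calibrated volume; the array names and voxel-volume handling are assumptions for this sketch, not the paper's released code.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Volumetric Dice similarity between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

def uptake_biomarkers(suv: np.ndarray, mask: np.ndarray, voxel_volume_ml: float) -> dict:
    """SUVmean, SUVmax, segmented volume, and volume-intensity product
    (volume x SUVmean, analogous to total lesion uptake) inside a mask."""
    voxels = suv[mask.astype(bool)]
    volume_ml = voxels.size * voxel_volume_ml
    suv_mean = float(voxels.mean()) if voxels.size else 0.0
    suv_max = float(voxels.max()) if voxels.size else 0.0
    return {
        "volume_ml": volume_ml,
        "suv_mean": suv_mean,
        "suv_max": suv_max,
        "volume_intensity_product": volume_ml * suv_mean,
    }

# demo on a synthetic SUV volume with a cubic "lesion"
suv = np.random.rand(32, 32, 32) * 5.0
truth = np.zeros_like(suv, dtype=bool); truth[10:20, 10:20, 10:20] = True
pred = np.zeros_like(truth); pred[11:20, 10:20, 10:20] = True
print(dice_coefficient(pred, truth), uptake_biomarkers(suv, pred, voxel_volume_ml=0.064))
```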

HybridMamba: A Dual-domain Mamba for 3D Medical Image Segmentation

Weitong Wu, Zhaohu Xing, Jing Gong, Qin Peng, Lei Zhu

arXiv preprint · Sep 18 2025
In the domain of 3D biomedical image segmentation, Mamba exhibits superior performance because it addresses the limitations in modeling long-range dependencies inherent to CNNs and mitigates the substantial computational overhead associated with Transformer-based frameworks when processing high-resolution medical volumes. However, attaching undue importance to global context modeling may inadvertently compromise critical local structural information, leading to boundary ambiguity and regional distortion in segmentation outputs. Therefore, we propose HybridMamba, an architecture employing two complementary mechanisms: 1) a feature scanning strategy that progressively integrates representations from both axial-traversal and local-adaptive pathways to harmonize the relationship between local and global representations, and 2) a gated module combining spatial and frequency analysis for comprehensive contextual modeling. In addition, we collect a multi-center CT dataset related to lung cancer. Experiments on MRI and CT datasets demonstrate that HybridMamba significantly outperforms state-of-the-art methods in 3D medical image segmentation.
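The gated spatial-frequency idea can be sketched as a small PyTorch block that blends a local convolutional branch with a global Fourier-domain branch through a learned gate; the layer sizes, the per-channel complex filter, and the gating form are illustrative assumptions rather than the actual HybridMamba module.

```python
import torch
import torch.nn as nn

class GatedSpatialFrequencyBlock(nn.Module):
    """Fuses a local spatial branch with a global frequency branch via a
    learned sigmoid gate. The frequency branch applies a learnable
    per-channel complex scaling in the Fourier domain (a simplified
    global filter); the published architecture differs in detail."""

    def __init__(self, channels: int):
        super().__init__()
        self.spatial = nn.Conv3d(channels, channels, kernel_size=3, padding=1)
        # Per-channel (real, imag) weights for the frequency branch.
        self.freq_weight = nn.Parameter(torch.ones(channels, 2))
        self.gate = nn.Sequential(
            nn.Conv3d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, C, D, H, W)
        local = self.spatial(x)
        spec = torch.fft.rfftn(x, dim=(-3, -2, -1))
        w = torch.view_as_complex(self.freq_weight)[None, :, None, None, None]
        global_branch = torch.fft.irfftn(spec * w, s=x.shape[-3:], dim=(-3, -2, -1))
        g = self.gate(torch.cat([local, global_branch], dim=1))
        return g * local + (1.0 - g) * global_branch

x = torch.randn(1, 16, 32, 64, 64)
print(GatedSpatialFrequencyBlock(16)(x).shape)  # torch.Size([1, 16, 32, 64, 64])
```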

Artificial Intelligence in Cardiac Amyloidosis: A Systematic Review and Meta-Analysis of Diagnostic Accuracy Across Imaging and Non-Imaging Modalities

Kumbalath, R. M., Challa, D., Patel, M. K., Prajapati, S. D., Kumari, K., Mehan, A., Chopra, R., Somegowda, Y. M., Khan, R., Ramteke, H. D., Juneja, M.

medRxiv preprint · Sep 18 2025
Introduction: Cardiac amyloidosis (CA) is an underdiagnosed infiltrative cardiomyopathy associated with poor outcomes if not detected early. Artificial intelligence (AI) has emerged as a promising adjunct to conventional diagnostics, leveraging imaging and non-imaging data to improve recognition of CA. However, evidence on the comparative diagnostic performance of AI across modalities remains fragmented. This meta-analysis aimed to synthesize and quantify the diagnostic performance of AI models in CA across multiple modalities. Methods: A systematic literature search was conducted in PubMed, Embase, Web of Science, and Cochrane Library from inception to August 2025. Only published observational studies applying AI to the diagnosis of CA were included. Data were extracted on patient demographics, AI algorithms, modalities, and diagnostic performance metrics. Risk of bias was assessed using QUADAS-2, and certainty of evidence was graded using GRADE. Random-effects meta-analysis (REML) was performed to pool accuracy, precision, recall, F1-score, and area under the curve (AUC). Results: From 115 screened studies, 25 observational studies met the inclusion criteria, encompassing a total of 589,877 patients with a male predominance (372,458 males, 63.2%; 221,818 females, 36.6%). A wide range of AI algorithms were applied, most notably convolutional neural networks (CNNs), which accounted for 526,879 patients, followed by 3D-ResNet architectures (56,872 patients), hybrid segmentation-classification networks (3,747), and smaller studies employing random forests (636), Res-CRNN (89), and traditional machine learning approaches (769). Data modalities included ECG (341,989 patients), echocardiography (>70,000 patients across multiple cohorts), scintigraphy (~24,000 patients), cardiac MRI (~900 patients), CT (299 patients), and blood tests (261 patients). Pooled diagnostic performance across all modalities demonstrated an overall accuracy of 84.0% (95% CI: 74.6-93.5), precision of 85.8% (95% CI: 79.6-92.0), recall (sensitivity) of 89.6% (95% CI: 85.7-93.4), and an F1-score of 87.2% (95% CI: 81.8-92.6). Area under the curve (AUC) analysis revealed modality-specific variation, with scintigraphy achieving the highest pooled AUC (99.7%), followed by CT (98.0%), MRI (96.8%), blood tests (95.0%), echocardiography (94.3%), and ECG (88.5%). Subgroup analysis confirmed significant differences between modalities (p < 0.001), with MRI and scintigraphy showing consistent high performance and low-to-moderate heterogeneity, while echocardiography displayed moderate accuracy but marked variability, and ECG demonstrated the lowest and most heterogeneous results. Conclusion: AI demonstrates strong potential for improving CA diagnosis, with MRI and scintigraphy providing the most reliable performance, echocardiography offering an accessible but heterogeneous option, and ECG models remaining least consistent. While promising, future prospective multicenter studies are needed to validate AI models, improve subtype discrimination, and optimize multimodal integration for real-world clinical use.
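For readers unfamiliar with random-effects pooling, a minimal sketch is shown below; it uses the simpler DerSimonian-Laird estimator rather than the REML approach used in the review, and the study estimates and standard errors in the demo call are made up.

```python
import numpy as np

def random_effects_pool(estimates, standard_errors):
    """DerSimonian-Laird random-effects pooling of study-level estimates.
    Returns the pooled estimate, its 95% CI, and tau^2 (between-study variance)."""
    y = np.asarray(estimates, dtype=float)
    v = np.asarray(standard_errors, dtype=float) ** 2
    w = 1.0 / v                                     # fixed-effect weights
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)              # Cochran's Q
    df = len(y) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                   # heterogeneity estimate
    w_star = 1.0 / (v + tau2)                       # random-effects weights
    pooled = np.sum(w_star * y) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), tau2

# e.g. pooling hypothetical per-study accuracies (as proportions) with their SEs
print(random_effects_pool([0.84, 0.90, 0.78, 0.88], [0.03, 0.02, 0.05, 0.04]))
```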

Habitat-aware radiomics and adaptive 2.5D deep learning predict treatment response and long-term survival in ESCC patients undergoing neoadjuvant chemoimmunotherapy.

Gao X, Yang L, She T, Wang F, Ding H, Lu Y, Xu Y, Wang Y, Li P, Duan X, Leng X

PubMed · Sep 17 2025
Current radiomic approaches inadequately resolve spatial intratumoral heterogeneity (ITH) in esophageal squamous cell carcinoma (ESCC), limiting neoadjuvant chemoimmunotherapy (NACI) response prediction. We propose an interpretable multimodal framework to: (1) quantitatively map intra-/peritumoral heterogeneity via voxel-wise habitat radiomics; (2) model cross-sectional tumor biology using 2.5D deep learning; and (3) establish mechanism-driven biomarkers via SHAP interpretability to identify resistance-linked subregions. This dual-center retrospective study analyzed 269 treatment-naïve ESCC patients with baseline PET/CT (training: n = 144; validation: n = 62; test: n = 63). Habitat radiomics delineated tumor subregions via K-means clustering (Calinski-Harabasz-optimized) on PET/CT, extracting 1,834 radiomic features per modality. A multi-stage pipeline (univariate filtering, mRMR, LASSO regression) selected 32 discriminative features. The 2.5D model aggregated ± 4 peri-tumoral slices, fusing PET/CT via MixUp channels using a fine-tuned ResNet50 (ImageNet-pretrained), with multi-instance learning (MIL) translating slice-level features to patient-level predictions. Habitat features, MIL signatures, and clinical variables were integrated via five-classifier ensemble (ExtraTrees/SVM/RandomForest) and Crossformer architecture (SMOTE-balanced). Validation included AUC, sensitivity, specificity, calibration curves, decision curve analysis (DCA), survival metrics (C-index, Kaplan-Meier), and interpretability (SHAP, Grad-CAM). Habitat radiomics achieved superior validation AUC (0.865, 95% CI: 0.778-0.953), outperforming conventional radiomics (ΔAUC + 3.6%, P < 0.01) and clinical models (ΔAUC + 6.4%, P < 0.001). SHAP identified the invasive front (H2) as dominant predictor (40% of top features), with wavelet_LHH_firstorder_Entropy showing highest impact (SHAP = + 0.42). The 2.5D MIL model demonstrated strong generalizability (validation AUC: 0.861). The combined model achieved state-of-the-art test performance (AUC = 0.824, sensitivity = 0.875) with superior calibration (Hosmer-Lemeshow P > 0.800), effective survival stratification (test C-index: 0.809), and 23-41% net benefit improvement in DCA. Integrating habitat radiomics and 2.5D deep learning enables interpretable dual diagnostic-prognostic stratification in ESCC, advancing precision oncology by decoding spatial heterogeneity.
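A minimal sketch of the habitat-definition step described above, clustering intratumoral voxels with K-means and choosing the number of habitats by the Calinski-Harabasz index; the two-channel PET/CT intensity features, the search range for k, and the synthetic demo volumes are assumptions, not the authors' exact pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import calinski_harabasz_score
from sklearn.preprocessing import StandardScaler

def habitat_clustering(pet, ct, mask, k_range=range(2, 7), seed=0):
    """Voxel-wise habitat clustering inside a tumour mask.
    Each intratumoral voxel is described by its PET and CT intensity,
    K-means is run for several k, and the Calinski-Harabasz index picks k."""
    idx = mask.astype(bool)
    feats = np.column_stack([pet[idx], ct[idx]])
    feats = StandardScaler().fit_transform(feats)
    best_k, best_score, best_labels = None, -np.inf, None
    for k in k_range:
        labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(feats)
        score = calinski_harabasz_score(feats, labels)
        if score > best_score:
            best_k, best_score, best_labels = k, score, labels
    habitat_map = np.zeros(mask.shape, dtype=np.int16)
    habitat_map[idx] = best_labels + 1   # 0 = background, 1..k = habitats
    return habitat_map, best_k

# demo on synthetic volumes
pet = np.random.rand(32, 32, 32); ct = np.random.rand(32, 32, 32)
mask = np.zeros(pet.shape, dtype=bool); mask[8:24, 8:24, 8:24] = True
hab, k = habitat_clustering(pet, ct, mask)
print(k, np.unique(hab))
```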

Taylor-Series Expanded Kolmogorov-Arnold Network for Medical Imaging Classification

Kaniz Fatema, Emad A. Mohammed, Sukhjit Singh Sehra

arXiv preprint · Sep 17 2025
Effective and interpretable classification of medical images is a challenge in computer-aided diagnosis, especially in resource-limited clinical settings. This study introduces spline-based Kolmogorov-Arnold Networks (KANs) for accurate medical image classification with limited, diverse datasets. The models include SBTAYLOR-KAN, integrating B-splines with Taylor series; SBRBF-KAN, combining B-splines with Radial Basis Functions; and SBWAVELET-KAN, embedding B-splines in Morlet wavelet transforms. These approaches leverage spline-based function approximation to capture both local and global nonlinearities. The models were evaluated on brain MRI, chest X-rays, tuberculosis X-rays, and skin lesion images without preprocessing, demonstrating the ability to learn directly from raw data. Extensive experiments, including cross-dataset validation and data reduction analysis, showed strong generalization and stability. SBTAYLOR-KAN achieved up to 98.93% accuracy, with a balanced F1-score, maintaining over 86% accuracy using only 30% of the training data across three datasets. Despite class imbalance in the skin cancer dataset, experiments on both imbalanced and balanced versions showed SBTAYLOR-KAN outperforming other models, achieving 68.22% accuracy. Unlike traditional CNNs, which require millions of parameters (e.g., ResNet50 with 24.18M), SBTAYLOR-KAN achieves comparable performance with just 2,872 trainable parameters, making it more suitable for constrained medical environments. Gradient-weighted Class Activation Mapping (Grad-CAM) was used for interpretability, highlighting relevant regions in medical images. This framework provides a lightweight, interpretable, and generalizable solution for medical image classification, addressing the challenges of limited datasets and data-scarce scenarios in clinical AI applications.
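To make the Taylor-series idea concrete, here is a toy KAN-style layer in which every input-output edge learns its own truncated polynomial expansion; it covers only the Taylor component (the paper additionally blends B-spline bases), and the degree, initialisation, and two-layer demo model are illustrative.

```python
import torch
import torch.nn as nn

class TaylorKANLayer(nn.Module):
    """Minimal KAN-style layer: each input-output edge gets its own learnable
    truncated Taylor (polynomial) expansion instead of a fixed activation."""

    def __init__(self, in_features: int, out_features: int, degree: int = 3):
        super().__init__()
        self.degree = degree
        # One coefficient per (output, input, power-of-x) triple.
        self.coeffs = nn.Parameter(0.01 * torch.randn(out_features, in_features, degree + 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:     # x: (batch, in_features)
        powers = torch.stack([x ** p for p in range(self.degree + 1)], dim=-1)
        # Sum over inputs and polynomial terms: (B, I, P) x (O, I, P) -> (B, O)
        return torch.einsum("bip,oip->bo", powers, self.coeffs)

model = nn.Sequential(TaylorKANLayer(64, 16), nn.LayerNorm(16), TaylorKANLayer(16, 2))
print(model(torch.randn(8, 64)).shape)  # torch.Size([8, 2])
```

Such a layer keeps the parameter count at out_features × in_features × (degree + 1), which is how a model of this kind can stay in the low thousands of trainable parameters.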

DBCM-Net: dual backbone cascaded multi-convolutional segmentation network for medical image segmentation.

Wang X, Li B, Ma J, Huo L, Tian X

PubMed · Sep 17 2025
Medical image segmentation plays a vital role in diagnosis, treatment planning, and disease monitoring. However, endoscopic and dermoscopic images often exhibit blurred boundaries and low contrast, presenting a significant challenge for precise segmentation. Moreover, single encoder-decoder architectures suffer from inherent limitations, resulting in the loss of either fine-grained details or global context. Some dual-encoder models yield inaccurate results due to mismatched receptive fields and overly simplistic fusion strategies. To overcome these issues, we present the Dual Backbone Cascaded Multi-Convolutional Segmentation Network (DBCM-Net). Our approach employs a Multi-Axis Vision Transformer and a Vision Mamba encoder to extract semantic features at multiple scales, with a cascaded design that enables information sharing between the two backbones. We introduce the Global and Local Fusion Attention Block (GLFAB) to generate attention masks that seamlessly integrate global context with local detail, producing more precise feature maps. Additionally, we incorporate a Depthwise Separable Convolution Attention Module (DSCAM) within the encoders to strengthen the model's ability to capture critical features. A Feature Refinement Fusion Block (FRFB) is further applied to refine these feature maps before subsequent processing. The cascaded network architecture synergistically combines the complementary strengths of both encoders. We rigorously evaluated our model on three distinct datasets, achieving Dice coefficients of 94.93% on the CVC-ClinicDB polyp dataset, 91.93% on ISIC2018, and 92.73% on ACDC, each surpassing current state-of-the-art methods. Extensive experiments demonstrate that the proposed method excels in segmentation accuracy and preserves edge details effectively.
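A rough sketch of the kind of gated fusion GLFAB performs, predicting an attention mask from the concatenated global and local feature maps and using it to blend them; the layer choices and 2D shapes are assumptions for illustration, not the published block.

```python
import torch
import torch.nn as nn

class GatedFusionAttention(nn.Module):
    """Predicts a per-pixel attention mask from concatenated global and local
    feature maps, then blends the two maps with that mask."""

    def __init__(self, channels: int):
        super().__init__()
        self.mask_head = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, global_feat: torch.Tensor, local_feat: torch.Tensor) -> torch.Tensor:
        attn = self.mask_head(torch.cat([global_feat, local_feat], dim=1))
        return attn * global_feat + (1.0 - attn) * local_feat

g = torch.randn(2, 32, 64, 64)
l = torch.randn(2, 32, 64, 64)
print(GatedFusionAttention(32)(g, l).shape)  # torch.Size([2, 32, 64, 64])
```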

Consistent View Alignment Improves Foundation Models for 3D Medical Image Segmentation

Puru Vaish, Felix Meister, Tobias Heimann, Christoph Brune, Jelmer M. Wolterink

arXiv preprint · Sep 17 2025
Many recent approaches in representation learning implicitly assume that uncorrelated views of a data point are sufficient to learn meaningful representations for various downstream tasks. In this work, we challenge this assumption and demonstrate that meaningful structure in the latent space does not emerge naturally; instead, it must be explicitly induced. We propose a method that aligns representations from different views of the data to capture complementary information without inducing false positives. Our experiments show that our proposed self-supervised learning method, Consistent View Alignment, improves performance on downstream tasks, highlighting the critical role of structured view alignment in learning effective representations. Our method achieved first and second place in the MICCAI 2025 SSL3D challenge when using a Primus vision transformer and a ResEnc convolutional neural network, respectively. The code and pretrained model weights are released at https://github.com/Tenbatsu24/LatentCampus.
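The core alignment idea can be written as a simple loss that pulls paired view representations together; the cosine-similarity form below is a generic stand-in, not the exact Consistent View Alignment objective, and the 256-dimensional embeddings in the demo are arbitrary.

```python
import torch
import torch.nn.functional as F

def view_alignment_loss(z1: torch.Tensor, z2: torch.Tensor) -> torch.Tensor:
    """Aligns paired representations of two views of the same volume by
    maximising their cosine similarity (a generic negative-free alignment term)."""
    z1 = F.normalize(z1, dim=-1)
    z2 = F.normalize(z2, dim=-1)
    return (1.0 - (z1 * z2).sum(dim=-1)).mean()

# two augmented views of the same batch of patches, encoded to 256-d vectors
z_a, z_b = torch.randn(16, 256), torch.randn(16, 256)
print(view_alignment_loss(z_a, z_b))
```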

¹⁸F-FDG PET/CT-based Radiomics Analysis of Different Machine Learning Models for Predicting Pathological Highly Invasive Non-small Cell Lung Cancer.

Li Y, Shen MJ, Yi JW, Zhao QQ, Zhao QP, Hao LY, Qi JJ, Li WH, Wu XD, Zhao L, Wang Y

PubMed · Sep 17 2025
This study aimed to develop and validate machine learning models integrating clinicoradiological and radiomic features from 2-[¹⁸F]-fluoro-2-deoxy-D-glucose (¹⁸F-FDG) positron emission tomography/computed tomography (PET/CT) to predict pathological high invasiveness in cT1-sized (tumor size ≤ 3 cm) non-small cell lung cancer (NSCLC). We retrospectively reviewed 1459 patients with NSCLC (633 with pathological high invasiveness and 826 with pathological non-high invasiveness) from two medical centers. Patients with cT1-sized NSCLC were included. 1145 radiomic features were extracted per modality (PET and CT) from each patient. Optimal predictors were selected to construct a radiomics score (Rad-score) for the PET/CT radiomics model. A combined model incorporating significant clinicoradiological features and the Rad-score was developed. Logistic regression (LR), random forest (RF), support vector machine (SVM), and extreme gradient boosting (XGBoost) algorithms were used to train the combined model. Model performance was assessed using the area under the receiver operating characteristic (ROC) curve (AUC), calibration curves, and decision curve analysis (DCA). Shapley Additive Explanations (SHAP) was applied to visualize the prediction process. The radiomics model was built using 11 radiomic features, achieving AUCs of 0.851 (training), 0.859 (internal validation), and 0.829 (external validation). Among all models, the XGBoost combined model demonstrated the best predictive performance, with AUCs of 0.958, 0.919, and 0.903, respectively, along with good calibration and high net benefit. The XGBoost combined model showed strong performance in predicting pathological high invasiveness in cT1-sized NSCLC.
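A compact sketch of the two-stage design described above, building a sparse radiomic signature and then feeding it, together with clinical features, to an XGBoost classifier; the synthetic data, the use of L1-penalised logistic regression as the LASSO step, and all hyperparameters are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
n = 400
radiomic = rng.normal(size=(n, 50))          # stand-in for selected PET/CT features
clinical = rng.normal(size=(n, 5))           # stand-in for clinicoradiological features
y = (radiomic[:, 0] + clinical[:, 0] + rng.normal(size=n) > 0).astype(int)

X_tr, X_te, c_tr, c_te, y_tr, y_te = train_test_split(
    radiomic, clinical, y, test_size=0.3, random_state=0, stratify=y)

# Step 1: L1-penalised logistic regression yields a sparse radiomic signature (Rad-score).
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X_tr, y_tr)
rad_tr = lasso.decision_function(X_tr).reshape(-1, 1)
rad_te = lasso.decision_function(X_te).reshape(-1, 1)

# Step 2: combine the Rad-score with clinical features in an XGBoost classifier.
combined = XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.05)
combined.fit(np.hstack([rad_tr, c_tr]), y_tr)
prob = combined.predict_proba(np.hstack([rad_te, c_te]))[:, 1]
print("AUC:", roc_auc_score(y_te, prob))
```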

Robust and explainable framework to address data scarcity in diagnostic imaging.

Zhao Z, Alzubaidi L, Zhang J, Duan Y, Naseem U, Gu Y

PubMed · Sep 17 2025
Deep learning has significantly advanced automatic medical diagnostics, relieving pressure on clinical staff, yet the persistent challenge of data scarcity in this area hampers further improvement and application. To address this gap, we introduce a novel ensemble framework called 'Efficient Transfer and Self-supervised Learning based Ensemble Framework' (ETSEF). ETSEF leverages features from multiple pre-trained deep learning models to efficiently learn powerful representations from a limited number of data samples. To the best of our knowledge, ETSEF is the first strategy that combines two pre-training methodologies (transfer learning and self-supervised learning) with ensemble learning approaches. Various data enhancement techniques, including data augmentation, feature fusion, feature selection, and decision fusion, have also been deployed to maximise the efficiency and robustness of the ETSEF model. Five independent medical imaging tasks, covering endoscopy, breast cancer detection, monkeypox detection, brain tumour detection, and glaucoma detection, were used to demonstrate ETSEF's effectiveness and robustness. Facing limited sample numbers and challenging medical tasks, ETSEF demonstrated its effectiveness by improving diagnostic accuracy by up to 13.3% compared with strong ensemble baseline models and up to 14.4% compared with recent state-of-the-art methods. Moreover, we emphasise the robustness and trustworthiness of the ETSEF method through various explainable artificial intelligence techniques for vision, including Grad-CAM, SHAP, and t-SNE. Compared with large-scale deep learning models, ETSEF can be flexibly deployed and maintains superior performance for challenging medical imaging tasks, demonstrating potential for application in areas lacking training data. The code is available at GitHub ETSEF.
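The feature-fusion plus decision-fusion idea can be illustrated with a small scikit-learn example: features from two (here synthetic) pre-trained backbones are concatenated, several heterogeneous classifiers are trained, and their predicted probabilities are averaged; the base learners and data are stand-ins, not the actual ETSEF configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
n = 300
# Stand-ins for features extracted by a transfer-learned backbone and a
# self-supervised backbone; in ETSEF these come from pre-trained deep models.
feat_transfer = rng.normal(size=(n, 128))
feat_ssl = rng.normal(size=(n, 128))
y = (feat_transfer[:, 0] + feat_ssl[:, 0] > 0).astype(int)

X = np.hstack([feat_transfer, feat_ssl])          # feature fusion
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1, stratify=y)

# Decision fusion: average the predicted probabilities of heterogeneous base learners.
learners = [
    LogisticRegression(max_iter=1000),
    RandomForestClassifier(n_estimators=200, random_state=1),
    SVC(probability=True, random_state=1),
]
probs = [clf.fit(X_tr, y_tr).predict_proba(X_te)[:, 1] for clf in learners]
fused = np.mean(probs, axis=0)
print("ensemble accuracy:", accuracy_score(y_te, fused > 0.5))
```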