
Kumbalath, R. M., Challa, D., Patel, M. K., Prajapati, S. D., Kumari, K., Mehan, A., Chopra, R., Somegowda, Y. M., Khan, R., Ramteke, H. D., Juneja, M.

medRxiv preprint · Sep 18, 2025
Introduction: Cardiac amyloidosis (CA) is an underdiagnosed infiltrative cardiomyopathy associated with poor outcomes if not detected early. Artificial intelligence (AI) has emerged as a promising adjunct to conventional diagnostics, leveraging imaging and non-imaging data to improve recognition of CA. However, evidence on the comparative diagnostic performance of AI across modalities remains fragmented. This meta-analysis aimed to synthesize and quantify the diagnostic performance of AI models in CA across multiple modalities.
Methods: A systematic literature search was conducted in PubMed, Embase, Web of Science, and Cochrane Library from inception to August 2025. Only published observational studies applying AI to the diagnosis of CA were included. Data were extracted on patient demographics, AI algorithms, modalities, and diagnostic performance metrics. Risk of bias was assessed using QUADAS-2, and certainty of evidence was graded using GRADE. Random-effects meta-analysis (REML) was performed to pool accuracy, precision, recall, F1-score, and area under the curve (AUC).
Results: From 115 screened studies, 25 observational studies met the inclusion criteria, encompassing a total of 589,877 patients with a male predominance (372,458 males, 63.2%; 221,818 females, 36.6%). A wide range of AI algorithms were applied, most notably convolutional neural networks (CNNs), which accounted for 526,879 patients, followed by 3D-ResNet architectures (56,872 patients), hybrid segmentation-classification networks (3,747), and smaller studies employing random forests (636), Res-CRNN (89), and traditional machine learning approaches (769). Data modalities included ECG (341,989 patients), echocardiography (>70,000 patients across multiple cohorts), scintigraphy (~24,000 patients), cardiac MRI (~900 patients), CT (299 patients), and blood tests (261 patients). Pooled diagnostic performance across all modalities demonstrated an overall accuracy of 84.0% (95% CI: 74.6-93.5), precision of 85.8% (95% CI: 79.6-92.0), recall (sensitivity) of 89.6% (95% CI: 85.7-93.4), and an F1-score of 87.2% (95% CI: 81.8-92.6). AUC analysis revealed modality-specific variation, with scintigraphy achieving the highest pooled AUC (99.7%), followed by MRI (96.8%), echocardiography (94.3%), blood tests (95.0%), CT (98.0%), and ECG (88.5%). Subgroup analysis confirmed significant differences between modalities (p < 0.001), with MRI and scintigraphy showing consistent high performance and low-to-moderate heterogeneity, while echocardiography displayed moderate accuracy but marked variability, and ECG demonstrated the lowest and most heterogeneous results.
Conclusion: AI demonstrates strong potential for improving CA diagnosis, with MRI and scintigraphy providing the most reliable performance, echocardiography offering an accessible but heterogeneous option, and ECG models remaining least consistent. While promising, future prospective multicenter studies are needed to validate AI models, improve subtype discrimination, and optimize multimodal integration for real-world clinical use.
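As a rough illustration of the pooling step described in the Methods, the sketch below performs inverse-variance random-effects pooling of study-level accuracies in Python. It uses the DerSimonian-Laird estimator of between-study variance as a simpler stand-in for the REML estimation reported above, and the study values are invented for the example.

```python
# Minimal sketch: inverse-variance random-effects pooling of per-study accuracies.
# The paper reports REML; DerSimonian-Laird is used here as a simpler stand-in.
import numpy as np

def pool_random_effects(estimates, variances):
    """Pool study-level estimates (e.g., accuracy) under a random-effects model."""
    estimates = np.asarray(estimates, dtype=float)
    variances = np.asarray(variances, dtype=float)
    w_fixed = 1.0 / variances                                 # fixed-effect weights
    theta_fixed = np.sum(w_fixed * estimates) / np.sum(w_fixed)
    q = np.sum(w_fixed * (estimates - theta_fixed) ** 2)      # Cochran's Q
    df = len(estimates) - 1
    c = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
    tau2 = max(0.0, (q - df) / c)                             # between-study variance
    w_rand = 1.0 / (variances + tau2)                         # random-effects weights
    pooled = np.sum(w_rand * estimates) / np.sum(w_rand)
    se = np.sqrt(1.0 / np.sum(w_rand))
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se     # estimate and 95% CI

# Toy example with made-up study accuracies and their variances
acc = [0.91, 0.78, 0.85, 0.88]
var = [0.0004, 0.0016, 0.0009, 0.0006]
print(pool_random_effects(acc, var))
```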

Riaz F, Muzammal M, Atanbori J, Sodhro AH

PubMed · Sep 17, 2025
Osteoporosis classification from X-ray images remains challenging due to the high visual similarity between scans of healthy individuals and osteoporotic patients. In this paper, we propose a novel framework that extracts a discriminative gradient-based map from each X-ray image, capturing subtle structural differences that are not readily apparent to the human eye. The method uses analytic Gabor filters to decompose the image into multi-scale, multi-orientation components. At each pixel, we construct a filter response matrix, from which second-order texture features are derived via covariance analysis, followed by eigenvalue decomposition to capture dominant local patterns. The resulting Gabor Eigen Map serves as a compact, information-rich representation that is both interpretable and lightweight, making it well-suited for deployment on edge devices. These feature maps are further processed using a convolutional neural network (CNN) to extract high-level descriptors, followed by classification using standard machine learning algorithms. Experimental results demonstrate that the proposed framework outperforms existing methods in identifying osteoporotic cases, while offering strong potential for real-time, privacy-preserving inference at the point of care.
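A hedged sketch of the core feature-extraction idea described above: multi-scale, multi-orientation Gabor responses are computed per pixel, a covariance matrix is formed from the response matrix, and the dominant eigenvalue is kept as the "Gabor Eigen Map." The frequency choices, window handling, and use of the response magnitude are assumptions, not the authors' exact settings.

```python
# Hypothetical sketch of a Gabor Eigen Map (not the authors' implementation).
import numpy as np
from skimage import data
from skimage.filters import gabor

def gabor_eigen_map(image, frequencies=(0.1, 0.2, 0.3), n_orientations=4):
    h, w = image.shape
    responses = np.zeros((len(frequencies), n_orientations, h, w))
    for i, f in enumerate(frequencies):
        for j in range(n_orientations):
            theta = j * np.pi / n_orientations
            real, imag = gabor(image, frequency=f, theta=theta)
            responses[i, j] = np.hypot(real, imag)           # magnitude response
    # Per-pixel response matrix (scales x orientations) -> covariance -> eigenvalues
    eig_map = np.zeros((h, w))
    flat = responses.reshape(len(frequencies), n_orientations, -1)
    for p in range(h * w):
        m = flat[:, :, p]
        m = m - m.mean(axis=1, keepdims=True)
        cov = m @ m.T / max(n_orientations - 1, 1)           # scales x scales covariance
        eig_map.flat[p] = np.linalg.eigvalsh(cov)[-1]        # dominant local eigenvalue
    return eig_map

if __name__ == "__main__":
    img = data.camera().astype(float) / 255.0                # stand-in for an X-ray
    print(gabor_eigen_map(img[::4, ::4]).shape)              # downsampled for speed
```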

Kim JM, Jung H, Kwon HE, Ko Y, Jung JH, Shin S, Kim YH, Kim YH, Jun TJ, Kwon H

PubMed · Sep 17, 2025
Accurately predicting post-transplant renal function is essential for optimizing donor-recipient matching and improving long-term outcomes in kidney transplantation (KT). Traditional models using only structured clinical data often fail to account for complex biological and anatomical factors. This study aimed to develop and validate a multimodal deep learning model that integrates computed tomography (CT) imaging, radiology report text, and structured clinical variables to predict 1-year estimated glomerular filtration rate (eGFR) in living donor kidney transplantation (LDKT) recipients. A retrospective cohort of 1,937 LDKT recipients was selected from 3,772 KT cases. Exclusions included deceased donor KT, immunologic high-risk recipients (n = 304), missing CT imaging, early graft complications, and anatomical abnormalities. eGFR at 1 year post-transplant was classified into four categories: >90, 75-90, 60-75, and 45-60 mL/min/1.73 m². Radiology reports were embedded using BioBERT, while CT videos were encoded using a CLIP-based visual extractor. These were fused with structured clinical features and input into ensemble classifiers including XGBoost. Model performance was evaluated using cross-validation and SHapley Additive exPlanations (SHAP) analysis. The full multimodal model achieved a macro F1 score of 0.675, micro F1 score of 0.704, and weighted F1 score of 0.698, substantially outperforming the clinical-only model (macro F1 = 0.292). CT imaging contributed more than text data (clinical + CT macro F1 = 0.651; clinical + text = 0.486). The model showed highest accuracy in the >90 (F1 = 0.7773) and 60-75 (F1 = 0.7303) categories. SHAP analysis identified donor age, BMI, and donor sex as key predictors. Dimensionality reduction confirmed internal feature validity. Multimodal deep learning integrating clinical, imaging, and textual data enhances prediction of post-transplant renal function. This framework offers a robust and interpretable approach for individualized risk stratification in LDKT, supporting precision medicine in transplantation.
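To make the text-plus-clinical fusion concrete, here is a minimal sketch (not the authors' pipeline): a radiology report is embedded with a publicly available BioBERT checkpoint, concatenated with structured clinical features, and fed to an XGBoost classifier over the four eGFR bands. The model name, feature names, labels, and toy data are illustrative assumptions.

```python
# Hypothetical sketch of report-text + clinical-feature fusion for eGFR-band prediction.
import numpy as np
import torch
from transformers import AutoModel, AutoTokenizer
from xgboost import XGBClassifier

tok = AutoTokenizer.from_pretrained("dmis-lab/biobert-v1.1")   # assumed BioBERT checkpoint
bert = AutoModel.from_pretrained("dmis-lab/biobert-v1.1")

def embed_report(text: str) -> np.ndarray:
    """Mean-pooled BioBERT embedding of one radiology report."""
    with torch.no_grad():
        enc = tok(text, truncation=True, max_length=256, return_tensors="pt")
        hidden = bert(**enc).last_hidden_state                 # (1, seq_len, 768)
        return hidden.mean(dim=1).squeeze(0).numpy()

# Toy cohort: report embedding + [donor_age, donor_bmi, donor_sex] per case
reports = ["Both kidneys normal in size with no focal lesion."] * 8
clinical = np.random.default_rng(0).normal(size=(8, 3))
X = np.hstack([np.stack([embed_report(r) for r in reports]), clinical])
y = np.array([0, 1, 2, 3, 0, 1, 2, 3])        # eGFR bands: >90, 75-90, 60-75, 45-60

clf = XGBClassifier(n_estimators=50, max_depth=3, eval_metric="mlogloss")
clf.fit(X, y)
print(clf.predict(X[:2]))
```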

Rigal L, Bellec J, Lemaire L, Duverge L, Benali K, Lederlin M, Martins R, De Crevoisier R, Simon A

PubMed · Sep 17, 2025
Stereotactic Arrhythmia Radioablation (STAR) is a promising treatment for refractory ventricular tachycardia. However, its precision may be hampered by cardiac and respiratory motions. Multiple techniques exist to mitigate the effects of these displacements. The purpose of this work was to generate, from cardiac and respiratory dynamic CT scans, a patient-specific dynamic model of the structures of interest that enables simulation of treatments for the evaluation of motion management methods. Deep learning-based segmentation was used to extract the geometry of the cardiac structures, whose deformations and displacements were assessed using deformable and rigid image registrations. Combining the model with dose maps enabled evaluation of the dose locally accumulated during the treatment. The reproducibility of each step was evaluated against expert references, and treatment simulations were evaluated using data from a physical phantom. The use of the model was illustrated on data from nine patients, demonstrating that the impact of cardiorespiratory dynamics is potentially important and highly patient-specific, and allowing for future evaluations of motion management methods.
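The dose-accumulation step can be illustrated with a short SimpleITK sketch: each phase's dose map is warped into a reference geometry with a displacement field and summed. Here the displacement fields are synthetic; in the described work they would come from deformable/rigid registration of the dynamic CT phases.

```python
# Hypothetical sketch of local dose accumulation across motion phases with SimpleITK.
import numpy as np
import SimpleITK as sitk

ref_shape = (32, 64, 64)                                   # z, y, x
accumulated = np.zeros(ref_shape, dtype=np.float64)

for phase in range(4):                                     # e.g., 4 cardiac phases
    dose_phase = np.random.default_rng(phase).random(ref_shape)   # toy per-phase dose
    dose_img = sitk.GetImageFromArray(dose_phase)

    # Synthetic displacement field: one 3-vector per voxel (real fields come from registration)
    disp = np.zeros(ref_shape + (3,), dtype=np.float64)
    disp[..., 0] = 0.5 * np.sin(phase)                     # toy x-translation
    disp_img = sitk.GetImageFromArray(disp, isVector=True)
    transform = sitk.DisplacementFieldTransform(disp_img)

    warped = sitk.Resample(dose_img, dose_img, transform,
                           sitk.sitkLinear, 0.0, sitk.sitkFloat64)
    accumulated += sitk.GetArrayFromImage(warped) / 4.0    # phase-weighted sum

print("accumulated dose stats:", accumulated.min(), accumulated.max())
```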

Liu W, Shao H, Deng X, Jiang Y

PubMed · Sep 17, 2025
Glaucoma is the second leading cause of blindness worldwide and a leading cause of irreversible vision loss, making early and accurate diagnosis essential. Although deep learning has revolutionized medical image analysis, its dependence on large-scale annotated datasets poses a significant barrier, especially in clinical scenarios with limited labeled data. To address this challenge, we propose a Classical-Quantum Hybrid Mean Prototype Network (CQH-MPN) tailored for few-shot glaucoma diagnosis. CQH-MPN integrates a quantum feature encoder, which exploits quantum superposition and entanglement for enhanced global representation learning, with a classical convolutional encoder to capture local structural features. These dual encodings are fused and projected into a shared embedding space, where mean prototype representations are computed for each class. We introduce a fuzzy proximity-based metric that extends traditional prototype distance measures by incorporating intra-class variability and inter-class ambiguity, thereby improving classification sensitivity under uncertainty. Our model is evaluated on two public retinal fundus image datasets, ACRIMA and ORIGA, under 1-shot, 3-shot, and 5-shot settings. Results show that CQH-MPN consistently outperforms other models, achieving an accuracy of 94.50% ± 1.04% on the ACRIMA dataset under the 1-shot setting. Moreover, the proposed method demonstrates significant performance improvements across different shot configurations on both datasets. By effectively bridging the representational power of quantum computing with classical deep learning, CQH-MPN demonstrates robust generalization in data-scarce environments. This work lays the foundation for quantum-augmented few-shot learning in medical imaging and offers a viable solution for real-world, low-resource diagnostic applications.
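The mean-prototype idea can be shown with a minimal classical sketch: support images are encoded, class prototypes are averaged, and queries are scored by distance to each prototype. The quantum encoder and the paper's fuzzy proximity metric are replaced here by a plain CNN and a softmax over negative squared distances; shapes and hyperparameters are illustrative.

```python
# Minimal classical prototype-network sketch (quantum encoder and fuzzy metric omitted).
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(8),
    nn.Flatten(), nn.Linear(32 * 8 * 8, 64),
)

def prototype_logits(support, support_labels, query, n_classes=2):
    """Mean class prototypes from the support set; logits = negative squared distance."""
    z_s, z_q = encoder(support), encoder(query)
    protos = torch.stack([z_s[support_labels == c].mean(0) for c in range(n_classes)])
    d2 = torch.cdist(z_q, protos) ** 2                 # (n_query, n_classes)
    return -d2                                         # higher = closer to prototype

# Toy 1-shot, 2-way episode on fundus-sized crops (random tensors as stand-ins)
support = torch.randn(2, 3, 64, 64)                    # one glaucoma, one normal example
labels = torch.tensor([0, 1])
query = torch.randn(5, 3, 64, 64)
print(F.softmax(prototype_logits(support, labels, query), dim=1))
```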

Çelebi E, Akkaya N, Ünsal G

PubMed · Sep 17, 2025
This study aimed to develop and evaluate a deep convolutional neural network (CNN) model for the automatic segmentation of foreign bodies and ghost images in panoramic radiographs (PRs), which can complicate diagnostic interpretation. A dataset of 11,226 PRs from four devices was annotated by two radiologists using the Computer Vision Annotation Tool. A U-Net-based CNN model was trained and evaluated using Intersection over Union (IoU), Dice coefficient, accuracy, precision, recall, and F1 score. For foreign body segmentation, the model achieved validation Dice and IoU scores of 0.9439 and 0.9043, and test scores of 0.9657 and 0.9371. For ghost image segmentation, validation Dice and IoU were 0.8234 and 0.7388, with test scores of 0.8749 and 0.8145. Overall test accuracy exceeded 0.999. The AI model showed high accuracy in segmenting foreign bodies and ghost images in PRs, indicating its potential to assist radiologists. Further clinical validation is recommended.
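For reference, the two reported overlap metrics are straightforward to compute from binary masks; the sketch below shows Dice and IoU on toy panoramic-radiograph-sized masks (the mask sizes and noise model are assumptions).

```python
# Hedged sketch of the evaluation metrics: Dice coefficient and IoU for binary masks.
import numpy as np

def dice_iou(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7):
    """pred, target: boolean or {0,1} masks of identical shape."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    dice = (2 * inter + eps) / (pred.sum() + target.sum() + eps)
    iou = (inter + eps) / (union + eps)
    return dice, iou

# Toy masks standing in for foreign-body annotations on a panoramic radiograph
rng = np.random.default_rng(1)
gt = rng.random((512, 1024)) > 0.9
pr = np.logical_or(gt, rng.random((512, 1024)) > 0.98)   # slight over-segmentation
print("Dice=%.4f  IoU=%.4f" % dice_iou(pr, gt))
```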

Lotter W, Hippe DS, Oshiro T, Lowry KP, Milch HS, Miglioretti DL, Elmore JG, Lee CI, Hsu W

PubMed · Sep 17, 2025
<i>"Just Accepted" papers have undergone full peer review and have been accepted for publication in <i>Radiology: Artificial Intelligence</i>. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content</i>. Purpose To evaluate the impact of screening mammography acquisition parameters on the interpretive performance of AI and radiologists. Materials and Methods The associations between seven mammogram acquisition parameters-mammography machine version, kVp, x-ray exposure delivered, relative x-ray exposure, paddle size, compression force, and breast thickness-and AI and radiologist performance in interpreting two-dimensional screening mammograms acquired by a diverse health system between December 2010 and 2019 were retrospectively evaluated. The top 11 AI models and the ensemble model from the Digital Mammography DREAM Challenge were assessed. The associations between each acquisition parameter and the sensitivity and specificity of the AI models and the radiologists' interpretations were separately evaluated using generalized estimating equations-based models at the examination level, adjusted for several clinical factors. Results The dataset included 28,278 screening two-dimensional mammograms from 22,626 women (mean age 58.5 years ± 11.5 [SD]; 4913 women had multiple mammograms). Of these, 324 examinations resulted in breast cancer diagnosis within 1 year. The acquisition parameters were significantly associated with the performance of both AI and radiologists, with absolute effect sizes reaching 10% for sensitivity and 5% for specificity; however, the associations differed between AI and radiologists for several parameters. Increased exposure delivered reduced the specificity for the ensemble AI (-4.5% per 1 SD increase; <i>P</i> < .001) but not radiologists (<i>P</i> = .44). Increased compression force reduced the specificity for radiologists (-1.3% per 1 SD increase; <i>P</i> < .001) but not for AI (<i>P</i> = .60). Conclusion Screening mammography acquisition parameters impacted the performance of both AI and radiologists, with some parameters impacting performance differently. ©RSNA, 2025.

Gao X, Yang L, She T, Wang F, Ding H, Lu Y, Xu Y, Wang Y, Li P, Duan X, Leng X

PubMed · Sep 17, 2025
Current radiomic approaches inadequately resolve spatial intratumoral heterogeneity (ITH) in esophageal squamous cell carcinoma (ESCC), limiting neoadjuvant chemoimmunotherapy (NACI) response prediction. We propose an interpretable multimodal framework to: (1) quantitatively map intra- and peritumoral heterogeneity via voxel-wise habitat radiomics; (2) model cross-sectional tumor biology using 2.5D deep learning; and (3) establish mechanism-driven biomarkers via SHAP interpretability to identify resistance-linked subregions. This dual-center retrospective study analyzed 269 treatment-naïve ESCC patients with baseline PET/CT (training: n = 144; validation: n = 62; test: n = 63). Habitat radiomics delineated tumor subregions via K-means clustering (Calinski-Harabasz-optimized) on PET/CT, extracting 1,834 radiomic features per modality. A multi-stage pipeline (univariate filtering, mRMR, LASSO regression) selected 32 discriminative features. The 2.5D model aggregated ±4 peritumoral slices, fusing PET/CT via MixUp channels using a fine-tuned ResNet50 (ImageNet-pretrained), with multi-instance learning (MIL) translating slice-level features to patient-level predictions. Habitat features, MIL signatures, and clinical variables were integrated via a five-classifier ensemble (ExtraTrees/SVM/RandomForest) and a Crossformer architecture (SMOTE-balanced). Validation included AUC, sensitivity, specificity, calibration curves, decision curve analysis (DCA), survival metrics (C-index, Kaplan-Meier), and interpretability (SHAP, Grad-CAM). Habitat radiomics achieved superior validation AUC (0.865, 95% CI: 0.778-0.953), outperforming conventional radiomics (ΔAUC +3.6%, P < 0.01) and clinical models (ΔAUC +6.4%, P < 0.001). SHAP identified the invasive front (H2) as the dominant predictor (40% of top features), with wavelet_LHH_firstorder_Entropy showing the highest impact (SHAP = +0.42). The 2.5D MIL model demonstrated strong generalizability (validation AUC: 0.861). The combined model achieved state-of-the-art test performance (AUC = 0.824, sensitivity = 0.875) with superior calibration (Hosmer-Lemeshow P > 0.800), effective survival stratification (test C-index: 0.809), and a 23-41% net benefit improvement in DCA. Integrating habitat radiomics and 2.5D deep learning enables interpretable dual diagnostic-prognostic stratification in ESCC, advancing precision oncology by decoding spatial heterogeneity.
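The habitat-clustering step can be sketched as follows: voxel-wise PET/CT feature vectors inside the tumor mask are clustered with K-means, and K is chosen by the Calinski-Harabasz index. The feature choice (raw intensity pairs only), K range, and toy volumes below are simplifying assumptions.

```python
# Hedged sketch of habitat radiomics clustering with Calinski-Harabasz model selection.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import calinski_harabasz_score

rng = np.random.default_rng(0)
pet = rng.random((48, 64, 64))                        # toy SUV map
ct = rng.random((48, 64, 64))                         # toy normalized HU map
mask = np.zeros(pet.shape, dtype=bool)
mask[16:32, 20:44, 20:44] = True                      # toy tumor mask

voxels = np.column_stack([pet[mask], ct[mask]])       # (n_voxels, 2) feature matrix

best_k, best_score, best_labels = None, -np.inf, None
for k in range(2, 6):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(voxels)
    score = calinski_harabasz_score(voxels, labels)
    if score > best_score:
        best_k, best_score, best_labels = k, score, labels

habitats = np.zeros(pet.shape, dtype=np.int16)        # 0 = background
habitats[mask] = best_labels + 1                      # habitat labels 1..K
print("selected K =", best_k, "habitat voxel counts:", np.bincount(habitats[mask]))
```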

Wang X, Li B, Ma J, Huo L, Tian X

PubMed · Sep 17, 2025
Medical image segmentation plays a vital role in diagnosis, treatment planning, and disease monitoring. However, endoscopic and dermoscopic images often exhibit blurred boundaries and low contrast, presenting a significant challenge for precise segmentation. Moreover, single encoder-decoder architectures suffer from inherent limitations, resulting in the loss of either fine-grained details or global context. Some dual-encoder models yield inaccurate results due to mismatched receptive fields and overly simplistic fusion strategies. To overcome these issues, we present the Dual Backbone Cascaded Multi-Convolutional Segmentation Network (DBCM-Net). Our approach employs a Multi-Axis Vision Transformer and a Vision Mamba encoder to extract semantic features at multiple scales, with a cascaded design that enables information sharing between the two backbones. We introduce the Global and Local Fusion Attention Block (GLFAB) to generate attention masks that seamlessly integrate global context with local detail, producing more precise feature maps. Additionally, we incorporate a Depthwise Separable Convolution Attention Module (DSCAM) within the encoders to strengthen the model's ability to capture critical features. A Feature Refinement Fusion Block (FRFB) is further applied to refine these feature maps before subsequent processing. The cascaded network architecture synergistically combines the complementary strengths of both encoders. We rigorously evaluated our model on three distinct datasets, achieving Dice coefficients of 94.93% on the CVC-ClinicDB polyp dataset, 91.93% on ISIC2018, and 92.73% on ACDC, each surpassing current state-of-the-art methods. Extensive experiments demonstrate that the proposed method excels in segmentation accuracy and preserves edge details effectively.
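In the spirit of the global/local fusion attention described above, a minimal gated-fusion sketch is shown below: a learned per-pixel attention mask mixes a "global" and a "local" feature map. The real GLFAB, backbones, and tensor shapes differ; this is only an illustration of the convex-combination design choice.

```python
# Minimal sketch of attention-gated fusion of two backbone feature maps.
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    """Learn a per-pixel gate that mixes a 'global' and a 'local' feature map."""
    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, f_global: torch.Tensor, f_local: torch.Tensor) -> torch.Tensor:
        a = self.gate(torch.cat([f_global, f_local], dim=1))   # attention mask in [0, 1]
        return a * f_global + (1 - a) * f_local                # convex combination

fuse = GatedFusion(channels=64)
f_transformer = torch.randn(2, 64, 56, 56)     # stand-in for transformer-branch features
f_mamba = torch.randn(2, 64, 56, 56)           # stand-in for Mamba-branch features
print(fuse(f_transformer, f_mamba).shape)      # torch.Size([2, 64, 56, 56])
```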

Fang Y, Xiong H, Huang J, Liu F, Shen Z, Cai X, Zhang H, Wang Q

PubMed · Sep 17, 2025
<i>"Just Accepted" papers have undergone full peer review and have been accepted for publication in <i>Radiology: Artificial Intelligence</i>. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content</i>. Purpose To develop a three-stage, age-and modality-conditioned framework to synthesize longitudinal infant brain MRI scans, and account for rapid structural and contrast changes during early brain development. Materials and Methods This retrospective study used T1- and T2-weighted MRI scans (848 scans) from 139 infants in the Baby Connectome Project, collected since September 2016. The framework models three critical image cues related: volumetric expansion, cortical folding, and myelination, predicting missing time points with age and modality as predictive factors. The method was compared with LGAN, CounterSyn, and Diffusion-based approach using peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM) and the Dice similarity coefficient (DSC). Results The framework was trained on 119 participants (average age: 11.25 ± 6.16 months, 60 female, 59 male) and tested on 20 (average age: 12.98 ± 6.59 months, 11 female, 9 male). For T1-weighted images, PSNRs were 25.44 ± 1.95 and 26.93 ± 2.50 for forward and backward MRI synthesis, and SSIMs of 0.87 ± 0.03 and 0.90 ± 0.02. For T2-weighted images, PSNRs were 26.35 ± 2.30 and 26.40 ± 2.56, with SSIMs of 0.87 ± 0.03 and 0.89 ± 0.02, significantly outperforming competing methods (<i>P</i> < .001). The framework also excelled in tissue segmentation (<i>P</i> < .001) and cortical reconstruction, achieving DSC of 0.85 for gray matter and 0.86 for white matter, with intraclass correlation coefficients exceeding 0.8 in most cortical regions. Conclusion The proposed three-stage framework effectively synthesized age-specific infant brain MRI scans, outperforming competing methods in image quality and tissue segmentation with strong performance in cortical reconstruction, demonstrating potential for developmental modeling and longitudinal analyses. ©RSNA, 2025.