Page 174 of 2442432 results

Multimodal deep learning for enhanced breast cancer diagnosis on sonography.

Wei TR, Chang A, Kang Y, Patel M, Fang Y, Yan Y

pubmed · Jun 12 2025
This study introduces a novel multimodal deep learning model tailored for the differentiation of benign and malignant breast masses using dual-view breast ultrasound images (radial and anti-radial views) in conjunction with corresponding radiology reports. The proposed multimodal architecture includes specialized image and text encoders for independent feature extraction, along with a transformation layer that aligns the multimodal features for the subsequent classification task. The model achieved an area under the curve (AUC) of 85% and outperformed unimodal models by 6% and 8% in Youden index. Additionally, our multimodal model surpassed zero-shot predictions generated by prominent foundation models such as CLIP and MedCLIP. In direct comparison with classification results based on physician-assessed ratings, our model exhibited clear superiority, highlighting its practical significance in diagnostics. By integrating both image and text modalities, this study exemplifies the potential of multimodal deep learning in enhancing diagnostic performance, laying the foundation for robust and transparent AI-assisted solutions.
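The fusion pattern the abstract describes (separate image and text encoders, an alignment layer, then a joint classifier) can be sketched in a few lines. This is a minimal illustration with random stand-in features and weights, not the authors' architecture; all dimensions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stub "encoder outputs": in practice a CNN/ViT would embed the two
# ultrasound views and a language model would embed the radiology report.
W_img = rng.normal(size=(512, 128))          # image-feature projection
W_txt = rng.normal(size=(768, 128))          # text-feature projection
W_clf = rng.normal(size=(256, 1)) * 0.05     # small-init classification head

def classify(img_feat, txt_feat):
    """Project each modality into a shared 128-d space, fuse, classify."""
    z_img = np.tanh(img_feat @ W_img)        # alignment/transformation layer
    z_txt = np.tanh(txt_feat @ W_txt)
    fused = np.concatenate([z_img, z_txt], axis=-1)  # (batch, 256)
    logits = fused @ W_clf
    return 1.0 / (1.0 + np.exp(-logits))     # malignancy probability

probs = classify(rng.normal(size=(4, 512)), rng.normal(size=(4, 768)))
```

The alignment layer matters because the raw encoder spaces are incommensurable; projecting both into one space before concatenation is what lets the classifier weigh the modalities jointly.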

Tackling Tumor Heterogeneity Issue: Transformer-Based Multiple Instance Enhancement Learning for Predicting EGFR Mutation via CT Images.

Fang Y, Wang M, Song Q, Cao C, Gao Z, Song B, Min X, Li A

pubmed · Jun 12 2025
Accurate and non-invasive prediction of epidermal growth factor receptor (EGFR) mutation is crucial for the diagnosis and treatment of non-small cell lung cancer (NSCLC). While computed tomography (CT) imaging shows promise in identifying EGFR mutation, current prediction methods heavily rely on fully supervised learning, which overlooks the substantial heterogeneity of tumors and therefore leads to suboptimal results. To tackle the tumor heterogeneity issue, this study introduces a novel weakly supervised method named TransMIEL, which leverages multiple instance learning techniques for accurate EGFR mutation prediction. Specifically, we first propose an innovative instance enhancement learning (IEL) strategy that strengthens the discriminative power of instance features for complex tumor CT images by exploring self-derived soft pseudo-labels. Next, to improve tumor representation capability, we design a spatial-aware transformer (SAT) that fully captures inter-instance relationships of different pathological subregions to mirror the diagnostic processes of radiologists. Finally, an instance adaptive gating (IAG) module is developed to effectively emphasize the contribution of informative instance features in heterogeneous tumors, facilitating dynamic instance feature aggregation and increasing model generalization performance. Experimental results demonstrate that TransMIEL significantly outperforms existing fully and weakly supervised methods on both public and in-house NSCLC datasets. Additionally, visualization results show that our approach can highlight intra-tumor and peri-tumor areas relevant to EGFR mutation status. Therefore, our method holds significant potential as an effective tool for EGFR prediction and offers a novel perspective for future research on tumor heterogeneity.
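The instance-weighting idea behind modules like IAG can be illustrated with generic gated-attention MIL pooling (in the style of Ilse et al.): attention weights let informative instances dominate the bag-level feature. This is a sketch of the general technique with toy dimensions, not the authors' TransMIEL code.

```python
import numpy as np

rng = np.random.default_rng(1)

def gated_attention_pool(instances, V, U, w):
    """Aggregate a bag of instance features into one tumor-level feature.

    A tanh branch gated by a sigmoid branch scores each instance; softmax
    over scores yields attention weights for the weighted aggregation.
    """
    gate = 1.0 / (1.0 + np.exp(-(instances @ U)))     # sigmoid gate
    h = np.tanh(instances @ V) * gate                 # gated hidden features
    scores = h @ w                                    # (n_instances,)
    alpha = np.exp(scores - scores.max())
    alpha = alpha / alpha.sum()                       # softmax attention
    return alpha @ instances, alpha                   # bag feature + weights

bag = rng.normal(size=(12, 64))                       # 12 CT patches, 64-d each
V = rng.normal(size=(64, 32))
U = rng.normal(size=(64, 32))
w = rng.normal(size=32)
feat, alpha = gated_attention_pool(bag, V, U, w)
```

Because the weights are learned per instance, a heterogeneous tumor's uninformative subregions receive near-zero attention instead of diluting the bag representation, which is the failure mode of plain mean pooling.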

PiPViT: Patch-based Visual Interpretable Prototypes for Retinal Image Analysis

Marzieh Oghbaie, Teresa Araújo, Hrvoje Bogunović

arxiv preprint · Jun 12 2025
Background and Objective: Prototype-based methods improve interpretability by learning fine-grained part-prototypes; however, their visualization in the input pixel space is not always consistent with human-understandable biomarkers. In addition, well-known prototype-based approaches typically learn extremely granular prototypes that are less interpretable in medical imaging, where both the presence and extent of biomarkers and lesions are critical. Methods: To address these challenges, we propose PiPViT (Patch-based Visual Interpretable Prototypes), an inherently interpretable prototypical model for image recognition. Leveraging a vision transformer (ViT), PiPViT captures long-range dependencies among patches to learn robust, human-interpretable prototypes that approximate lesion extent only using image-level labels. Additionally, PiPViT benefits from contrastive learning and multi-resolution input processing, which enables effective localization of biomarkers across scales. Results: We evaluated PiPViT on retinal OCT image classification across four datasets, where it achieved competitive quantitative performance compared to state-of-the-art methods while delivering more meaningful explanations. Moreover, quantitative evaluation on a hold-out test set confirms that the learned prototypes are semantically and clinically relevant. We believe PiPViT can transparently explain its decisions and assist clinicians in understanding diagnostic outcomes. Github page: https://github.com/marziehoghbaie/PiPViT
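The core scoring step of prototype-based models such as PiPViT (similarity between patch embeddings and learned prototypes, pooled into a presence score) can be sketched as follows; the embeddings here are random stand-ins, not the released code, and 196 patches corresponds to a hypothetical 14x14 ViT token grid.

```python
import numpy as np

rng = np.random.default_rng(2)

def prototype_scores(patch_emb, prototypes):
    """Cosine similarity between each patch embedding and each prototype.

    Max-pooling over patches answers "how strongly is this prototype
    present"; the per-patch similarity map shows *where*, which is how
    lesion extent can be read off despite image-level-only labels.
    """
    p = patch_emb / np.linalg.norm(patch_emb, axis=-1, keepdims=True)
    q = prototypes / np.linalg.norm(prototypes, axis=-1, keepdims=True)
    sim = p @ q.T                        # (n_patches, n_prototypes)
    return sim.max(axis=0), sim          # presence scores + spatial map

patches = rng.normal(size=(196, 128))    # 14x14 ViT patch tokens, 128-d
protos = rng.normal(size=(10, 128))      # 10 learned part-prototypes
presence, sim_map = prototype_scores(patches, protos)
```

A classifier head on the presence scores keeps the decision interpretable: each logit decomposes into contributions from named prototypes, each traceable to specific patches.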

CT derived fractional flow reserve: Part 2 - Critical appraisal of the literature.

Rodriguez-Lozano PF, Waheed A, Evangelou S, Kolossváry M, Shaikh K, Siddiqui S, Stipp L, Lakshmanan S, Wu EH, Nurmohamed NS, Orbach A, Baliyan V, de Matos JFRG, Trivedi SJ, Madan N, Villines TC, Ihdayhid AR

pubmed · Jun 12 2025
The integration of computed tomography-derived fractional flow reserve (CT-FFR), utilizing computational fluid dynamics and artificial intelligence (AI) in routine coronary computed tomographic angiography (CCTA), presents a promising approach to enhance evaluations of functional lesion severity. Extensive evidence underscores the diagnostic accuracy, prognostic significance, and clinical relevance of CT-FFR, prompting recent clinical guidelines to recommend its combined use with CCTA for selected individuals with intermediate stenosis on CCTA and stable or acute chest pain. This manuscript critically examines the existing clinical evidence, evaluates the diagnostic performance, and outlines future perspectives for integrating noninvasive assessments of coronary anatomy and physiology. Furthermore, it serves as a practical guide for medical imaging professionals by addressing common pitfalls and challenges associated with CT-FFR while proposing potential solutions to facilitate its successful implementation in clinical practice.

NeuroEmo: A neuroimaging-based fMRI dataset to extract temporal affective brain dynamics for Indian movie video clips stimuli using dynamic functional connectivity approach with graph convolution neural network (DFC-GCNN).

Abgeena A, Garg S, Goyal N, P C JR

pubmed · Jun 12 2025
fMRI, a non-invasive neuroimaging technique, can detect emotional brain activation patterns. It allows researchers to observe functional changes in the brain, making it a valuable tool for emotion recognition. To improve emotion recognition systems, it is crucial to understand the neural mechanisms behind emotional processing in the brain. Although many such studies have been conducted worldwide, research on fMRI-based emotion recognition within the Indian population remains scarce, limiting the generalizability of existing models. To address this gap, a culturally relevant neuroimaging dataset has been created (https://openneuro.org/datasets/ds005700) for identifying five emotional states (calm, afraid, delighted, depressed, and excited) in a diverse group of Indian participants. To ensure cultural relevance, emotional stimuli were derived from Bollywood movie clips. This study outlines the fMRI task design, experimental setup, data collection procedures, preprocessing steps, statistical analysis using the General Linear Model (GLM), and region-of-interest (ROI)-based dynamic functional connectivity (DFC) extraction using parcellation based on the Power et al. (2011) functional atlas. A supervised emotion classification model has been proposed using a Graph Convolutional Neural Network (GCNN), where graph structures were constructed from DFC matrices at varying thresholds. The DFC-GCNN model achieved 95% classification accuracy across 5-fold cross-validation, highlighting emotion-specific connectivity dynamics in key affective regions, including the amygdala, prefrontal cortex, and anterior insula. These findings emphasize the significance of temporal variability in emotional state classification. By introducing a culturally specific neuroimaging dataset and a GCNN-based emotion recognition framework, this research enhances the applicability of graph-based models for identifying region-wise connectivity patterns in fMRI data.
It also offers novel insights into cross-cultural differences in emotional processing at the neural level. Furthermore, the high spatial and temporal resolution of the fMRI dataset provides a valuable resource for future studies in emotional neuroscience and related disciplines.
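A single graph-convolution layer over a thresholded DFC adjacency, the building block of a GCNN like the one described, can be sketched as follows. The ROI count, threshold, and feature sizes are toy values, not the study's actual parcellation or hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(3)

def gcn_layer(A, H, W):
    """One graph-convolution layer (Kipf-Welling style): add self-loops,
    symmetrically normalize the adjacency, propagate, apply ReLU."""
    A_hat = A + np.eye(A.shape[0])               # self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ H @ W, 0.0)       # ReLU activation

# Toy DFC matrix over 8 ROIs, symmetrized, then thresholded to a graph
dfc = np.abs(rng.normal(size=(8, 8)))
dfc = (dfc + dfc.T) / 2
A = (dfc > 0.8).astype(float)
np.fill_diagonal(A, 0)
H = rng.normal(size=(8, 16))                     # per-ROI node features
out = gcn_layer(A, H, rng.normal(size=(16, 4)))
```

Varying the threshold on the DFC matrix (as the abstract describes) changes graph sparsity and thereby which inter-ROI connectivity patterns the convolution can propagate.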

AI-based identification of patients who benefit from revascularization: a multicenter study

Zhang, W., Miller, R. J., Patel, K., Shanbhag, A., Liang, J., Lemley, M., Ramirez, G., Builoff, V., Yi, J., Zhou, J., Kavanagh, P., Acampa, W., Bateman, T. M., Di Carli, M. F., Dorbala, S., Einstein, A. J., Fish, M. B., Hauser, M. T., Ruddy, T., Kaufmann, P. A., Miller, E. J., Sharir, T., Martins, M., Halcox, J., Chareonthaitawee, P., Dey, D., Berman, D., Slomka, P.

medrxiv preprint · Jun 12 2025
Background and Aims: Revascularization in stable coronary artery disease often relies on ischemia severity, but we introduce an AI-driven approach that uses clinical and imaging data to estimate individualized treatment effects and guide personalized decisions. Methods: Using a large, international registry from 13 centers, we developed an AI model to estimate individual treatment effects by simulating outcomes under alternative therapeutic strategies. The model was trained on an internal cohort constructed using 1:1 propensity score matching to emulate randomized controlled trials (RCTs), creating balanced patient pairs in which only the treatment strategy differed: early revascularization (defined as any procedure within 90 days of MPI) versus medical therapy. This design allowed the model to estimate individualized treatment effects, forming the basis for counterfactual reasoning at the patient level. We then derived the AI-REVASC score, which quantifies, for each patient, the potential benefit of early revascularization. The score was validated in the held-out testing cohort using Cox regression. Results: Of 45,252 patients, 19,935 (44.1%) were female; median age was 65 (IQR: 57-73). During a median follow-up of 3.6 years (IQR: 2.7-4.9), 4,323 (9.6%) experienced MI or death. The AI model identified a group (n=1,335, 5.9%) that benefits from early revascularization, with a propensity-adjusted hazard ratio of 0.50 (95% CI: 0.25-1.00). Patients identified for early revascularization had a higher prevalence of hypertension, diabetes, and dyslipidemia, and lower LVEF. Conclusions: This study pioneers a scalable, data-driven approach that emulates randomized trials using retrospective data. The AI-REVASC score enables precision revascularization decisions where guidelines and RCTs fall short.
Graphical Abstract (Figure 1).
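The counterfactual idea (predict each patient's outcome under both strategies and take the difference) can be illustrated with a toy T-learner on synthetic data. This is a sketch of the general technique only: the feature, outcome model, and benefit threshold are invented, and this is not the AI-REVASC model.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic cohort: one risk feature x; treatment helps only when x is high.
n = 2000
x = rng.normal(size=n)
treated = rng.integers(0, 2, size=n)
p_event = 1.0 / (1.0 + np.exp(-(x - 1.2 * treated * (x > 1))))
y = (rng.random(n) < p_event).astype(float)   # MI/death indicator

def fit_mean_by_bin(xs, ys, edges):
    """Crude outcome model: observed event rate per feature bin."""
    idx = np.digitize(xs, edges)
    return np.array([ys[idx == b].mean() if (idx == b).any() else 0.0
                     for b in range(len(edges) + 1)])

edges = np.linspace(-2, 2, 9)
# "T-learner": separate outcome models for the treated and untreated arms;
# the individualized treatment effect is the difference in predicted risk.
m1 = fit_mean_by_bin(x[treated == 1], y[treated == 1], edges)
m0 = fit_mean_by_bin(x[treated == 0], y[treated == 0], edges)
bins = np.digitize(x, edges)
ite = m1[bins] - m0[bins]           # negative = revascularization lowers risk
benefit_group = ite < -0.05         # hypothetical benefit threshold
```

The propensity-matched training design in the paper serves the same purpose as the randomized treatment assignment in this toy example: making the two arms comparable so the outcome-model difference is interpretable as a treatment effect.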

Exploring the limit of image resolution for human expert classification of vascular ultrasound images in giant cell arteritis and healthy subjects: the GCA-US-AI project.

Bauer CJ, Chrysidis S, Dejaco C, Koster MJ, Kohler MJ, Monti S, Schmidt WA, Mukhtyar CB, Karakostas P, Milchert M, Ponte C, Duftner C, de Miguel E, Hocevar A, Iagnocco A, Terslev L, Døhn UM, Nielsen BD, Juche A, Seitz L, Keller KK, Karalilova R, Daikeler T, Mackie SL, Torralba K, van der Geest KSM, Boumans D, Bosch P, Tomelleri A, Aschwanden M, Kermani TA, Diamantopoulos A, Fredberg U, Inanc N, Petzinna SM, Albarqouni S, Behning C, Schäfer VS

pubmed · Jun 12 2025
Prompt diagnosis of giant cell arteritis (GCA) with ultrasound is crucial for preventing severe ocular and other complications, yet expertise in ultrasound performance is scarce. The development of an artificial intelligence (AI)-based assistant that facilitates ultrasound image classification and helps to diagnose GCA early promises to close the existing gap. In anticipation of the planned AI, this study investigates the minimum image resolution required for human experts to reliably classify ultrasound images of arteries commonly affected by GCA for the presence or absence of GCA. Thirty-one international experts in GCA ultrasonography participated in a web-based exercise. They were asked to classify 10 ultrasound images for each of 5 vascular segments as GCA, normal, or not able to classify. The following segments were assessed: (1) superficial common temporal artery, (2) its frontal and (3) parietal branches (all in transverse view), (4) axillary artery in transverse view, and (5) axillary artery in longitudinal view. Identical images were shown at different resolutions, namely 32 × 32, 64 × 64, 128 × 128, 224 × 224, and 512 × 512 pixels, thereby resulting in a total of 250 images to be classified by every study participant. Classification performance improved with increasing resolution up to a threshold, plateauing at 224 × 224 pixels. At 224 × 224 pixels, the overall classification sensitivity was 0.767 (95% CI, 0.737-0.796), and specificity was 0.862 (95% CI, 0.831-0.888). A resolution of 224 × 224 pixels ensures reliable human expert classification and aligns with the input requirements of many common AI-based architectures. Thus, the results of this study substantially guide projected AI development.
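The exercise's two ingredients, showing images at reduced resolutions and scoring classifications by sensitivity/specificity, can be sketched minimally. Integer block-averaging stands in for whatever resampling the study actually used, and the labels below are invented for illustration.

```python
import numpy as np

def downsample(img, k):
    """Block-average a square image by integer factor k (e.g. 512 -> 32).

    Non-integer targets like 512 -> 224 would need interpolation; block
    averaging keeps this illustration dependency-free.
    """
    h, w = img.shape
    img = img[:h - h % k, :w - w % k]
    return img.reshape(h // k, k, w // k, k).mean(axis=(1, 3))

def sens_spec(y_true, y_pred):
    """Sensitivity and specificity from binary labels and predictions."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    return tp / (tp + fn), tn / (tn + fp)

rng = np.random.default_rng(5)
img512 = rng.random((512, 512))
img32 = downsample(img512, 16)                    # 512 -> 32 pixels per side
sens, spec = sens_spec(np.array([1, 1, 0, 0, 1]),  # expert ratings vs truth
                       np.array([1, 0, 0, 0, 1]))
```

Plotting sensitivity/specificity against resolution is then what reveals the plateau the study reports at 224 × 224 pixels.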

Non-invasive multi-phase CT artificial intelligence for predicting pre-treatment enlarged lymph node status in colorectal cancer: a prospective validation study.

Sun K, Wang J, Wang B, Wang Y, Lu S, Jiang Z, Fu W, Zhou X

pubmed · Jun 12 2025
Benign lymph node enlargement can mislead surgeons into overstaging colorectal cancer (CRC), causing unnecessarily extended lymphadenectomy. This study aimed to develop and validate a machine learning (ML) classifier utilizing multi-phase CT (MPCT) radiomics for accurate evaluation of the pre-treatment status of enlarged tumor-draining lymph nodes (TDLNs; defined as long-axis diameter ≥ 10 mm). This study included 430 pathologically confirmed CRC patients who underwent radical resection, stratified into a development cohort (n = 319; January 2015-December 2019, retrospectively enrolled) and a test cohort (n = 111; January 2020-May 2023, prospectively enrolled). Radiomics features were extracted from multi-regional lesions (tumor and enlarged TDLNs) on MPCT. Following rigorous feature selection, optimal features were employed to train multiple ML classifiers. The top-performing classifier based on area under the receiver operating characteristic curve (AUROC) was validated. Ultimately, 15 classifiers based on features from multi-regional lesions were constructed (Tumor_N, Tumor_A, Tumor_V; Ln_N, Ln_A, Ln_V; Ln, lymph node; N, non-contrast phase; A, arterial phase; V, venous phase). Among all classifiers, the enlarged-TDLN fusion MPCT classifier (Ln_NAV) demonstrated the highest predictive efficacy, with an AUROC of 0.820 and an AUPRC of 0.883. When pre-treatment clinical variables were integrated (Clinical_Ln_NAV), the model's efficacy improved, with an AUROC of 0.839, an AUPRC of 0.903, accuracy of 76.6%, sensitivity of 67.7%, and specificity of 89.1%. The classifier Clinical_Ln_NAV performed well in evaluating the pre-treatment status of enlarged TDLNs. This tool may support clinicians in developing individualized treatment plans for CRC patients, helping to avoid inappropriate treatment.
Question: There are currently no effective non-invasive tools to assess the status of enlarged tumor-draining lymph nodes in colorectal cancer prior to treatment. Findings: Pre-treatment multi-phase CT radiomics, combined with clinical variables, effectively assessed the status of enlarged tumor-draining lymph nodes, achieving a specificity of 89.1%. Clinical relevance statement: The multi-phase CT-based classifier may assist clinicians in developing individualized treatment plans for colorectal cancer patients, potentially helping to avoid inappropriate preoperative adjuvant therapy and unnecessary extended lymphadenectomy.
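AUROC, the metric used above to rank the candidate classifiers, reduces to a rank statistic: the probability that a randomly chosen positive case is scored above a randomly chosen negative one. A minimal tie-free implementation (an illustration, not the authors' pipeline):

```python
import numpy as np

def auroc(y_true, scores):
    """AUROC via the Mann-Whitney U / rank-sum formulation (ties ignored)."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)   # ascending ranks 1..n
    pos = y_true == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    # Rank sum of positives, minus its minimum possible value, over n_pos*n_neg
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

y = np.array([1, 1, 1, 0, 0, 0])
scores = np.array([0.9, 0.8, 0.4, 0.5, 0.3, 0.1])  # hypothetical model outputs
a = auroc(y, scores)
```

Unlike accuracy at a fixed cutoff, this formulation is threshold-free, which is why it is the standard yardstick for comparing radiomics classifiers whose operating points differ.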

Radiogenomic correlation of hypoxia-related biomarkers in clear cell renal cell carcinoma.

Shao Y, Cen HS, Dhananjay A, Pawan SJ, Lei X, Gill IS, D'souza A, Duddalwar VA

pubmed · Jun 12 2025
This study aimed to evaluate radiomic models' ability to predict hypoxia-related biomarker expression in clear cell renal cell carcinoma (ccRCC). Clinical and molecular data from 190 patients were extracted from The Cancer Genome Atlas-Kidney Renal Clear Cell Carcinoma dataset, and corresponding CT imaging data were manually segmented from The Cancer Imaging Archive. A panel of 2,824 radiomic features was analyzed, and robust, high-interscanner-reproducibility features were selected. Gene expression data for 13 hypoxia-related biomarkers were stratified by tumor grade (1/2 vs. 3/4) and stage (I/II vs. III/IV) and analyzed using the Wilcoxon rank-sum test. Machine learning modeling was conducted using the High-Performance Random Forest (RF) procedure in SAS Enterprise Miner 15.1, with significance at P < 0.05. Descriptive univariate analysis revealed significantly lower expression of several biomarkers in high-grade and late-stage tumors, with KLF6 showing the most notable decrease. The RF model effectively predicted the expression of KLF6, ETS1, and BCL2, as well as PLOD2 and PPARGC1A underexpression. Stratified performance assessment showed improved predictive ability for RORA, BCL2, and KLF6 in high-grade tumors and for ETS1 across grades, with no significant performance difference across grade or stage. The RF model demonstrated modest but significant associations between texture metrics derived from clinical CT scans, such as GLDM and GLCM, and key hypoxia-related biomarkers including KLF6, BCL2, ETS1, and PLOD2. These findings suggest that radiomic analysis could support ccRCC risk stratification and personalized treatment planning by providing non-invasive insights into tumor biology.
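The group-comparison step described above (Wilcoxon rank-sum on biomarker expression by grade) can be sketched with the normal-approximation z-statistic on synthetic expression values; the numbers below are illustrative stand-ins, not TCGA data, and no tie correction is applied.

```python
import numpy as np

def rank_sum_z(a, b):
    """Wilcoxon rank-sum (Mann-Whitney) z-statistic, normal approximation.

    Screens for a location shift between two groups without assuming
    normality of the expression values themselves.
    """
    combined = np.concatenate([a, b])
    order = np.argsort(combined)
    ranks = np.empty(len(combined))
    ranks[order] = np.arange(1, len(combined) + 1)
    n1, n2 = len(a), len(b)
    w = ranks[:n1].sum()                       # rank sum of group a
    mu = n1 * (n1 + n2 + 1) / 2                # expected rank sum under H0
    sigma = np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    return (w - mu) / sigma

rng = np.random.default_rng(6)
low_grade = rng.normal(loc=5.0, scale=1.0, size=40)   # hypothetical KLF6 levels
high_grade = rng.normal(loc=4.0, scale=1.0, size=40)  # lower in high grade
z = rank_sum_z(low_grade, high_grade)
```

A large positive z here corresponds to the pattern the study reports: systematically lower biomarker expression in the high-grade group. In practice a library routine (e.g. `scipy.stats.ranksums`) with tie handling would be used.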
