Page 122 of 140 · 1395 results

FeaInfNet: Diagnosis of Medical Images With Feature-Driven Inference and Visual Explanations.

Peng Y, He L, Hu D, Liu Y, Yang L, Shang S

pubmed · Jun 1 2025
Interpretable deep-learning models have received widespread attention in the field of image recognition. However, owing to the close resemblance among medical-image categories and the difficulty of identifying subtle decision-making regions, many proposed interpretable deep-learning models suffer from insufficient accuracy and interpretability when diagnosing medical images. This study therefore proposes a feature-driven inference network (FeaInfNet) that incorporates a feature-based network reasoning structure. First, local feature masks (LFMs) were developed to extract feature vectors, providing global information for these vectors and enhancing the expressive ability of FeaInfNet. Second, FeaInfNet compares the similarity of the feature vector of each subregion image patch with the disease and normal prototype templates that may appear in that region, and combines the comparisons across subregions to make the final diagnosis. This strategy simulates the diagnostic process of doctors, making the reasoning process interpretable while avoiding misleading results caused by the participation of normal areas in reasoning. Finally, we propose adaptive dynamic masks (Adaptive-DM) to translate feature vectors and prototypes into human-understandable image patches, providing an accurate visual interpretation. Extensive experiments on multiple publicly available medical datasets, including RSNA, iChallenge-PM, COVID-19, ChinaCXRSet, MontgomerySet, and CBIS-DDSM, demonstrate that our method achieves state-of-the-art classification accuracy and interpretability compared with baseline methods in the diagnosis of medical images. Additional ablation studies verify the effectiveness of each component.
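The prototype-comparison step described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the feature dimension, patch grid, and prototype count are made-up values, and the cosine-similarity/max-pool scheme is only one plausible reading of the reasoning structure.

```python
import numpy as np

def prototype_scores(patch_feats, prototypes):
    """Cosine similarity between every patch feature vector and every
    prototype, max-pooled over patches so each prototype reports its
    best-matching subregion."""
    p = patch_feats / np.linalg.norm(patch_feats, axis=1, keepdims=True)
    q = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    sim = p @ q.T                 # (n_patches, n_prototypes)
    return sim.max(axis=0)        # best patch per prototype

rng = np.random.default_rng(0)
feats = rng.normal(size=(49, 128))    # hypothetical 7x7 grid of patch features
protos = rng.normal(size=(10, 128))   # hypothetical disease/normal prototypes
scores = prototype_scores(feats, protos)
print(scores.shape)  # (10,)
```

A classifier head would then combine these per-prototype evidence scores into the final diagnosis.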

Knowledge-Aware Multisite Adaptive Graph Transformer for Brain Disorder Diagnosis.

Song X, Shu K, Yang P, Zhao C, Zhou F, Frangi AF, Xiao X, Dong L, Wang T, Wang S, Lei B

pubmed · Jun 1 2025
Brain disorder diagnosis via resting-state functional magnetic resonance imaging (rs-fMRI) is often limited by complex imaging features and small sample sizes. For brain disorder diagnosis, the graph convolutional network (GCN) has achieved remarkable success by capturing interactions between individuals and the population. However, three main limitations remain: 1) previous GCN approaches consider non-imaging information in edge construction but ignore how sensitive different features are to that information; 2) previous GCN approaches focus solely on establishing interactions between subjects (i.e., individuals and the population), disregarding the essential relationships between features; and 3) multisite data increase the sample size available for classifier training, but inter-site heterogeneity limits performance to some extent. This paper proposes a knowledge-aware multisite adaptive graph Transformer to address these problems. First, we evaluate the sensitivity of features to each piece of non-imaging information, and then construct feature-sensitive and feature-insensitive subgraphs. Second, after fusing these subgraphs, we integrate a Transformer module to capture the intrinsic relationships between features. Third, we design a domain-adaptive GCN with multiple loss terms to relieve data heterogeneity and produce the final classification results. Finally, the proposed framework is validated on two brain disorder diagnostic tasks. Experimental results show that it achieves state-of-the-art performance.
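The idea of weighting population-graph edges by agreement on non-imaging information (limitation 1 above) can be illustrated with a toy sketch; the Gaussian kernel, the site attribute, and the 0.5 down-weighting factor are hypothetical choices, not the paper's construction.

```python
import numpy as np

def population_edges(features, site_labels, sigma=1.0):
    """Edge weight = Gaussian kernel on feature distance, down-weighted
    (here by an arbitrary factor of 0.5) when two subjects disagree on a
    non-imaging attribute such as acquisition site."""
    n = len(features)
    w = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            d2 = np.sum((features[i] - features[j]) ** 2)
            kernel = np.exp(-d2 / (2.0 * sigma ** 2))
            agree = 1.0 if site_labels[i] == site_labels[j] else 0.5
            w[i, j] = kernel * agree
    return w

feats = np.array([[0.0, 0.0], [0.1, 0.0], [3.0, 3.0]])
sites = ["A", "A", "B"]               # hypothetical non-imaging attribute
W = population_edges(feats, sites)
print(W.shape)  # (3, 3)
```

A feature-sensitive subgraph would apply this boost only to features that the sensitivity analysis flags as site-dependent.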

Explicit Abnormality Extraction for Unsupervised Motion Artifact Reduction in Magnetic Resonance Imaging.

Zhou Y, Li H, Liu J, Kong Z, Huang T, Ahn E, Lv Z, Kim J, Feng DD

pubmed · Jun 1 2025
Motion artifacts compromise the quality of magnetic resonance imaging (MRI) and pose challenges for diagnosis and image-guided therapies. In recent years, supervised deep learning approaches have emerged as successful solutions for motion artifact reduction (MAR). One disadvantage of these methods is their dependence on paired sets of motion-artifact-corrupted (MA-corrupted) and motion-artifact-free (MA-free) MR images for training. Obtaining such image pairs is difficult, which limits the application of supervised training. In this paper, we propose a novel UNsupervised Abnormality Extraction Network (UNAEN) to alleviate this problem. Our network works with unpaired MA-corrupted and MA-free images. It converts MA-corrupted images to MA-reduced images by extracting abnormalities with a proposed artifact extractor, which explicitly intercepts the residual artifact maps from the MA-corrupted MR images, and uses a reconstructor to restore the original input from the MA-reduced images. The performance of UNAEN was assessed on various publicly available MRI datasets and compared with state-of-the-art methods. The quantitative evaluation demonstrates the superiority of UNAEN over alternative MAR methods, with visibly fewer residual artifacts. Our results substantiate the potential of UNAEN as a promising solution for real-world clinical environments, with the capability to enhance diagnostic accuracy and facilitate image-guided therapies.
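The decomposition at the core of UNAEN, estimating an artifact map that is subtracted from the corrupted image and can be added back by the reconstructor, can be mimicked in a few lines. The high-pass "extractor" below is purely a placeholder for the learned network:

```python
import numpy as np

def extract_artifact(img, k=5):
    """Placeholder for the learned artifact extractor: a high-pass residual
    (image minus a k x k local mean) stands in for the predicted artifact map."""
    pad = np.pad(img, k // 2, mode="edge")
    smooth = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            smooth[i, j] = pad[i:i + k, j:j + k].mean()
    return img - smooth

corrupted = np.random.default_rng(1).normal(size=(32, 32))  # stand-in MA-corrupted image
artifact_map = extract_artifact(corrupted)
reduced = corrupted - artifact_map    # MA-reduced estimate
restored = reduced + artifact_map     # reconstructor's goal: recover the input
print(np.allclose(restored, corrupted))  # True
```

In the paper, both the extractor and the reconstructor are trained networks, with the reconstruction constraint supplying the unsupervised training signal.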

Score-Based Diffusion Models With Self-Supervised Learning for Accelerated 3D Multi-Contrast Cardiac MR Imaging.

Liu Y, Cui ZX, Qin S, Liu C, Zheng H, Wang H, Zhou Y, Liang D, Zhu Y

pubmed · Jun 1 2025
Long scan times significantly hinder the widespread application of three-dimensional multi-contrast cardiac magnetic resonance (3D-MC-CMR) imaging. This study aims to accelerate 3D-MC-CMR acquisition with a novel method based on score-based diffusion models and self-supervised learning. Specifically, we first establish a mapping between the undersampled k-space measurements and the MR images using a self-supervised Bayesian reconstruction network. Second, we develop a joint score-based diffusion model on 3D-MC-CMR images to capture their inherent distribution. The 3D-MC-CMR images are finally reconstructed using conditioned Langevin Markov chain Monte Carlo sampling. This approach enables accurate reconstruction without fully sampled training data. Its performance was tested on a dataset acquired with a 3D joint myocardial T<sub>1</sub> and T<sub>1ρ</sub> mapping sequence. The T<sub>1</sub> and T<sub>1ρ</sub> maps were estimated from the reconstructed images via a dictionary matching method. Experimental results show that the proposed method outperforms traditional compressed sensing and existing self-supervised deep learning MRI reconstruction methods. It also achieves high-quality T<sub>1</sub> and T<sub>1ρ</sub> parametric maps close to the reference maps, even at a high acceleration rate of 14.
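The conditioned Langevin MCMC sampler used for reconstruction builds on the basic unconditioned Langevin update, which for a known score function looks like the sketch below. The Gaussian target and step size are illustrative only; the paper's sampler conditions on k-space measurements and uses a learned score network.

```python
import numpy as np

def langevin_step(x, score_fn, step, rng):
    """One unadjusted Langevin update:
    x <- x + (step/2) * score(x) + sqrt(step) * noise."""
    return x + 0.5 * step * score_fn(x) + np.sqrt(step) * rng.normal(size=x.shape)

# Illustrative target: standard Gaussian, whose score is grad log p(x) = -x.
rng = np.random.default_rng(0)
x = rng.normal(size=(4,)) * 5.0        # start far from the target distribution
for _ in range(2000):
    x = langevin_step(x, lambda v: -v, 0.01, rng)
print(x.shape)  # (4,)
```

After enough steps the iterate behaves like a sample from the target; conditioning adds a data-consistency term to the score at each step.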

A Foundation Model for Lesion Segmentation on Brain MRI With Mixture of Modality Experts.

Zhang X, Ou N, Doga Basaran B, Visentin M, Qiao M, Gu R, Matthews PM, Liu Y, Ye C, Bai W

pubmed · Jun 1 2025
Brain lesion segmentation is crucial for neurological disease research and diagnosis. As different types of lesions exhibit distinct characteristics on different imaging modalities, segmentation methods are typically developed in a task-specific manner, where each segmentation model is tailored to a specific lesion type and modality. However, the use of task-specific models requires the lesion type and imaging modality to be determined in advance, which complicates their deployment in real-world scenarios. In this work, we propose a universal foundation model for brain lesion segmentation on magnetic resonance imaging (MRI), which can automatically segment different types of brain lesions given various MRI modalities as input. We develop a novel Mixture of Modality Experts (MoME) framework with multiple expert networks attending to different imaging modalities. A hierarchical gating network combines the expert predictions and fosters expertise collaboration. Moreover, to avoid the degeneration of individual expert networks, we introduce a curriculum learning strategy during training to preserve the specialisation of each expert. In addition, to handle combinations of multiple input modalities, we propose MoME+, which uses a soft dispatch network for input modality routing. We evaluated the proposed method on nine brain lesion datasets encompassing five imaging modalities and eight lesion types. The results show that our model outperforms state-of-the-art universal models for brain lesion segmentation and achieves promising generalisation performance on unseen datasets.
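A single-level softmax gate over expert outputs, a much-simplified stand-in for MoME's hierarchical gating network, might look like the following; the expert maps and gate scores are fabricated for illustration.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def gated_fusion(expert_maps, gate_scores):
    """Weighted sum of per-expert prediction maps with softmax gate weights;
    a single-level simplification of a hierarchical gating network."""
    w = softmax(gate_scores)                        # one weight per expert
    return np.tensordot(w, expert_maps, axes=1)     # (H, W) fused map

experts = np.stack([np.full((8, 8), v) for v in (0.2, 0.9, 0.4)])  # 3 fake experts
gates = np.array([0.1, 2.0, -1.0])                 # gate strongly favours expert 1
fused = gated_fusion(experts, gates)
print(fused.shape)  # (8, 8)
```

In MoME the gate scores are themselves predicted from the input, so the network routes each case toward the expert matching its modality.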

Comparing Artificial Intelligence and Traditional Regression Models in Lung Cancer Risk Prediction Using a Systematic Review and Meta-Analysis.

Leonard S, Patel MA, Zhou Z, Le H, Mondal P, Adams SJ

pubmed · Jun 1 2025
Accurately identifying individuals at high risk of lung cancer is critical to optimizing lung cancer screening with low-dose CT (LDCT). We sought to compare the performance of traditional regression models and artificial intelligence (AI)-based models in predicting future lung cancer risk. A systematic review and meta-analysis were conducted, with reporting according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. We searched MEDLINE, Embase, Scopus, and the Cumulative Index to Nursing and Allied Health Literature databases for studies reporting the performance of AI or traditional regression models for predicting lung cancer risk. Two researchers screened articles, and a third resolved conflicts. Model characteristics and predictive performance metrics were extracted. Study quality was assessed using the Prediction model Risk of Bias Assessment Tool. A meta-analysis assessed the discrimination performance of models based on the area under the receiver operating characteristic curve (AUC). One hundred forty studies met inclusion criteria, comprising 185 traditional and 64 AI-based models. Of these, 16 AI models and 65 traditional models had been externally validated. The pooled AUC of external validations of AI models was 0.82 (95% confidence interval [CI], 0.80-0.85), and the pooled AUC for traditional regression models was 0.73 (95% CI, 0.72-0.74). In a subgroup analysis, AI models that included LDCT had a pooled AUC of 0.85 (95% CI, 0.82-0.88). The overall risk of bias was high for both AI and traditional models. AI-based models, particularly those using imaging data, show promise for improving lung cancer risk prediction over traditional regression models. Future research should focus on prospective validation of AI models and direct comparisons with traditional methods in diverse populations.
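Pooling study-level AUCs as in this meta-analysis is commonly done with a random-effects model. Below is a standard univariate DerSimonian-Laird sketch with invented study values; this is one common pooling choice, not necessarily the exact model the authors used.

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Univariate DerSimonian-Laird random-effects pooling: estimate the
    between-study variance tau^2 from Cochran's Q, then re-weight studies."""
    effects, variances = np.asarray(effects), np.asarray(variances)
    w = 1.0 / variances                         # fixed-effect weights
    fixed = (w * effects).sum() / w.sum()
    q = (w * (effects - fixed) ** 2).sum()      # Cochran's Q
    c = w.sum() - (w ** 2).sum() / w.sum()
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)
    w_star = 1.0 / (variances + tau2)           # random-effects weights
    pooled = (w_star * effects).sum() / w_star.sum()
    se = np.sqrt(1.0 / w_star.sum())
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

aucs = [0.80, 0.85, 0.78, 0.84]   # invented study-level AUCs
ses = [0.02, 0.03, 0.025, 0.02]   # invented standard errors
pooled, ci = dersimonian_laird(aucs, np.square(ses))
print(round(pooled, 3))
```

The larger tau² is relative to the within-study variances, the closer the weights move toward equal weighting across studies.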

Accelerated High-resolution T1- and T2-weighted Breast MRI with Deep Learning Super-resolution Reconstruction.

Mesropyan N, Katemann C, Leutner C, Sommer A, Isaak A, Weber OM, Peeters JM, Dell T, Bischoff L, Kuetting D, Pieper CC, Lakghomi A, Luetkens JA

pubmed · Jun 1 2025
To assess the performance of an industry-developed deep learning (DL) algorithm for reconstructing low-resolution Cartesian T1-weighted dynamic contrast-enhanced (T1w) and T2-weighted turbo-spin-echo (T2w) sequences and compare them to standard sequences. Female patients with indications for breast MRI were included in this prospective study. The study protocol at 1.5 Tesla MRI included T1w and T2w sequences. Both were acquired in standard resolution (T1<sub>S</sub> and T2<sub>S</sub>) and in low resolution followed by DL reconstruction (T1<sub>DL</sub> and T2<sub>DL</sub>). For DL reconstruction, two convolutional networks were used: (1) Adaptive-CS-Net for denoising with compressed sensing, and (2) Precise-Image-Net for resolution upscaling of previously downscaled images. Overall image quality was assessed using a 5-point Likert scale (from 1=non-diagnostic to 5=excellent). Apparent signal-to-noise (aSNR) and contrast-to-noise (aCNR) ratios were calculated. Breast Imaging Reporting and Data System (BI-RADS) agreement between sequence types was assessed. A total of 47 patients were included (mean age, 58±11 years). Acquisition times for T1<sub>DL</sub> and T2<sub>DL</sub> were reduced by 51% (44 vs. 90 s per dynamic phase) and 46% (102 vs. 192 s), respectively. T1<sub>DL</sub> and T2<sub>DL</sub> showed higher overall image quality (e.g., 4 [IQR, 4-4] for T1<sub>S</sub> vs. 5 [IQR, 5-5] for T1<sub>DL</sub>, P<0.001). Both T1<sub>DL</sub> and T2<sub>DL</sub> showed higher aSNR and aCNR than T1<sub>S</sub> and T2<sub>S</sub> (e.g., aSNR: 32.35±10.23 for T2<sub>S</sub> vs. 27.88±6.86 for T2<sub>DL</sub>, P=0.014). Cohen's κ agreement for BI-RADS assessment was excellent (0.962, P<0.001). DL-based denoising and resolution upscaling reduce acquisition time and improve image quality for T1w and T2w breast MRI.
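The apparent SNR and CNR metrics reported above are typically computed from region-of-interest statistics. A hedged sketch follows; the ROI definitions and intensity values are invented, and the study's exact ROI placement is not specified here.

```python
import numpy as np

def apparent_snr(signal_roi, noise_roi):
    """aSNR: mean signal intensity over the standard deviation of a
    background (air) ROI; 'apparent' because true noise maps are unavailable."""
    return signal_roi.mean() / noise_roi.std()

def apparent_cnr(tissue_a, tissue_b, noise_roi):
    """aCNR: absolute difference of two tissue means over the noise SD."""
    return abs(tissue_a.mean() - tissue_b.mean()) / noise_roi.std()

rng = np.random.default_rng(2)
gland = rng.normal(200, 10, size=500)  # invented fibroglandular ROI intensities
fat = rng.normal(120, 10, size=500)    # invented fat ROI intensities
air = rng.normal(0, 5, size=500)       # invented background-noise ROI
print(round(apparent_cnr(gland, fat, air), 1))
```

Because both metrics share the same noise estimate, a denoising reconstruction that lowers the background SD raises aSNR and aCNR together.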

GDP-Net: Global Dependency-Enhanced Dual-Domain Parallel Network for Ring Artifact Removal.

Zhang Y, Liu G, Liu Y, Xie S, Gu J, Huang Z, Ji X, Lyu T, Xi Y, Zhu S, Yang J, Chen Y

pubmed · Jun 1 2025
In computed tomography (CT) imaging, ring artifacts caused by inconsistent detector responses can significantly degrade reconstructed images, negatively affecting subsequent applications. The new generation of CT systems based on photon-counting detectors is affected by ring artifacts even more severely. The flexibility and variety of detector responses make it difficult to build a well-defined model that characterizes the ring artifacts. In this context, this study proposes a global dependency-enhanced dual-domain parallel neural network for ring artifact removal (RAR). First, based on the fact that the features of ring artifacts differ between Cartesian and polar coordinates, a parallel architecture is adopted so that the network can extract and exploit latent features from both domains, improving ring artifact removal. In addition, ring artifacts are globally correlated in both Cartesian and polar coordinate systems, yet convolutional neural networks have inherent shortcomings in modeling long-range dependencies. To tackle this problem, this study introduces the novel Mamba mechanism, which achieves a global receptive field without incurring high computational complexity. It enables effective capture of long-range dependencies, thereby enhancing image restoration and artifact reduction. Experiments on simulated data validate the effectiveness of the dual-domain parallel network and the Mamba mechanism, and results on two unseen real datasets demonstrate the promising performance of the proposed RAR algorithm in eliminating ring artifacts and recovering image details.
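The dual-domain design rests on the fact that concentric rings in Cartesian coordinates become rows of nearly constant value after a polar resampling, which makes them easy for a network to isolate. A minimal nearest-neighbour transform illustrates this; the grid sizes are arbitrary.

```python
import numpy as np

def to_polar(img, n_r=64, n_theta=128):
    """Nearest-neighbour Cartesian-to-polar resampling: rings centred on the
    image centre become rows of (nearly) constant value in the polar image."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r = np.linspace(0.0, min(cy, cx), n_r)
    t = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(r, t, indexing="ij")
    ys = np.clip(np.rint(cy + rr * np.sin(tt)).astype(int), 0, h - 1)
    xs = np.clip(np.rint(cx + rr * np.cos(tt)).astype(int), 0, w - 1)
    return img[ys, xs]

# A concentric ring maps to a single bright row in polar coordinates.
yy, xx = np.mgrid[0:65, 0:65]
dist = np.hypot(yy - 32, xx - 32)
ring = ((dist > 15) & (dist < 17)).astype(float)
polar = to_polar(ring)
print(polar.shape)  # (64, 128)
```

The parallel branches of such a network process the Cartesian image and its polar counterpart separately before fusing the results.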

MedBookVQA: A Systematic and Comprehensive Medical Benchmark Derived from Open-Access Book

Sau Lai Yip, Sunan He, Yuxiang Nie, Shu Pui Chan, Yilin Ye, Sum Ying Lam, Hao Chen

arxiv preprint · Jun 1 2025
The accelerating development of general medical artificial intelligence (GMAI), powered by multimodal large language models (MLLMs), offers transformative potential for addressing persistent healthcare challenges, including workforce deficits and escalating costs. The parallel development of systematic evaluation benchmarks emerges as a critical imperative to enable performance assessment and provide technological guidance. Meanwhile, as an invaluable knowledge source, the potential of medical textbooks for benchmark development remains underexploited. Here, we present MedBookVQA, a systematic and comprehensive multimodal benchmark derived from open-access medical textbooks. To curate this benchmark, we propose a standardized pipeline for automated extraction of medical figures while contextually aligning them with corresponding medical narratives. Based on this curated data, we generate 5,000 clinically relevant questions spanning modality recognition, disease classification, anatomical identification, symptom diagnosis, and surgical procedures. A multi-tier annotation system categorizes queries through hierarchical taxonomies encompassing medical imaging modalities (42 categories), body anatomies (125 structures), and clinical specialties (31 departments), enabling nuanced analysis across medical subdomains. We evaluate a wide array of MLLMs, including proprietary, open-sourced, medical, and reasoning models, revealing significant performance disparities across task types and model categories. Our findings highlight critical capability gaps in current GMAI systems while establishing textbook-derived multimodal benchmarks as essential evaluation tools. MedBookVQA establishes textbook-derived benchmarking as a critical paradigm for advancing clinical AI, exposing limitations in GMAI systems while providing anatomically structured performance metrics across specialties.

Predicting hemorrhagic transformation in acute ischemic stroke: a systematic review, meta-analysis, and methodological quality assessment of CT/MRI-based deep learning and radiomics models.

Salimi M, Vadipour P, Bahadori AR, Houshi S, Mirshamsi A, Fatemian H

pubmed logopapersJun 1 2025
Acute ischemic stroke (AIS) is a major cause of mortality and morbidity, with hemorrhagic transformation (HT) as a severe complication. Accurate prediction of HT is essential for optimizing treatment strategies. This review assesses the accuracy and utility of deep learning (DL) and radiomics models for predicting HT through imaging, with a view to informing clinical decision-making for AIS patients. A literature search was conducted across five databases (PubMed, Scopus, Web of Science, Embase, IEEE) up to January 23, 2025. Studies involving DL or radiomics-based machine-learning models for predicting HT in AIS patients were included. Data from training, validation, and clinical-combined models were extracted and analyzed separately. Pooled sensitivity, specificity, and AUC were calculated with a random-effects bivariate model. Study quality was assessed using the Methodological Radiomics Score (METRICS) and the QUADAS-2 tool. Sixteen studies comprising 3,083 participants were included in the meta-analysis. The pooled AUC for training cohorts was 0.87, sensitivity 0.80, and specificity 0.85. For validation cohorts, the AUC was 0.87, sensitivity 0.81, and specificity 0.86. Clinical-combined models showed an AUC of 0.93, sensitivity 0.84, and specificity 0.89. Moderate to severe heterogeneity was noted and addressed. Deep-learning models outperformed radiomics models, and clinical-combined models outperformed both deep learning-only and radiomics-only models. The average METRICS score was 62.85%. No publication bias was detected. DL and radiomics models show great potential for predicting HT in AIS patients. However, addressing methodological issues, such as inconsistent reference standards and limited external validation, is essential before these models can be implemented clinically.