Towards Real-Time Detection of Fatty Liver Disease in Ultrasound Imaging: Challenges and Opportunities.

Alshagathrh FM, Schneider J, Househ MS

PubMed | Aug 7, 2025
This study presents an AI framework for real-time NAFLD detection using ultrasound imaging, addressing operator dependency, imaging variability, and class imbalance. It integrates CNNs with machine learning classifiers and applies preprocessing techniques, including normalization and GAN-based augmentation, to enhance prediction for underrepresented disease stages. Grad-CAM provides visual explanations to support clinical interpretation. Trained on 10,352 annotated images from multiple Saudi centers, the framework achieved 98.9% accuracy and an AUC of 0.99, outperforming baseline CNNs by 12.4% and improving sensitivity for advanced fibrosis and subtle features. Future work will extend multi-class classification, validate performance across settings, and integrate with clinical systems.
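
The hybrid design described here, deep CNN features feeding a classical classifier with class-imbalance handling, follows a common pattern. A minimal sketch in PyTorch/scikit-learn; the backbone, classifier, and shapes are illustrative choices, not the authors' implementation:

```python
import torch
import torchvision.models as models
from sklearn.ensemble import RandomForestClassifier

# Frozen CNN backbone acts as a feature extractor for B-mode ultrasound frames.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()  # drop the ImageNet classification head
backbone.eval()

@torch.no_grad()
def extract_features(batch: torch.Tensor) -> torch.Tensor:
    """batch: (N, 3, 224, 224) normalized ultrasound images -> (N, 512) features."""
    return backbone(batch)

# A classical classifier on top of deep features; class_weight='balanced' is one
# common way to address the class imbalance the abstract mentions.
clf = RandomForestClassifier(n_estimators=200, class_weight="balanced")
# clf.fit(extract_features(train_images).numpy(), train_labels)
```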

Sparse transformer and multipath decision tree: a novel approach for efficient brain tumor classification.

Li P, Jin Y, Wang M, Liu F

PubMed | Aug 7, 2025
Early classification of brain tumors is key to effective treatment. With advances in medical imaging technology, automated classification algorithms face challenges due to tumor diversity. Although the Swin Transformer is effective at handling high-resolution images, it struggles with small datasets and carries high computational complexity. This study introduces SparseSwinMDT, a novel model that combines sparse token representation with multipath decision trees. Experimental results show that SparseSwinMDT achieves an accuracy of 99.47% in brain tumor classification, significantly outperforming existing methods while reducing computation time, making it particularly suitable for resource-constrained medical environments.
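
The sparse token idea can be pictured as top-k pruning of transformer tokens before the downstream classifier; the saliency score (token L2 norm) and keep ratio below are assumptions for illustration, not the paper's method:

```python
import torch

def sparse_token_select(tokens: torch.Tensor, keep_ratio: float = 0.25) -> torch.Tensor:
    """tokens: (B, N, D) transformer tokens -> (B, k, D) with k = keep_ratio * N."""
    scores = tokens.norm(dim=-1)                 # (B, N) saliency proxy: L2 norm
    k = max(1, int(tokens.shape[1] * keep_ratio))
    idx = scores.topk(k, dim=1).indices          # indices of the k strongest tokens
    idx = idx.unsqueeze(-1).expand(-1, -1, tokens.shape[-1])
    return tokens.gather(1, idx)                 # pruned token set

pruned = sparse_token_select(torch.randn(2, 196, 96))
print(pruned.shape)  # torch.Size([2, 49, 96])
```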

Memory-enhanced and multi-domain learning-based deep unrolling network for medical image reconstruction.

Jiang H, Zhang Q, Hu Y, Jin Y, Liu H, Chen Z, Yumo Z, Fan W, Zheng HR, Liang D, Hu Z

PubMed | Aug 7, 2025
Reconstructing high-quality images from corrupted measurements remains a fundamental challenge in medical imaging. Recently, deep unrolling (DUN) methods have emerged as a promising solution, combining the interpretability of traditional iterative algorithms with the powerful representation capabilities of deep learning. However, their performance is often limited by weak information flow between iterative stages and a constrained ability to capture global features across stages, limitations that tend to worsen as the number of iterations increases.
Approach: In this work, we propose a memory-enhanced and multi-domain learning-based deep unrolling network for interpretable, high-fidelity medical image reconstruction. First, a memory-enhanced module is designed to adaptively integrate historical outputs across stages, reducing information loss. Second, we introduce a cross-stage spatial-domain learning transformer (CS-SLFormer) to extract both local and non-local features within and across stages, improving reconstruction performance. Finally, a frequency-domain consistency learning (FDCL) module ensures alignment between reconstructed and ground truth images in the frequency domain, recovering fine image details.
Main Results: Comprehensive experiments on three representative medical imaging modalities (PET, MRI, and CT) show that the proposed method consistently outperforms state-of-the-art (SOTA) approaches in both quantitative metrics and visual quality. Specifically, our method achieved a PSNR of 37.835 dB and an SSIM of 0.970 in 1% dose PET reconstruction.
Significance: This study expands the use of model-driven deep learning in medical imaging, demonstrating the potential of memory-enhanced deep unrolling frameworks for high-quality reconstructions.
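
Two of these components can be sketched compactly, under simplifying assumptions (single-channel images, data-consistency step omitted): a memory-enhanced unrolling stage and a frequency-domain consistency term. This is an illustration of the ideas, not the authors' architecture:

```python
import torch
import torch.fft as fft

def frequency_consistency_loss(recon: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """One simple frequency-domain consistency term: L1 between k-space magnitudes."""
    return (fft.fft2(recon).abs() - fft.fft2(target).abs()).abs().mean()

class MemoryUnrollStage(torch.nn.Module):
    """A single unrolled stage that fuses a running memory of past outputs."""
    def __init__(self, channels: int = 32):
        super().__init__()
        self.denoise = torch.nn.Sequential(
            torch.nn.Conv2d(2, channels, 3, padding=1), torch.nn.ReLU(),
            torch.nn.Conv2d(channels, 1, 3, padding=1))

    def forward(self, x: torch.Tensor, memory: torch.Tensor):
        fused = torch.cat([x, memory], dim=1)  # current estimate + history
        x = x + self.denoise(fused)            # residual refinement
        memory = 0.5 * memory + 0.5 * x        # update the memory state
        return x, memory
```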

X-UNet: A novel global context-aware collaborative fusion U-shaped network with progressive feature fusion of codec for medical image segmentation.

Xu S, Chen Y, Zhang X, Sun F, Chen S, Ou Y, Luo C

PubMed | Aug 7, 2025
Due to the inductive bias of convolutions, CNNs perform hierarchical feature extraction efficiently in the field of medical image segmentation. However, the local correlation assumption of this inductive bias limits the ability of convolutions to focus on global information, which is why Transformer-based methods have surpassed CNNs in some segmentation tasks in recent years. Although combining CNNs with Transformers can solve this problem, it introduces considerable computational complexity and parameter overhead. In addition, narrowing the encoder-decoder semantic gap for high-quality mask generation is a key challenge, addressed in recent works through feature aggregation from different skip connections; however, this often results in semantic mismatches and additional noise. In this paper, we propose a novel segmentation method, X-UNet, whose backbone employs the CFGC (Collaborative Fusion with Global Context-aware) module. The CFGC module enables multi-scale feature extraction and effective global context modeling. Simultaneously, we employ the CSPF (Cross Split-channel Progressive Fusion) module to progressively align and fuse features from corresponding encoder and decoder stages through channel-wise operations, offering a novel approach to feature integration. Experimental results demonstrate that X-UNet, with fewer computations and parameters, exhibits superior performance on various medical image datasets. The code and models are available at https://github.com/XSJ0410/X-UNet.
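
One plausible realization of channel-wise progressive fusion between matching encoder and decoder stages looks like the sketch below (the module name, split scheme, and gating are illustrative; the actual CSPF design may differ):

```python
import torch
import torch.nn as nn

class ChannelSplitFusion(nn.Module):
    """Fuse encoder/decoder features by channel halves, then gate (channels must be even)."""
    def __init__(self, channels: int):
        super().__init__()
        self.mix = nn.Conv2d(channels, channels, 1)  # 1x1 channel mixing
        self.gate = nn.Sequential(nn.Conv2d(channels, channels, 1), nn.Sigmoid())

    def forward(self, enc: torch.Tensor, dec: torch.Tensor) -> torch.Tensor:
        e1, e2 = enc.chunk(2, dim=1)         # split encoder channels in half
        d1, d2 = dec.chunk(2, dim=1)         # split decoder channels in half
        first = torch.cat([e1, d1], dim=1)   # fuse the first halves...
        second = torch.cat([e2, d2], dim=1)  # ...then the second halves
        fused = self.mix(first) + self.mix(second)  # progressive aggregation
        return fused * self.gate(fused)      # gated output narrows the semantic gap
```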

Generative Artificial Intelligence in Medical Imaging: Foundations, Progress, and Clinical Translation

Xuanru Zhou, Cheng Li, Shuqiang Wang, Ye Li, Tao Tan, Hairong Zheng, Shanshan Wang

arXiv preprint | Aug 7, 2025
Generative artificial intelligence (AI) is rapidly transforming medical imaging by enabling capabilities such as data synthesis, image enhancement, modality translation, and spatiotemporal modeling. This review presents a comprehensive and forward-looking synthesis of recent advances in generative modeling, including generative adversarial networks (GANs), variational autoencoders (VAEs), diffusion models, and emerging multimodal foundation architectures, and evaluates their expanding roles across the clinical imaging continuum. We systematically examine how generative AI contributes to key stages of the imaging workflow, from acquisition and reconstruction to cross-modality synthesis, diagnostic support, and treatment planning. Emphasis is placed on both retrospective and prospective clinical scenarios, where generative models help address longstanding challenges such as data scarcity, standardization, and integration across modalities. To promote rigorous benchmarking and translational readiness, we propose a three-tiered evaluation framework encompassing pixel-level fidelity, feature-level realism, and task-level clinical relevance. We also identify critical obstacles to real-world deployment, including generalization under domain shift, hallucination risk, data privacy concerns, and regulatory hurdles. Finally, we explore the convergence of generative AI with large-scale foundation models, highlighting how this synergy may enable the next generation of scalable, reliable, and clinically integrated imaging systems. By charting technical progress and translational pathways, this review aims to guide future research and foster interdisciplinary collaboration at the intersection of AI, medicine, and biomedical engineering.
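
At the first tier of such a framework, pixel-level fidelity, PSNR is one standard instrument; a reference implementation, assuming images normalized to [0, 1]:

```python
import numpy as np

def psnr(reference: np.ndarray, generated: np.ndarray, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio (dB) between a reference and a synthetic image."""
    mse = np.mean((reference - generated) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```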

HiFi-Mamba: Dual-Stream W-Laplacian Enhanced Mamba for High-Fidelity MRI Reconstruction

Hongli Chen, Pengcheng Fang, Yuxia Chen, Yingxuan Ren, Jing Hao, Fangfang Tang, Xiaohao Cai, Shanshan Shan, Feng Liu

arXiv preprint | Aug 7, 2025
Reconstructing high-fidelity MR images from undersampled k-space data remains a challenging problem in MRI. While Mamba variants for vision tasks offer promising long-range modeling capabilities with linear-time complexity, their direct application to MRI reconstruction inherits two key limitations: (1) insensitivity to high-frequency anatomical details; and (2) reliance on redundant multi-directional scanning. To address these limitations, we introduce High-Fidelity Mamba (HiFi-Mamba), a novel dual-stream Mamba-based architecture comprising stacked W-Laplacian (WL) and HiFi-Mamba blocks. Specifically, the WL block performs fidelity-preserving spectral decoupling, producing complementary low- and high-frequency streams. This separation enables the HiFi-Mamba block to focus on low-frequency structures, enhancing global feature modeling. Concurrently, the HiFi-Mamba block selectively integrates high-frequency features through adaptive state-space modulation, preserving comprehensive spectral details. To eliminate the scanning redundancy, the HiFi-Mamba block adopts a streamlined unidirectional traversal strategy that preserves long-range modeling capability with improved computational efficiency. Extensive experiments on standard MRI reconstruction benchmarks demonstrate that HiFi-Mamba consistently outperforms state-of-the-art CNN-based, Transformer-based, and other Mamba-based models in reconstruction accuracy while maintaining a compact and efficient model design.
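
The low/high-frequency decoupling can be pictured with a simple blur-and-residual split, shown below as a stand-in for the actual W-Laplacian transform (a sketch with a box blur; the paper's filter design differs):

```python
import torch
import torch.nn.functional as F

def spectral_decouple(x: torch.Tensor, kernel_size: int = 5):
    """x: (B, 1, H, W) image -> (low, high) streams with x == low + high exactly."""
    pad = kernel_size // 2
    weight = torch.ones(1, 1, kernel_size, kernel_size) / kernel_size**2  # box blur
    low = F.conv2d(F.pad(x, (pad,) * 4, mode="reflect"), weight)  # low-frequency stream
    high = x - low                                                # high-frequency residual
    return low, high

low, high = spectral_decouple(torch.randn(1, 1, 64, 64))
# Fidelity-preserving: low + high reconstructs the input exactly.
```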

FedMP: Tackling Medical Feature Heterogeneity in Federated Learning from a Manifold Perspective

Zhekai Zhou, Shudong Liu, Zhaokun Zhou, Yang Liu, Qiang Yang, Yuesheng Zhu, Guibo Luo

arXiv preprint | Aug 7, 2025
Federated learning (FL) is a decentralized machine learning paradigm in which multiple clients collaboratively train a shared model without sharing their local private data. However, real-world applications of FL frequently encounter challenges arising from the non-identically and independently distributed (non-IID) local datasets across participating clients, which is particularly pronounced in the field of medical imaging, where shifts in image feature distributions significantly hinder the global model's convergence and performance. To address this challenge, we propose FedMP, a novel method designed to enhance FL under non-IID scenarios. FedMP employs stochastic feature manifold completion to enrich the training space of individual client classifiers, and leverages class-prototypes to guide the alignment of feature manifolds across clients within semantically consistent subspaces, facilitating the construction of more distinct decision boundaries. We validate the effectiveness of FedMP on multiple medical imaging datasets, including those with real-world multi-center distributions, as well as on a multi-domain natural image dataset. The experimental results demonstrate that FedMP outperforms existing FL algorithms. Additionally, we analyze the impact of manifold dimensionality, communication efficiency, and privacy implications of feature exposure in our method.
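
Class-prototype alignment can be sketched as pulling each client's per-class feature means toward server-shared prototypes; one plausible form of such a loss (illustrative, not the authors' code):

```python
import torch
import torch.nn.functional as F

def prototype_alignment_loss(features: torch.Tensor, labels: torch.Tensor,
                             global_protos: torch.Tensor) -> torch.Tensor:
    """features: (N, D); labels: (N,); global_protos: (C, D) broadcast by the server."""
    loss = features.new_zeros(())
    for c in labels.unique():
        local_proto = features[labels == c].mean(dim=0)  # client-side class mean
        # Penalize angular deviation from the shared prototype for class c.
        loss = loss + 1 - F.cosine_similarity(local_proto, global_protos[c], dim=0)
    return loss / labels.unique().numel()
```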

AI-Guided Cardiac Computed Tomography in Type 1 Diabetes Patients with Low Coronary Artery Calcium Score.

Wohlfahrt P, Pazderník M, Marhefková N, Roland R, Adla T, Earls J, Haluzík M, Dubský M

PubMed | Aug 6, 2025
Objective: Cardiovascular risk stratification based on traditional risk factors lacks precision at the individual level. While coronary artery calcium (CAC) scoring enhances risk prediction by detecting calcified atherosclerotic plaques, it may underestimate risk in individuals with noncalcified plaques, a pattern common in younger type 1 diabetes (T1D) patients. Understanding the prevalence of noncalcified atherosclerosis in T1D is crucial for developing more effective screening strategies. Therefore, this study aimed to assess the burden of clinically significant atherosclerosis in T1D patients with CAC <100 using artificial intelligence (AI)-guided quantitative coronary computed tomographic angiography (AI-QCT). Methods: This study enrolled T1D patients aged ≥30 years with disease duration ≥10 years and no manifest or symptomatic atherosclerotic cardiovascular disease (ASCVD). CAC and carotid ultrasound were assessed in all participants. AI-QCT was performed in patients with CAC 0 and at least one plaque in the carotid arteries or those with CAC 1-99. Results: Among the 167 participants (mean age 52 ± 10 years; 44% women; T1D duration 29 ± 11 years), 93 (56%) had CAC = 0, 46 (28%) had CAC 1-99, 8 (5%) had CAC 100-299, and 20 (12%) had CAC ≥300. AI-QCT was performed in a subset of 52 patients. Only 11 (21%) had no evidence of coronary artery disease. Significant coronary stenosis was identified in 17% of patients, and 30 (73%) presented with at least one high-risk plaque. Compared with CAC-based risk categories, AI-QCT reclassified 58% of patients, and 21% compared with the STENO1 risk categories. There was only fair agreement between AI-QCT and CAC (κ = 0.25) and slight agreement between AI-QCT and STENO1 risk categories (κ = 0.02). Conclusion: AI-QCT may reveal subclinical atherosclerotic burden and high-risk features that remain undetected by traditional risk models or CAC. These findings challenge the assumption that a low CAC score equates to low cardiovascular risk in T1D.
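
The agreement figures reported here are Cohen's kappa; reproducing that kind of comparison between two categorical risk stratifications takes a single scikit-learn call (the labels below are hypothetical, purely to show the mechanics):

```python
from sklearn.metrics import cohen_kappa_score

cac_categories   = ["low", "low", "high", "low", "intermediate"]
aiqct_categories = ["high", "low", "high", "intermediate", "high"]
# kappa near 0 indicates agreement no better than chance; 1.0 is perfect.
print(cohen_kappa_score(cac_categories, aiqct_categories))
```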

Quantum Federated Learning in Healthcare: The Shift from Development to Deployment and from Models to Data.

Bhatia AS, Kais S, Alam MA

PubMed | Aug 6, 2025
Healthcare organizations hold high volumes of sensitive data, while traditional technologies offer limited storage capacity and computational resources. Sharing healthcare data for machine learning is made still more arduous by strict regulations on patient privacy. In recent years, federated learning has offered a way to accelerate distributed machine learning while addressing concerns related to data privacy and governance. Meanwhile, the blend of quantum computing and machine learning has attracted significant attention from academic institutions and research communities. The ultimate objective of this work is to develop a federated quantum machine learning framework (FQML) to tackle the optimization, security, and privacy challenges facing the healthcare industry in medical imaging tasks. In this work, we propose federated quantum convolutional neural networks (QCNNs) with distributed training across edge devices. To demonstrate the feasibility of the proposed FQML framework, we performed extensive experiments on two benchmark medical datasets (PneumoniaMNIST and CT kidney disease analysis), partitioned among the healthcare institutions/clients in a non-independent and non-identically distributed manner. The proposed framework is validated and assessed via large-scale simulations. Based on our results, the quantum simulation experiments achieve performance levels on par with well-known classical CNN models, 86.3% accuracy on the pneumonia dataset and 92.8% on the CT-kidney dataset, while requiring fewer model parameters and consuming less data. Moreover, a client selection mechanism is proposed to reduce the computation overhead at each communication round, which effectively improves the convergence rate.
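
The training loop follows standard federated averaging with partial client participation; a classical sketch of one round is below (the quantum circuit is abstracted behind a generic model, and the client interface `train_locally` is a hypothetical name):

```python
import copy
import random
import torch

def federated_round(global_model: torch.nn.Module, clients: list, frac: float = 0.3):
    """One FedAvg round: sample a client subset, train locally, average weights."""
    selected = random.sample(clients, max(1, int(frac * len(clients))))
    states = []
    for client in selected:
        local = copy.deepcopy(global_model)
        client.train_locally(local)  # hypothetical: local epochs on private data
        states.append(local.state_dict())
    # Element-wise parameter averaging across the selected clients.
    avg = {k: torch.stack([s[k].float() for s in states]).mean(dim=0)
           for k in states[0]}
    global_model.load_state_dict(avg)
```

Sampling only a fraction of clients per round is what reduces the per-round computation overhead the abstract refers to.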

Segmenting Whole-Body MRI and CT for Multiorgan Anatomic Structure Delineation.

Häntze H, Xu L, Mertens CJ, Dorfner FJ, Donle L, Busch F, Kader A, Ziegelmayer S, Bayerl N, Navab N, Rueckert D, Schnabel J, Aerts HJWL, Truhn D, Bamberg F, Weiss J, Schlett CL, Ringhof S, Niendorf T, Pischon T, Kauczor HU, Nonnenmacher T, Kröncke T, Völzke H, Schulz-Menger J, Maier-Hein K, Hering A, Prokop M, van Ginneken B, Makowski MR, Adams LC, Bressem KK

PubMed | Aug 6, 2025
<i>"Just Accepted" papers have undergone full peer review and have been accepted for publication in <i>Radiology: Artificial Intelligence</i>. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content.</i> Purpose To develop and validate MRSegmentator, a retrospective cross-modality deep learning model for multiorgan segmentation of MRI scans. Materials and Methods This retrospective study trained MRSegmentator on 1,200 manually annotated UK Biobank Dixon MRI sequences (50 participants), 221 in-house abdominal MRI sequences (177 patients), and 1228 CT scans from the TotalSegmentator-CT dataset. A human-in-the-loop annotation workflow leveraged cross-modality transfer learning from an existing CT segmentation model to segment 40 anatomic structures. The model's performance was evaluated on 900 MRI sequences from 50 participants in the German National Cohort (NAKO), 60 MRI sequences from AMOS22 dataset, and 29 MRI sequences from TotalSegmentator-MRI. Reference standard manual annotations were used for comparison. Metrics to assess segmentation quality included Dice Similarity Coefficient (DSC). Statistical analyses included organ-and sequence-specific mean ± SD reporting and two-sided <i>t</i> tests for demographic effects. Results 139 participants were evaluated; demographic information was available for 70 (mean age 52.7 years ± 14.0 [SD], 36 female). Across all test datasets, MRSegmentator demonstrated high class wise DSC for well-defined organs (lungs: 0.81-0.96, heart: 0.81-0.94) and organs with anatomic variability (liver: 0.82-0.96, kidneys: 0.77-0.95). Smaller structures showed lower DSC (portal/splenic veins: 0.64-0.78, adrenal glands: 0.56-0.69). The average DSC on the external testing using NAKO data, ranged from 0.85 ± 0.08 for T2-HASTE to 0.91 ± 0.05 for in-phase sequences. The model generalized well to CT, achieving mean DSC of 0.84 ± 0.12 on AMOS CT data. Conclusion MRSegmentator accurately segmented 40 anatomic structures on MRI and generalized to CT; outperforming existing open-source tools. Published under a CC BY 4.0 license.