Page 295 of 6626611 results

Herold A, Mercaldo ND, Anderson MA, Mojtahed A, Kilcoyne A, Lo WC, Sellers RM, Clifford B, Nickel MD, Nakrour N, Huang SY, Tsai LL, Catalano OA, Harisinghani MG

pubmed logopapers · Aug 7 2025
To validate a deep learning (DL) reconstruction technique for faster post-contrast-enhanced coronal Volume Interpolated Breath-hold Examination (VIBE) sequences and assess its image quality compared to conventionally acquired coronal VIBE sequences. This prospective study included 151 patients undergoing clinically indicated upper abdominal MRI acquired on 3 T scanners. Two coronal T1 fat-suppressed VIBE sequences were acquired: a DL-reconstructed sequence (VIBE-DL) and a standard sequence (VIBE-SD). Three radiologists independently evaluated six image quality parameters: overall image quality, perceived signal-to-noise ratio, severity of artifacts, liver edge sharpness, liver vessel sharpness, and lesion conspicuity, using a 4-point Likert scale. Inter-reader agreement was assessed using Gwet's AC2. Ordinal mixed-effects regression models were used to compare VIBE-DL and VIBE-SD. Acquisition times were 10.2 s for VIBE-DL compared to 22.3 s for VIBE-SD. VIBE-DL demonstrated superior overall image quality (OR 1.95, 95% CI: 1.44-2.65, p < 0.001), reduced image noise (OR 3.02, 95% CI: 2.26-4.05, p < 0.001), enhanced liver edge sharpness (OR 3.68, 95% CI: 2.63-5.15, p < 0.001), improved liver vessel sharpness (OR 4.43, 95% CI: 3.13-6.27, p < 0.001), and better lesion conspicuity (OR 9.03, 95% CI: 6.34-12.85, p < 0.001) compared to VIBE-SD. However, VIBE-DL showed increased severity of peripheral artifacts (OR 0.13, p < 0.001). VIBE-DL detected 137/138 (99.3%) focal liver lesions, while VIBE-SD detected 131/138 (94.9%). Inter-reader agreement ranged from good to very good for both sequences. The DL-reconstructed VIBE sequence significantly outperformed the standard breath-hold VIBE in image quality and lesion detection, while reducing acquisition time.
This technique shows promise for enhancing the diagnostic capabilities of contrast-enhanced abdominal MRI.
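The inter-reader agreement statistic used above, Gwet's AC2, is a chance-corrected agreement coefficient. As an illustrative sketch (not the authors' code), the simpler unweighted AC1 variant for two raters can be computed in pure Python; AC2 additionally applies ordinal weights to partial agreement, which is omitted here:

```python
def gwet_ac1(ratings_a, ratings_b):
    """Gwet's AC1 chance-corrected agreement for two raters (unweighted).

    ratings_a / ratings_b: equal-length lists of categorical ratings,
    e.g. 4-point Likert scores from two radiologists.
    """
    assert len(ratings_a) == len(ratings_b) and ratings_a
    n = len(ratings_a)
    categories = sorted(set(ratings_a) | set(ratings_b))
    q = len(categories)
    # Observed agreement: fraction of items both raters scored identically.
    pa = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Chance agreement from average marginal proportions.
    pe = 0.0
    for c in categories:
        pi = (ratings_a.count(c) + ratings_b.count(c)) / (2 * n)
        pe += pi * (1 - pi)
    pe /= (q - 1)
    return (pa - pe) / (1 - pe)
```

Perfect agreement yields AC1 = 1; chance-level agreement pushes it toward 0.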

Xu S, Chen Y, Zhang X, Sun F, Chen S, Ou Y, Luo C

pubmed logopapers · Aug 7 2025
Due to the inductive bias of convolutions, CNNs perform hierarchical feature extraction efficiently in the field of medical image segmentation. However, the local-correlation assumption of this inductive bias limits the ability of convolutions to capture global information, which has led Transformer-based methods to surpass CNNs on some segmentation tasks in recent years. Although combining CNNs with Transformers can address this problem, it also introduces considerable computational complexity and parameters. In addition, narrowing the encoder-decoder semantic gap for high-quality mask generation is a key challenge, addressed in recent works through feature aggregation from different skip connections; however, this often results in semantic mismatches and additional noise. In this paper, we propose a novel segmentation method, X-UNet, whose backbone employs the CFGC (Collaborative Fusion with Global Context-aware) module. The CFGC module enables multi-scale feature extraction and effective global context modeling. Simultaneously, we employ the CSPF (Cross Split-channel Progressive Fusion) module to progressively align and fuse features from corresponding encoder and decoder stages through channel-wise operations, offering a novel approach to feature integration. Experimental results demonstrate that X-UNet, with fewer computations and parameters, exhibits superior performance on various medical image datasets. The code and models are available at https://github.com/XSJ0410/X-UNet.
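The abstract does not give the internals of the CSPF module, so the following is only a toy sketch of the general idea of progressive channel-wise fusion: encoder and decoder channels are split into groups, and each group's fusion can see the previously fused group. All names and the averaging rule are illustrative assumptions, not the paper's design:

```python
def progressive_channel_fusion(enc_channels, dec_channels, groups=2):
    """Toy sketch of progressive channel-wise encoder/decoder fusion.

    enc_channels / dec_channels: lists of equal-length channel vectors.
    Channels are split into `groups`; each group fuses encoder and decoder
    channels, plus (if present) the previously fused group's output.
    """
    assert len(enc_channels) == len(dec_channels)
    size = len(enc_channels) // groups
    fused, carry = [], None
    for g in range(groups):
        e = enc_channels[g * size:(g + 1) * size]
        d = dec_channels[g * size:(g + 1) * size]
        group_fused = []
        for i, (ec, dc) in enumerate(zip(e, d)):
            prev = carry[i] if carry is not None else [0.0] * len(ec)
            # Mean of encoder, decoder, and (if present) previously fused channel.
            denom = 3.0 if carry is not None else 2.0
            group_fused.append([(x + y + z) / denom
                                for x, y, z in zip(ec, dc, prev)])
        fused.extend(group_fused)
        carry = group_fused
    return fused
```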

Xuanru Zhou, Cheng Li, Shuqiang Wang, Ye Li, Tao Tan, Hairong Zheng, Shanshan Wang

arxiv logopreprint · Aug 7 2025
Generative artificial intelligence (AI) is rapidly transforming medical imaging by enabling capabilities such as data synthesis, image enhancement, modality translation, and spatiotemporal modeling. This review presents a comprehensive and forward-looking synthesis of recent advances in generative modeling including generative adversarial networks (GANs), variational autoencoders (VAEs), diffusion models, and emerging multimodal foundation architectures and evaluates their expanding roles across the clinical imaging continuum. We systematically examine how generative AI contributes to key stages of the imaging workflow, from acquisition and reconstruction to cross-modality synthesis, diagnostic support, and treatment planning. Emphasis is placed on both retrospective and prospective clinical scenarios, where generative models help address longstanding challenges such as data scarcity, standardization, and integration across modalities. To promote rigorous benchmarking and translational readiness, we propose a three-tiered evaluation framework encompassing pixel-level fidelity, feature-level realism, and task-level clinical relevance. We also identify critical obstacles to real-world deployment, including generalization under domain shift, hallucination risk, data privacy concerns, and regulatory hurdles. Finally, we explore the convergence of generative AI with large-scale foundation models, highlighting how this synergy may enable the next generation of scalable, reliable, and clinically integrated imaging systems. By charting technical progress and translational pathways, this review aims to guide future research and foster interdisciplinary collaboration at the intersection of AI, medicine, and biomedical engineering.
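The proposed three-tiered evaluation framework starts at pixel-level fidelity. A standard metric at that tier is peak signal-to-noise ratio (PSNR); this minimal sketch (illustrative, not taken from the review) computes it over flat lists of pixel intensities:

```python
import math

def psnr(reference, generated, max_val=1.0):
    """Peak signal-to-noise ratio: a pixel-level fidelity metric.

    reference / generated: equal-length flat lists of pixel intensities
    in [0, max_val]. Higher PSNR means the generated image is closer
    to the reference; identical images give infinity.
    """
    assert len(reference) == len(generated) and reference
    mse = sum((r - g) ** 2 for r, g in zip(reference, generated)) / len(reference)
    if mse == 0:
        return float("inf")
    return 10.0 * math.log10(max_val ** 2 / mse)
```

Feature-level realism (e.g. distributional distances on learned features) and task-level clinical relevance (downstream diagnostic performance) build on top of such pixel-level checks.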

Hongli Chen, Pengcheng Fang, Yuxia Chen, Yingxuan Ren, Jing Hao, Fangfang Tang, Xiaohao Cai, Shanshan Shan, Feng Liu

arxiv logopreprint · Aug 7 2025
Reconstructing high-fidelity MR images from undersampled k-space data remains a challenging problem in MRI. While Mamba variants for vision tasks offer promising long-range modeling capabilities with linear-time complexity, their direct application to MRI reconstruction inherits two key limitations: (1) insensitivity to high-frequency anatomical details; and (2) reliance on redundant multi-directional scanning. To address these limitations, we introduce High-Fidelity Mamba (HiFi-Mamba), a novel dual-stream Mamba-based architecture comprising stacked W-Laplacian (WL) and HiFi-Mamba blocks. Specifically, the WL block performs fidelity-preserving spectral decoupling, producing complementary low- and high-frequency streams. This separation enables the HiFi-Mamba block to focus on low-frequency structures, enhancing global feature modeling. Concurrently, the HiFi-Mamba block selectively integrates high-frequency features through adaptive state-space modulation, preserving comprehensive spectral details. To eliminate the scanning redundancy, the HiFi-Mamba block adopts a streamlined unidirectional traversal strategy that preserves long-range modeling capability with improved computational efficiency. Extensive experiments on standard MRI reconstruction benchmarks demonstrate that HiFi-Mamba consistently outperforms state-of-the-art CNN-based, Transformer-based, and other Mamba-based models in reconstruction accuracy while maintaining a compact and efficient model design.
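The WL block's fidelity-preserving spectral decoupling splits features into complementary low- and high-frequency streams. As a toy 1D analogue (an assumption-laden sketch, not the paper's W-Laplacian), a moving average gives the low-frequency stream and the residual gives the high-frequency one; summing the two streams reconstructs the signal exactly, which is what "fidelity-preserving" requires:

```python
def spectral_decouple(signal, window=3):
    """Split a 1D signal into complementary frequency streams.

    Low-frequency stream: edge-padded moving average of width `window`.
    High-frequency stream: the residual (signal minus low).
    By construction, low[i] + high[i] == signal[i] for every sample.
    """
    half = window // 2
    padded = [signal[0]] * half + list(signal) + [signal[-1]] * half
    low = [sum(padded[i:i + window]) / window for i in range(len(signal))]
    high = [s - l for s, l in zip(signal, low)]
    return low, high
```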

Zhekai Zhou, Shudong Liu, Zhaokun Zhou, Yang Liu, Qiang Yang, Yuesheng Zhu, Guibo Luo

arxiv logopreprint · Aug 7 2025
Federated learning (FL) is a decentralized machine learning paradigm in which multiple clients collaboratively train a shared model without sharing their local private data. However, real-world applications of FL frequently encounter challenges arising from non-independent and identically distributed (non-IID) local datasets across participating clients, which is particularly pronounced in the field of medical imaging, where shifts in image feature distributions significantly hinder the global model's convergence and performance. To address this challenge, we propose FedMP, a novel method designed to enhance FL under non-IID scenarios. FedMP employs stochastic feature manifold completion to enrich the training space of individual client classifiers, and leverages class prototypes to guide the alignment of feature manifolds across clients within semantically consistent subspaces, facilitating the construction of more distinct decision boundaries. We validate the effectiveness of FedMP on multiple medical imaging datasets, including those with real-world multi-center distributions, as well as on a multi-domain natural image dataset. The experimental results demonstrate that FedMP outperforms existing FL algorithms. Additionally, we analyze the impact of manifold dimensionality, communication efficiency, and the privacy implications of feature exposure in our method.
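FedMP's class-prototype guidance builds on per-class mean feature vectors. This hedged sketch (the function names and the unweighted cross-client averaging are illustrative assumptions, not the paper's exact procedure) computes client-side prototypes and a simple global aggregate that clients could align to:

```python
def class_prototypes(features, labels):
    """Per-class prototype = mean feature vector of that class's samples."""
    sums, counts = {}, {}
    for f, y in zip(features, labels):
        acc = sums.setdefault(y, [0.0] * len(f))
        for i, v in enumerate(f):
            acc[i] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: [v / counts[y] for v in acc] for y, acc in sums.items()}

def global_prototypes(client_protos):
    """Average each class's prototypes across clients (unweighted mean)."""
    merged = {}
    for protos in client_protos:
        for y, p in protos.items():
            merged.setdefault(y, []).append(p)
    return {y: [sum(col) / len(ps) for col in zip(*ps)]
            for y, ps in merged.items()}
```

Each client would then regularize its local features toward the shared global prototype of the corresponding class, without ever exchanging raw data.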

Rahi, A.

medrxiv logopreprint · Aug 7 2025
Brain tumor classification using MRI scans is crucial for early diagnosis and treatment planning. In this study, we first train a single Convolutional Neural Network (CNN) based on VGG16 [1], achieving a strong standalone test accuracy of 99.24% on a balanced dataset of 7,023 MRI images across four classes: glioma, meningioma, pituitary, and no tumor. To further improve classification performance, we implement three ensemble strategies: stacking, soft voting, and XGBoost-based ensembling [4], each trained on individually fine-tuned models. These ensemble methods significantly enhance prediction accuracy, with XGBoost achieving a perfect 100% accuracy and voting reaching 99.54%. Evaluation metrics such as precision, recall, and F1-score confirm the robustness of the approach. This work demonstrates the power of combining fine-tuned deep learning models [5] for highly reliable brain tumor classification.
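Of the three ensemble strategies, soft voting is the simplest: average the class-probability vectors produced by the individual models and take the argmax. A minimal sketch (illustrative, not the study's code):

```python
def soft_vote(prob_lists):
    """Soft-voting ensemble over per-model class-probability vectors.

    prob_lists: one probability vector per model, all of equal length.
    Returns (predicted class index, averaged probability vector).
    """
    n_models = len(prob_lists)
    n_classes = len(prob_lists[0])
    avg = [sum(p[i] for p in prob_lists) / n_models for i in range(n_classes)]
    return max(range(n_classes), key=avg.__getitem__), avg
```

Stacking and XGBoost-based ensembling differ in that they train a second-stage learner on the base models' outputs instead of averaging them directly.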

Kong G, Zhang Q, Liu D, Pan J, Liu K

pubmed logopapers · Aug 6 2025
The assessment of osteonecrosis of the femoral head (ONFH) often presents challenges in accuracy and efficiency. Traditional methods rely on imaging studies and clinical judgment, prompting the need for advanced approaches. This study aims to use deep learning algorithms to enhance disease assessment and prediction in ONFH, optimizing treatment strategies. The primary objective of this research is to analyze pathological images of ONFH using advanced deep learning algorithms to evaluate treatment response, vascular reconstruction, and disease progression. By identifying the most effective algorithm, this study seeks to equip clinicians with precise tools for disease assessment and prediction. Magnetic resonance imaging (MRI) data from 30 patients diagnosed with ONFH were collected, totaling 1200 slices, which included 675 slices with lesions and 225 normal slices. The dataset was divided into training (630 slices), validation (135 slices), and test (135 slices) sets. A total of 10 deep learning algorithms were tested for training and optimization, and MobileNetV3_Large was identified as the optimal model for subsequent analyses. This model was applied for quantifying vascular reconstruction, evaluating treatment responses, and assessing lesion progression. In addition, a long short-term memory (LSTM) model was integrated for the dynamic prediction of time-series data. The MobileNetV3_Large model demonstrated an accuracy of 96.5% (95% CI 95.1%-97.8%) and a recall of 94.8% (95% CI 93.2%-96.4%) in ONFH diagnosis, significantly outperforming DenseNet201 (87.3%; P<.05). Quantitative evaluation of treatment responses showed that vascularized bone grafting resulted in an average increase of 12.4 mm in vascular length (95% CI 11.2-13.6 mm; P<.01) and an increase of 2.7 in branch count (95% CI 2.3-3.1; P<.01) among the 30 patients. 
The model achieved an AUC of 0.92 (95% CI 0.90-0.94) for predicting lesion progression, outperforming traditional methods like ResNet50 (AUC=0.85; P<.01). Predictions were consistent with clinical observations in 92.5% of cases (24/26). The application of deep learning algorithms in examining treatment response, vascular reconstruction, and disease progression in ONFH presents notable advantages. This study offers clinicians a precise tool for disease assessment and highlights the significance of using advanced technological solutions in health care practice.
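The reported AUC of 0.92 can be read through the Mann-Whitney formulation: AUC is the probability that a randomly chosen positive case receives a higher model score than a randomly chosen negative case, with ties counting half. A minimal pure-Python sketch (illustrative, not the study's evaluation code):

```python
def roc_auc(scores, labels):
    """AUC via the Mann-Whitney U formulation.

    scores: model scores; labels: 1 for positive cases, 0 for negative.
    Counts the fraction of positive/negative pairs where the positive
    outscores the negative; ties contribute 0.5.
    """
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```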

Janse MHA, Janssen LM, Wolters-van der Ben EJM, Moman MR, Viergever MA, van Diest PJ, Gilhuijs KGA

pubmed logopapers · Aug 6 2025
This study aimed to evaluate the potential additional value of deep radiomics for assessing residual cancer burden (RCB) in locally advanced breast cancer after neoadjuvant chemotherapy (NAC) but before surgery, compared to standard predictors: tumor volume and subtype. This retrospective study used a 105-patient single-institution training set and a 41-patient external test set from three institutions in the LIMA trial. DCE-MRI was performed before and after NAC, and RCB was determined post-surgery. Three networks (nnU-Net, Attention U-Net, and a vector-quantized encoder-decoder) were trained for tumor segmentation. For each network, deep features were extracted from the bottleneck layer and used to train random forest regression models to predict the RCB score. Models were compared to (1) a model trained on tumor volume and (2) a model combining tumor volume and subtype. The potential complementary performance of combining deep radiomics with a clinical-radiological model was also assessed. From the predicted RCB score, three metrics were calculated: area under the curve (AUC) for categories RCB-0/RCB-I versus RCB-II/III, AUC for pathological complete response (pCR) versus non-pCR, and Spearman's correlation. Deep radiomics models had an AUC between 0.68 and 0.74 for pCR and between 0.68 and 0.79 for RCB, while the volume-only model had an AUC of 0.74 and 0.70 for pCR and RCB, respectively. Spearman's correlation varied from 0.45-0.51 (deep radiomics) to 0.53 (combined model). No statistical difference between models was observed. Segmentation network-derived deep radiomics contain similar information to tumor volume and subtype for inferring pCR and RCB after NAC, but do not complement standard clinical predictors in the LIMA trial.
Question: It is unknown if and which deep radiomics approach is most suitable to extract relevant features to assess neoadjuvant chemotherapy response on breast MRI.
Findings: Radiomic features extracted from deep-learning networks yield similar results in predicting neoadjuvant chemotherapy response as tumor volume and subtype in the LIMA study. However, they do not provide complementary information.
Clinical relevance: For predicting response to neoadjuvant chemotherapy in breast cancer patients, tumor volume on MRI and subtype remain important predictors of treatment outcome; deep radiomics might be an alternative when determining tumor volume and/or subtype is not feasible.
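Spearman's correlation, used above to compare predicted and observed RCB scores, is the Pearson correlation computed on ranks, with average ranks assigned to ties. A self-contained sketch (illustrative, not the study's implementation):

```python
def spearman(x, y):
    """Spearman's rank correlation: Pearson correlation of the ranks,
    with average 1-based ranks assigned to tied values."""
    def ranks(v):
        order = sorted(range(len(v)), key=v.__getitem__)
        r = [0.0] * len(v)
        i = 0
        while i < len(order):
            j = i
            # Extend j over a run of tied values in sorted order.
            while j + 1 < len(order) and v[order[j + 1]] == v[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1  # average 1-based rank over the tied run
            for k in range(i, j + 1):
                r[order[k]] = avg
            i = j + 1
        return r

    rx, ry = ranks(x), ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```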

Simon Baur, Alexandra Benova, Emilio Dolgener Cantú, Jackie Ma

arxiv logopreprint · Aug 6 2025
Deploying deep learning models in clinical practice often requires leveraging multiple data modalities, such as images, text, and structured data, to achieve robust and trustworthy decisions. However, not all modalities are always available at inference time. In this work, we propose multimodal privileged knowledge distillation (MMPKD), a training strategy that utilizes additional modalities available solely during training to guide a unimodal vision model. Specifically, we used a text-based teacher model for chest radiographs (MIMIC-CXR) and a tabular metadata-based teacher model for mammography (CBIS-DDSM) to distill knowledge into a vision transformer student model. We show that MMPKD can improve the resulting attention maps' zero-shot capability of localizing ROIs in input images, although, contrary to what prior research suggested, this effect does not generalize across domains.
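The abstract does not specify MMPKD's exact objective, but knowledge distillation typically minimizes a divergence between temperature-softened teacher and student output distributions. This Hinton-style sketch is an assumption about the loss form, not the paper's exact objective:

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-softened softmax over a list of logits."""
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened distributions,
    scaled by T^2 as in classic (Hinton-style) distillation."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return temperature ** 2 * kl
```

In the privileged setting, the teacher sees the extra modality (text or tabular metadata) during training, while the student sees only images; at inference, only the student runs.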

Wohlfahrt P, Pazderník M, Marhefková N, Roland R, Adla T, Earls J, Haluzík M, Dubský M

pubmed logopapers · Aug 6 2025
Objective: Cardiovascular risk stratification based on traditional risk factors lacks precision at the individual level. While coronary artery calcium (CAC) scoring enhances risk prediction by detecting calcified atherosclerotic plaques, it may underestimate risk in individuals with noncalcified plaques, a pattern common in younger type 1 diabetes (T1D) patients. Understanding the prevalence of noncalcified atherosclerosis in T1D is crucial for developing more effective screening strategies. Therefore, this study aimed to assess the burden of clinically significant atherosclerosis in T1D patients with CAC <100 using artificial intelligence (AI)-guided quantitative coronary computed tomographic angiography (AI-QCT). Methods: This study enrolled T1D patients aged ≥30 years with disease duration ≥10 years and no manifest or symptomatic atherosclerotic cardiovascular disease (ASCVD). CAC and carotid ultrasound were assessed in all participants. AI-QCT was performed in patients with CAC 0 and at least one plaque in the carotid arteries, or those with CAC 1-99. Results: Among the 167 participants (mean age 52 ± 10 years; 44% women; T1D duration 29 ± 11 years), 93 (56%) had CAC = 0, 46 (28%) had CAC 1-99, 8 (5%) had CAC 100-299, and 20 (12%) had CAC ≥300. AI-QCT was performed in a subset of 52 patients. Only 11 (21%) had no evidence of coronary artery disease. Significant coronary stenosis was identified in 17% of patients, and 30 (73%) presented with at least one high-risk plaque. Compared with CAC-based risk categories, AI-QCT reclassified 58% of patients, and 21% compared with the STENO1 risk categories. There was only fair agreement between AI-QCT and CAC (κ = 0.25), and slight agreement between AI-QCT and STENO1 risk categories (κ = 0.02). Conclusion: AI-QCT may reveal subclinical atherosclerotic burden and high-risk features that remain undetected by traditional risk models or CAC.
These findings challenge the assumption that a low CAC score equates to a low cardiovascular risk in T1D.
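The "fair" (κ = 0.25) and "slight" (κ = 0.02) agreement figures refer to Cohen's kappa, which corrects raw categorical agreement for the agreement expected by chance. A minimal sketch (illustrative, not the study's statistics code):

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa: chance-corrected agreement between two categorical
    classifications of the same items (e.g. AI-QCT vs. CAC risk categories)."""
    assert len(a) == len(b) and a
    n = len(a)
    # Observed agreement.
    po = sum(x == y for x, y in zip(a, b)) / n
    # Chance agreement from the two marginal category distributions.
    ca, cb = Counter(a), Counter(b)
    pe = sum(ca[c] * cb[c] for c in set(a) | set(b)) / (n * n)
    return (po - pe) / (1 - pe)
```

Values near 0 mean agreement no better than chance, which is why κ = 0.02 against STENO1 categories indicates the two schemes classify patients almost independently.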