Inter-slice Complementarity Enhanced Ring Artifact Removal using Central Region Reinforced Neural Network.

Zhang Y, Liu G, Chen Z, Huang Z, Kan S, Ji X, Luo S, Zhu S, Yang J, Chen Y

PubMed · Sep 30, 2025
In computed tomography (CT), non-uniform detector responses often lead to ring artifacts in reconstructed images. For conventional energy-integrating detectors (EIDs), such artifacts can be effectively addressed through dead-pixel correction and flat-dark field calibration. However, the response characteristics of photon-counting detectors (PCDs) are more complex, and standard calibration procedures only partially mitigate ring artifacts. Consequently, developing high-performance ring artifact removal algorithms is essential for PCD-based CT systems. To this end, we propose the Inter-slice Complementarity Enhanced Ring Artifact Removal (ICE-RAR) algorithm. Since artifact removal in the central region is particularly challenging, ICE-RAR employs a dual-branch neural network that simultaneously performs global artifact removal and enhances restoration of the central region. Moreover, recognizing that the detector response is also non-uniform in the vertical direction, ICE-RAR extracts and exploits inter-slice complementarity to improve artifact elimination and image restoration. Experiments on simulated data and two real datasets acquired from PCD-based CT systems demonstrate the effectiveness of ICE-RAR in reducing ring artifacts while preserving structural details. More importantly, because system-specific characteristics are incorporated into the data simulation process, models trained on the simulated data can be applied directly to unseen real data from the target PCD-based CT system, demonstrating ICE-RAR's potential to address ring artifact removal in practical CT systems. The implementation is publicly available at https://github.com/DarkBreakerZero/ICE-RAR.
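
As a rough illustration of the dual-branch idea, here is a minimal PyTorch sketch in which a shared encoder feeds a global artifact-removal branch plus a second branch that adds a residual correction over the central crop. The module sizes, fusion scheme, and crop size are illustrative assumptions, not the authors' implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualBranchRAR(nn.Module):
    """Illustrative dual-branch network: one branch suppresses ring
    artifacts globally, the other reinforces the hard central region."""
    def __init__(self, ch=32, center=64):
        super().__init__()
        self.center = center
        self.encoder = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
        self.global_branch = nn.Conv2d(ch, 1, 3, padding=1)
        self.central_branch = nn.Conv2d(ch, 1, 3, padding=1)

    def forward(self, x):
        feats = self.encoder(x)
        out = self.global_branch(feats)          # whole-image artifact removal
        h, w, c = x.shape[-2], x.shape[-1], self.center
        t, l = (h - c) // 2, (w - c) // 2
        # residual refinement restricted to the central crop
        central = self.central_branch(feats[..., t:t + c, l:l + c])
        return out + F.pad(central, (l, w - l - c, t, h - t - c))

x = torch.randn(2, 1, 256, 256)                  # a mini-batch of CT slices
print(DualBranchRAR()(x).shape)                  # torch.Size([2, 1, 256, 256])
```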

Mixed prototype correction for causal inference in medical image classification.

Hong ZL, Yang JC, Peng XR, Wu SS

PubMed · Sep 29, 2025
The heterogeneity of medical images poses significant challenges to accurate disease diagnosis. To tackle this issue, the impact of such heterogeneity on the causal relationship between image features and diagnostic labels should be incorporated into model design, which, however, remains underexplored. In this paper, we propose a mixed prototype correction for causal inference (MPCCI) method, aimed at mitigating the impact of unseen confounding factors on the causal relationships between medical images and disease labels, so as to enhance the diagnostic accuracy of deep learning models. MPCCI comprises a causal inference component based on front-door adjustment and an adaptive training strategy. The causal inference component employs a multi-view feature extraction (MVFE) module to establish mediators and a mixed prototype correction (MPC) module to execute causal interventions. Moreover, the adaptive training strategy incorporates both information purity and maturity metrics to maintain stable model training. Experimental evaluations on four medical image datasets, encompassing CT and ultrasound modalities, demonstrate the superior diagnostic accuracy and reliability of the proposed MPCCI. The code will be available at https://github.com/Yajie-Zhang/MPCCI.
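
For intuition only, here is a hedged sketch of what a prototype-based correction step could look like: each feature is mixed with a similarity-weighted combination of class prototypes, loosely mirroring the front-door idea of averaging over mediator values. The function name, mixing rule, and hyper-parameters are assumptions, not the MPCCI code:

```python
import torch
import torch.nn.functional as F

def mixed_prototype_correction(feats, prototypes, lam=0.5, tau=0.1):
    """Mix each feature with a similarity-weighted combination of class
    prototypes; `lam` and `tau` are illustrative hyper-parameters."""
    f = F.normalize(feats, dim=1)                  # (B, D) sample features
    p = F.normalize(prototypes, dim=1)             # (C, D) class prototypes
    weights = torch.softmax(f @ p.t() / tau, 1)    # similarity over classes
    mixed = weights @ prototypes                   # expected prototype per sample
    return lam * feats + (1 - lam) * mixed

feats = torch.randn(8, 128)     # batch of image features
protos = torch.randn(4, 128)    # one prototype per class (e.g., class means)
print(mixed_prototype_correction(feats, protos).shape)  # torch.Size([8, 128])
```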

Democratizing AI in Healthcare with Open Medical Inference (OMI): Protocols, Data Exchange, and AI Integration.

Pelka O, Sigle S, Werner P, Schweizer ST, Iancu A, Scherer L, Kamzol NA, Eil JH, Apfelbacher T, Seletkov D, Susetzky T, May MS, Bucher AM, Fegeler C, Boeker M, Braren R, Prokosch HU, Nensa F

PubMed · Sep 29, 2025
The integration of artificial intelligence (AI) into healthcare is transforming clinical decision-making, patient outcomes, and workflows. AI inference, applying trained models to new data, is central to this evolution, with cloud-based infrastructures enabling scalable AI deployment. The Open Medical Inference (OMI) platform democratizes AI access through open protocols and standardized data formats for seamless, interoperable healthcare data exchange. By integrating standards like FHIR and DICOMweb, OMI ensures interoperability between healthcare institutions and AI services while fostering ethical AI use through a governance framework addressing privacy, transparency, and fairness.

OMI's implementation is structured into work packages, each addressing technical and ethical aspects. These include expanding the Medical Informatics Initiative (MII) Core Dataset for medical imaging, developing infrastructure for AI inference, and creating an open-source DICOMweb adapter for legacy systems. Standardized data formats ensure interoperability, while the AI Governance Framework promotes trust and responsible AI use.

The project aims to establish an interoperable AI network across healthcare institutions, connecting existing infrastructures and AI services to enhance clinical outcomes.

· OMI develops open protocols and standardized data formats for seamless healthcare data exchange.
· Integration with FHIR and DICOMweb ensures interoperability between healthcare systems and AI services.
· A governance framework addresses privacy, transparency, and fairness in AI usage.
· Work packages focus on expanding datasets, creating infrastructure, and enabling legacy system integration.
· The project aims to create a scalable, secure, and interoperable AI network in healthcare.
· Pelka O, Sigle S, Werner P et al. Democratizing AI in Healthcare with Open Medical Inference (OMI): Protocols, Data Exchange, and AI Integration. Rofo 2025; DOI 10.1055/a-2651-6653.
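
To make the interoperability claim concrete, here is a minimal QIDO-RS (DICOMweb search) query in Python; the base URL is a hypothetical placeholder, not an OMI endpoint:

```python
import requests

# Hypothetical DICOMweb endpoint; an actual OMI adapter URL will differ.
BASE = "https://pacs.example.org/dicom-web"

def find_ct_studies(patient_id: str):
    """Query a patient's CT studies via QIDO-RS (DICOMweb search)."""
    resp = requests.get(
        f"{BASE}/studies",
        params={"PatientID": patient_id, "ModalitiesInStudy": "CT"},
        headers={"Accept": "application/dicom+json"},
        timeout=30,
    )
    resp.raise_for_status()
    # Each result is a DICOM JSON dataset keyed by hex tag, e.g.
    # "0020000D" (StudyInstanceUID) with values under "Value".
    return [s["0020000D"]["Value"][0] for s in resp.json()]

print(find_ct_studies("PAT-0001"))
```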

Survey of AI-Powered Approaches for Osteoporosis Diagnosis in Medical Imaging

Abdul Rahman, Bumshik Lee

arXiv preprint · Sep 29, 2025
Osteoporosis silently erodes skeletal integrity worldwide; however, early detection through imaging can prevent most fragility fractures. Artificial intelligence (AI) methods now mine routine Dual-energy X-ray Absorptiometry (DXA), X-ray, Computed Tomography (CT), and Magnetic Resonance Imaging (MRI) scans for subtle, clinically actionable markers, but the literature is fragmented. This survey unifies the field through a tri-axial framework that couples imaging modalities with clinical tasks and AI methodologies (classical machine learning, convolutional neural networks (CNNs), transformers, self-supervised learning, and explainable AI). Following a concise clinical and technical primer, we detail our Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA)-guided search strategy, introduce the taxonomy via a roadmap figure, and synthesize cross-study insights on data scarcity, external validation, and interpretability. By identifying emerging trends, open challenges, and actionable research directions, this review provides AI scientists, medical imaging researchers, and musculoskeletal clinicians with a clear compass to accelerate rigorous, patient-centered innovation in osteoporosis care. The project page of this survey can also be found on GitHub.

RAX-NET: Residual Attention Xception Network for Brain Ischemic Stroke Segmentation in T1-Weighted MRI

Mousavi, A.

medRxiv preprint · Sep 29, 2025
Ischemic stroke, caused by arterial occlusion, leads to hypoxia and cellular necrosis. Rapid and accurate delineation of ischemic lesions is essential for treatment planning but remains challenging due to variations in lesion size, shape, and appearance. We propose the Residual Attention Xception Network (RAX-NET), a deep learning architecture that integrates residual attention connections with Xception for three-dimensional magnetic resonance imaging lesion segmentation. The framework includes three stages: (i) decomposition of three-dimensional scans into axial, sagittal, and coronal planes, (ii) independent model training on each plane, and (iii) voxel-wise majority voting to generate the final three-dimensional segmentation. In addition, we introduce a variant of the focal Tversky loss designed to mitigate class imbalance and improve sensitivity to small or irregular lesion boundaries. Experiments on the ATLAS v2.0 dataset with five-fold cross-validation demonstrate that RAX-NET achieves a Dice coefficient of 0.61, precision of 0.68, and recall of 0.63. These results surpass baseline models while requiring fewer trainable parameters and enabling faster inference, highlighting both accuracy and efficiency. Source code: https://github.com/liamirpy/RAX-NET_ISCHEMIC_STROKE_SEGMENTATION.
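
For reference, the standard focal Tversky loss (Abraham & Khan, 2018) that the paper's variant builds on can be written in a few lines of PyTorch; the exact modifications introduced by RAX-NET are not reproduced here:

```python
import torch

def focal_tversky_loss(pred, target, alpha=0.7, beta=0.3, gamma=0.75, eps=1e-6):
    """Standard focal Tversky loss: TI = TP / (TP + a*FN + b*FP),
    loss = (1 - TI)^gamma. `pred` holds foreground probabilities,
    `target` binary masks; hyper-parameters follow common defaults."""
    p, t = pred.flatten(1), target.flatten(1)
    tp = (p * t).sum(1)
    fn = ((1 - p) * t).sum(1)          # missed lesion voxels, weighted by alpha
    fp = (p * (1 - t)).sum(1)          # false alarms, weighted by beta
    tversky = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
    return ((1 - tversky) ** gamma).mean()

pred = torch.rand(2, 1, 64, 64)                      # sigmoid outputs
target = (torch.rand(2, 1, 64, 64) > 0.9).float()    # sparse lesion mask
print(focal_tversky_loss(pred, target))
```

Setting alpha > beta penalizes false negatives more than false positives, which is why this family of losses suits small, under-represented lesions.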

Adversarial Versus Federated: An Adversarial Learning based Multi-Modality Cross-Domain Federated Medical Segmentation

You Zhou, Lijiang Chen, Shuchang Lyu, Guangxia Cui, Wenpei Bai, Zheng Zhou, Meng Li, Guangliang Cheng, Huiyu Zhou, Qi Zhao

arXiv preprint · Sep 28, 2025
Federated learning enables collaborative training of machine learning models among different clients while ensuring data privacy, and has emerged as the mainstream approach for breaking data silos in the healthcare domain. However, imbalanced medical resources, data corruption, or improper data preservation may lead to situations where different clients possess medical images of different modalities. This heterogeneity poses a significant challenge for cross-domain medical image segmentation within the federated learning framework. To address this challenge, we propose a new Federated Domain Adaptation (FedDA) segmentation training framework. Specifically, we introduce feature-level adversarial learning that aligns feature maps across clients through an embedded adversarial training mechanism. This design enhances the model's generalization across multiple domains and alleviates the negative impact of domain shift. Comprehensive experiments on three medical image datasets demonstrate that FedDA achieves effective cross-domain federated aggregation, endowing single-modality clients with cross-modality processing capabilities, and consistently delivers robust performance compared to state-of-the-art federated aggregation algorithms in both objective and subjective assessments. Our code is available at https://github.com/GGbond-study/FedDA.
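
A common way to realize feature-level adversarial alignment is a gradient-reversal layer in front of a domain critic. The sketch below is a centralized caricature of that idea; it ignores the federated mechanics (what is actually exchanged between clients in FedDA) and uses illustrative dimensions:

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; reversed, scaled gradient in the
    backward pass, so features are trained to fool the domain critic."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad):
        return -ctx.lam * grad, None

domain_critic = nn.Sequential(      # predicts which client a feature came from
    nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 4))

def adversarial_alignment_loss(feats, client_ids, lam=1.0):
    logits = domain_critic(GradReverse.apply(feats, lam))
    return nn.functional.cross_entropy(logits, client_ids)

feats = torch.randn(16, 128, requires_grad=True)   # pooled client feature maps
ids = torch.randint(0, 4, (16,))                   # which of 4 clients
adversarial_alignment_loss(feats, ids).backward()
print(feats.grad.shape)                            # torch.Size([16, 128])
```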

FedAgentBench: Towards Automating Real-world Federated Medical Image Analysis with Server-Client LLM Agents

Pramit Saha, Joshua Strong, Divyanshu Mishra, Cheng Ouyang, J. Alison Noble

arXiv preprint · Sep 28, 2025
Federated learning (FL) allows collaborative model training across healthcare sites without sharing sensitive patient data. However, real-world FL deployment is often hindered by complex operational challenges that demand substantial human effort. These include: (a) selecting appropriate clients (hospitals), (b) coordinating between the central server and clients, (c) client-level data pre-processing, (d) harmonizing non-standardized data and labels across clients, and (e) selecting FL algorithms based on user instructions and cross-client data characteristics. Existing FL works largely overlook these practical orchestration challenges. These operational bottlenecks motivate the need for autonomous, agent-driven FL systems, in which intelligent agents at each hospital client and a central server agent collaboratively manage FL setup and model training with minimal human intervention. To this end, we first introduce an agent-driven FL framework that captures the key phases of real-world FL workflows, from client selection to training completion, and a benchmark dubbed FedAgentBench that evaluates the ability of LLM agents to autonomously coordinate healthcare FL. Our framework incorporates 40 FL algorithms, each tailored to address diverse task-specific requirements and cross-client characteristics. Furthermore, we introduce a diverse set of complex tasks across 201 carefully curated datasets, simulating 6 modality-specific real-world healthcare environments, viz., Dermatoscopy, Ultrasound, Fundus, Histopathology, MRI, and X-Ray. We assess the agentic performance of 14 open-source and 10 proprietary LLMs spanning small, medium, and large model scales. While some agent cores such as GPT-4.1 and DeepSeek V3 can automate various stages of the FL pipeline, our results reveal that more complex, interdependent tasks based on implicit goals remain challenging for even the strongest models.
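
Among the 40 supported algorithms, the canonical aggregation step a server agent would orchestrate is FedAvg. A minimal sketch with toy weights (names illustrative, not the benchmark's API):

```python
from typing import Dict, List
import torch

def fedavg(client_states: List[Dict[str, torch.Tensor]],
           client_sizes: List[int]) -> Dict[str, torch.Tensor]:
    """Weighted average of client model weights by local dataset size --
    the canonical FedAvg step a server agent runs each round."""
    total = sum(client_sizes)
    return {
        k: sum(s[k] * (n / total) for s, n in zip(client_states, client_sizes))
        for k in client_states[0].keys()
    }

# Two toy clients sharing a single-parameter model:
c1 = {"w": torch.ones(3)}
c2 = {"w": torch.zeros(3)}
print(fedavg([c1, c2], client_sizes=[30, 10]))   # {'w': tensor([0.75, 0.75, 0.75])}
```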

Benchmarking DINOv3 for Multi-Task Stroke Analysis on Non-Contrast CT

Donghao Zhang, Yimin Chen, Kauê TN Duarte, Taha Aslan, Mohamed AlShamrani, Brij Karmur, Yan Wan, Shengcai Chen, Bo Hu, Bijoy K Menon, Wu Qiu

arXiv preprint · Sep 27, 2025
Non-contrast computed tomography (NCCT) is essential for rapid stroke diagnosis but is limited by low image contrast and signal-to-noise ratio. We address this challenge by leveraging DINOv3, a state-of-the-art self-supervised vision transformer, to generate powerful feature representations for a comprehensive set of stroke analysis tasks. Our evaluation encompasses infarct and hemorrhage segmentation, anomaly classification (normal vs. stroke and normal vs. infarct vs. hemorrhage), hemorrhage subtype classification (EDH, SDH, SAH, IPH, IVH), and dichotomized ASPECTS classification (<=6 vs. >6) on multiple public and private datasets. This study establishes strong benchmarks for these tasks and demonstrates the potential of advanced self-supervised models to improve automated stroke diagnosis from NCCT, providing a clear analysis of both the advantages and current constraints of the approach. The code is available at https://github.com/Zzz0251/DINOv3-stroke.
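
The common recipe for benchmarking a frozen self-supervised backbone is a linear probe. The sketch below uses the DINOv2 hub entry point as a known-good stand-in, since the exact loading call for DINOv3 may differ from the paper's setup; dimensions and class names are assumptions:

```python
import torch
import torch.nn as nn

# Stand-in loading call: DINOv3's official entry point may differ.
backbone = torch.hub.load("facebookresearch/dinov2", "dinov2_vitb14")
backbone.eval()
for p in backbone.parameters():
    p.requires_grad = False                  # frozen feature extractor

probe = nn.Linear(768, 3)                    # e.g., normal / infarct / hemorrhage

def classify(slices: torch.Tensor) -> torch.Tensor:
    """Linear probe over frozen self-supervised features -- the usual
    recipe for benchmarking foundation models on downstream tasks."""
    with torch.no_grad():
        feats = backbone(slices)             # (B, 768) CLS-token features
    return probe(feats)

x = torch.randn(4, 3, 224, 224)              # NCCT slices replicated to 3 channels
print(classify(x).shape)                     # torch.Size([4, 3])
```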

Enhanced CoAtNet based hybrid deep learning architecture for automated tuberculosis detection in human chest X-rays.

Siddharth G, Ambekar A, Jayakumar N

PubMed · Sep 26, 2025
Tuberculosis (TB) is a serious infectious disease that remains a global health challenge. While chest X-rays (CXRs) are widely used for TB detection, manual interpretation can be subjective and time-consuming. Automated classification of CXRs into TB and non-TB cases can significantly support healthcare professionals in timely and accurate diagnosis. This paper introduces a hybrid deep learning approach for classifying CXR images. The solution is based on the CoAtNet framework, which combines the strengths of Convolutional Neural Networks (CNNs) and Vision Transformers (ViTs). The model is pre-trained on the large-scale ImageNet dataset to ensure robust generalization across diverse images. The evaluation is conducted on the IN-CXR tuberculosis dataset from ICMR-NIRT, which contains a comprehensive collection of CXR images of both normal and abnormal categories. The hybrid model achieves a binary classification accuracy of 86.39% and an ROC-AUC score of 93.79%, outperforming tested baseline models that rely exclusively on either CNNs or ViTs when trained on this dataset. Furthermore, the integration of Local Interpretable Model-agnostic Explanations (LIME) enhances the interpretability of the model's predictions. This combination of reliable performance and transparent, interpretable results strengthens the model's role in AI-driven medical imaging research. Code will be made available upon request.
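
For orientation, applying LIME to an image classifier follows a standard pattern with the lime package; the classifier below is a random stand-in for the actual CoAtNet model:

```python
import numpy as np
from lime import lime_image

def classifier_fn(batch: np.ndarray) -> np.ndarray:
    """Stand-in for the trained model: takes (N, H, W, 3) images and
    returns (N, 2) probabilities for [non-TB, TB]."""
    probs = np.random.rand(len(batch), 2)    # replace with real model inference
    return probs / probs.sum(1, keepdims=True)

explainer = lime_image.LimeImageExplainer()
cxr = np.random.rand(224, 224, 3)            # a chest X-ray, replicated to RGB

explanation = explainer.explain_instance(
    cxr, classifier_fn, top_labels=1, num_samples=1000)
img, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5)
print(mask.shape)                            # superpixels driving the prediction
```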

A novel open-source ultrasound dataset with deep learning benchmarks for spinal cord injury localization and anatomical segmentation.

Kumar A, Kotkar K, Jiang K, Bhimreddy M, Davidar D, Weber-Levine C, Krishnan S, Kerensky MJ, Liang R, Leadingham KK, Routkevitch D, Hersh AM, Ashayeri K, Tyler B, Suk I, Son J, Theodore N, Thakor N, Manbachi A

PubMed · Sep 26, 2025
While deep learning has catalyzed breakthroughs across numerous domains, its broader adoption in clinical settings is inhibited by the costly and time-intensive nature of data acquisition and annotation. To further facilitate medical machine learning, we present an ultrasound dataset of 10,223 brightness-mode (B-mode) images consisting of sagittal slices of porcine spinal cords (N = 25) before and after a contusion injury. We additionally benchmark several state-of-the-art object detection algorithms for localizing the site of injury and semantic segmentation models for labeling the anatomy, enabling comparison and the creation of task-specific architectures. Finally, we evaluate the zero-shot generalization capabilities of the segmentation models on human spinal cord ultrasound images to determine whether training on our porcine dataset is sufficient for accurately interpreting human data. Our results show that the YOLOv8 detection model outperforms all evaluated models for injury localization, achieving a mean Average Precision (mAP50-95) score of 0.606. Segmentation metrics indicate that the DeepLabv3 model achieves the highest accuracy on unseen porcine anatomy, with a mean Dice score of 0.587, while SAMed achieves the highest mean Dice score when generalizing to human anatomy (0.445). To the best of our knowledge, this is the largest annotated dataset of spinal cord ultrasound images made publicly available to researchers and medical professionals, as well as the first public report of object detection and segmentation architectures for assessing anatomical markers in the spinal cord for methodology development and clinical applications.
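
The Dice scores reported above measure volumetric overlap between predicted and annotated masks; a minimal implementation for binary masks (toy masks, not the paper's data):

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray, eps=1e-7) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|), the overlap metric reported above."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return (2 * inter + eps) / (pred.sum() + truth.sum() + eps)

pred = np.zeros((128, 128)); pred[30:70, 30:70] = 1    # predicted cord mask
truth = np.zeros((128, 128)); truth[40:80, 40:80] = 1  # annotated mask
print(dice_score(pred, truth))                         # ≈ 0.5625 overlap
```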