
U-DFA: A Unified DINOv2-Unet with Dual Fusion Attention for Multi-Dataset Medical Segmentation

Zulkaif Sajjad, Furqan Shaukat, Junaid Mir

arXiv preprint · Oct 1, 2025
Accurate medical image segmentation plays a crucial role in diagnosis and is one of the most essential tasks in the diagnostic pipeline. CNN-based models, despite their extensive use, suffer from local receptive fields and fail to capture global context. A common approach combines CNNs with transformers to bridge this gap, but fails to effectively fuse the local and global features. Recently emerged VLMs and foundation models have been adapted for downstream medical imaging tasks; however, they suffer from an inherent domain gap and high computational cost. To this end, we propose U-DFA, a unified DINOv2-Unet encoder-decoder architecture that integrates a novel Local-Global Fusion Adapter (LGFA) to enhance segmentation performance. LGFA modules inject spatial features from a CNN-based Spatial Pattern Adapter (SPA) module into frozen DINOv2 blocks at multiple stages, enabling effective fusion of high-level semantic and spatial features. Our method achieves state-of-the-art performance on the Synapse and ACDC datasets with only 33% of the trainable model parameters. These results demonstrate that U-DFA is a robust and scalable framework for medical image segmentation across multiple modalities.
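
The abstract does not spell out the LGFA internals, but the injection idea can be sketched. Below is a minimal, hypothetical PyTorch rendering in which a small CNN adapter (standing in for SPA) produces spatial features that are gated and added to frozen DINOv2 tokens; all module designs, dimensions, and the additive scheme are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn

class SpatialPatternAdapter(nn.Module):
    """Hypothetical CNN branch producing local spatial features."""
    def __init__(self, in_ch: int, dim: int):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, dim, 3, padding=1), nn.GELU(),
            nn.Conv2d(dim, dim, 3, padding=1),
        )

    def forward(self, x):                        # x: (B, C, H, W)
        return self.conv(x)                      # (B, dim, H, W)

class LocalGlobalFusionAdapter(nn.Module):
    """Hypothetical adapter: injects CNN features into frozen ViT tokens."""
    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Linear(dim, dim)
        self.gate = nn.Parameter(torch.zeros(1))  # zero-init: starts as identity

    def forward(self, tokens, spatial):           # tokens: (B, N, dim)
        local = spatial.flatten(2).transpose(1, 2)    # (B, H*W, dim)
        return tokens + self.gate * self.proj(local)  # fused tokens

# Toy usage: a 16x16 token grid with embedding dim 768 (DINOv2 ViT-B-like).
tokens = torch.randn(2, 256, 768)
spa = SpatialPatternAdapter(in_ch=3, dim=768)
lgfa = LocalGlobalFusionAdapter(dim=768)
fused = lgfa(tokens, spa(torch.randn(2, 3, 16, 16)))
print(fused.shape)  # torch.Size([2, 256, 768])
```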

New insights into pathogenesis, diagnosis and management of cardiac allograft vasculopathy.

David P, Roquero P, Coutance G

PubMed · Oct 1, 2025
Despite major advances in short-term outcomes after heart transplantation, long-term survival remains limited by chronic allograft dysfunction, with cardiac allograft vasculopathy (CAV) being the leading cause of late graft failure and an important cause of all-cause mortality. CAV is a unique and multifactorial form of transplant coronary vasculopathy, driven by a complex interplay of alloimmune responses, innate immune activation, and traditional cardiovascular risk factors. Recent insights from deep profiling of human allograft tissue have revealed the key roles of locally sustained T- and B-cell-mediated inflammation, macrophage-natural killer cell interactions, and chronic immune activation within the graft. These discoveries challenge prior models of systemic immune monitoring and highlight the importance of spatially organized, intragraft immune processes. In parallel, the diagnostic landscape of CAV is rapidly evolving. High-resolution imaging techniques such as optical coherence tomography, and advanced non-invasive tools including coronary computed tomography angiography and positron emission tomography, not only enable earlier and more precise detection of disease but also redefine the usual landscape of CAV diagnosis. New methods for individualized risk stratification, including trajectory modeling and machine learning-enhanced biopsy analysis, are paving the way for more personalized surveillance strategies. While current management remains focused on prevention, novel therapeutic targets are emerging, informed by a deeper understanding of CAV immunopathogenesis. This review provides an up-to-date synthesis of recent advances in CAV, with a focus on pathophysiology, individualized risk assessment, diagnostic innovation, and therapeutic perspectives, underscoring a paradigm shift toward more precise and proactive care in heart transplant recipients.

Advances in Medical Image Segmentation: A Comprehensive Survey with a Focus on Lumbar Spine Applications

Ahmed Kabil, Ghada Khoriba, Mina Yousef, Essam A. Rashed

arXiv preprint · Oct 1, 2025
Medical Image Segmentation (MIS) stands as a cornerstone in medical image analysis, playing a pivotal role in precise diagnostics, treatment planning, and monitoring of various medical conditions. This paper presents a comprehensive and systematic survey of MIS methodologies, bridging the gap between traditional image processing techniques and modern deep learning approaches. The survey encompasses thresholding, edge detection, region-based segmentation, clustering algorithms, and model-based techniques while also delving into state-of-the-art deep learning architectures such as Convolutional Neural Networks (CNNs), Fully Convolutional Networks (FCNs), and the widely adopted U-Net and its variants. Moreover, the integration of attention mechanisms, semi-supervised learning, generative adversarial networks (GANs), and Transformer-based models is thoroughly explored. In addition to covering established methods, this survey highlights emerging trends, including hybrid architectures, cross-modality learning, federated and distributed learning frameworks, and active learning strategies, which aim to address challenges such as limited labeled datasets, computational complexity, and model generalizability across diverse imaging modalities. Furthermore, a specialized case study on lumbar spine segmentation is presented, offering insights into the challenges and advancements in this relatively underexplored anatomical region. Despite significant progress in the field, critical challenges persist, including dataset bias, domain adaptation, interpretability of deep learning models, and integration into real-world clinical workflows.
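
As a concrete instance of the classical techniques the survey covers, here is a minimal Otsu thresholding example in Python, using skimage's sample camera image as a stand-in for a medical scan:

```python
from skimage import data
from skimage.filters import threshold_otsu

image = data.camera()                 # 8-bit grayscale sample image
t = threshold_otsu(image)             # intensity that maximizes
                                      # between-class variance
mask = image > t                      # binary segmentation mask
print(f"Otsu threshold: {t}, foreground fraction: {mask.mean():.2f}")
```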

LMOD+: A Comprehensive Multimodal Dataset and Benchmark for Developing and Evaluating Multimodal Large Language Models in Ophthalmology

Zhenyue Qin, Yang Liu, Yu Yin, Jinyu Ding, Haoran Zhang, Anran Li, Dylan Campbell, Xuansheng Wu, Ke Zou, Tiarnan D. L. Keenan, Emily Y. Chew, Zhiyong Lu, Yih-Chung Tham, Ninghao Liu, Xiuzhen Zhang, Qingyu Chen

arXiv preprint · Sep 30, 2025
Vision-threatening eye diseases pose a major global health burden, with timely diagnosis limited by workforce shortages and restricted access to specialized care. While multimodal large language models (MLLMs) show promise for medical image interpretation, advancing MLLMs for ophthalmology is hindered by the lack of comprehensive benchmark datasets suitable for evaluating generative models. We present a large-scale multimodal ophthalmology benchmark comprising 32,633 instances with multi-granular annotations across 12 common ophthalmic conditions and 5 imaging modalities. The dataset integrates imaging, anatomical structures, demographics, and free-text annotations, supporting anatomical structure recognition, disease screening, disease staging, and demographic prediction for bias evaluation. This work extends our preliminary LMOD benchmark with three major enhancements: (1) nearly 50% dataset expansion with substantial enlargement of color fundus photography; (2) broadened task coverage including binary disease diagnosis, multi-class diagnosis, severity classification with international grading standards, and demographic prediction; and (3) systematic evaluation of 24 state-of-the-art MLLMs. Our evaluations reveal both promise and limitations. Top-performing models achieved ~58% accuracy in disease screening under zero-shot settings, and performance remained suboptimal for challenging tasks like disease staging. We will publicly release the dataset, curation pipeline, and leaderboard to potentially advance ophthalmic AI applications and reduce the global burden of vision-threatening diseases.

Automated detection of bottom-of-sulcus dysplasia on magnetic resonance imaging-positron emission tomography in patients with drug-resistant focal epilepsy.

Macdonald-Laurs E, Warren AEL, Mito R, Genc S, Alexander B, Barton S, Yang JY, Francis P, Pardoe HR, Jackson G, Harvey AS

PubMed · Sep 30, 2025
Bottom-of-sulcus dysplasia (BOSD) is a diagnostically challenging subtype of focal cortical dysplasia, 60% being missed on magnetic resonance imaging (MRI). Automated MRI-based detection methods have been developed for focal cortical dysplasia, but not BOSD specifically, and few methods incorporate fluorodeoxyglucose positron emission tomography (FDG-PET) alongside MRI features. We report the development and performance of an automated BOSD detector using combined MRI + PET. The training set comprised 54 patients with focal epilepsy and BOSD. The test sets comprised 17 subsequently diagnosed patients with BOSD from the same center, and 12 published patients from a different center. Across training and test sets, 81% of patients had normal initial MRIs and most BOSDs were <1.5 cm³. In the training set, 12 features from T1-MRI, fluid-attenuated inversion recovery-MRI, and FDG-PET were evaluated to determine which features best distinguished dysplastic from normal-appearing cortex. Using the Multi-centre Epilepsy Lesion Detection group's machine-learning detection method with the addition of FDG-PET, neural network classifiers were then trained and tested on MRI + PET, MRI-only, and PET-only features. The proportion of patients whose BOSD was overlapped by the top output cluster, and the top five output clusters, were determined. Cortical and subcortical hypometabolism on FDG-PET was superior in discriminating dysplastic from normal-appearing cortex compared to MRI features. When the BOSD detector was trained on MRI + PET features, 87% of BOSDs were overlapped by one of the top five clusters (69% top cluster) in the training set, 94% in the prospective test set (88% top cluster), and 75% in the published test set (58% top cluster). Cluster overlap was generally lower when the detector was trained and tested on PET-only or MRI-only features. Detection of BOSD is possible using established MRI-based automated detection methods, supplemented with FDG-PET features and trained on a BOSD-specific cohort. In clinically appropriate patients with seemingly negative MRI, the detector could suggest MRI regions to scrutinize for possible BOSD.
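
To make the classification step concrete, a hedged sketch of vertex-wise lesion classification on combined MRI + PET features follows. The data are random stand-ins; the Multi-centre Epilepsy Lesion Detection (MELD) pipeline's actual surface-based features, architecture, and clustering are not reproduced here.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_vertices, n_features = 10_000, 12     # 12 features, as in the abstract
X = rng.normal(size=(n_vertices, n_features))   # e.g. thickness, FLAIR
y = rng.integers(0, 2, size=n_vertices)         # intensity, FDG-PET uptake...

clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=200,
                    random_state=0)
clf.fit(X, y)

# Per-vertex lesion probabilities would then be clustered on the
# cortical surface and ranked, with the top clusters reviewed.
probs = clf.predict_proba(X)[:, 1]
print(probs[:5])
```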

Enhancing Microscopic Image Quality With DiffusionFormer and Crow Search Optimization.

Patel SC, Kamath RN, Murthy TSN, Subash K, Avanija J, Sangeetha M

PubMed · Sep 30, 2025
Medical imaging plays a vital role in diagnosis, but noise in patient scans severely degrades image accuracy and quality. Denoising methods are important for increasing the clarity of these images, particularly in low-resource settings where advanced diagnostic tools are inaccessible. Pneumonia is a widespread disease that presents significant diagnostic challenges due to the high similarity between its various types and the lack of medical images for emerging variants. This study introduces a novel diffusion and Swin Transformer model with an optimized Crow Search algorithm to increase image quality and reliability. The technique is evaluated on four datasets: a brain tumor MRI dataset, chest X-ray images, chest CT-scan images, and BUSI. Preprocessing involves conversion to grayscale, resizing, and normalization of the medical image (MI) datasets; Gaussian noise is then added to create noisy inputs for the denoising task. The method combines a diffusion process, Swin Transformer networks, and the optimized Crow Search algorithm to improve medical image denoising. The diffusion process reduces noise by iteratively refining images, while the Swin Transformer captures complex image features that help differentiate noise from essential diagnostic information. The Crow Search optimization algorithm fine-tunes the hyperparameters, minimizing the fitness function for optimal denoising performance. Tested across the four datasets, the proposed method achieves a peak signal-to-noise ratio of 38.47 dB, a structural similarity index measure of 98.14%, a mean squared error of 0.55, and a feature similarity index measure of 0.980, outperforming existing techniques. These outcomes show that the proposed approach effectively enhances image quality, resulting in precise and dependable diagnoses.
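
The figures of merit reported above (PSNR, SSIM, MSE) can be computed with standard library calls. A minimal example on a synthetic clean/denoised pair (the paper's own images and model outputs are of course not reproduced here):

```python
import numpy as np
from skimage.metrics import (peak_signal_noise_ratio,
                             structural_similarity,
                             mean_squared_error)

rng = np.random.default_rng(0)
clean = rng.random((256, 256))
# Stand-in "denoised" output: clean image plus small residual error.
denoised = np.clip(clean + rng.normal(0, 0.01, clean.shape), 0, 1)

print("PSNR:", peak_signal_noise_ratio(clean, denoised, data_range=1.0))
print("SSIM:", structural_similarity(clean, denoised, data_range=1.0))
print("MSE :", mean_squared_error(clean, denoised))
```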

Causally Guided Gaussian Perturbations for Out-Of-Distribution Generalization in Medical Imaging

Haoran Pei, Yuguang Yang, Kexin Liu, Baochang Zhang

arXiv preprint · Sep 30, 2025
Out-of-distribution (OOD) generalization remains a central challenge in deploying deep learning models to real-world scenarios, particularly in domains such as biomedical images, where distribution shifts are both subtle and pervasive. While existing methods often pursue domain invariance through complex generative models or adversarial training, these approaches may overlook the underlying causal mechanisms of generalization. In this work, we propose Causally-Guided Gaussian Perturbations (CGP), a lightweight framework that enhances OOD generalization by injecting spatially varying noise into input images, guided by soft causal masks derived from Vision Transformers. By applying stronger perturbations to background regions and weaker ones to foreground areas, CGP encourages the model to rely on causally relevant features rather than spurious correlations. Experimental results on the challenging WILDS benchmark Camelyon17 demonstrate consistent performance gains over state-of-the-art OOD baselines, highlighting the potential of causal perturbation as a tool for reliable and interpretable generalization.
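
The described perturbation scheme reduces to a per-pixel noise scale interpolated by the causal mask. A minimal sketch follows; the noise scales and the random stand-in mask are assumptions (the paper derives its masks from Vision Transformer attention):

```python
import torch

def causally_guided_perturb(x, mask, sigma_fg=0.05, sigma_bg=0.30):
    """x: (B, C, H, W) images; mask: (B, 1, H, W) soft causal mask in [0, 1],
    where 1 = causal foreground. Returns perturbed images."""
    sigma = sigma_fg * mask + sigma_bg * (1.0 - mask)   # per-pixel noise std
    return x + sigma * torch.randn_like(x)

x = torch.rand(4, 3, 96, 96)
mask = torch.rand(4, 1, 96, 96)        # stand-in for a ViT-derived mask
x_aug = causally_guided_perturb(x, mask)
print(x_aug.shape)  # torch.Size([4, 3, 96, 96])
```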

petBrain: a new pipeline for amyloid, Tau tangles and neurodegeneration quantification using PET and MRI.

Coupé P, Mansencal B, Morandat F, Morell-Ortega S, Villain N, Manjón JV, Planche V

PubMed · Sep 30, 2025
Quantification of amyloid plaques (A), neurofibrillary tangles (T₂), and neurodegeneration (N) using PET and MRI is critical for Alzheimer's disease (AD) diagnosis and prognosis. Existing pipelines face limitations regarding processing time, tracer variability handling, and multimodal integration. We developed petBrain, a novel end-to-end processing pipeline for amyloid-PET, tau-PET, and structural MRI. It leverages deep learning-based segmentation, standardized biomarker quantification (Centiloid, CenTauR, HAVAs), and simultaneous estimation of the A, T₂, and N biomarkers. It is implemented as a web application, requiring no local computational infrastructure or specialized software knowledge. petBrain provides reliable, rapid quantification, with results comparable to existing pipelines for A and T₂ and strong concordance with data processed in the ADNI database. Staging and quantification of A/T₂/N by petBrain demonstrated good agreement with CSF/plasma biomarkers, clinical status, and cognitive performance. petBrain represents a powerful open platform for standardized AD biomarker analysis, facilitating clinical research applications.
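
For reference, the Centiloid scale petBrain reports maps tracer-specific amyloid SUVR onto a common 0-100 axis anchored at young controls (0) and typical AD (100). A minimal sketch of the standard linear transform (Klunk et al., 2015); the anchor values below are illustrative, not petBrain's actual calibration:

```python
def centiloid(suvr: float, suvr_yc: float = 1.0, suvr_ad: float = 2.0) -> float:
    """Linear scaling of an amyloid SUVR to Centiloid units,
    anchored at young-control (0 CL) and typical-AD (100 CL) SUVRs."""
    return 100.0 * (suvr - suvr_yc) / (suvr_ad - suvr_yc)

print(centiloid(1.4))   # 40.0 Centiloids with these illustrative anchors
```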

Deep transfer learning based feature fusion model with Bonobo optimization algorithm for enhanced brain tumor segmentation and classification through biomedical imaging.

Gurunathan P, Srinivasan PS, S R

PubMed · Sep 30, 2025
Brain tumours (BTs) are among the most aggressive diseases, leading to very short life expectancy, so early and prompt treatment is central to improving patients' quality of life. Biomedical imaging permits the non-invasive evaluation of disease through visual assessment, supporting better outcome prediction and therapeutic planning. Numerous imaging techniques, such as computed tomography (CT) and magnetic resonance imaging (MRI), are employed for evaluating cancer in the brain. The detection, segmentation, and extraction of diseased tumour regions from biomedical images are a primary concern, but they are tiresome and time-consuming tasks performed by clinical specialists, and their outcome depends entirely on the specialist's experience. The use of computer-aided technologies is therefore essential to overcoming these limitations. Recently, artificial intelligence (AI) models have proven very effective in improving medical image diagnosis. This paper proposes an Enhanced Brain Tumour Segmentation through Biomedical Imaging and Feature Model Fusion with Bonobo Optimizer (EBTS-BIFMFBO) model, which aims to improve the segmentation and classification of BTs using advanced models. Initially, the EBTS-BIFMFBO technique applies bilateral filter (BF)-based noise elimination and CLAHE-based contrast enhancement. The proposed model then segments tumour regions with DeepLabV3+ to identify them for accurate diagnosis. Fusion models, namely InceptionResNetV2, MobileNet, and DenseNet201, are employed for feature extraction, and a convolutional sparse autoencoder (CSAE) performs BT classification. Finally, the hyperparameters of the CSAE are selected by the bonobo optimizer (BO) method. Extensive experiments on the Figshare BT dataset highlight the performance of the EBTS-BIFMFBO approach, which attains a superior accuracy of 99.16% over existing models.
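
The two preprocessing steps named above can be sketched with OpenCV: bilateral filtering for edge-preserving noise removal, then CLAHE for contrast enhancement. Parameter values here are illustrative, not those used in the paper:

```python
import cv2
import numpy as np

img = (np.random.rand(256, 256) * 255).astype(np.uint8)  # stand-in MRI slice

# Edge-preserving denoising: pixel neighborhood diameter 9,
# color and spatial filter sigmas of 75.
denoised = cv2.bilateralFilter(img, d=9, sigmaColor=75, sigmaSpace=75)

# Contrast-limited adaptive histogram equalization.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(denoised)
print(enhanced.shape, enhanced.dtype)
```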

Cerebral perfusion imaging predicts levodopa-induced dyskinesia in Parkinsonian rat model.

Perron J, Krak S, Booth S, Zhang D, Ko JH

PubMed · Sep 30, 2025
Many Parkinson's disease (PD) patients develop a treatment-related complication called levodopa-induced dyskinesia (LID). Preventing the onset of LID is crucial to the management of PD, but the reasons why some patients develop LID are unclear, so the ability to prognosticate predisposition to LID would be valuable for investigating mitigation strategies. Thirty rats received 6-hydroxydopamine to induce Parkinsonism-like behaviors before treatment with levodopa (2 mg/kg) daily for 22 days; fourteen developed LID-like behaviors. Fluorodeoxyglucose PET, T₂-weighted MRI, and cerebral perfusion imaging were collected before treatment. Support vector machines were trained to classify prospective LID vs. non-LID animals from treatment-naïve baseline imaging. Volumetric perfusion imaging performed best overall, with 86.16% area under the curve, 86.67% accuracy, 92.86% sensitivity, and 81.25% specificity for classifying LID vs. non-LID animals in leave-one-out cross-validation. We have demonstrated proof of concept for imaging-based classification of LID susceptibility in a Parkinsonian rat model using perfusion-based imaging and a machine learning model.
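
The classification setup described above maps naturally onto scikit-learn: an SVM evaluated with leave-one-out cross-validation on per-animal baseline imaging features. The sketch below uses random stand-in data; feature dimensionality and kernel choice are assumptions:

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 50))          # 30 rats, 50 perfusion features
y = rng.integers(0, 2, size=30)        # 1 = developed LID-like behavior

model = make_pipeline(StandardScaler(), SVC(kernel="linear"))
# Each animal is held out once and predicted by a model fit on the rest.
pred = cross_val_predict(model, X, y, cv=LeaveOneOut())
print("LOO accuracy:", accuracy_score(y, pred))
```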