ÆMMamba: An Efficient Medical Segmentation Model With Edge Enhancement.

Dong X, Zhou B, Yin C, Liao IY, Jin Z, Xu Z, Pu B

PubMed · May 21, 2025
Medical image segmentation is critical for disease diagnosis, treatment planning, and prognosis assessment, yet the complexity and diversity of medical images pose significant challenges to accurate segmentation. While Convolutional Neural Networks capture local features and Vision Transformers excel at modeling global context, both struggle with efficient long-range dependency modeling. Inspired by the efficiency of Mamba's state space modeling, we propose ÆMMamba, a novel multi-scale feature extraction framework built on the Mamba backbone network. ÆMMamba integrates several innovative modules: the Efficient Fusion Bridge (EFB) module, which employs a bidirectional state-space model and attention mechanisms to fuse multi-scale features; the Edge-Aware Module (EAM), which enhances low-level edge representation using Sobel-based edge extraction; and the Boundary Sensitive Decoder (BSD), which leverages inverse attention and residual convolutional layers to handle cross-level complex boundaries. ÆMMamba achieves state-of-the-art performance across 8 medical segmentation datasets. On polyp segmentation datasets (Kvasir, ClinicDB, ColonDB, EndoScene, ETIS), it records the highest mDice and mIoU scores, outperforming methods such as MADGNet and Swin-UMamba, with a standout mDice of 72.22 on ETIS, the most challenging dataset in this domain. For lung and breast segmentation, ÆMMamba surpasses competitors such as H2Former and SwinUnet, achieving Dice scores of 84.24 on BUSI and 79.83 on COVID-19 Lung. On the LGG brain MRI dataset, ÆMMamba attains an mDice of 87.25 and an mIoU of 79.31, outperforming all compared methods. The source code will be released at https://github.com/xingbod/eMMamba.
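
The Edge-Aware Module is described as enhancing low-level features with Sobel-based edge extraction. Below is a minimal PyTorch sketch of such a module; the depthwise filtering, gradient-magnitude fusion, and residual addition are assumptions about one plausible realization, not the authors' exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SobelEdgeExtractor(nn.Module):
    """Fixed-kernel Sobel filtering applied per channel to highlight edges."""
    def __init__(self, channels: int):
        super().__init__()
        gx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
        gy = gx.t()
        # One non-learnable kernel per channel, applied depthwise.
        self.register_buffer("weight_x", gx.expand(channels, 1, 3, 3).clone())
        self.register_buffer("weight_y", gy.expand(channels, 1, 3, 3).clone())
        self.channels = channels

    def forward(self, x):
        ex = F.conv2d(x, self.weight_x, padding=1, groups=self.channels)
        ey = F.conv2d(x, self.weight_y, padding=1, groups=self.channels)
        edges = torch.sqrt(ex ** 2 + ey ** 2 + 1e-6)  # gradient magnitude
        return x + edges  # residual edge enhancement of the low-level features
```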

Right Ventricular Strain as a Key Feature in Interpretable Machine Learning for Identification of Takotsubo Syndrome: A Multicenter CMR-based Study.

Du Z, Hu H, Shen C, Mei J, Feng Y, Huang Y, Chen X, Guo X, Hu Z, Jiang L, Su Y, Biekan J, Lyv L, Chong T, Pan C, Liu K, Ji J, Lu C

PubMed · May 21, 2025
To develop an interpretable machine learning (ML) model based on cardiac magnetic resonance (CMR) multimodal parameters and clinical data to discriminate among Takotsubo syndrome (TTS), acute myocardial infarction (AMI), and acute myocarditis (AM), and to further assess the diagnostic value of right ventricular (RV) strain in TTS. This study analyzed CMR and clinical data of 130 patients from three centers. Key features were selected using least absolute shrinkage and selection operator (LASSO) regression and random forest. Data were split into a training cohort and an internal testing cohort (ITC) at a 7:3 ratio, with overfitting mitigated using leave-one-out cross-validation and bootstrap methods. Nine ML models were evaluated using standard performance metrics, with Shapley additive explanations (SHAP) analysis used for model interpretation. A total of 11 key features were identified. The extreme gradient boosting model showed the best performance, with an area under the curve (AUC) of 0.94 (95% CI: 0.85-0.97) in the ITC. Right ventricular basal circumferential strain (RVCS-basal) was the most important feature for identifying TTS. Its absolute value was significantly higher in TTS patients than in AMI and AM patients (-9.93%, -5.21%, and -6.18%, respectively; p < 0.001), with values above -6.55% contributing to a diagnosis of TTS. This study developed an interpretable ternary classification ML model for identifying TTS and used SHAP analysis to elucidate the significant value of RVCS-basal in TTS diagnosis. An online calculator (https://lsszxyy.shinyapps.io/XGboost/) based on this model was developed to provide immediate decision support for clinical use.
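
The pipeline described here combines an extreme gradient boosting classifier with SHAP-based interpretation. The sketch below shows the general pattern in Python; the file name, the "RVCS_basal" column, the 3-class label encoding (0 = TTS, 1 = AMI, 2 = AM), and the 7:3 split are illustrative assumptions, not the authors' exact setup.

```python
import pandas as pd
import shap
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

df = pd.read_csv("cmr_features.csv")              # hypothetical feature table
X, y = df.drop(columns=["label"]), df["label"]    # 11 selected CMR/clinical features
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

# Multiclass gradient boosting model (TTS vs AMI vs AM).
model = XGBClassifier(objective="multi:softprob", eval_metric="mlogloss")
model.fit(X_tr, y_tr)

# SHAP values show which features (e.g. RVCS_basal) drive each prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_te)
shap.summary_plot(shap_values, X_te, show=False)
```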

Machine Learning Derived Blood Input for Dynamic PET Images of Rat Heart

Shubhrangshu Debsarkar, Bijoy Kundu

arXiv preprint · May 21, 2025
Dynamic FDG PET imaging studies of n = 52 rats, comprising 26 control Wistar-Kyoto (WKY) rats and 26 experimental spontaneously hypertensive rats (SHR), were performed using Siemens microPET and Albira trimodal scanners longitudinally at 1, 2, 3, 5, 9, 12, and 18 months of age. A 15-parameter dual-output model correcting for spill-over contamination and partial volume effects, with peak-fitting cost functions, was developed for simultaneous estimation of the model-corrected blood input function (MCIF) and kinetic rate constants from dynamic FDG PET images of rat heart in vivo. Major drawbacks of this model are its dependence on manual annotations for the image-derived input function (IDIF) and manual determination of crucial model parameters to compute the MCIF. To overcome these limitations, we performed semi-automated segmentation and then formulated a Long Short-Term Memory (LSTM) network to train on and predict the MCIF in test data from a concatenation of IDIFs and myocardial inputs, comparing the predictions with the reference-modeled MCIF. Thresholding along 2D plane slices with two thresholds, T1 representing the high-intensity myocardium and T2 representing lower-intensity rings, was used to segment the LV blood pool. The resultant IDIF and myocardial TACs were used to compute the corresponding reference (model) MCIF for all datasets. The segmented IDIF and the myocardium formed the input to the LSTM network. A k-fold cross-validation structure with a 33:8:11 split and 5 folds was used to build the model and evaluate the performance of the LSTM network for all datasets. To overcome the sparseness of the data as time steps increase, midpoint interpolation was used to increase the density of data points beyond time = 10 minutes. The model using midpoint interpolation achieved a 56.4% improvement in Mean Squared Error (MSE) over the previous approach.
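
Two ingredients of this pipeline lend themselves to a short illustration: densifying sparse late time frames by midpoint interpolation, and an LSTM mapping the concatenated IDIF and myocardial time-activity curves to the MCIF. The PyTorch sketch below is a minimal interpretation under assumed shapes and hyperparameters, not the authors' implementation.

```python
import numpy as np
import torch
import torch.nn as nn

def midpoint_interpolate(t, y):
    """Insert the midpoint between consecutive samples to densify sparse late frames."""
    t_new = np.sort(np.concatenate([t, (t[:-1] + t[1:]) / 2.0]))
    return t_new, np.interp(t_new, t, y)

class MCIFPredictor(nn.Module):
    """LSTM mapping the concatenated IDIF + myocardial TAC sequence to the MCIF."""
    def __init__(self, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                     # x: (batch, T, 2) = stacked IDIF and myocardium
        h, _ = self.lstm(x)
        return self.head(h).squeeze(-1)       # (batch, T) predicted MCIF
```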

An Exploratory Approach Towards Investigating and Explaining Vision Transformer and Transfer Learning for Brain Disease Detection

Shuvashis Sarker, Shamim Rahim Refat, Faika Fairuj Preotee, Shifat Islam, Tashreef Muhammad, Mohammad Ashraful Hoque

arXiv preprint · May 21, 2025
The brain is a highly complex organ that manages many important tasks, including movement, memory, and thinking. Brain-related conditions, like tumors and degenerative disorders, can be hard to diagnose and treat. Magnetic Resonance Imaging (MRI) serves as a key tool for identifying these conditions, offering high-resolution images of brain structures. Despite this, interpreting MRI scans can be complicated. This study tackles this challenge by conducting a comparative analysis of Vision Transformer (ViT) and Transfer Learning (TL) models such as VGG16, VGG19, ResNet50V2, and MobileNetV2 for classifying brain diseases using MRI data from a Bangladesh-based dataset. ViTs, known for their ability to capture global relationships in images, are particularly effective for medical imaging tasks. Transfer learning helps to mitigate data constraints by fine-tuning pre-trained models. Furthermore, Explainable AI (XAI) methods such as GradCAM, GradCAM++, LayerCAM, ScoreCAM, and Faster-ScoreCAM are employed to interpret model predictions. The results demonstrate that the ViT surpasses the transfer learning models, achieving a classification accuracy of 94.39%. The integration of XAI methods enhances model transparency, offering crucial insights to aid medical professionals in diagnosing brain diseases with greater precision.
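
As a concrete illustration of the transfer-learning side of this comparison, the PyTorch/torchvision sketch below freezes an ImageNet-pretrained backbone and retrains only a new classification head on an MRI folder dataset. The dataset path, number of classes, and hyperparameters are assumptions for illustration, not the authors' exact configuration.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_ds = datasets.ImageFolder("brain_mri/train", transform=tfm)   # hypothetical path
loader = torch.utils.data.DataLoader(train_ds, batch_size=32, shuffle=True)

model = models.vgg16(weights="IMAGENET1K_V1")
for p in model.features.parameters():                 # freeze the convolutional backbone
    p.requires_grad = False
model.classifier[6] = nn.Linear(4096, len(train_ds.classes))   # new classification head

opt = torch.optim.Adam(model.classifier[6].parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()
for x, y in loader:                                    # one illustrative epoch
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
```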

X-GRM: Large Gaussian Reconstruction Model for Sparse-view X-rays to Computed Tomography

Yifan Liu, Wuyang Li, Weihao Yu, Chenxin Li, Alexandre Alahi, Max Meng, Yixuan Yuan

arXiv preprint · May 21, 2025
Computed Tomography serves as an indispensable tool in clinical workflows, providing non-invasive visualization of internal anatomical structures. Existing CT reconstruction works are limited to small-capacity model architectures, inflexible volume representations, and small-scale training data. In this paper, we present X-GRM (X-ray Gaussian Reconstruction Model), a large feedforward model for reconstructing 3D CT from sparse-view 2D X-ray projections. X-GRM employs a scalable transformer-based architecture to encode an arbitrary number of sparse X-ray inputs, where tokens from different views are integrated efficiently. These tokens are then decoded into a new volume representation, named Voxel-based Gaussian Splatting (VoxGS), which enables efficient CT volume extraction and differentiable X-ray rendering. To support the training of X-GRM, we collect ReconX-15K, a large-scale CT reconstruction dataset containing around 15,000 CT/X-ray pairs spanning diverse anatomies, including the chest, abdomen, pelvis, and teeth. This combination of a high-capacity model, flexible volume representation, and large-scale training data empowers our model to produce high-quality reconstructions from various testing inputs, including in-domain and out-of-domain X-ray projections. Project page: https://github.com/CUHK-AIM-Group/X-GRM.
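
To make the "encode tokens from all views with one transformer" idea concrete, here is a deliberately simplified structural sketch in PyTorch: patch tokens from each X-ray view are concatenated and processed jointly, then mapped by a toy head to per-token volumetric outputs. The dimensions and the decoding head are assumptions for illustration and do not reproduce the published VoxGS design.

```python
import torch
import torch.nn as nn

class SparseViewEncoder(nn.Module):
    def __init__(self, dim=256, depth=6, heads=8, patch=16, grid=32):
        super().__init__()
        self.patchify = nn.Conv2d(1, dim, kernel_size=patch, stride=patch)
        layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        self.to_voxels = nn.Linear(dim, grid)   # toy head: each token predicts a column of densities

    def forward(self, views):                   # views: (B, V, 1, H, W), V = number of X-ray views
        B, V = views.shape[:2]
        tokens = [self.patchify(views[:, v]).flatten(2).transpose(1, 2) for v in range(V)]
        tokens = torch.cat(tokens, dim=1)       # mix tokens across all views in one sequence
        feats = self.encoder(tokens)
        return self.to_voxels(feats)            # (B, total_tokens, grid) toy volumetric output
```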

SAMA-UNet: Enhancing Medical Image Segmentation with Self-Adaptive Mamba-Like Attention and Causal-Resonance Learning

Saqib Qamar, Mohd Fazil, Parvez Ahmad, Ghulam Muhammad

arXiv preprint · May 21, 2025
Medical image segmentation plays an important role in various clinical applications, but existing models often struggle with computational inefficiency and the challenges posed by complex medical data. State Space Sequence Models (SSMs) have demonstrated promise in modeling long-range dependencies with linear computational complexity, yet their application to medical image segmentation remains hindered by incompatibilities with image tokens and autoregressive assumptions. Moreover, it is difficult to balance the capture of local fine-grained information with global semantic dependencies. To address these challenges, we introduce SAMA-UNet, a novel architecture for medical image segmentation. A key innovation is the Self-Adaptive Mamba-like Aggregated Attention (SAMA) block, which integrates contextual self-attention with dynamic weight modulation to prioritise the most relevant features based on local and global contexts. This approach reduces computational complexity and improves the representation of complex image features across multiple scales. We also introduce the Causal-Resonance Multi-Scale Module (CR-MSM), which enhances the flow of information between the encoder and decoder by using causal resonance learning. This mechanism allows the model to automatically adjust feature resolution and causal dependencies across scales, leading to better semantic alignment between low-level and high-level features in U-shaped architectures. Experiments on MRI, CT, and endoscopy images show that SAMA-UNet achieves higher segmentation accuracy than current CNN-, Transformer-, and Mamba-based methods. The implementation is publicly available on GitHub.
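
The core idea of combining self-attention with dynamic, input-conditioned weight modulation can be illustrated with the small PyTorch block below, where a global gating vector rescales the attention output channel-wise. This is an illustrative interpretation of that general pattern, not the published SAMA block.

```python
import torch
import torch.nn as nn

class GatedAttentionBlock(nn.Module):
    def __init__(self, dim=96, heads=4):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Dynamic weights: a per-channel gate computed from the global context.
        self.gate = nn.Sequential(nn.Linear(dim, dim), nn.Sigmoid())

    def forward(self, x):                            # x: (B, N, dim) flattened image tokens
        h = self.norm(x)
        attn_out, _ = self.attn(h, h, h)
        g = self.gate(h.mean(dim=1, keepdim=True))   # (B, 1, dim) global gating weights
        return x + g * attn_out                      # gated residual update
```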

Customized GPT-4V(ision) for radiographic diagnosis: can large language model detect supernumerary teeth?

Aşar EM, İpek İ, Bilge K

PubMed · May 21, 2025
With the growing capabilities of language models like ChatGPT to process text and images, this study evaluated their accuracy in detecting supernumerary teeth on periapical radiographs. A customized GPT-4V model (CGPT-4V) was also developed to assess whether domain-specific training could improve diagnostic performance compared to standard GPT-4V and GPT-4o models. One hundred eighty periapical radiographs (90 with and 90 without supernumerary teeth) were evaluated using GPT-4V, GPT-4o, and a fine-tuned CGPT-4V model. Each image was assessed separately with the standardized prompt "Are there any supernumerary teeth in the radiograph above?" to avoid contextual bias. Three dental experts scored the responses using a three-point Likert scale for positive cases and a binary scale for negatives. Chi-square tests and ROC analysis were used to compare model performances (p < 0.05). Among the three models, CGPT-4V exhibited the highest accuracy, detecting supernumerary teeth correctly in 91% of cases, compared to 77% for GPT-4o and 63% for GPT-4V. The CGPT-4V model also demonstrated a significantly lower false positive rate (16%) than GPT-4V (42%). A statistically significant difference was found between CGPT-4V and GPT-4o (p < 0.001), while no significant difference was observed between GPT-4V and CGPT-4V or between GPT-4V and GPT-4o. Additionally, CGPT-4V successfully identified multiple supernumerary teeth in radiographs where present. These findings highlight the diagnostic potential of customized GPT models in dental radiology. Future research should focus on multicenter validation, seamless clinical integration, and cost-effectiveness to support real-world implementation.
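
The per-image querying setup described above (one standardized prompt per radiograph, each in a separate request to avoid contextual bias) can be sketched with the OpenAI chat-completions API as below. The model identifier and file paths are illustrative; a fine-tuned CGPT-4V would use its own custom model name, and answer parsing/scoring is left to the expert raters as in the study.

```python
import base64
from pathlib import Path
from openai import OpenAI

client = OpenAI()
PROMPT = "Are there any supernumerary teeth in the radiograph above?"

def query_model(image_path: str, model: str = "gpt-4o") -> str:
    """Send one radiograph and the standardized prompt in an isolated request."""
    b64 = base64.b64encode(Path(image_path).read_bytes()).decode()
    resp = client.chat.completions.create(
        model=model,
        messages=[{
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{b64}"}},
                {"type": "text", "text": PROMPT},
            ],
        }],
    )
    return resp.choices[0].message.content

# Each of the 180 periapical radiographs is scored in its own request,
# so earlier answers cannot bias later ones.
```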

"DCSLK: Combined Large Kernel Shared Convolutional Model with Dynamic Channel Sampling".

Li Z, Luo S, Li H, Li Y

PubMed · May 20, 2025
This study centers on the competition between Convolutional Neural Networks (CNNs) with large convolutional kernels and Vision Transformers in computer vision, examining the parameter and computational-complexity issues that stem from the use of large convolutional kernels. Even though convolutional kernels have been extended to sizes as large as 51×51, performance gains have plateaued, and striped convolution incurs a performance degradation. Inspired by the hierarchical visual processing mechanism in humans, this work incorporates a parameter-sharing mechanism for large convolutional kernels. It combines the expanded receptive field enabled by large convolutional kernels with the fine-grained feature extraction provided by small convolutional kernels. To address the surging parameter count, a carefully designed parameter-sharing scheme is employed, featuring fine-grained processing in the central region of the convolutional kernel and wide-ranging parameter sharing in the periphery. This not only curtails the parameter count and mitigates model complexity but also preserves the model's capacity to capture extensive spatial relationships. Additionally, to address the loss of spatial feature information and the increased memory access incurred during the 1×1 convolutional channel compression stage, this study further puts forward a dynamic channel sampling approach, which markedly improves the accuracy of tumor subregion segmentation. To validate the proposed methodology, a comprehensive evaluation was conducted on three brain tumor segmentation datasets: BraTS2020, BraTS2024, and Medical Segmentation Decathlon Brain 2018. The experimental results show that the proposed model surpasses current mainstream ConvNet and Transformer architectures across all performance metrics, offering new research perspectives and technical strategies for medical image segmentation.
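
One way to picture "dense weights in the kernel centre, shared weights in the periphery" is to build the large depthwise kernel by replicating a small set of shared parameters over the periphery and overwriting the middle region with independent weights. The PyTorch sketch below does exactly that; the kernel sizes and the nearest-neighbour sharing scheme are assumptions, not the paper's exact construction.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedLargeKernelConv(nn.Module):
    def __init__(self, channels, large=31, small=5, shared=7):
        super().__init__()
        self.channels, self.large, self.small = channels, large, small
        # Dense, independent weights for the kernel centre.
        self.center = nn.Parameter(torch.randn(channels, 1, small, small) * 0.02)
        # Small grid of shared weights that will tile the periphery.
        self.shared = nn.Parameter(torch.randn(channels, 1, shared, shared) * 0.02)

    def forward(self, x):
        # Periphery: replicate the shared grid up to the full kernel size.
        k = F.interpolate(self.shared, size=(self.large, self.large), mode="nearest").clone()
        # Centre: overwrite the middle region with dense, fine-grained weights.
        c0 = (self.large - self.small) // 2
        k[:, :, c0:c0 + self.small, c0:c0 + self.small] = self.center
        return F.conv2d(x, k, padding=self.large // 2, groups=self.channels)
```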

End-to-end Cortical Surface Reconstruction from Clinical Magnetic Resonance Images

Jesper Duemose Nielsen, Karthik Gopinath, Andrew Hoopes, Adrian Dalca, Colin Magdamo, Steven Arnold, Sudeshna Das, Axel Thielscher, Juan Eugenio Iglesias, Oula Puonti

arXiv preprint · May 20, 2025
Surface-based cortical analysis is valuable for a variety of neuroimaging tasks, such as spatial normalization, parcellation, and gray matter (GM) thickness estimation. However, most tools for estimating cortical surfaces work exclusively on scans with at least 1 mm isotropic resolution and are tuned to a specific magnetic resonance (MR) contrast, often T1-weighted (T1w). This precludes application to most clinical MR scans, which are very heterogeneous in terms of contrast and resolution. Here, we use synthetic domain-randomized data to train the first neural network for explicit estimation of cortical surfaces from scans of any contrast and resolution, without retraining. Our method deforms a template mesh to the white matter (WM) surface, which guarantees topological correctness. This mesh is further deformed to estimate the GM surface. We compare our method to recon-all-clinical (RAC), an implicit surface reconstruction method which is currently the only other tool capable of processing heterogeneous clinical MR scans, on ADNI and a large clinical dataset (n=1,332). We show an approximately 50% reduction in cortical thickness error (from 0.50 to 0.24 mm) relative to RAC and better recovery of the aging-related cortical thinning patterns detected by FreeSurfer on high-resolution T1w scans. Our method enables fast and accurate surface reconstruction of clinical scans, allowing studies (1) with sample sizes far beyond what is feasible in a research setting, and (2) of clinical populations that are difficult to enroll in research studies. The code is publicly available at https://github.com/simnibs/brainnet.
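
The template-deformation idea (move template vertices according to a displacement field predicted over the image volume, first toward the WM surface, then toward the GM surface) can be sketched in PyTorch as below. The network producing the field, the coordinate normalization, and the toy shapes are assumptions for illustration only.

```python
import torch
import torch.nn.functional as F

def deform_template(vertices, disp_field):
    """
    vertices:   (V, 3) template coordinates, already normalised to [-1, 1]
    disp_field: (1, 3, D, H, W) predicted displacement volume (x, y, z channels)
    returns:    (V, 3) deformed vertex coordinates
    """
    # grid_sample expects sampling locations shaped (N, D_out, H_out, W_out, 3).
    grid = vertices.view(1, -1, 1, 1, 3)
    sampled = F.grid_sample(disp_field, grid, align_corners=True)   # (1, 3, V, 1, 1)
    disp = sampled.view(3, -1).t()                                  # (V, 3)
    return vertices + disp

verts = torch.rand(2562, 3) * 2 - 1        # toy icosphere-sized template in [-1, 1]
field = torch.zeros(1, 3, 64, 64, 64)      # toy (zero) displacement field
new_verts = deform_template(verts, field)  # here identical to verts, since the field is zero
```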