Page 9 of 983 results

Improved Brain Tumor Detection in MRI: Fuzzy Sigmoid Convolution in Deep Learning

Muhammad Irfan, Anum Nawaz, Riku Klen, Abdulhamit Subasi, Tomi Westerlund, Wei Chen

arXiv preprint, May 8, 2025
Early detection and accurate diagnosis of brain tumors are essential to improving patient outcomes. The use of convolutional neural networks (CNNs) for tumor detection has shown promise, but existing models often suffer from overparameterization, which limits their performance gains. In this study, fuzzy sigmoid convolution (FSC) is introduced along with two additional modules: top-of-the-funnel and middle-of-the-funnel. The proposed methodology significantly reduces the number of trainable parameters without compromising classification accuracy. A novel convolutional operator is central to this approach, effectively dilating the receptive field while preserving input data integrity. This enables efficient feature map reduction and enhances the model's tumor detection capability. In the FSC-based model, fuzzy sigmoid activation functions are incorporated within convolutional layers to improve feature extraction and classification. The inclusion of fuzzy logic into the architecture improves its adaptability and robustness. Extensive experiments on three benchmark datasets demonstrate the superior performance and efficiency of the proposed model. The FSC-based architecture achieved classification accuracies of 99.17%, 99.75%, and 99.89% on three different datasets. The model employs 100 times fewer parameters than large-scale transfer learning architectures, highlighting its computational efficiency and suitability for early brain tumor detection. This research offers lightweight, high-performance deep-learning models for medical imaging applications.
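The abstract does not publish the exact FSC formulation, but the idea of a convolution whose output passes through a fuzzy-membership-style sigmoid can be sketched as follows. The steepness `a` and crossover `c` parameters, and the naive convolution, are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def fuzzy_sigmoid(x, a=2.0, c=0.5):
    """Parameterized sigmoid acting as a fuzzy membership function.
    `a` (steepness) and `c` (crossover point) are hypothetical parameters;
    the paper does not give its exact formulation."""
    return 1.0 / (1.0 + np.exp(-a * (x - c)))

def conv2d_valid(image, kernel):
    """Naive 'valid'-mode 2D cross-correlation (no padding)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def fsc_layer(image, kernel, a=2.0, c=0.5):
    """One sketched FSC layer: convolution + fuzzy sigmoid activation."""
    return fuzzy_sigmoid(conv2d_valid(image, kernel), a=a, c=c)

rng = np.random.default_rng(0)
img = rng.random((8, 8))          # stand-in for an MRI patch
k = rng.random((3, 3)) / 9.0      # stand-in learned kernel
out = fsc_layer(img, k)
print(out.shape)  # (6, 6)
```

Because the activation is bounded in (0, 1), its outputs can be read as graded membership values, which is where the "fuzzy" adaptability claim comes from.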

An imageless magnetic resonance framework for fast and cost-effective decision-making

Alba González-Cebrián, Pablo García-Cristóbal, Fernando Galve, Efe Ilıcak, Viktor Van Der Valk, Marius Staring, Andrew Webb, Joseba Alonso

arXiv preprint, May 7, 2025
Magnetic Resonance Imaging (MRI) is the gold standard in countless diagnostic procedures, yet hardware complexity, long scans, and cost preclude rapid screening and point-of-care use. We introduce Imageless Magnetic Resonance Diagnosis (IMRD), a framework that bypasses k-space sampling and image reconstruction by analyzing raw one-dimensional MR signals. We identify potentially impactful embodiments where IMRD requires only optimized pulse sequences for time-domain contrast, minimal low-field hardware, and pattern recognition algorithms to answer closed clinical queries and quantify lesion burden. As a proof of concept, we simulate multiple sclerosis lesions in silico within brain phantoms and deploy two extremely fast protocols (approximately 3 s), with and without spatial information. A 1D convolutional neural network achieves AUC close to 0.95 for lesion detection and R² close to 0.99 for volume estimation. We also perform robustness tests under reduced signal-to-noise ratio, partial signal omission, and relaxation-time variability. By reframing MR signals as direct diagnostic metrics, IMRD paves the way for fast, low-cost MR screening and monitoring in resource-limited environments.
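Classifying a raw 1D MR signal with a small CNN, as IMRD does, can be sketched with a minimal conv-pool-sigmoid head. All weights below are random stand-ins (the trained network, kernel sizes, and channel counts are not given in the abstract):

```python
import numpy as np

def conv1d_valid(signal, kernel):
    """Naive 'valid'-mode 1D cross-correlation."""
    n, k = len(signal), len(kernel)
    return np.array([signal[i:i + k] @ kernel for i in range(n - k + 1)])

def tiny_1d_cnn(signal, kernels, w_out, b_out):
    """Minimal 1D CNN head: conv -> ReLU -> global average pool -> sigmoid.
    Weights are illustrative stand-ins, not the authors' trained model."""
    feats = np.array([np.maximum(conv1d_valid(signal, k), 0.0).mean()
                      for k in kernels])
    logit = feats @ w_out + b_out
    return 1.0 / (1.0 + np.exp(-logit))  # read as P(lesion present)

rng = np.random.default_rng(1)
raw_signal = rng.standard_normal(256)     # stand-in time-domain MR signal
kernels = rng.standard_normal((4, 9))     # 4 hypothetical conv filters
score = tiny_1d_cnn(raw_signal, kernels, rng.standard_normal(4), 0.0)
```

The key design point mirrored here is that the input is the time-domain signal itself, so no k-space sampling or image reconstruction step appears anywhere in the pipeline.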

Cross-organ all-in-one parallel compressed sensing magnetic resonance imaging

Baoshun Shi, Zheng Liu, Xin Meng, Yan Yang

arXiv preprint, May 7, 2025
Recent advances in deep learning-based parallel compressed sensing magnetic resonance imaging (p-CSMRI) have significantly improved reconstruction quality. However, current p-CSMRI methods often require training a separate deep neural network (DNN) for each organ due to anatomical variations, creating a barrier to developing generalized medical image reconstruction systems. To address this, we propose CAPNet (cross-organ all-in-one deep unfolding p-CSMRI network), a unified framework that implements a p-CSMRI iterative algorithm via three specialized modules: an auxiliary variable module, a prior module, and a data consistency module. Recognizing that p-CSMRI systems often employ varying sampling ratios for different organs, resulting in organ-specific artifact patterns, we introduce an artifact generation submodule, which extracts and integrates artifact features into the data consistency module to enhance the discriminative capability of the overall network. For the prior module, we design an organ structure-prompt generation submodule that leverages structural features extracted from the segment anything model (SAM) to create cross-organ prompts. These prompts are strategically incorporated into the prior module through an organ structure-aware Mamba submodule. Comprehensive evaluations on a cross-organ dataset confirm that CAPNet achieves state-of-the-art reconstruction performance across multiple anatomical structures using a single unified model. Our code will be published at https://github.com/shibaoshun/CAPNet.
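The alternation between a prior module and a data consistency module that CAPNet unrolls can be illustrated with a classical stand-in: soft-thresholding as the prior and a weighted k-space replacement as data consistency. This is a sketch of the generic unrolled-iteration pattern, not CAPNet's learned modules, and the single-coil setup, blend weight `lam`, and threshold `t` are assumptions:

```python
import numpy as np

def data_consistency(x, y, mask, lam=10.0):
    """Blend the estimate's k-space with the measured samples at mask=True."""
    k = np.fft.fft2(x)
    k = np.where(mask, (lam * y + k) / (1.0 + lam), k)
    return np.fft.ifft2(k).real

def prior_step(x, t=0.02):
    """Stand-in for the learned prior module: pixelwise soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def unrolled_recon(y, mask, iters=8):
    """Unrolled iteration: alternate prior and data consistency modules."""
    x = np.fft.ifft2(np.where(mask, y, 0)).real   # zero-filled initialization
    for _ in range(iters):
        x = prior_step(x)                # prior module
        x = data_consistency(x, y, mask) # data consistency module
    return x

rng = np.random.default_rng(2)
truth = np.zeros((16, 16))
truth[4, 4] = truth[10, 7] = 1.0                  # toy sparse phantom
mask = rng.random((16, 16)) < 0.6                 # 60% k-space sampling
y = np.where(mask, np.fft.fft2(truth), 0)         # undersampled measurements
recon = unrolled_recon(y, mask)
```

In a deep unfolding network such as CAPNet, each of these fixed hand-crafted steps is replaced by a trainable module, and the whole chain of iterations is trained end to end.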
