
Multiclass ensemble framework for enhanced prostate gland segmentation: Integrating Self-ONN decoders with EfficientNet.

Islam Sumon MS, Chowdhury MEH, Bhuiyan EH, Rahman MS, Khan MM, Al-Hashimi I, Mushtak A, Zoghoul SB

PubMed · May 30, 2025
Digital pathology relies on the morphological architecture of prostate glands to recognize cancerous tissue. Prostate cancer (PCa) originates in the walnut-shaped prostate gland of the male reproductive system. Deep learning (DL) pipelines can assist in identifying these regions with advanced segmentation techniques that are effective in diagnosing and treating prostate diseases, facilitating early detection, targeted biopsy, and accurate treatment planning while ensuring consistent, reproducible results and minimizing human error. Automated segmentation models trained on MRI datasets can also aid in monitoring disease progression, supporting clinical decisions through patient-specific models for personalized medicine. In this study, we present multiclass segmentation models designed to localize the prostate gland and its zonal regions, specifically the peripheral zone (PZ), the transition zone (TZ), and the whole gland, by combining EfficientNetB4 encoders with Self-Organized Operational Neural Network (Self-ONN)-based decoders. Traditional convolutional neural networks (CNNs) rely on linear neuron models, which limit their ability to capture the complex dynamics of biological neural systems. In contrast, Operational Neural Networks (ONNs), particularly Self-ONNs, address this limitation by incorporating nonlinear and adaptive operations at the neuron level. We evaluated various encoder-decoder configurations and found that the combination of an EfficientNet-based encoder with a Self-ONN-based decoder yielded the best performance. To further improve segmentation accuracy, we used the STAPLE method to ensemble the three best-performing models. Our approach was tested on the large-scale, recently updated PI-CAI Challenge dataset using 5-fold cross-validation, achieving Dice scores of 95.33% for the whole gland and 92.32% for the combined PZ and TZ regions. These advanced segmentation techniques significantly improve the quality of PCa diagnosis and treatment, contributing to better patient care and outcomes.
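The STAPLE ensembling step mentioned above estimates each model's sensitivity and specificity while inferring a consensus segmentation. A minimal NumPy sketch of binary STAPLE (Warfield et al.) follows; the mask shapes, prior, and iteration count are illustrative assumptions, not the paper's configuration.

```python
# Minimal binary STAPLE via EM, as a sketch of the ensembling step described
# above. Assumes `masks` is a list of binary segmentations from the top-3
# models; the prior and iteration count are illustrative.
import numpy as np

def staple_binary(masks, prior=0.5, n_iter=50, eps=1e-8):
    """Fuse binary masks into a consensus probability map with STAPLE."""
    d = np.stack([m.reshape(-1).astype(float) for m in masks])  # (R, N)
    p = np.full(d.shape[0], 0.99)  # per-rater sensitivity estimates
    q = np.full(d.shape[0], 0.99)  # per-rater specificity estimates
    for _ in range(n_iter):
        # E-step: posterior probability that each voxel is foreground
        a = prior * np.prod(np.where(d == 1, p[:, None], 1 - p[:, None]), axis=0)
        b = (1 - prior) * np.prod(np.where(d == 0, q[:, None], 1 - q[:, None]), axis=0)
        w = a / (a + b + eps)
        # M-step: re-estimate each rater's sensitivity and specificity
        p = (d @ w) / (w.sum() + eps)
        q = ((1 - d) @ (1 - w)) / ((1 - w).sum() + eps)
    return w.reshape(masks[0].shape)  # threshold at 0.5 for a consensus mask

# Example: fuse three 2D predictions
preds = [np.random.rand(128, 128) > 0.5 for _ in range(3)]
consensus = staple_binary(preds) > 0.5
```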

Three-dimensional automated segmentation of adolescent idiopathic scoliosis on computed tomography driven by deep learning: A retrospective study.

Ji Y, Mei X, Tan R, Zhang W, Ma Y, Peng Y, Zhang Y

PubMed · May 30, 2025
Accurate vertebra segmentation is crucial for modern surgical technologies, and deep learning networks provide valuable tools for this task. This study explores advanced deep learning-based methods for segmenting vertebrae in computed tomography (CT) images of adolescent idiopathic scoliosis (AIS) patients. We collected a dataset of 31 AIS patients covering a wide range of spinal regions, from cervical to lumbar vertebrae, with high-resolution CT images obtained for each sample forming the basis of our segmentation analysis. We used 2 popular neural networks, U-Net and Attention U-Net, to segment the vertebrae in these CT images. Segmentation performance was evaluated using 2 key metrics: the Dice coefficient, measuring overlap between segmented and ground-truth regions, and the Hausdorff distance (HD), assessing boundary dissimilarity. Both networks performed well: U-Net achieved an average Dice coefficient of 92.2 ± 2.4% and an HD of 9.80 ± 1.34 mm, while Attention U-Net showed similar results, with a Dice coefficient of 92.3 ± 2.9% and an HD of 8.67 ± 3.38 mm. Applied to the challenging anatomy of AIS, these findings align with literature results from advanced 3D U-Nets on healthy spines. Although no significant overall difference was observed between the 2 networks (P > .05), Attention U-Net showed a numerically higher Dice coefficient (91.5 ± 0.0% vs. 88.8 ± 0.1%, P = .151) and a significantly better HD (9.04 ± 4.51 vs. 13.60 ± 2.26 mm, P = .027) at critical scoliosis sites (the mid-thoracic region), suggesting enhanced suitability for complex anatomy. Our study indicates that U-Net neural networks are feasible and effective for automated vertebra segmentation in AIS patients using clinical 3D CT images. Attention U-Net demonstrated improved performance at thoracic levels, the primary sites of scoliosis, and may be more suitable for challenging anatomical regions.
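For reference, the two metrics reported above can be computed as follows; this is a minimal sketch assuming binary masks and an illustrative voxel spacing, with SciPy's directed Hausdorff distance symmetrized over foreground points.

```python
# Sketch of the two evaluation metrics used above: Dice overlap between
# binary masks and a symmetric Hausdorff distance over foreground points.
# Voxel spacing and mask shapes here are illustrative assumptions.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a, b, eps=1e-8):
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum() + eps)

def hausdorff_mm(a, b, spacing=(1.0, 1.0, 1.0)):
    # Use foreground voxel coordinates scaled to millimetres; a boundary
    # extraction step would be added in practice for large masks.
    pa = np.argwhere(a) * np.asarray(spacing)
    pb = np.argwhere(b) * np.asarray(spacing)
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])

pred = np.zeros((32, 32, 32), bool); pred[8:20, 8:20, 8:20] = True
gt = np.zeros((32, 32, 32), bool); gt[10:22, 10:22, 10:22] = True
print(f"Dice: {dice(pred, gt):.3f}, HD: {hausdorff_mm(pred, gt):.1f} mm")
```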

ACM-UNet: Adaptive Integration of CNNs and Mamba for Efficient Medical Image Segmentation

Jing Huang, Yongkang Zhao, Yuhan Li, Zhitao Dai, Cheng Chen, Qiying Lai

arXiv preprint · May 30, 2025
The U-shaped encoder-decoder architecture with skip connections has become a prevailing paradigm in medical image segmentation due to its simplicity and effectiveness. Many recent works aim to improve this framework by designing more powerful encoders and decoders, employing advanced convolutional neural networks (CNNs) for local feature extraction, Transformers or state space models (SSMs) such as Mamba for global context modeling, or hybrid combinations of both; however, these methods often struggle to fully utilize pretrained vision backbones (e.g., ResNet, ViT, VMamba) due to structural mismatches. To bridge this gap, we introduce ACM-UNet, a general-purpose segmentation framework that retains a simple UNet-like design while effectively incorporating pretrained CNNs and Mamba models through a lightweight adapter mechanism. This adapter resolves architectural incompatibilities and enables the model to harness the complementary strengths of CNNs and SSMs, namely fine-grained local detail extraction and long-range dependency modeling. Additionally, we propose a hierarchical multi-scale wavelet transform module in the decoder to enhance feature fusion and reconstruction fidelity. Extensive experiments on the Synapse and ACDC benchmarks demonstrate that ACM-UNet achieves state-of-the-art performance while remaining computationally efficient. Notably, it reaches an 85.12% Dice score and 13.89 mm HD95 on the Synapse dataset with 17.93 GFLOPs, showcasing its effectiveness and scalability. Code is available at: https://github.com/zyklcode/ACM-UNet.
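The abstract's core idea is the lightweight adapter that reconciles pretrained backbone features with a UNet-style decoder. A minimal PyTorch sketch of one plausible form of such an adapter is below; the module name, channel widths, and normalization choices are illustrative assumptions, not the paper's actual design.

```python
# A minimal sketch of the "lightweight adapter" idea: project each pretrained
# backbone stage to the channel width a UNet decoder expects and resize to
# the matching skip-connection resolution. Channel counts are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class StageAdapter(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=1),  # align channel widths
            nn.BatchNorm2d(out_ch),
            nn.GELU(),
        )

    def forward(self, x: torch.Tensor, target_hw) -> torch.Tensor:
        x = self.proj(x)
        # Align spatial size with the decoder's skip connection
        return F.interpolate(x, size=target_hw, mode="bilinear", align_corners=False)

# e.g., adapt a hypothetical 768-channel ViT/VMamba stage to a 256-channel skip
adapter = StageAdapter(768, 256)
feat = torch.randn(1, 768, 14, 14)
skip = adapter(feat, target_hw=(28, 28))  # -> (1, 256, 28, 28)
```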

pyMEAL: A Multi-Encoder Augmentation-Aware Learning for Robust and Generalizable Medical Image Translation

Abdul-mojeed Olabisi Ilyas, Adeleke Maradesa, Jamal Banzi, Jianpan Huang, Henry K. F. Mak, Kannie W. Y. Chan

arXiv preprint · May 30, 2025
Medical imaging is critical for diagnostics, but clinical adoption of advanced AI-driven imaging faces challenges due to patient variability, image artifacts, and limited model generalization. While deep learning has transformed image analysis, 3D medical imaging still suffers from data scarcity and inconsistencies caused by acquisition protocols, scanner differences, and patient motion. Traditional augmentation applies a single pipeline to all transformations, disregarding the unique traits of each augmentation and struggling with large data volumes. To address these challenges, we propose a Multi-encoder Augmentation-Aware Learning (MEAL) framework that leverages four distinct augmentation variants processed through dedicated encoders. Three fusion strategies, concatenation (CC), a fusion layer (FL), and an adaptive controller block (BD), are integrated to build multi-encoder models that combine augmentation-specific features before decoding. MEAL-BD uniquely preserves augmentation-aware representations, enabling robust, protocol-invariant feature learning. In a Computed Tomography (CT)-to-T1-weighted Magnetic Resonance Imaging (MRI) translation study, MEAL-BD consistently achieved the best performance on both unseen and predefined test data. On geometric transformations (such as rotations and flips) and on non-augmented inputs alike, MEAL-BD outperformed competing methods, achieving higher mean peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) scores. These results establish MEAL as a reliable framework for preserving structural fidelity and generalizing across clinically relevant variability. By reframing augmentation as a source of diverse, generalizable features, MEAL supports robust, protocol-invariant learning, advancing clinically reliable medical imaging solutions.
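The reported PSNR and SSIM figures follow standard definitions; a short reference sketch is below, with PSNR implemented directly and SSIM delegated to scikit-image. The data range and test arrays are placeholders.

```python
# Reference formulas for the two reported metrics. PSNR is implemented
# directly; SSIM is computed with scikit-image. Data ranges are assumptions.
import numpy as np
from skimage.metrics import structural_similarity

def psnr(ref, test, data_range=1.0):
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(data_range**2 / mse)

ref = np.random.rand(64, 64)
test = np.clip(ref + 0.05 * np.random.randn(64, 64), 0, 1)
print(psnr(ref, test), structural_similarity(ref, test, data_range=1.0))
```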

Comparative analysis of natural language processing methodologies for classifying computed tomography enterography reports in Crohn's disease patients.

Dai J, Kim MY, Sutton RT, Mitchell JR, Goebel R, Baumgart DC

PubMed · May 30, 2025
Imaging is crucial for assessing disease extent, activity, and outcomes in inflammatory bowel disease (IBD). Artificial intelligence (AI) image interpretation requires automated exploitation of studies at scale as an initial step. Here we evaluate natural language processing methods for classifying Crohn's disease (CD) on computed tomography enterography (CTE). From our population-representative IBD registry, CTE reports from a sample of CD patients (male: 44.6%; median age: 50, IQR 37-60) and controls (n = 981 each) were extracted and split into training (n = 1568), development (n = 196), and testing (n = 198) datasets, each with around 200 words per report and balanced label counts. Predictive classification was evaluated with CNN, Bi-LSTM, BERT-110M, LLaMA-3.3-70B-Instruct, and DeepSeek-R1-Distill-LLaMA-70B models. Our custom IBDBERT, fine-tuned on expert IBD knowledge (i.e., the ACG, AGA, and ECCO guidelines), outperformed rule-based and rationale-extraction-based classifiers in predictive performance (accuracy 88.6% with a pre-tuning learning rate of 0.00001, AUC 0.945), while LLaMA, but not DeepSeek, achieved overall superior results (accuracy 91.2% vs. 88.9%, F1 0.907 vs. 0.874).
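The accuracy, F1, and AUC figures above follow standard definitions and can be computed with scikit-learn; the labels and scores in this sketch are dummy placeholders, not the study's data.

```python
# How the reported figures (accuracy, F1, AUC) are typically computed for a
# binary report classifier; labels and scores below are dummy placeholders.
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]          # CD vs. control labels
y_pred = [1, 0, 1, 0, 0, 0, 1, 1]          # hard predictions from the model
y_score = [0.9, 0.2, 0.8, 0.4, 0.1, 0.3, 0.7, 0.6]  # class-1 probabilities

print(f"accuracy={accuracy_score(y_true, y_pred):.3f}, "
      f"F1={f1_score(y_true, y_pred):.3f}, "
      f"AUC={roc_auc_score(y_true, y_score):.3f}")
```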

Pretraining Deformable Image Registration Networks with Random Images

Junyu Chen, Shuwen Wei, Yihao Liu, Aaron Carass, Yong Du

arXiv preprint · May 30, 2025
Recent advances in deep learning-based medical image registration have shown that training deep neural networks (DNNs) does not necessarily require medical images. Previous work showed that DNNs trained on randomly generated images with carefully designed noise and contrast properties can still generalize well to unseen medical data. Building on this insight, we propose using registration between random images as a proxy task for pretraining a foundation model for image registration. Empirical results show that our pretraining strategy improves registration accuracy, reduces the amount of domain-specific data needed to achieve competitive performance, and accelerates convergence during downstream training, thereby enhancing computational efficiency.
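As a hedged illustration of training on randomly generated images with designed noise and contrast, the sketch below synthesizes a smooth random label field and assigns random contrasts; the actual generator used by the paper may differ, and all parameters here are assumptions.

```python
# Sketch of generating a random training image with controllable smoothness
# and contrast, in the spirit of the synthetic-pretraining idea above.
import numpy as np
from scipy.ndimage import gaussian_filter

def random_image(shape=(128, 128), smoothness=4.0, n_levels=6, seed=None):
    rng = np.random.default_rng(seed)
    field = gaussian_filter(rng.standard_normal(shape), sigma=smoothness)
    # Quantize into discrete "tissue" regions, then assign random contrasts
    bins = np.quantile(field, np.linspace(0, 1, n_levels + 1)[1:-1])
    levels = np.digitize(field, bins)
    intensities = rng.uniform(0, 1, size=n_levels)
    return intensities[levels]

fixed, moving = random_image(seed=0), random_image(seed=1)  # a random pair
```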

The Impact of Model-based Deep-learning Reconstruction Compared with that of Compressed Sensing-Sensitivity Encoding on the Image Quality and Precision of Cine Cardiac MR in Evaluating Left-ventricular Volume and Strain: A Study on Healthy Volunteers.

Tsuneta S, Aono S, Kimura R, Kwon J, Fujima N, Ishizaka K, Nishioka N, Yoneyama M, Kato F, Minowa K, Kudo K

PubMed · May 30, 2025
To evaluate the effect of model-based deep-learning reconstruction (DLR) compared with that of compressed sensing-sensitivity encoding (CS) on cine cardiac magnetic resonance (CMR), cine CMR images of 10 healthy volunteers were acquired with reduction factors of 2, 4, 6, and 8 and reconstructed using both CS and DLR. Visual image quality scores assessed sharpness, image noise, and artifacts. Left-ventricular (LV) end-diastolic volume (EDV), end-systolic volume (ESV), stroke volume (SV), and ejection fraction (EF) were measured manually, and LV global circumferential strain (GCS) was measured automatically using analysis software. The precision of the EDV, ESV, SV, EF, and GCS measurements was compared between CS and DLR using Bland-Altman analysis, with fully sampled data as the gold standard. Compared with CS, DLR significantly improved image quality at reduction factors of 6 and 8. The precision of EDV and ESV measurements at a reduction factor of 8, and of GCS measurements at reduction factors of 6 and 8, improved with DLR compared with CS, whereas the precision of SV and EF measurements did not differ between the two. The effect of DLR on cine CMR image quality and on the precision of quantitative volume and strain evaluation was equal or superior to that of CS; DLR may therefore replace CS for cine CMR.
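The precision comparison relies on Bland-Altman analysis against the fully sampled gold standard; a minimal sketch of the computation (bias and 95% limits of agreement) is below, with placeholder measurements.

```python
# Minimal Bland-Altman computation as used for the precision analysis above:
# bias (mean difference) and 95% limits of agreement against the fully
# sampled gold standard. Input arrays are placeholders.
import numpy as np

def bland_altman(measure, gold):
    diff = np.asarray(measure, float) - np.asarray(gold, float)
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)  # half-width of 95% limits of agreement
    return bias, (bias - loa, bias + loa)

edv_dlr = np.array([142.0, 155.3, 130.8, 149.2])   # e.g., EDV in mL at R=8
edv_full = np.array([140.5, 157.0, 129.9, 151.0])  # fully sampled reference
bias, (lo, hi) = bland_altman(edv_dlr, edv_full)
print(f"bias={bias:.2f} mL, LoA=({lo:.2f}, {hi:.2f}) mL")
```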

The value of artificial intelligence in PSMA PET: a pathway to improved efficiency and results.

Dadgar H, Hong X, Karimzadeh R, Ibragimov B, Majidpour J, Arabi H, Al-Ibraheem A, Khalaf AN, Anwar FM, Marafi F, Haidar M, Jafari E, Zarei A, Assadi M

PubMed · May 30, 2025
This systematic review investigates the potential of artificial intelligence (AI) to improve the accuracy and efficiency of prostate-specific membrane antigen positron emission tomography (PSMA PET) scans for detecting metastatic prostate cancer. A comprehensive literature search was conducted across Medline, Embase, and Web of Science, adhering to PRISMA guidelines. Key search terms included "artificial intelligence," "machine learning," "deep learning," "prostate cancer," and "PSMA PET." The PICO framework guided the selection of studies focusing on AI's application in evaluating PSMA PET scans for staging lymph node and distant metastases in prostate cancer patients. Inclusion criteria prioritized original English-language articles published up to October 2024, excluding studies using non-PSMA radiotracers, those analyzing only the CT component of PSMA PET-CT, studies focusing solely on intra-prostatic lesions, and non-original research articles. The review included 22 studies with a mix of prospective and retrospective designs. The AI algorithms employed included machine learning (ML), deep learning (DL), and convolutional neural networks (CNNs). The studies explored various applications of AI, including improving diagnostic accuracy, sensitivity, and differentiation from benign lesions, standardizing reporting, and predicting treatment response. Reported sensitivities for detecting metastatic disease ranged from 62% to 97% and accuracy was high (AUC up to 0.98), but positive predictive value varied substantially (39.2% to 66.8%). AI demonstrates significant promise in enhancing PSMA PET scan analysis for metastatic prostate cancer, offering improved efficiency and potentially better diagnostic accuracy. However, the variability in performance and the "black box" nature of some algorithms highlight the need for larger prospective studies, improved model interpretability, and the continued involvement of experienced nuclear medicine physicians in interpreting AI-assisted results. AI should be considered a valuable adjunct to, not a replacement for, expert clinical judgment.

Beyond the LUMIR challenge: The pathway to foundational registration models

Junyu Chen, Shuwen Wei, Joel Honkamaa, Pekka Marttinen, Hang Zhang, Min Liu, Yichao Zhou, Zuopeng Tan, Zhuoyuan Wang, Yi Wang, Hongchao Zhou, Shunbo Hu, Yi Zhang, Qian Tao, Lukas Förner, Thomas Wendler, Bailiang Jian, Benedikt Wiestler, Tim Hable, Jin Kim, Dan Ruan, Frederic Madesta, Thilo Sentker, Wiebke Heyer, Lianrui Zuo, Yuwei Dai, Jing Wu, Jerry L. Prince, Harrison Bai, Yong Du, Yihao Liu, Alessa Hering, Reuben Dorent, Lasse Hansen, Mattias P. Heinrich, Aaron Carass

arXiv preprint · May 30, 2025
Medical image challenges have played a transformative role in advancing the field, catalyzing algorithmic innovation and establishing new performance standards across diverse clinical applications. Image registration, a foundational task in neuroimaging pipelines, has similarly benefited from the Learn2Reg initiative. Building on this foundation, we introduce the Large-scale Unsupervised Brain MRI Image Registration (LUMIR) challenge, a next-generation benchmark designed to assess and advance unsupervised brain MRI registration. Distinct from prior challenges that leveraged anatomical label maps for supervision, LUMIR removes this dependency by providing over 4,000 preprocessed T1-weighted brain MRIs for training without any label maps, encouraging biologically plausible deformation modeling through self-supervision. In addition to evaluating performance on 590 held-out test subjects, LUMIR introduces a rigorous suite of zero-shot generalization tasks, spanning out-of-domain imaging modalities (e.g., FLAIR, T2-weighted, T2*-weighted), disease populations (e.g., Alzheimer's disease), acquisition protocols (e.g., 9.4T MRI), and species (e.g., macaque brains). A total of 1,158 subjects and over 4,000 image pairs were included for evaluation. Performance was assessed using both segmentation-based metrics (Dice coefficient, 95th percentile Hausdorff distance) and landmark-based registration accuracy (target registration error). Across both in-domain and zero-shot tasks, deep learning-based methods consistently achieved state-of-the-art accuracy while producing anatomically plausible deformation fields. The top-performing deep learning-based models demonstrated diffeomorphic properties and inverse consistency, outperforming several leading optimization-based methods, and showing strong robustness to most domain shifts, the exception being a drop in performance on out-of-domain contrasts.
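Of the metrics listed, target registration error (TRE) is the landmark-based one: the physical distance between warped moving landmarks and their fixed counterparts. A short sketch follows; the landmark coordinates and spacing are placeholders.

```python
# Target registration error (TRE) as used above: the distance between
# warped moving landmarks and fixed landmarks, in physical units. The
# landmark arrays below are placeholders.
import numpy as np

def tre(warped_pts, fixed_pts, spacing=(1.0, 1.0, 1.0)):
    d = (np.asarray(warped_pts) - np.asarray(fixed_pts)) * np.asarray(spacing)
    return np.linalg.norm(d, axis=1)  # per-landmark error in mm

warped = np.array([[10.2, 34.9, 20.1], [55.0, 12.3, 40.8]])
fixed = np.array([[10.0, 35.0, 20.0], [54.5, 12.0, 41.0]])
print(f"mean TRE: {tre(warped, fixed).mean():.2f} mm")
```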

Sparsity-Driven Parallel Imaging Consistency for Improved Self-Supervised MRI Reconstruction

Yaşar Utku Alçalar, Mehmet Akçakaya

arXiv preprint · May 30, 2025
Physics-driven deep learning (PD-DL) models have proven to be a powerful approach for improved reconstruction of rapid MRI scans. To train these models in scenarios where fully sampled reference data is unavailable, self-supervised learning has gained prominence. However, its application at high acceleration rates frequently introduces artifacts, compromising image fidelity. To mitigate this shortcoming, we propose a novel way to train PD-DL networks via carefully designed perturbations. In particular, we enhance the k-space masking idea of conventional self-supervised learning with a novel consistency term that assesses the model's ability to accurately predict the added perturbations in a sparse domain, leading to more reliable and artifact-free reconstructions. Results on the fastMRI knee and brain datasets show that the proposed training strategy effectively reduces aliasing artifacts and mitigates noise amplification at high acceleration rates, outperforming state-of-the-art self-supervised methods both visually and quantitatively.
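The "k-space masking idea of conventional self-supervised learning" referenced above partitions the acquired k-space locations into disjoint subsets, one used for data consistency inside the network and one held out for the loss (as in SSDU-style training); the proposed method adds a perturbation-consistency term on top of this. A sketch of the masking split alone, with an illustrative undersampling mask and split ratio:

```python
# Sketch of the k-space masking split the proposed consistency term builds
# on: partition the acquired k-space locations into two disjoint sets, one
# used inside the network (data consistency) and one held out for the loss.
import numpy as np

def split_kspace_mask(sampling_mask, holdout_frac=0.4, seed=0):
    rng = np.random.default_rng(seed)
    idx = np.flatnonzero(sampling_mask)
    held = rng.choice(idx, size=int(holdout_frac * idx.size), replace=False)
    loss_mask = np.zeros_like(sampling_mask)
    loss_mask.flat[held] = 1
    train_mask = sampling_mask - loss_mask  # disjoint by construction
    return train_mask, loss_mask

mask = (np.random.rand(256, 256) < 0.25).astype(np.uint8)  # undersampling mask
train_mask, loss_mask = split_kspace_mask(mask)
```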