
PanTS: The Pancreatic Tumor Segmentation Dataset

Wenxuan Li, Xinze Zhou, Qi Chen, Tianyu Lin, Pedro R. A. S. Bassi, Szymon Plotka, Jaroslaw B. Cwikla, Xiaoxi Chen, Chen Ye, Zheren Zhu, Kai Ding, Heng Li, Kang Wang, Yang Yang, Yucheng Tang, Daguang Xu, Alan L. Yuille, Zongwei Zhou

arXiv preprint · Jul 2, 2025
PanTS is a large-scale, multi-institutional dataset curated to advance research in pancreatic CT analysis. It contains 36,390 CT scans from 145 medical centers, with expert-validated, voxel-wise annotations of over 993,000 anatomical structures, covering pancreatic tumors, the pancreas head, body, and tail, and 24 surrounding anatomical structures such as vascular/skeletal structures and abdominal/thoracic organs. Each scan includes metadata such as patient age, sex, diagnosis, contrast phase, in-plane spacing, and slice thickness. AI models trained on PanTS achieve significantly better performance in pancreatic tumor detection, localization, and segmentation than those trained on existing public datasets. Our analysis indicates that these gains are directly attributable to the 16x larger-scale tumor annotations and indirectly supported by the 24 additional surrounding anatomical structures. As the largest and most comprehensive resource of its kind, PanTS offers a new benchmark for developing and evaluating AI models in pancreatic CT analysis.
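
For orientation, a minimal sketch of loading one case of this kind with nibabel is shown below; the directory layout, file names, and label conventions are assumptions for illustration, not PanTS's documented structure.

```python
# Minimal sketch of loading one CT volume and its voxel-wise label map.
# File names and layout are hypothetical, not the dataset's documented structure.
from pathlib import Path

import nibabel as nib
import numpy as np

case_dir = Path("PanTS/case_0001")  # hypothetical case directory

ct = nib.load(case_dir / "ct.nii.gz")          # CT volume
labels = nib.load(case_dir / "labels.nii.gz")  # voxel-wise annotation map

volume = ct.get_fdata()                 # HU values, shape (X, Y, Z)
mask = labels.get_fdata().astype(int)   # integer structure label per voxel

# In-plane spacing and slice thickness can be read from the NIfTI header.
sx, sy, sz = ct.header.get_zooms()
print(f"spacing: {sx:.2f} x {sy:.2f} mm, slice thickness: {sz:.2f} mm")
print("structure labels present:", np.unique(mask))
```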

[AI-based applications in medical image computing].

Kepp T, Uzunova H, Ehrhardt J, Handels H

PubMed · Jul 2, 2025
The processing of medical images plays a central role in modern diagnostics and therapy. Automated processing and analysis of medical images can efficiently accelerate clinical workflows and open new opportunities for improved patient care. However, the high variability, complexity, and varying quality of medical image data pose significant challenges. In recent years, the greatest progress in medical image analysis has been achieved through artificial intelligence (AI), particularly by using deep neural networks in the context of deep learning. These methods are successfully applied in medical image analysis, including segmentation, registration, and image synthesis. AI-based segmentation allows for the precise delineation of organs, tissues, or pathological changes. The application of AI-based image registration supports the accelerated creation of 3D planning models for complex surgeries by aligning relevant anatomical structures from different imaging modalities (e.g., CT, MRI, and PET) or time points. Generative AI methods can be used to generate additional image data for the improved training of AI models, thereby expanding the potential applications of deep learning methods in medicine. Examples from radiology, ophthalmology, dermatology, and surgery are described to illustrate their practical relevance and the potential of AI in image-based diagnostics and therapy.

A deep learning-based computed tomography reading system for the diagnosis of lung cancer associated with cystic airspaces.

Hu Z, Zhang X, Yang J, Zhang B, Chen H, Shen W, Li H, Zhou Y, Zhang J, Qiu K, Xie Z, Xu G, Tan J, Pang C

PubMed · Jul 2, 2025
To propose a deep learning model and explore its performance in the auxiliary diagnosis of lung cancer associated with cystic airspaces (LCCA) in computed tomography (CT) images. This study is a retrospective analysis that incorporated a total of 342 CT series, comprising 272 series from patients diagnosed with LCCA and 70 series from patients with pulmonary bulla. A deep learning model named LungSSFNet, developed on the basis of nnU-Net, was used for image recognition and segmentation, with reference annotations provided by experienced thoracic surgeons. The dataset was divided into a training set (245 series), a validation set (62 series), and a test set (35 series). The performance of LungSSFNet was compared with that of other models such as UNet, M2Snet, TANet, MADGNet, and nnU-Net to evaluate its effectiveness in recognizing and segmenting LCCA and pulmonary bulla. LungSSFNet achieved an intersection over union of 81.05% and a Dice similarity coefficient of 75.15% for LCCA, and 93.03% and 92.04% for pulmonary bulla, respectively. These outcomes demonstrate that LungSSFNet outperformed many existing models in segmentation tasks. Additionally, it attained an accuracy of 96.77%, a precision of 100%, and a sensitivity of 96.15%. LungSSFNet, a new deep learning model, substantially improved the diagnosis of early-stage LCCA and is potentially valuable for auxiliary clinical decision-making. Our LungSSFNet code is available at https://github.com/zx0412/LungSSFNet.
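
For reference, the two overlap metrics reported here, intersection over union and the Dice similarity coefficient, can be computed from binary masks as in this short NumPy sketch; the toy masks are illustrative.

```python
# Overlap metrics between a predicted and a ground-truth binary mask.
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    """Intersection over union of two boolean masks."""
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union else 1.0

def dice(pred: np.ndarray, target: np.ndarray) -> float:
    """Dice similarity coefficient of two boolean masks."""
    inter = np.logical_and(pred, target).sum()
    total = pred.sum() + target.sum()
    return 2 * inter / total if total else 1.0

pred = np.zeros((64, 64), bool); pred[10:40, 10:40] = True  # toy prediction
gt = np.zeros((64, 64), bool); gt[15:45, 15:45] = True      # toy ground truth
print(f"IoU={iou(pred, gt):.4f}  Dice={dice(pred, gt):.4f}")
```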

Deep learning strategies for semantic segmentation of pediatric brain tumors in multiparametric MRI.

Cariola A, Sibilano E, Guerriero A, Bevilacqua V, Brunetti A

PubMed · Jul 2, 2025
Automated segmentation of pediatric brain tumors (PBTs) can support precise diagnosis and treatment monitoring, but it is still poorly investigated in the literature. This study proposes two different deep learning approaches for semantic segmentation of tumor regions in PBTs from MRI scans. Two pipelines were developed for segmenting enhanced tumor (ET), tumor core (TC), and whole tumor (WT) in pediatric gliomas from the BraTS-PEDs 2024 dataset. First, a pre-trained SegResNet model was retrained with a transfer learning approach and tested on the pediatric cohort. Then, two novel multi-encoder architectures leveraging the attention mechanism were designed and trained from scratch. To enhance the performance on ET regions, an ensemble paradigm and post-processing techniques were implemented. Overall, the 3-encoder model achieved the best performance in terms of Dice score on TC and WT when trained with Dice Loss, and on ET when trained with Generalized Dice Focal Loss. SegResNet showed higher recall on TC and WT, and higher precision on ET. After post-processing, we reached Dice scores of 0.843, 0.869, and 0.757 with the pre-trained model and 0.852, 0.876, and 0.764 with the ensemble model for TC, WT, and ET, respectively. Both strategies yielded state-of-the-art performance, with the ensemble demonstrating significantly superior results. Segmentation of the ET region was improved after post-processing, which increased test metrics while maintaining the integrity of the data.
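
The Dice-based objectives mentioned above are typically implemented as a soft Dice loss over class probabilities; the following is a generic PyTorch sketch of that loss, not the authors' exact implementation.

```python
# Generic multi-class soft Dice loss over softmax probabilities.
import torch

def soft_dice_loss(logits: torch.Tensor, target: torch.Tensor,
                   eps: float = 1e-5) -> torch.Tensor:
    """logits: (N, C, ...) raw scores; target: (N, C, ...) one-hot labels."""
    probs = torch.softmax(logits, dim=1)
    dims = tuple(range(2, logits.ndim))       # all spatial dimensions
    inter = (probs * target).sum(dim=dims)
    denom = probs.sum(dim=dims) + target.sum(dim=dims)
    dice = (2 * inter + eps) / (denom + eps)  # per-sample, per-class Dice
    return 1 - dice.mean()

logits = torch.randn(2, 3, 16, 16, 16)        # toy 3-class 3D batch
target = torch.nn.functional.one_hot(
    torch.randint(0, 3, (2, 16, 16, 16)), 3).permute(0, 4, 1, 2, 3).float()
print(soft_dice_loss(logits, target))
```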

Multi-scale fusion semantic enhancement network for medical image segmentation.

Zhang Z, Xu C, Li Z, Chen Y, Nie C

PubMed · Jul 2, 2025
The application of sophisticated computer vision techniques for medical image segmentation (MIS) plays a vital role in clinical diagnosis and treatment. Although Transformer-based models are effective at capturing global context, they often struggle with local feature dependencies. To address this problem, we design a Multi-scale Fusion and Semantic Enhancement Network (MFSE-Net) for endoscopic image segmentation, which aims to capture global information and enhance detailed information. MFSE-Net uses a dual-encoder architecture, with PVTv2 as the primary encoder to capture global features and CNNs as the secondary encoder to capture local details. The main encoder includes the LGDA (Large-kernel Grouped Deformable Attention) module for filtering noise and enhancing the semantic extraction of the four hierarchical features. The auxiliary encoder leverages the MLCF (Multi-Layered Cross-attention Fusion) module to integrate high-level semantic data from the deep CNN with fine spatial details from the shallow layers, enhancing the precision of boundaries and positioning. On the decoder side, we introduce the PSE (Parallel Semantic Enhancement) module, which embeds the boundary and position information of the secondary encoder into the output features of the backbone network. In the multi-scale decoding process, we also add a SAM (Scale Aware Module) to recover global semantic information and compensate for the loss of boundary details. Extensive experiments show that MFSE-Net consistently outperforms state-of-the-art methods on renal tumor and polyp datasets.
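
To make the dual-encoder fusion idea concrete, here is a schematic sketch of combining token sequences from a global (transformer) branch and a local (CNN) branch via cross-attention; the shapes and fusion design are illustrative assumptions, not the paper's MLCF module.

```python
# Schematic cross-attention fusion of two encoder branches.
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, global_feat, local_feat):
        # global_feat, local_feat: (N, L, C) flattened feature-map tokens
        fused, _ = self.attn(query=local_feat, key=global_feat,
                             value=global_feat)   # local queries attend globally
        return self.norm(local_feat + fused)      # residual connection

tokens_g = torch.randn(2, 196, 64)  # e.g. flattened transformer stage output
tokens_l = torch.randn(2, 196, 64)  # flattened CNN stage output
print(CrossAttentionFusion(64)(tokens_g, tokens_l).shape)  # (2, 196, 64)
```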

AI-driven genetic algorithm-optimized lung segmentation for precision in early lung cancer diagnosis.

Said Y, Ayachi R, Afif M, Saidani T, Alanezi ST, Saidani O, Algarni AD

PubMed · Jul 2, 2025
Lung cancer remains the leading cause of cancer-related mortality worldwide, necessitating accurate and efficient diagnostic tools to improve patient outcomes. Lung segmentation plays a pivotal role in the diagnostic pipeline, directly impacting the accuracy of disease detection and treatment planning. This study presents an advanced AI-driven framework, optimized through genetic algorithms, for precise lung segmentation in early cancer diagnosis. The proposed model builds upon the UNet3+ architecture and integrates multi-scale feature extraction with enhanced optimization strategies to improve segmentation accuracy while significantly reducing computational complexity. By leveraging genetic algorithms, the framework identifies optimal neural network configurations within a defined search space, ensuring high segmentation performance with minimal parameters. Extensive experiments conducted on publicly available lung segmentation datasets demonstrated superior results, achieving a Dice similarity coefficient of 99.17% with only 26% of the parameters required by the baseline UNet3+ model. This substantial reduction in model size and computational cost makes the system highly suitable for resource-constrained environments, including point-of-care diagnostic devices. The proposed approach exemplifies the transformative potential of AI in medical imaging, enabling earlier and more precise lung cancer diagnosis while reducing healthcare disparities in resource-limited settings.
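
A toy sketch of this kind of genetic-algorithm architecture search appears below: each genome encodes a network configuration, and fitness trades validation accuracy against parameter count. The search space, fitness weighting, and the train_and_evaluate stub are illustrative assumptions, not the paper's actual setup.

```python
# Toy genetic-algorithm search over network configurations.
import random

# Hypothetical search space; fields are illustrative, not the paper's genome.
SEARCH_SPACE = {"depth": [3, 4, 5], "base_filters": [16, 32, 64],
                "use_attention": [False, True]}

def train_and_evaluate(genome):
    """Placeholder: in practice, train the configured UNet3+-style model
    and return (validation Dice, parameter count)."""
    n_params = genome["base_filters"] * genome["depth"] * 10_000
    return random.random(), n_params  # stand-in for real validation Dice

def fitness(genome):
    dice, n_params = train_and_evaluate(genome)
    return dice - 1e-8 * n_params     # reward accuracy, penalize model size

def evolve(pop_size=20, generations=10, mutation_rate=0.2):
    pop = [{k: random.choice(v) for k, v in SEARCH_SPACE.items()}
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)      # uniform crossover
            child = {k: random.choice([a[k], b[k]]) for k in SEARCH_SPACE}
            if random.random() < mutation_rate:   # point mutation
                k = random.choice(list(SEARCH_SPACE))
                child[k] = random.choice(SEARCH_SPACE[k])
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

print(evolve())
```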

Multi-scheme cross-level attention embedded U-shape transformer for MRI semantic segmentation.

Wang Q, Xue Y

PubMed · Jul 2, 2025
Accurate MRI image segmentation is crucial for disease diagnosis, but current Transformer-based methods face two key challenges: limited capability to capture detailed information, leading to blurred boundaries and false localization, and the lack of MRI-specific embedding paradigms for attention modules, which limits their potential and representation capability. To address these challenges, this paper proposes a multi-scheme cross-level attention embedded U-shape Transformer (MSCL-SwinUNet). This model integrates cross-level spatial-wise attention (SW-Attention) to transfer detailed information from the encoder to the decoder, cross-stage channel-wise attention (CW-Attention) to filter out redundant features and enhance task-related channels, and multi-stage scale-wise attention (ScaleW-Attention) to adaptively process multi-scale features. Extensive experiments on the ACDC, MM-WHS, and Synapse datasets demonstrate that the proposed MSCL-SwinUNet surpasses state-of-the-art methods in accuracy and generalizability. Visualization further confirms the superiority of our model in preserving detailed boundaries. This work not only advances Transformer-based segmentation in medical imaging but also provides new insights into designing MRI-specific attention embedding paradigms. Our code is available at https://github.com/waylans/MSCL-SwinUNet.
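
As one generic example of channel-wise gating in the spirit of CW-Attention, a squeeze-and-excitation-style sketch follows; it is a sketch of the general technique, not the paper's exact module.

```python
# Generic channel-wise attention gate (squeeze-and-excitation style).
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                    # x: (N, C, H, W)
        w = self.fc(x.mean(dim=(2, 3)))      # squeeze to (N, C), then excite
        return x * w[:, :, None, None]       # reweight task-relevant channels

feat = torch.randn(2, 32, 56, 56)
print(ChannelAttention(32)(feat).shape)      # torch.Size([2, 32, 56, 56])
```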

Enhanced security for medical images using a new 5D hyper chaotic map and deep learning based segmentation.

Subathra S, Thanikaiselvan V

PubMed · Jul 2, 2025
Medical image encryption is important for maintaining the confidentiality of sensitive medical data and protecting patient privacy. Contemporary healthcare systems store significant patient data in text and graphic form. This research proposes a new 5D hyperchaotic system combined with a customised U-Net architecture. Chaotic maps have become an increasingly popular method for encryption because of their remarkable characteristics, including statistical randomness and sensitivity to initial conditions. The significant region is segmented from the medical images using the U-Net network, and its statistics are utilised as initial conditions to generate the new random sequence. Initially, zig-zag scrambling confuses the pixel positions of a medical image, and further permutation is applied with the new 5D hyperchaotic sequence. Two stages of diffusion, dynamic DNA flip and dynamic DNA XOR, are used to enhance the encryption algorithm's security against various attacks. The randomness of the new 5D hyperchaotic system is verified using the NIST SP800-22 statistical test, by calculating the Lyapunov exponent, and by plotting the attractor diagram of the chaotic sequence. The algorithm is validated with statistical measures such as PSNR, MSE, NPCR, UACI, entropy, and Chi-square values. Evaluation on test images yields average horizontal, vertical, and diagonal correlation coefficients of -0.0018, -0.0002, and 0.0007, respectively, a Shannon entropy of 7.9971, a Kolmogorov entropy of 2.9469, an NPCR of 99.61%, a UACI of 33.49%, Chi-square "PASS" at both the 5% (293.2478) and 1% (310.4574) significance levels, a key space of 2^500, and an average encryption time of approximately 2.93 s per 256 × 256 image on a standard desktop CPU. Performance comparisons with various encryption methods demonstrate that the proposed method remains secure and reliable against various attacks.
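
The two differential-attack metrics quoted above, NPCR and UACI, are straightforward to compute between two cipher images; a short NumPy sketch follows, with independent random images standing in for the two ciphertexts.

```python
# NPCR and UACI between two 8-bit cipher images.
import numpy as np

def npcr(c1: np.ndarray, c2: np.ndarray) -> float:
    """Number of Pixels Change Rate, in percent."""
    return 100.0 * np.mean(c1 != c2)

def uaci(c1: np.ndarray, c2: np.ndarray) -> float:
    """Unified Average Changing Intensity, in percent (8-bit images)."""
    return 100.0 * np.mean(np.abs(c1.astype(int) - c2.astype(int)) / 255.0)

# Two independent random images stand in for the cipher image and the cipher
# of the same plaintext with one pixel changed.
rng = np.random.default_rng(0)
c1 = rng.integers(0, 256, (256, 256), dtype=np.uint8)
c2 = rng.integers(0, 256, (256, 256), dtype=np.uint8)
print(f"NPCR={npcr(c1, c2):.2f}%  UACI={uaci(c1, c2):.2f}%")  # ~99.6%, ~33.4%
```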

Calibrated Self-supervised Vision Transformers Improve Intracranial Arterial Calcification Segmentation from Clinical CT Head Scans

Benjamin Jin, Grant Mair, Joanna M. Wardlaw, Maria del C. Valdés Hernández

arXiv preprint · Jul 2, 2025
Vision Transformers (ViTs) have gained significant popularity in the natural image domain but have been less successful in 3D medical image segmentation. Nevertheless, 3D ViTs are particularly interesting for large medical imaging volumes due to their efficient self-supervised training within the masked autoencoder (MAE) framework, which enables the use of imaging data without the need for expensive manual annotations. Intracranial arterial calcification (IAC) is an imaging biomarker, visible on routinely acquired CT scans, that is linked to neurovascular diseases such as stroke and dementia, and automated IAC quantification could enable large-scale risk assessment. We pre-train ViTs with MAE and fine-tune them for IAC segmentation for the first time. To develop our models, we use highly heterogeneous data from a large clinical trial, the third International Stroke Trial (IST-3). We evaluate key aspects of MAE pre-trained ViTs in IAC segmentation and analyse the clinical implications. We show that: 1) our calibrated self-supervised ViT beats a strong supervised nnU-Net baseline by 3.2 Dice points; 2) low patch sizes are crucial for ViTs in IAC segmentation, and interpolation upsampling with regular convolutions is preferable to transposed convolutions for ViT-based models; and 3) our ViTs increase robustness to higher slice thicknesses and improve risk group classification in a clinical scenario by 46%. Our code is available online.
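
The core of MAE pre-training is random patch masking, with only the visible patches passed to the ViT encoder; below is a minimal sketch of that step, with the masking ratio and shapes as illustrative assumptions.

```python
# Random patch masking as used in MAE-style self-supervised pre-training.
import torch

def random_masking(patches: torch.Tensor, mask_ratio: float = 0.75):
    """patches: (N, L, D) patch embeddings; returns visible subset + indices."""
    n, l, d = patches.shape
    n_keep = int(l * (1 - mask_ratio))
    noise = torch.rand(n, l)                  # random score per patch
    keep = noise.argsort(dim=1)[:, :n_keep]   # keep the lowest-scored patches
    visible = torch.gather(
        patches, 1, keep[:, :, None].expand(-1, -1, d))
    return visible, keep  # a decoder later reconstructs the masked patches

tokens = torch.randn(2, 196, 768)  # e.g. 14x14 patches at ViT-Base width
vis, idx = random_masking(tokens)
print(vis.shape)                   # torch.Size([2, 49, 768])
```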

A Novel Two-step Classification Approach for Differentiating Bone Metastases From Benign Bone Lesions in SPECT/CT Imaging.

Xie W, Wang X, Liu M, Mai L, Shangguan H, Pan X, Zhan Y, Zhang J, Wu X, Dai Y, Pei Y, Zhang G, Yao Z, Wang Z

PubMed · Jul 2, 2025
This study aims to develop and validate a novel two-step deep learning framework for the automated detection, segmentation, and classification of bone metastases in SPECT/CT imaging, accurately distinguishing malignant from benign lesions to improve early diagnosis and facilitate personalized treatment planning. A segmentation model, BL-Seg, was developed to automatically segment lesion regions in SPECT/CT images, utilizing a multi-scale attention fusion module and a triple attention mechanism to capture metabolic variations and refine lesion boundaries. A radiomics-based ensemble learning classifier was subsequently applied to integrate metabolic and texture features for benign-malignant differentiation. The framework was trained and evaluated using a proprietary dataset of SPECT/CT images collected from our institution, divided into training and test sets acquired on Siemens SPECT/CT scanners with minor protocol differences. Performance metrics, including the Dice coefficient, sensitivity, specificity, and AUC, were compared against conventional methods. BL-Seg achieved a Dice coefficient of 0.8797, surpassing existing segmentation models. The classification model yielded an AUC of 0.8502, with improved sensitivity and specificity compared to traditional approaches. The proposed framework, with BL-Seg's automated lesion segmentation, demonstrates superior accuracy in detecting, segmenting, and classifying bone metastases, offering a robust tool for early diagnosis and personalized treatment planning in metastatic bone disease.
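
A schematic sketch of such a two-step pipeline is given below: a segmentation stage proposes lesion masks, then simple hand-crafted features feed an ensemble classifier. The feature set and classifier choice are illustrative assumptions, not the paper's BL-Seg or its radiomics pipeline.

```python
# Two-step pipeline sketch: segment lesions, then classify benign vs. malignant.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def lesion_features(image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Simple intensity/shape statistics inside a predicted lesion mask."""
    voxels = image[mask > 0]
    return np.array([voxels.mean(), voxels.std(), voxels.max(), mask.sum()])

# Toy stand-ins for segmented cases: (image, predicted mask, label) triples.
rng = np.random.default_rng(0)
cases = [(rng.normal(size=(32, 32, 32)),
          (rng.random((32, 32, 32)) > 0.9).astype(int),
          rng.integers(0, 2)) for _ in range(40)]

X = np.stack([lesion_features(img, m) for img, m, _ in cases])
y = np.array([label for _, _, label in cases])

clf = GradientBoostingClassifier().fit(X[:30], y[:30])  # benign vs. metastasis
print("test accuracy:", clf.score(X[30:], y[30:]))
```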