
Convolutional neural network for maxillary sinus segmentation based on the U-Net architecture at different planes in the Chinese population: a semantic segmentation study.

Chen J

pubmed · Jul 1 2025
The development of artificial intelligence has revolutionized the field of dentistry, and medical image segmentation is a vital part of its applications, assisting practitioners in accurately diagnosing disease. Accurate delineation of the maxillary sinus (MS) is important in surgical procedures such as dental implant placement, tooth extraction, and endoscopic surgery, and accurate segmentation of the MS in radiological images is a prerequisite for diagnosis and treatment planning. This study aims to investigate the feasibility of applying a CNN algorithm based on the U-Net architecture to facilitate MS segmentation in individuals from the Chinese population. A total of 300 CBCT images in the axial, coronal, and sagittal planes were used and divided into a training set and a test set at a ratio of 8:2. The maxillary sinus regions were labelled in the original images for training and testing. Training was performed for 40 epochs with a learning rate of 0.00001 on a GeForce RTX 3060 GPU. The best model was retained to predict the MS in the test set and to compute evaluation metrics. The trained U-Net model achieved high segmentation accuracy across the three imaging planes: IoU values were 0.942, 0.937, and 0.916 in the axial, sagittal, and coronal planes, respectively, with F1 scores exceeding 0.95 in all planes, and accuracies of 0.997, 0.998, and 0.995, respectively. The trained U-Net model thus achieved highly accurate segmentation of the MS across all three planes from 2D CBCT images in the Chinese population, showing promising potential for daily clinical practice. Trial registration: not applicable.
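
The reported IoU, F1, and accuracy follow their standard binary-mask definitions; a minimal sketch of how such plane-wise metrics could be computed (illustrative only, not the authors' code):

```python
import numpy as np

def binary_mask_metrics(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7):
    """IoU, F1 (identical to Dice), and pixel accuracy for equal-shape binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    tn = np.logical_and(~pred, ~gt).sum()
    iou = tp / (tp + fp + fn + eps)
    f1 = 2 * tp / (2 * tp + fp + fn + eps)
    acc = (tp + tn) / (tp + fp + fn + tn)
    return iou, f1, acc
```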

Automated classification of chondroid tumor using 3D U-Net and radiomics with deep features.

Le Dinh T, Lee S, Park H, Lee S, Choi H, Chun KS, Jung JY

pubmed · Jul 1 2025
Classifying chondroid tumors is an essential step for effective treatment planning. Recently, with advances in computer-aided diagnosis and the increasing availability of medical imaging data, automated tumor classification using deep learning shows promise in assisting clinical decision-making. In this study, we propose a hybrid approach that integrates deep learning and radiomics for chondroid tumor classification. First, we performed tumor segmentation using the nnUNetv2 framework, which provided three-dimensional (3D) delineation of tumor regions of interest (ROIs). From these ROIs, we extracted a set of radiomics features and deep learning-derived features. After feature selection, we identified 15 radiomics and 15 deep features to build classification models. We trained five machine learning classifiers: Random Forest, XGBoost, Gradient Boosting, LightGBM, and CatBoost. The approach integrating radiomics features, ROI-derived deep learning features, and clinical variables yielded the best overall classification results. Among the classifiers, CatBoost achieved the highest accuracy of 0.90 (95% CI 0.90-0.93), a weighted kappa of 0.85, and an AUC of 0.91. These findings highlight the potential of integrating 3D U-Net-assisted segmentation with radiomics and deep learning features to improve the classification of chondroid tumors.
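
A hedged sketch of the feature-fusion step described here: concatenating the selected radiomics and deep features with clinical variables and fitting a CatBoost classifier. Array shapes, counts, and hyperparameters are placeholders, not the authors' settings:

```python
import numpy as np
from catboost import CatBoostClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 200                                  # hypothetical number of patients
radiomics = rng.normal(size=(n, 15))     # 15 selected radiomics features
deep_feats = rng.normal(size=(n, 15))    # 15 selected deep features
clinical = rng.normal(size=(n, 3))       # e.g., age and other clinical variables
labels = rng.integers(0, 3, size=n)      # placeholder tumor classes

# Fuse the three feature groups into one design matrix.
X = np.concatenate([radiomics, deep_feats, clinical], axis=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, stratify=labels, random_state=0)

clf = CatBoostClassifier(iterations=500, verbose=False, random_seed=0)
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```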

Establishment and evaluation of an automatic multi-sequence MRI segmentation model of primary central nervous system lymphoma based on the nnU-Net deep learning network method.

Wang T, Tang X, Du J, Jia Y, Mou W, Lu G

pubmed · Jul 1 2025
Accurate quantitative assessment using gadolinium contrast-enhanced magnetic resonance imaging (MRI) is crucial in therapy planning, surveillance, and prognostic assessment of primary central nervous system lymphoma (PCNSL). The present study aimed to develop a multimodal artificial intelligence deep learning segmentation model to address the challenges associated with traditional 2D measurements and manual volume assessments in MRI. Data from 49 pathologically confirmed patients with PCNSL from six Chinese medical centers were analyzed, and regions of interest were manually segmented on contrast-enhanced T1-weighted and T2-weighted MRI scans for each patient, followed by fully automated voxel-wise segmentation of tumor components using a three-dimensional convolutional deep neural network. The efficiency of the model was evaluated using practical indicators, and its consistency and accuracy were compared with traditional methods. Performance was assessed using the Dice similarity coefficient (DSC). The Mann-Whitney U test was used to compare continuous clinical variables, and the χ² test was used for comparisons between categorical clinical variables. T1WI sequences exhibited the best performance (training Dice: 0.923; testing Dice: 0.830; external validation Dice: 0.801), while T2WI performed comparatively poorly (training Dice: 0.761; testing Dice: 0.647; external validation Dice: 0.643). In conclusion, the automatic multi-sequence MRI segmentation model for PCNSL displayed a high spatial overlap ratio and tumor volumes similar to routine manual segmentation, indicating significant potential.
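
For the cohort comparisons mentioned here, the two named tests are available in SciPy; a small self-contained sketch with synthetic stand-in data (variable names and values are hypothetical):

```python
import numpy as np
from scipy.stats import mannwhitneyu, chi2_contingency

rng = np.random.default_rng(0)
# Hypothetical cohorts: a continuous variable (e.g., age) and a 2x2 table
# of a categorical variable (e.g., sex) across training/validation splits.
ages_train = rng.normal(62, 10, size=35)
ages_val = rng.normal(60, 11, size=14)
sex_table = np.array([[20, 15],   # train: male, female
                      [8, 6]])    # validation: male, female

u_stat, p_cont = mannwhitneyu(ages_train, ages_val)
chi2, p_cat, dof, _ = chi2_contingency(sex_table)
print(f"Mann-Whitney U p={p_cont:.3f}, chi-square p={p_cat:.3f}")
```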

Deep Learning for Detecting and Subtyping Renal Cell Carcinoma on Contrast-Enhanced CT Scans Using 2D Neural Network with Feature Consistency Techniques.

Gupta A, Dhanakshirur RR, Jain K, Garg S, Yadav N, Seth A, Das CJ

pubmed · Jul 1 2025
Objective: The aim of this study was to explore an innovative approach for developing a deep learning (DL) algorithm for renal cell carcinoma (RCC) detection and subtyping on computed tomography (CT), distinguishing clear cell RCC (ccRCC) from non-ccRCC, using a two-dimensional (2D) neural network architecture and feature consistency modules. Materials and Methods: This retrospective study included baseline CT scans from 196 histopathologically proven RCC patients: 143 ccRCCs and 53 non-ccRCCs. Manual tumor annotations were performed on axial slices of corticomedullary phase images, serving as ground truth. After image preprocessing, the dataset was divided into training, validation, and testing subsets. The study tested multiple 2D DL architectures, with FocalNet-DINO demonstrating the highest effectiveness in detecting and classifying RCC. The study further incorporated spatial and class consistency modules to enhance prediction accuracy. Model performance was evaluated using free-response receiver operating characteristic (FROC) curves, recall rates, specificity, accuracy, F1 scores, and area under the curve (AUC) scores. Results: The FocalNet-DINO architecture achieved the highest recall rate of 0.823 at 0.025 false positives per image (FPI) for RCC detection. Integrating the spatial and class consistency modules led to a 0.2% increase in recall rate at 0.025 FPI, along with improvements of 0.1% in both accuracy and AUC scores for RCC classification. These enhancements allowed detection of cancer in an additional 21 slices and reduced false positives in 126 slices. Conclusion: This study demonstrates high performance for RCC detection and classification using a DL algorithm that leverages 2D neural networks with spatial and class consistency modules, offering a novel, computationally simpler, and accurate approach to RCC characterization.
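
Recall at a fixed false-positives-per-image operating point, as reported here, is one point on an FROC curve; a hedged sketch of how it could be computed from scored detections (assumes detection-to-ground-truth matching has already been done upstream):

```python
import numpy as np

def recall_at_fpi(scores, is_tp, n_images, n_gt, target_fpi=0.025):
    """Recall at the confidence threshold where false positives
    per image first reach target_fpi."""
    scores = np.asarray(scores, dtype=float)
    is_tp = np.asarray(is_tp, dtype=bool)
    order = np.argsort(-scores)        # sweep threshold from high to low
    tp = np.cumsum(is_tp[order])
    fp = np.cumsum(~is_tp[order])
    fpi = fp / n_images                # non-decreasing along the sweep
    idx = np.searchsorted(fpi, target_fpi, side="right") - 1
    return tp[idx] / n_gt if idx >= 0 else 0.0
```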

Diffusion Model-based Data Augmentation Method for Fetal Head Ultrasound Segmentation

Fangyijie Wang, Kevin Whelan, Félix Balado, Guénolé Silvestre, Kathleen M. Curran

arxiv preprint · Jun 30 2025
Medical image data is less accessible than data in other domains due to privacy and regulatory constraints, and labeling requires costly, time-intensive manual annotation by clinical experts. To overcome these challenges, synthetic medical data generation offers a promising solution. Generative AI (GenAI), employing generative deep learning models, has proven effective at producing realistic synthetic images. This study proposes a novel mask-guided GenAI approach using diffusion models to generate synthetic fetal head ultrasound images paired with segmentation masks. These synthetic pairs augment real datasets for supervised fine-tuning of the Segment Anything Model (SAM). Our results show that the synthetic data captures real image features effectively, and the approach achieves state-of-the-art fetal head segmentation, especially when trained with a limited number of real image-mask pairs. In particular, segmentation reaches Dice scores of 94.66% and 94.38% using a handful of ultrasound images from the Spanish and African cohorts, respectively. Our code, models, and data are available on GitHub.
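
The augmentation step boils down to mixing a small real set with a large synthetic set of image-mask pairs before fine-tuning; a hedged PyTorch sketch (tensor shapes and counts are placeholders, and the SAM fine-tuning itself is not shown):

```python
import torch
from torch.utils.data import Dataset, ConcatDataset, DataLoader

class ImageMaskPairs(Dataset):
    """Wraps pre-loaded (image, mask) tensors; real and synthetic sets share this format."""
    def __init__(self, images: torch.Tensor, masks: torch.Tensor):
        assert len(images) == len(masks)
        self.images, self.masks = images, masks

    def __len__(self):
        return len(self.images)

    def __getitem__(self, i):
        return self.images[i], self.masks[i]

# Hypothetical tensors: a handful of real pairs plus many diffusion-generated pairs.
real = ImageMaskPairs(torch.rand(20, 1, 256, 256), torch.randint(0, 2, (20, 1, 256, 256)))
synthetic = ImageMaskPairs(torch.rand(500, 1, 256, 256), torch.randint(0, 2, (500, 1, 256, 256)))

train_loader = DataLoader(ConcatDataset([real, synthetic]), batch_size=8, shuffle=True)
# Batches from this loader would feed the supervised fine-tuning stage.
```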

VAP-Diffusion: Enriching Descriptions with MLLMs for Enhanced Medical Image Generation

Peng Huang, Junhu Fu, Bowen Guo, Zeju Li, Yuanyuan Wang, Yi Guo

arxiv preprint · Jun 30 2025
As the appearance of medical images is influenced by multiple underlying factors, generative models require rich attribute information beyond labels to produce realistic and diverse images. For instance, generating an image of a skin lesion with specific patterns demands descriptions that go beyond the diagnosis, such as shape, size, texture, and color. However, such detailed descriptions are not always accessible. To address this, we explore a framework, termed Visual Attribute Prompts (VAP)-Diffusion, that leverages external knowledge from pre-trained multi-modal large language models (MLLMs) to improve the quality and diversity of medical image generation. First, to derive descriptions from MLLMs without hallucination, we design a series of Chain-of-Thought prompts for common medical imaging tasks, including dermatologic, colorectal, and chest X-ray images. Generated descriptions are utilized during training and stored across different categories. During testing, descriptions are randomly retrieved from the corresponding category for inference. Moreover, to make the generator robust to unseen combinations of descriptions at test time, we propose a Prototype Condition Mechanism that restricts test embeddings to be similar to those seen during training. Experiments on three common types of medical imaging across four datasets verify the effectiveness of VAP-Diffusion.
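
One way to read the Prototype Condition Mechanism is as pulling each test-time description embedding toward a class prototype built from stored training embeddings; the sketch below is an interpretation under that assumption, not the paper's exact formulation:

```python
import torch

def prototype_condition(test_emb: torch.Tensor,
                        train_embs_by_class: dict[int, torch.Tensor],
                        label: int,
                        alpha: float = 0.5) -> torch.Tensor:
    """Blend a test-time embedding with its class prototype (mean of training
    embeddings). alpha=1 keeps the raw embedding; alpha=0 uses the prototype alone."""
    prototype = train_embs_by_class[label].mean(dim=0)
    return alpha * test_emb + (1 - alpha) * prototype

# Hypothetical usage with 768-d description embeddings stored per category.
store = {0: torch.randn(100, 768), 1: torch.randn(80, 768)}
conditioned = prototype_condition(torch.randn(768), store, label=0)
```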

GUSL: A Novel and Efficient Machine Learning Model for Prostate Segmentation on MRI

Jiaxin Yang, Vasileios Magoulianitis, Catherine Aurelia Christie Alexander, Jintang Xue, Masatomo Kaneko, Giovanni Cacciamani, Andre Abreu, Vinay Duddalwar, C.-C. Jay Kuo, Inderbir S. Gill, Chrysostomos Nikias

arxiv preprint · Jun 30 2025
Prostate and zonal segmentation is a crucial step in the clinical diagnosis of prostate cancer (PCa). Computer-aided diagnosis tools for prostate segmentation are based on the deep learning (DL) paradigm; however, deep neural networks are perceived as "black-box" solutions by physicians, making them less practical for deployment in the clinical setting. In this paper, we introduce a feed-forward machine learning model, named Green U-shaped Learning (GUSL), suitable for medical image segmentation without backpropagation. GUSL introduces a multi-layer regression scheme for coarse-to-fine segmentation. Its feature extraction is based on a linear model, which enables seamless interpretability. GUSL also introduces a mechanism for attention on the prostate boundaries, an error-prone region, by employing regression to refine predictions through residue correction. In addition, a two-step pipeline is used to mitigate class imbalance, an issue inherent in medical imaging problems. In experiments on two publicly available datasets and one private dataset, covering both prostate gland and zonal segmentation tasks, GUSL achieves state-of-the-art performance compared with DL-based models. Notably, GUSL features a very energy-efficient pipeline, with a model size several times smaller and lower complexity than the competing solutions. On all datasets, GUSL achieved a Dice similarity coefficient (DSC) greater than 0.9 for gland segmentation. Considering also its lightweight model size and transparent feature extraction, it offers a competitive and practical package for medical imaging applications.
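
The residue-correction idea, refining a coarse prediction by regressing its residual with no backpropagation involved, can be illustrated with two ordinary least-squares stages; a hedged sketch on synthetic per-voxel features (GUSL's actual feature extraction and regressors differ):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
# Placeholder per-voxel features and soft segmentation targets in [0, 1].
X = rng.normal(size=(5000, 32))
y = (X[:, 0] + 0.1 * rng.normal(size=5000) > 0).astype(float)

coarse = LinearRegression().fit(X, y)          # stage 1: coarse soft prediction
residual = y - coarse.predict(X)               # stage 2 target: what stage 1 missed
refine = LinearRegression().fit(X, residual)   # fit the residue, e.g., near boundaries

y_hat = np.clip(coarse.predict(X) + refine.predict(X), 0.0, 1.0)
```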

Automatic Multiclass Tissue Segmentation Using Deep Learning in Brain MR Images of Tumor Patients.

Kandpal A, Kumar P, Gupta RK, Singh A

pubmed · Jun 30 2025
Precise delineation of brain tissues, including lesions, in MR images is crucial for data analysis and for objectively assessing conditions like neurological disorders and brain tumors. Existing methods for tissue segmentation often fall short in addressing patients with lesions, particularly those with brain tumors. This study aimed to develop and evaluate a robust pipeline utilizing convolutional neural networks for rapid, automatic segmentation of whole-brain tissues, including tumor lesions. The proposed pipeline was developed using BraTS'21 data (1251 patients) and tested on local hospital data (100 patients). Ground-truth masks for lesions as well as brain tissues were generated. Two convolutional neural networks based on the deep residual U-Net framework were trained, one segmenting brain tissues and one segmenting tumor lesions. The performance of the pipeline was evaluated on independent test data using the Dice similarity coefficient (DSC) and volume similarity (VS). The proposed pipeline achieved a mean DSC of 0.84 and a mean VS of 0.93 on the BraTS'21 test data set. On the local hospital test data set, it attained a mean DSC of 0.78 and a mean VS of 0.91. The proposed pipeline also generated satisfactory masks in cases where the SPM12 software performed inadequately. The proposed pipeline offers a reliable and automatic solution for segmenting brain tissues and tumor lesions in MR images. Its adaptability makes it a valuable tool for both research and clinical applications, potentially streamlining workflows and enhancing the precision of analyses in neurological and oncological studies.
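
With separate tissue and lesion networks, the final map has to reconcile two outputs; one plausible merge, in which lesion voxels overwrite tissue labels, is sketched below (label values and the override rule are assumptions, not the paper's specification):

```python
import numpy as np

def merge_tissue_and_lesion(tissue: np.ndarray, lesion: np.ndarray,
                            lesion_offset: int = 10) -> np.ndarray:
    """Overlay lesion labels on a tissue label map. Voxels where the lesion
    network fired (label > 0) replace the tissue label; the offset keeps the
    two label spaces disjoint. Label conventions here are illustrative."""
    merged = tissue.copy()
    merged[lesion > 0] = lesion[lesion > 0] + lesion_offset
    return merged

# Toy volumes: tissue labels 1-3 (e.g., GM/WM/CSF), lesion labels 1-2.
tissue = np.random.randint(1, 4, size=(8, 8, 8))
lesion = np.zeros_like(tissue); lesion[2:4, 2:4, 2:4] = 1
final_map = merge_tissue_and_lesion(tissue, lesion)
```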

Limited-angle SPECT image reconstruction using deep image prior.

Hori K, Hashimoto F, Koyama K, Hashimoto T

pubmed · Jun 30 2025
[Objective] In single-photon emission computed tomography (SPECT) image reconstruction, limited-angle conditions lead to a loss of frequency components, which distorts the reconstructed tomographic image along directions corresponding to the non-collected projection angle range. Although conventional iterative image reconstruction methods have been used to improve reconstructed images under limited-angle conditions, the image quality is still unsuitable for clinical use. We propose a limited-angle SPECT image reconstruction method that uses an end-to-end deep image prior (DIP) framework to improve reconstructed image quality. [Approach] The proposed method is an end-to-end DIP framework that incorporates a forward projection model into the loss function to optimise the neural network. By also incorporating a binary mask that indicates whether each data point in the measured projection data has been collected, the proposed method restores the non-collected projection data and reconstructs a less distorted image. [Main results] The proposed method was evaluated using 20 numerical phantoms and clinical patient data. In numerical simulations, it outperformed existing back-projection-based methods in terms of peak signal-to-noise ratio and structural similarity index measure. We analysed the reconstructed tomographic images in the frequency domain using an object-specific modulation transfer function, in simulations and on clinical patient data, to evaluate the response of the reconstruction method at different object frequencies. The proposed method significantly improved the response at almost all spatial frequencies, even within the non-collected projection angle range, demonstrating that it reconstructs a less distorted tomographic image. [Significance] The proposed end-to-end DIP-based reconstruction method restores lost frequency components and mitigates image distortion under limited-angle conditions by incorporating a binary mask into the loss function.
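
The masked data-fidelity objective described here amounts to L(θ) = ‖M ⊙ (A fθ(z) − p)‖², with A the forward projector, p the measured projections, and M the collected-angle mask; a hedged PyTorch sketch of one optimization step (the projector and network are assumed callables, not the authors' implementation):

```python
import torch

def dip_step(net, z, forward_project, proj_meas, mask, optimizer):
    """One DIP step: penalize the projection-domain mismatch only at
    collected angles (mask == 1). `forward_project` is an assumed
    differentiable SPECT system model; `z` is the fixed network input."""
    optimizer.zero_grad()
    recon = net(z)                       # candidate tomographic image
    loss = ((mask * (forward_project(recon) - proj_meas)) ** 2).mean()
    loss.backward()
    optimizer.step()
    return recon.detach(), loss.item()
```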

In-silico CT simulations of deep learning generated heterogeneous phantoms.

Salinas CS, Magudia K, Sangal A, Ren L, Segars PW

pubmed · Jun 30 2025
Current virtual imaging phantoms primarily emphasize the geometric accuracy of anatomical structures. However, to enhance realism, it is also important to incorporate intra-organ detail. Because biological tissues are heterogeneous in composition, virtual phantoms should reflect this by including realistic intra-organ texture and material variation. We propose training two 3D Double U-Net conditional generative adversarial networks (3D DUC-GAN) to generate sixteen unique textures that encompass organs found within the torso. The model was trained on 378 CT image-segmentation pairs taken from a publicly available dataset, with 18 additional pairs reserved for testing. Textured phantoms were generated and imaged using DukeSim, a virtual CT simulation platform. Results showed that the deep learning model was able to synthesize realistic heterogeneous phantoms from a set of homogeneous phantoms. These phantoms were compared with the original CT scans and had a mean absolute difference of 46.15 ± 1.06 HU. The structural similarity index (SSIM) and peak signal-to-noise ratio (PSNR) were 0.86 ± 0.004 and 28.62 ± 0.14, respectively. The maximum mean discrepancy between the generated and actual distributions was 0.0016. These metrics marked improvements of 27%, 5.9%, 6.2%, and 28%, respectively, compared with current homogeneous texture methods. The generated phantoms that underwent a virtual CT scan bore a closer visual resemblance to the true CT scan than those from the previous method. The resulting heterogeneous phantoms offer a significant step toward more realistic in silico trials, enabling enhanced simulation of imaging procedures with greater fidelity to true anatomical variation.
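
Maximum mean discrepancy, one of the distribution metrics reported here, has a compact kernel formulation; a hedged sketch of a biased Gaussian-kernel estimate between two sample sets (bandwidth and inputs are illustrative):

```python
import torch

def gaussian_mmd(x: torch.Tensor, y: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Biased Gaussian-kernel MMD^2 between two sample sets of shape (n, d)."""
    def k(a, b):
        d2 = torch.cdist(a, b).pow(2)
        return torch.exp(-d2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

# e.g., compare sampled HU values from generated vs. real organ textures
mmd2 = gaussian_mmd(torch.randn(256, 1), torch.randn(256, 1) * 1.1)
```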