
Automatic recognition and differentiation of pulmonary contusion and bacterial pneumonia based on deep learning and radiomics.

Deng T, Feng J, Le X, Xia Y, Shi F, Yu F, Zhan Y, Liu X, Li C

PubMed · Jul 1 2025
In clinical work, pulmonary contusion (PC) and bacterial pneumonia (BP) are difficult to distinguish on CT images by the naked eye alone when the history of trauma is unknown. Artificial intelligence is widely used in medical imaging, but its diagnostic performance for pulmonary contusion is unclear. In this study, artificial intelligence was used for the first time to differentiate pulmonary contusion from bacterial pneumonia, and its diagnostic performance was compared with that of human readers. In this retrospective study, 2179 patients treated between April 2016 and July 2022 at two hospitals were collected and divided into a training set, an internal validation set, and an external validation set. PC and BP were automatically recognized and segmented using VB-Net, and radiomics features were automatically extracted. Four machine learning algorithms, including Decision Trees, Logistic Regression, Random Forests, and Support Vector Machines (SVM), were used to build the models. The DeLong test was used to compare performance among the models. The best-performing model and four radiologists then read the external validation set to compare the diagnostic efficacy of humans and artificial intelligence. VB-Net automatically detected and segmented PC and BP. Among the four machine learning models, the DeLong test showed that the SVM model performed best, with AUC, accuracy, sensitivity, and specificity of 0.998 (95% CI: 0.995-1), 0.980, 0.979, and 0.982 in the training set; 0.891 (95% CI: 0.854-0.928), 0.979, 0.750, and 0.860 in the internal validation set; and 0.885 (95% CI: 0.850-0.920), 0.903, 0.976, and 0.794 in the external validation set. The diagnostic ability of the SVM model was superior to that of the human readers (P < 0.05). In conclusion, VB-Net automatically recognizes and segments PC and BP on chest CT images, and an SVM model based on radiomics features can quickly and accurately differentiate between them, with higher accuracy than experienced radiologists.
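
As a rough illustration of the paper's final modeling step, the sketch below trains an RBF-kernel SVM on a pre-extracted radiomics feature table with scikit-learn; the file name, column layout, and hyperparameters are assumptions for illustration, not the authors' pipeline.

```python
# Minimal sketch: radiomics features -> SVM classifier -> AUC/accuracy.
# "features.csv" is a hypothetical table with one row per lesion, radiomics
# feature columns, and a binary label column (1 = PC, 0 = BP).
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score, accuracy_score

df = pd.read_csv("features.csv")
X, y = df.drop(columns=["label"]), df["label"]
X_train, X_val, y_train, y_val = train_test_split(X, y, stratify=y, random_state=0)

# Scale features before the SVM; probability=True enables AUC computation.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
model.fit(X_train, y_train)

prob = model.predict_proba(X_val)[:, 1]
print("AUC:", roc_auc_score(y_val, prob))
print("Accuracy:", accuracy_score(y_val, model.predict(X_val)))
```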

Automatic segmentation of the midfacial bone surface from ultrasound images using deep learning methods.

Yuan M, Jie B, Han R, Wang J, Zhang Y, Li Z, Zhu J, Zhang R, He Y

PubMed · Jul 1 2025
With developments in computer science and technology, great progress has been made in three-dimensional (3D) ultrasound. Recently, ultrasound-based 3D bone modelling has attracted much attention, and its accuracy has been studied for the femur, tibia, and spine. Ultrasound allows bone-surface data to be acquired non-invasively and without radiation. Freehand 3D ultrasound of the bone surface can be roughly divided into two steps: segmentation of the bone surface from two-dimensional (2D) ultrasound images, and 3D reconstruction of the bone surface from the segmented images. The aim of this study was to develop an automatic algorithm based on deep learning methods to segment the midfacial bone surface from 2D ultrasound images. Six deep learning networks were trained (nnU-Net, U-Net, ConvNeXt, Mask2Former, SegFormer, and DDRNet). Their performance was compared against the ground truth and evaluated by Dice coefficient (DC), intersection over union (IoU), 95th percentile Hausdorff distance (HD95), average symmetric surface distance (ASSD), precision, recall, and time. nnU-Net yielded the highest DC of 89.3% ± 13.6% and the lowest ASSD of 0.11 ± 0.40 mm. This study showed that nnU-Net can automatically and effectively segment the midfacial bone surface from 2D ultrasound images.
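
For readers unfamiliar with the overlap metrics reported here, a minimal NumPy sketch of the Dice coefficient and IoU on binary masks follows; the toy masks are purely illustrative.

```python
# Dice and IoU for binary segmentation masks; pred and gt are assumed to be
# same-shaped boolean NumPy arrays (True = bone-surface pixel).
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union

pred = np.zeros((256, 256), bool); pred[100:140, 50:200] = True
gt = np.zeros((256, 256), bool); gt[105:145, 55:205] = True
print(f"Dice: {dice(pred, gt):.3f}, IoU: {iou(pred, gt):.3f}")
```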

Integrating multi-scale information and diverse prompts in large model SAM-Med2D for accurate left ventricular ejection fraction estimation.

Wu Y, Zhao T, Hu S, Wu Q, Chen Y, Huang X, Zheng Z

PubMed · Jul 1 2025
Left ventricular ejection fraction (LVEF) is a critical indicator of cardiac function, aiding in the assessment of heart conditions. Accurate segmentation of the left ventricle (LV) is essential for LVEF calculation. However, current methods are often limited by small datasets and exhibit poor generalization. While leveraging large models can address this issue, many fail to capture multi-scale information and place an additional burden on users to generate prompts. To overcome these challenges, we propose LV-SAM, a model based on the large model SAM-Med2D, for accurate LV segmentation. It comprises three key components: an image encoder with a multi-scale adapter (MSAd), a multimodal prompt encoder (MPE), and a multi-scale decoder (MSD). The MSAd extracts multi-scale information at the encoder level and fine-tunes the model, while the MSD employs skip connections to effectively exploit multi-scale information at the decoder level. Additionally, we introduce an automated pipeline for generating self-extracted dense prompts and use a large language model to generate text prompts, reducing the user burden. The MPE processes these prompts, further enhancing model performance. Evaluations on the CAMUS dataset show that LV-SAM outperforms existing state-of-the-art (SOTA) methods in LV segmentation and achieves the lowest MAE of 5.016 in LVEF estimation.
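
The abstract does not include the MSAd implementation; the following PyTorch sketch shows a generic bottleneck adapter of the kind typically inserted into a frozen transformer encoder for fine-tuning, with all names and sizes assumed for illustration.

```python
# Generic bottleneck adapter (down-project, nonlinearity, up-project,
# residual); only the small adapter weights would be trained while the
# large-model backbone stays frozen. Sizes are illustrative assumptions.
import torch
import torch.nn as nn

class Adapter(nn.Module):
    def __init__(self, dim: int, reduction: int = 4):
        super().__init__()
        self.down = nn.Linear(dim, dim // reduction)
        self.act = nn.GELU()
        self.up = nn.Linear(dim // reduction, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual bottleneck: frozen backbone features pass through unchanged,
        # plus a small learned correction.
        return x + self.up(self.act(self.down(x)))

tokens = torch.randn(1, 196, 768)   # (batch, patches, embedding dim)
print(Adapter(768)(tokens).shape)   # torch.Size([1, 196, 768])
```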

Robust and generalizable artificial intelligence for multi-organ segmentation in ultra-low-dose total-body PET imaging: a multi-center and cross-tracer study.

Wang H, Qiao X, Ding W, Chen G, Miao Y, Guo R, Zhu X, Cheng Z, Xu J, Li B, Huang Q

PubMed · Jul 1 2025
Positron emission tomography (PET) is a powerful molecular imaging tool that visualizes radiotracer distribution to reveal physiological processes. Recent advances in total-body PET have enabled low-dose, CT-free imaging; however, accurate organ segmentation using PET-only data remains challenging. This study develops and validates a deep learning model for multi-organ PET segmentation across varied imaging conditions and tracers, addressing critical needs for fully PET-based quantitative analysis. This retrospective study employed a 3D deep learning-based model for automated multi-organ segmentation on PET images acquired under diverse conditions, including low-dose and non-attenuation-corrected scans. Using a dataset of 798 patients from multiple centers with varied tracers, model robustness and generalizability were evaluated via multi-center and cross-tracer tests. Ground-truth labels for 23 organs were generated from CT images, and segmentation accuracy was assessed using the Dice similarity coefficient (DSC). In the multi-center dataset from four different institutions, our model achieved average DSC values of 0.834, 0.825, 0.819, and 0.816 across varying dose reduction factors and correction conditions for FDG PET images. In the cross-tracer dataset, the model reached average DSC values of 0.737, 0.573, 0.830, 0.661, and 0.708 for DOTATATE, FAPI, FDG, Grazytracer, and PSMA, respectively. The proposed model demonstrated effective, fully PET-based multi-organ segmentation across a range of imaging conditions, centers, and tracers, achieving high robustness and generalizability. These findings underscore the model's potential to enhance clinical diagnostic workflows by supporting ultra-low-dose PET imaging. Trial registration: not applicable; this retrospective study of collected data was approved by the Research Ethics Committee of Ruijin Hospital, affiliated to Shanghai Jiao Tong University School of Medicine.
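
A hedged sketch of the evaluation step described above: per-organ DSC between a predicted multi-organ label volume and CT-derived ground truth. The integer label convention (1..23) is an assumption for illustration.

```python
# Per-organ Dice similarity coefficient over a 23-label segmentation volume.
import numpy as np

def per_organ_dsc(pred: np.ndarray, gt: np.ndarray, n_organs: int = 23) -> dict:
    scores = {}
    for organ in range(1, n_organs + 1):
        p, g = pred == organ, gt == organ
        denom = p.sum() + g.sum()
        if denom == 0:          # organ absent from both volumes: skip
            continue
        scores[organ] = 2.0 * np.logical_and(p, g).sum() / denom
    return scores

# Toy volumes with random labels, standing in for model output and CT labels.
pred = np.random.randint(0, 24, (64, 64, 64))
gt = np.random.randint(0, 24, (64, 64, 64))
dsc = per_organ_dsc(pred, gt)
print("mean DSC:", sum(dsc.values()) / len(dsc))
```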

Measuring kidney stone volume - practical considerations and current evidence from the EAU endourology section.

Grossmann NC, Panthier F, Afferi L, Kallidonis P, Somani BK

PubMed · Jul 1 2025
This narrative review provides an overview of the use, differences, and clinical impact of current methods for kidney stone volume assessment. The different approaches to volume measurement are based on noncontrast computed tomography (NCCT). While formula-based volume measurement is sufficient for smaller stones, it tends to overestimate the volume of larger or irregularly shaped calculi. In contrast, software-based segmentation significantly improves accuracy and reproducibility, and artificial intelligence-based volumetry additionally shows excellent agreement with reference standards while reducing observer variability and measurement time. Moreover, specific CT preparation protocols may further enhance image quality and thus improve measurement accuracy. Clinically, stone volume has proven to be a superior predictor of stone-related events during follow-up, spontaneous stone passage under conservative management, and stone-free rates after shockwave lithotripsy (SWL) and ureteroscopy (URS), compared with linear measurements. Although manual measurement remains practical, its accuracy diminishes for complex or larger stones. Software-based segmentation and volumetry offer higher precision and efficiency but require established standards and broader access to dedicated software for routine clinical use.
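
The two measurement approaches contrasted above can be sketched in a few lines: the scalene ellipsoid formula V = π/6 × length × width × depth is the standard estimate from caliper measurements, while voxel counting assumes a binary NCCT segmentation mask with known voxel spacing.

```python
# Formula-based vs. segmentation-based stone volume (both in cubic mm).
import numpy as np

def ellipsoid_volume_mm3(length_mm: float, width_mm: float, depth_mm: float) -> float:
    # Scalene ellipsoid approximation: pi/6 * l * w * d (~0.524 * l * w * d).
    return np.pi / 6.0 * length_mm * width_mm * depth_mm

def segmented_volume_mm3(mask: np.ndarray, voxel_size_mm: tuple) -> float:
    # Voxel counting on a binary segmentation mask.
    return float(mask.sum()) * float(np.prod(voxel_size_mm))

print(ellipsoid_volume_mm3(10, 8, 6))                # ~251 mm^3 from caliper measures
mask = np.ones((20, 16, 12), bool)                   # toy segmentation mask
print(segmented_volume_mm3(mask, (0.5, 0.5, 0.5)))   # 480 mm^3 from voxels
```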

Dynamic glucose enhanced imaging using direct water saturation.

Knutsson L, Yadav NN, Mohammed Ali S, Kamson DO, Demetriou E, Seidemo A, Blair L, Lin DD, Laterra J, van Zijl PCM

PubMed · Jul 1 2025
Dynamic glucose enhanced (DGE) MRI studies employ chemical exchange saturation transfer (CEST) or chemical exchange-sensitive spin-lock (CESL) to study glucose uptake. Currently, these methods are hampered by low effect size and sensitivity to motion. To overcome this, we propose to utilize exchange-based linewidth (LW) broadening of the direct water saturation (DS) curve of the water saturation spectrum (Z-spectrum) during and after glucose infusion (DS-DGE MRI). To estimate the glucose-infusion-induced LW changes (ΔLW), Bloch-McConnell simulations were performed for normoglycemia and hyperglycemia in blood, gray matter (GM), white matter (WM), CSF, and malignant tumor tissue. Whole-brain DS-DGE imaging was implemented at 3 T using dynamic Z-spectral acquisitions (1.2 s per offset frequency, 38 s per spectrum) and assessed in four brain tumor patients during infusion of 35 g of D-glucose. To assess ΔLW, a deep learning-based Lorentzian fitting approach was applied to voxel-wise DS spectra acquired before, during, and after infusion. Area-under-the-curve (AUC) images obtained from the dynamic ΔLW time curves were compared qualitatively with perfusion-weighted imaging parametric maps. In simulations, ΔLW was 1.3%, 0.30%, 0.29/0.34%, 7.5%, and 13% in arterial blood, venous blood, GM/WM, malignant tumor tissue, and CSF, respectively. In vivo, ΔLW was approximately 1% in GM/WM, 5% to 20% for different tumor types, and 40% in CSF. The resulting DS-DGE AUC maps clearly outlined lesion areas. DS-DGE MRI is therefore highly promising for assessing D-glucose uptake: initial results in brain tumor patients show high-quality AUC maps of glucose-induced line broadening, with DGE-based lesion enhancement similar and/or complementary to perfusion-weighted imaging.
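
As a simplified stand-in for the paper's deep learning-based Lorentzian fitting, the sketch below fits a single Lorentzian line to a synthetic voxel Z-spectrum with SciPy; ΔLW would then be the difference between the fitted linewidths of post-infusion and baseline spectra.

```python
# Fit the direct-water-saturation (DS) dip of a Z-spectrum with a Lorentzian.
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(offset_ppm, amp, lw_ppm, center_ppm):
    # 1 minus a Lorentzian dip: the DS component of the Z-spectrum.
    return 1.0 - amp * (lw_ppm / 2) ** 2 / ((lw_ppm / 2) ** 2 + (offset_ppm - center_ppm) ** 2)

offsets = np.linspace(-2, 2, 41)                    # saturation offsets (ppm)
true = lorentzian(offsets, 0.9, 0.8, 0.0)           # synthetic ground truth
z = true + 0.01 * np.random.randn(offsets.size)     # noisy voxel Z-spectrum

(amp, lw, center), _ = curve_fit(lorentzian, offsets, z, p0=(0.8, 1.0, 0.0))
print(f"fitted LW: {lw:.3f} ppm")   # dLW = post-infusion LW minus baseline LW
```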

Segmentation of the nasopalatine canal and detection of canal furcation status with artificial intelligence on cone-beam computed tomography images.

Deniz HA, Bayrakdar İŞ, Nalçacı R, Orhan K

PubMed · Jul 1 2025
The nasopalatine canal (NPC) is an anatomical formation with varying morphology that can be visualized on cone-beam computed tomography (CBCT), and CBCT has been used in many studies on artificial intelligence (AI). "You Only Look Once" (YOLO) is an AI framework that stands out for its speed. This study compared human observers and AI regarding NPC segmentation and assessment of NPC furcation status on CBCT images. Axial sections of 200 CBCT images were used. These images were labeled and evaluated for the absence or presence of NPC furcation, then divided into three subsets: 160 images for training, 20 for validation, and 20 for testing. Training was performed for 800 epochs using the YOLOv5x-seg model. Sensitivity, precision, F1 score, IoU, mAP, and AUC values were determined for NPC detection, segmentation, and classification. The values were 0.9680, 0.9953, 0.9815, 0.9636, 0.7930, and 0.8841, respectively, for the group without NPC furcation, and 0.9827, 0.9975, 0.9900, 0.9803, 0.9637, and 0.9510 for the group with NPC furcation. Our results showed that even when trained to classify NPC furcation status on a relatively small dataset, the YOLOv5x-seg model achieves sufficient prediction accuracy. The segmentation feature of the YOLOv5 algorithm, which builds on an object detection algorithm, achieved quite successful results despite its recent development.
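
As a quick internal consistency check of the reported metrics: F1 is the harmonic mean of precision and sensitivity (recall), and the values above reproduce the reported F1 scores.

```python
# F1 = 2 * precision * recall / (precision + recall).
def f1(precision: float, recall: float) -> float:
    return 2 * precision * recall / (precision + recall)

print(f1(0.9953, 0.9680))  # ~0.9815, furcation-absent group
print(f1(0.9975, 0.9827))  # ~0.9900, furcation-present group
```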

Efficient Brain Tumor Detection and Segmentation Using DN-MRCNN With Enhanced Imaging Technique.

N JS, Ayothi S

PubMed · Jul 1 2025
This article proposes a method called DenseNet 121-Mask R-CNN (DN-MRCNN) for the detection and segmentation of brain tumors. The main objective is to reduce execution time and to accurately locate and segment the tumor, including its subregions. The input images undergo preprocessing such as median filtering and Gaussian filtering to reduce noise and artifacts and to improve image quality. Histogram equalization is used to enhance the tumor regions, and image augmentation is employed to improve the model's diversity and robustness. To capture important patterns, a gated axial self-attention layer is added to the DenseNet 121 model, allowing for increased attention during the analysis of the input images. For accurate segmentation, bounding boxes are generated using a Region Proposal Network with anchor customization. Post-processing, specifically non-maximum suppression, removes redundant bounding boxes caused by overlapping regions. The Mask R-CNN model is used to accurately detect and segment the whole tumor (WT), tumor core (TC), and enhancing tumor (ET). The proposed model is evaluated on the BraTS 2019, UCSF-PDGM, and UPENN-GBM datasets, which are commonly used for brain tumor detection and segmentation.
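
A minimal sketch of the preprocessing chain described above, using OpenCV; the kernel sizes and the input file name ("slice.png", a hypothetical grayscale MRI slice) are illustrative assumptions.

```python
# Median filter -> Gaussian filter -> histogram equalization, as in the
# described preprocessing pipeline.
import cv2

img = cv2.imread("slice.png", cv2.IMREAD_GRAYSCALE)
img = cv2.medianBlur(img, 3)            # suppress impulse (salt-and-pepper) noise
img = cv2.GaussianBlur(img, (3, 3), 0)  # smooth residual Gaussian noise
img = cv2.equalizeHist(img)             # enhance contrast in tumor regions
cv2.imwrite("slice_preprocessed.png", img)
```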

Liver lesion segmentation in ultrasound: A benchmark and a baseline network.

Li J, Zhu L, Shen G, Zhao B, Hu Y, Zhang H, Wang W, Wang Q

PubMed · Jul 1 2025
Accurate liver lesion segmentation in ultrasound is a challenging task due to high speckle noise, ambiguous lesion boundaries, and inhomogeneous intensity distribution inside the lesion regions. This work first collected and annotated a dataset for liver lesion segmentation in ultrasound. We then propose a novel convolutional neural network that learns dual self-attentive transformer features to boost liver lesion segmentation by leveraging the complementary information among non-local features encoded at different layers of the transformer architecture. To do so, we devise a dual self-attention refinement (DSR) module that synergistically combines self-attention and reverse self-attention mechanisms to extract complementary lesion characteristics from cascaded multi-layer feature maps, helping the model produce more accurate segmentation results. Moreover, we propose a False-Positive-Negative loss that enables our network to suppress non-liver-lesion noise at shallow transformer layers and to enhance target liver lesion details in CNN features at deep transformer layers. Experimental results show that our network outperforms state-of-the-art methods both quantitatively and qualitatively.
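
The DSR module itself is not given in the abstract; the sketch below illustrates the general reverse-attention idea (in the PraNet style), where features are weighted by the complement of a coarse prediction so the network refines the regions and boundaries the coarse map missed. This is a generic formulation, not the authors' implementation.

```python
# Reverse attention: emphasize what the coarse prediction did NOT capture,
# then learn a residual correction to the coarse logits.
import torch
import torch.nn as nn

class ReverseAttention(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.refine = nn.Conv2d(channels, 1, kernel_size=3, padding=1)

    def forward(self, feat: torch.Tensor, coarse_logits: torch.Tensor) -> torch.Tensor:
        rev = 1.0 - torch.sigmoid(coarse_logits)   # weights for missed/background regions
        residual = self.refine(feat * rev)         # refine what the coarse map missed
        return coarse_logits + residual            # residual boundary correction

feat = torch.randn(1, 64, 32, 32)
coarse = torch.randn(1, 1, 32, 32)
print(ReverseAttention(64)(feat, coarse).shape)    # torch.Size([1, 1, 32, 32])
```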

CQENet: A segmentation model for nasopharyngeal carcinoma based on confidence quantitative evaluation.

Qi Y, Wei L, Yang J, Xu J, Wang H, Yu Q, Shen G, Cao Y

PubMed · Jul 1 2025
Accurate segmentation of the tumor regions of nasopharyngeal carcinoma (NPC) is of significant importance for radiotherapy of NPC. However, the precision of existing automatic segmentation methods for NPC remains inadequate, primarily manifested in the difficulty of tumor localization and the challenges in delineating blurred boundaries. Additionally, the black-box nature of deep learning models leads to insufficient quantification of the confidence in the results, preventing users from directly understanding the model's confidence in its predictions, which severely impacts the clinical application of deep learning models. This paper proposes an automatic segmentation model for NPC based on confidence quantitative evaluation (CQENet). To address the issue of insufficient confidence quantification in NPC segmentation results, we introduce a confidence assessment module (CAM) that enables the model to output not only the segmentation results but also the confidence in those results, aiding users in understanding the uncertainty risks associated with model outputs. To address the difficulty in localizing the position and extent of tumors, we propose a tumor feature adjustment module (FAM) for precise tumor localization and extent determination. To address the challenge of delineating blurred tumor boundaries, we introduce a variance attention mechanism (VAM) to assist in edge delineation during fine segmentation. We conducted experiments on a multicenter NPC dataset, validating that our proposed method is effective and superior to existing state-of-the-art models, possessing considerable clinical application value.
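
CQENet's confidence assessment module is not specified in the abstract; as a generic illustration of voxel-wise confidence quantification, the sketch below derives a normalized-entropy confidence map from softmax outputs, which is one common way such uncertainty estimates are produced.

```python
# Confidence from predictive entropy: 1 = fully certain, 0 = maximally uncertain.
import torch

def confidence_map(logits: torch.Tensor) -> torch.Tensor:
    # logits: (batch, classes, H, W); returns per-pixel confidence in [0, 1].
    probs = torch.softmax(logits, dim=1)
    entropy = -(probs * torch.log(probs.clamp_min(1e-8))).sum(dim=1)
    max_entropy = torch.log(torch.tensor(float(logits.shape[1])))
    return 1.0 - entropy / max_entropy

logits = torch.randn(1, 2, 128, 128)   # tumor vs. background logits
print(confidence_map(logits).mean())
```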