
Interstitial-guided automatic clinical tumor volume segmentation network for cervical cancer brachytherapy.

Tan S, He J, Cui M, Gao Y, Sun D, Xie Y, Cai J, Zaki N, Qin W

PubMed | Jul 1, 2025
Automatic clinical tumor volume (CTV) delineation is pivotal to improving outcomes for interstitial brachytherapy of cervical cancer. However, the prominent differences in gray values introduced by the interstitial needles pose great challenges for deep learning-based segmentation models. In this study, we propose a novel interstitial-guided segmentation network, termed advance reverse guided network (ARGNet), for cervical tumor segmentation in interstitial brachytherapy. First, the location information of the interstitial needles is integrated into the deep learning framework through multi-task learning, using a cross-stitch unit to share encoder feature learning. Second, a spatial reverse attention mechanism is introduced to mitigate the distracting effect of the needles on tumor segmentation. Furthermore, an uncertainty area module is embedded between the skip connections and the encoder of the tumor segmentation task to enhance the model's ability to discern ambiguous boundaries between the tumor and surrounding tissue. Comprehensive experiments were conducted retrospectively on 191 CT scans acquired during multi-course interstitial brachytherapy. The results demonstrate that incorporating the characteristics of the interstitial needles enhances segmentation and yields state-of-the-art performance, which is anticipated to benefit radiotherapy planning.
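The cross-stitch sharing between the needle-localization and tumor-segmentation encoders described above could look roughly like the following PyTorch sketch. This is a minimal illustration of the generic cross-stitch idea (Misra et al., 2016), not the paper's implementation; the module and variable names are assumptions.

```python
import torch
import torch.nn as nn

class CrossStitchUnit(nn.Module):
    """Learnable 2x2 mixing of feature maps from two parallel task branches."""
    def __init__(self):
        super().__init__()
        # Initialized close to identity so each task starts with mostly its own features.
        self.alpha = nn.Parameter(torch.tensor([[0.9, 0.1],
                                                [0.1, 0.9]]))

    def forward(self, feat_tumor, feat_needle):
        # Linearly recombine the two feature maps with the learned mixing weights.
        mixed_tumor  = self.alpha[0, 0] * feat_tumor + self.alpha[0, 1] * feat_needle
        mixed_needle = self.alpha[1, 0] * feat_tumor + self.alpha[1, 1] * feat_needle
        return mixed_tumor, mixed_needle

# Usage: one unit could be inserted after each shared encoder stage of the two branches.
unit = CrossStitchUnit()
f_t, f_n = torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32)
f_t, f_n = unit(f_t, f_n)
```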

CQENet: A segmentation model for nasopharyngeal carcinoma based on confidence quantitative evaluation.

Qi Y, Wei L, Yang J, Xu J, Wang H, Yu Q, Shen G, Cao Y

PubMed | Jul 1, 2025
Accurate segmentation of the tumor regions of nasopharyngeal carcinoma (NPC) is critically important for NPC radiotherapy. However, the precision of existing automatic segmentation methods for NPC remains inadequate, manifested primarily in difficulty localizing the tumor and delineating its blurred boundaries. Additionally, the black-box nature of deep learning models means that the confidence of their results is insufficiently quantified, preventing users from directly understanding how certain the model is of its predictions, which severely limits clinical application. This paper proposes an automatic segmentation model for NPC based on confidence quantitative evaluation (CQENet). To address the insufficient quantification of confidence in NPC segmentation results, we introduce a confidence assessment module (CAM) that enables the model to output not only the segmentation results but also the confidence in those results, helping users understand the uncertainty risks associated with the model's outputs. To address the difficulty of localizing the position and extent of tumors, we propose a tumor feature adjustment module (FAM) for precise tumor localization and extent determination. To address the challenge of delineating blurred tumor boundaries, we introduce a variance attention mechanism (VAM) to assist edge delineation during fine segmentation. We conducted experiments on a multicenter NPC dataset, validating that the proposed method is effective, superior to existing state-of-the-art models, and of considerable clinical application value.
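One simple way a segmentation network can emit a confidence score alongside its mask is to pool decoder features into a small scalar head. The sketch below is purely illustrative of that general pattern and is not the paper's CAM; the class name, channel sizes, and pooling choice are assumptions.

```python
import torch
import torch.nn as nn

class SegWithConfidence(nn.Module):
    """Toy decoder head that outputs a segmentation mask plus a scalar confidence in [0, 1]."""
    def __init__(self, channels=64, num_classes=2):
        super().__init__()
        self.seg_head = nn.Conv2d(channels, num_classes, kernel_size=1)
        self.conf_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),   # global summary of the decoder features
            nn.Flatten(),
            nn.Linear(channels, 1),
            nn.Sigmoid(),              # confidence score in [0, 1]
        )

    def forward(self, decoder_feat):
        return self.seg_head(decoder_feat), self.conf_head(decoder_feat)

model = SegWithConfidence()
mask_logits, confidence = model(torch.randn(1, 64, 128, 128))
```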

Liver lesion segmentation in ultrasound: A benchmark and a baseline network.

Li J, Zhu L, Shen G, Zhao B, Hu Y, Zhang H, Wang W, Wang Q

PubMed | Jul 1, 2025
Accurate liver lesion segmentation in ultrasound is a challenging task due to high speckle noise, ambiguous lesion boundaries, and inhomogeneous intensity distribution inside the lesion regions. In this work, we first collected and annotated a dataset for liver lesion segmentation in ultrasound. We then propose a novel convolutional neural network that learns dual self-attentive transformer features to boost liver lesion segmentation by leveraging the complementary information among non-local features encoded at different layers of the transformer architecture. To this end, we devise a dual self-attention refinement (DSR) module that synergistically uses self-attention and reverse self-attention mechanisms to extract complementary lesion characteristics from cascaded multi-layer feature maps, helping the model produce more accurate segmentation results. Moreover, we propose a False-Positive-Negative loss that enables the network to suppress non-liver-lesion noise at shallow transformer layers and to inject more target lesion detail into the CNN features at deep transformer layers. Experimental results show that our network outperforms state-of-the-art methods quantitatively and qualitatively.
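Reverse attention, used here and in the ARGNet abstract above, generally means weighting features by the complement of a coarse prediction so the network attends to regions it has not yet captured. A minimal sketch of that generic idea follows; it is an assumption about the mechanism, not the paper's DSR module, and the function and variable names are illustrative.

```python
import torch
import torch.nn.functional as F

def reverse_attention(features, coarse_logits):
    """Emphasize regions the coarse prediction has NOT yet covered.

    features:      (B, C, H, W) feature map from a deeper layer
    coarse_logits: (B, 1, h, w) coarse lesion prediction
    """
    prob = torch.sigmoid(
        F.interpolate(coarse_logits, size=features.shape[-2:],
                      mode="bilinear", align_corners=False))
    # The complement of the prediction acts as the attention map.
    return features * (1.0 - prob)

refined = reverse_attention(torch.randn(2, 64, 48, 48), torch.randn(2, 1, 12, 12))
```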

The impact of multi-modality fusion and deep learning on adult age estimation based on bone mineral density.

Cao Y, Zhang J, Ma Y, Zhang S, Li C, Liu S, Chen F, Huang P

PubMed | Jul 1, 2025
Age estimation, especially in adults, presents substantial challenges in contexts ranging from forensic to clinical applications. Bone mineral density (BMD), with its distinct age-related variations, has emerged as a critical marker in this domain. This study aims to improve the accuracy of chronological age estimation using deep learning (DL) with a multi-modality fusion strategy based on BMD. We conducted a retrospective analysis of 4296 CT scans from a Chinese population, acquired between August 2015 and November 2022 and encompassing lumbar, femur, and pubis modalities. Our DL approach, integrating multi-modality fusion, was applied to predict chronological age automatically. The model's performance was evaluated on an internal real-world clinical cohort of 644 scans (December 2022 to May 2023) and an external cadaver validation cohort of 351 scans. In single-modality assessments, the lumbar modality performed best. However, the multi-modality models were superior, with lower mean absolute errors (MAEs) and higher Pearson R² values. The optimal multi-modality model achieved R² values of 0.89 overall, 0.88 in females, and 0.90 in males, with MAEs of 4.05 years overall, 3.69 in females, and 4.33 in males in the internal validation cohort. In the external cadaver validation, the model maintained favourable R² values (0.84 overall, 0.89 in females, 0.82 in males) and MAEs (5.01 overall, 4.71 in females, 5.09 in males), highlighting its generalizability across diverse scenarios. Integrating multi-modality fusion with DL significantly refines the accuracy of BMD-based adult age estimation (AAE). The resulting AI-based system, which effectively combines multi-modality BMD data, is a robust and innovative tool for accurate AAE, poised to improve both geriatric diagnostics and forensic investigations.
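The reported MAE and R² values can be reproduced from predicted versus chronological ages along the following lines. The data below are synthetic and the error magnitude is arbitrary; this only illustrates the standard metric definitions, not the study's pipeline.

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, r2_score

# Synthetic example: true and predicted ages in years.
rng = np.random.default_rng(0)
age_true = rng.uniform(20, 80, size=200)
age_pred = age_true + rng.normal(0, 5, size=200)   # pretend the model errs by ~5 years

mae = mean_absolute_error(age_true, age_pred)
r2 = r2_score(age_true, age_pred)
print(f"MAE = {mae:.2f} years, R^2 = {r2:.2f}")
```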

Estimating Periodontal Stability Using Computer Vision.

Feher B, Werdich AA, Chen CY, Barrow J, Lee SJ, Palmer N, Feres M

PubMed | Jul 1, 2025
Periodontitis is a severe infection affecting oral and systemic health and is traditionally diagnosed through clinical probing-a process that is time-consuming, uncomfortable for patients, and subject to variability based on the operator's skill. We hypothesized that computer vision can be used to estimate periodontal stability from radiographs alone. At the tooth level, we used intraoral radiographs to detect and categorize individual teeth according to their periodontal stability and corresponding treatment needs: healthy (prevention), stable (maintenance), and unstable (active treatment). At the patient level, we assessed full-mouth series and classified patients as stable or unstable by the presence of at least 1 unstable tooth. Our 3-way tooth classification model achieved an area under the receiver operating characteristic curve of 0.71 for healthy teeth, 0.56 for stable, and 0.67 for unstable. The model achieved an F1 score of 0.45 for healthy teeth, 0.57 for stable, and 0.54 for unstable (recall, 0.70). Saliency maps generated by gradient-weighted class activation mapping primarily showed highly activated areas corresponding to clinically probed regions around teeth. Our binary patient classifier achieved an area under the receiver operating characteristic curve of 0.68 and an F1 score of 0.74 (recall, 0.70). Taken together, our results suggest that it is feasible to estimate periodontal stability, which traditionally requires clinical and radiographic examination, from radiographic signal alone using computer vision. Variations in model performance across different classes at the tooth level indicate the necessity of further refinement.
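The patient-level rule (unstable if at least one tooth is predicted unstable) and the reported AUC/F1 metrics can be computed roughly as sketched below. The grouping by max tooth score is one plausible reading of the rule, and the data are synthetic; none of this comes from the paper's code.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, f1_score

# Per-tooth predicted probabilities of the "unstable" class, grouped by patient (synthetic).
patients = {
    "p1": [0.10, 0.20, 0.85],   # one likely unstable tooth -> patient unstable
    "p2": [0.05, 0.15, 0.30],
    "p3": [0.60, 0.55, 0.40],
}
y_true = np.array([1, 0, 1])                              # ground-truth patient labels
y_score = np.array([max(p) for p in patients.values()])   # patient score = max tooth score
y_pred = (y_score >= 0.5).astype(int)                     # "at least 1 unstable tooth"

print("AUC:", roc_auc_score(y_true, y_score))
print("F1:", f1_score(y_true, y_pred))
```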

Efficient Brain Tumor Detection and Segmentation Using DN-MRCNN With Enhanced Imaging Technique.

N JS, Ayothi S

PubMed | Jul 1, 2025
This article proposes a method called DenseNet 121-Mask R-CNN (DN-MRCNN) for the detection and segmentation of brain tumors. The main objective is to reduce execution time and accurately locate and segment the tumor, including its subregions. The input images undergo preprocessing such as median filtering and Gaussian filtering to reduce noise and artifacts and improve image quality. Histogram equalization is used to enhance the tumor regions, and image augmentation is employed to improve the model's diversity and robustness. To capture important patterns, a gated axial self-attention layer is added to the DenseNet 121 model, allowing increased attention during analysis of the input images. For accurate segmentation, bounding boxes are generated using a Region Proposal Network with anchor customization. Post-processing, specifically non-maximum suppression, is performed to discard redundant bounding boxes arising from overlapping regions. The Mask R-CNN model is used to accurately detect and segment the whole tumor (WT), tumor core (TC), and enhancing tumor (ET). The proposed model is evaluated on the BraTS 2019, UCSF-PDGM, and UPENN-GBM datasets, which are commonly used for brain tumor detection and segmentation.
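The non-maximum suppression step mentioned above is standard: keep the highest-scoring box and drop any remaining box that overlaps it above an IoU threshold. A plain-NumPy sketch of that generic procedure (thresholds and example boxes are illustrative, not the paper's settings):

```python
import numpy as np

def iou(box, boxes):
    """IoU between one box and an array of boxes, format (x1, y1, x2, y2)."""
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy NMS: keep the best box, suppress overlapping lower-scoring boxes."""
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        order = order[1:][iou(boxes[i], boxes[order[1:]]) < iou_thresh]
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]], dtype=float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))   # keeps boxes 0 and 2; box 1 is suppressed as redundant
```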

Segmentation of the nasopalatine canal and detection of canal furcation status with artificial intelligence on cone-beam computed tomography images.

Deniz HA, Bayrakdar İŞ, Nalçacı R, Orhan K

PubMed | Jul 1, 2025
The nasopalatine canal (NPC) is an anatomical structure with varying morphology that can be visualized using cone-beam computed tomography (CBCT). CBCT has also been used in many studies of artificial intelligence (AI), and "You Only Look Once" (YOLO) is an AI framework notable for its speed. This study compared a human observer and AI with respect to NPC segmentation and assessment of NPC furcation status on CBCT images. Axial sections of 200 CBCT images were used; these were labeled and evaluated for the absence or presence of NPC furcation, then divided into three subsets: 160 images for training, 20 for validation, and 20 for testing. Training was performed for 800 epochs using the YOLOv5x-seg model. Sensitivity, precision, F1 score, IoU, mAP, and AUC were determined for NPC detection, segmentation, and classification with the YOLOv5x-seg model. The values were 0.9680, 0.9953, 0.9815, 0.9636, 0.7930, and 0.8841, respectively, for the group without NPC furcation, and 0.9827, 0.9975, 0.9900, 0.9803, 0.9637, and 0.9510 for the group with NPC furcation. Our results show that the YOLOv5x-seg model achieves sufficient prediction accuracy even when trained on the NPC furcation task with a relatively small dataset. The segmentation extension of the YOLOv5 object detection algorithm achieved quite successful results despite its recent development.
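The sensitivity, precision, F1, and IoU figures above follow directly from true/false positive and negative counts. A small sketch of the standard definitions is below; the counts are made up, and the paper may compute IoU pixel-wise rather than per detection.

```python
def detection_metrics(tp, fp, fn):
    """Sensitivity (recall), precision, F1, and IoU from detection counts."""
    sensitivity = tp / (tp + fn)
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    iou = tp / (tp + fp + fn)   # Jaccard index
    return sensitivity, precision, f1, iou

# Example with illustrative counts:
print(detection_metrics(tp=95, fp=2, fn=3))
```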

Super-resolution deep learning reconstruction for improved quality of myocardial CT late enhancement.

Takafuji M, Kitagawa K, Mizutani S, Hamaguchi A, Kisou R, Sasaki K, Funaki Y, Iio K, Ichikawa K, Izumi D, Okabe S, Nagata M, Sakuma H

PubMed | Jul 1, 2025
Myocardial computed tomography (CT) late enhancement (LE) allows assessment of myocardial scarring. Super-resolution deep learning image reconstruction (SR-DLR), trained on data acquired from ultra-high-resolution CT, may improve image quality for CT-LE. This study therefore investigated image noise and image quality with SR-DLR compared with conventional DLR (C-DLR) and hybrid iterative reconstruction (hybrid IR). We retrospectively analyzed 30 patients who underwent CT-LE using 320-row CT. The CT protocol comprised stress dynamic CT perfusion, coronary CT angiography, and CT-LE. CT-LE images were reconstructed using three algorithms: SR-DLR, C-DLR, and hybrid IR. Image noise, signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and qualitative image quality scores (noise reduction, sharpness, visibility of the scar and myocardial border, and overall image quality) were compared, as were inter-observer differences in myocardial scar sizing on CT-LE across the three algorithms. SR-DLR significantly decreased image noise by 35% compared with C-DLR (median 6.2 HU, interquartile range [IQR] 5.6-7.2 HU vs 9.6 HU, IQR 8.4-10.7 HU; p < 0.001) and by 37% compared with hybrid IR (9.8 HU, IQR 8.5-12.0 HU; p < 0.001). SNR and CNR of CT-LE reconstructed using SR-DLR were significantly higher than with C-DLR (both p < 0.001) and hybrid IR (both p < 0.05). All qualitative image quality scores were higher with SR-DLR than with C-DLR or hybrid IR (all p < 0.001). Inter-observer differences in scar sizing were reduced with SR-DLR and C-DLR compared with hybrid IR (both p = 0.02). SR-DLR reduces image noise and improves image quality of myocardial CT-LE compared with the C-DLR and hybrid IR techniques, and improves inter-observer reproducibility of scar sizing compared with hybrid IR. The SR-DLR approach has the potential to improve the assessment of myocardial scar by CT late enhancement.
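SNR and CNR in CT-LE are typically derived from region-of-interest (ROI) statistics, e.g. the enhancing scar, remote myocardium, and a noise measurement. The sketch below shows common definitions with synthetic HU values; the paper's exact ROI placement and noise reference are not specified here, so this is an assumption.

```python
import numpy as np

def snr(roi):
    """Signal-to-noise ratio: mean attenuation of the ROI over its standard deviation."""
    return np.mean(roi) / np.std(roi)

def cnr(roi_scar, roi_remote, noise_sd):
    """Contrast-to-noise ratio: attenuation difference between enhancing scar and
    remote myocardium, normalized by image noise (SD of a homogeneous reference ROI)."""
    return (np.mean(roi_scar) - np.mean(roi_remote)) / noise_sd

scar = np.random.normal(120, 8, 500)    # HU values inside the late-enhancing scar (synthetic)
remote = np.random.normal(70, 8, 500)   # HU values in remote myocardium (synthetic)
print(snr(scar), cnr(scar, remote, noise_sd=8.0))
```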

Evaluation of radiology residents' reporting skills using large language models: an observational study.

Atsukawa N, Tatekawa H, Oura T, Matsushita S, Horiuchi D, Takita H, Mitsuyama Y, Omori A, Shimono T, Miki Y, Ueda D

PubMed | Jul 1, 2025
Large language models (LLMs) have the potential to objectively evaluate radiology resident reports; however, research on their use for feedback in radiology training and for assessing resident skill development remains limited. This study aimed to assess the effectiveness of LLMs in revising radiology reports by comparing them with reports verified by board-certified radiologists, and to analyze the progression of residents' reporting skills over time. To identify the LLM that best aligned with human radiologists, 100 reports were randomly selected from 7376 reports authored by nine first-year radiology residents. The reports were evaluated on six criteria: (1) addition of missing positive findings, (2) deletion of findings, (3) addition of negative findings, (4) correction of the expression of findings, (5) correction of the diagnosis, and (6) proposal of additional examinations or treatments. Reports were segmented into four time-based terms, and 900 reports (450 CT and 450 MRI) were randomly chosen from the initial and final terms of the residents' first year. The revision rates for each criterion were compared between the first and last terms using the Wilcoxon signed-rank test. Of the three LLMs evaluated (ChatGPT-4 Omni [GPT-4o], Claude 3.5 Sonnet, and Claude 3 Opus), GPT-4o demonstrated the highest agreement with board-certified radiologists. Using GPT-4o, significant improvements were noted in Criteria 1-3 when comparing reports from the first and last terms (P < 0.001, P = 0.023, and P = 0.004, respectively). No significant changes were observed for Criteria 4-6, although all criteria except Criterion 6 showed progressive improvement over time. LLMs can effectively provide feedback on commonly corrected areas in radiology reports, enabling residents to objectively identify and improve their weaknesses and monitor their progress. Additionally, LLMs may help reduce the workload of radiologists' mentors.
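The first-versus-last-term comparison of revision rates relies on the Wilcoxon signed-rank test for paired data, which in Python could be run as sketched below. The paired values are synthetic and the pairing unit (per resident, per criterion) is an assumption for illustration.

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(1)
# Per-resident revision rates for one criterion, first vs last term (synthetic paired data).
rate_first_term = rng.uniform(0.2, 0.5, size=9)
rate_last_term = rate_first_term - rng.uniform(0.0, 0.15, size=9)   # fewer revisions later

stat, p_value = wilcoxon(rate_first_term, rate_last_term)
print(f"Wilcoxon statistic = {stat:.1f}, p = {p_value:.3f}")
```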

The Chest X-Ray: The Ship Has Sailed, But Has It?

Iacovino JR

PubMed | Jul 1, 2025
In the past, the chest X-ray (CXR) was a traditional age-and-amount requirement used to assess potential mortality risk in life insurance applicants. It fell out of favor due to inconvenience to the applicant, cost, and lack of protective value. With the advent of deep learning techniques, can the results of the CXR, as a requirement, now add value to underwriting risk analysis?