Page 163 of 3433427 results

Radiomics-based MRI model to predict hypoperfusion in lacunar infarction.

Chang CP, Huang YC, Tsai YH, Lin LC, Yang JT, Wu KH, Wu PH, Peng SJ

pubmed logopapers · Jul 1 2025
Approximately 20-30 % of patients with acute ischemic stroke due to lacunar infarction experience early neurological deterioration (END) within the first three days after onset, leading to disability or more severe sequelae. Hemodynamic perfusion deficits may play a crucial role in END, causing growth of the infarcted area, functional impairment, and even poor long-term prognosis. It is therefore vitally important to predict which patients may be at risk of perfusion deficits, so that treatment and close monitoring can be initiated early in preparation for potential reperfusion. Our goal is to use radiomic features from magnetic resonance imaging (MRI) and machine learning techniques to develop a predictive model for hypoperfusion. From January 2011 to December 2020, we retrospectively collected 92 patients with lacunar stroke who underwent MRI within 48 h and had clinical laboratory values, follow-up prognosis records, and advanced perfusion imaging to confirm the presence of hypoperfusion. From the initial MRI of these patients, radiomics features were extracted and selected from Diffusion Weighted Imaging (DWI), Apparent Diffusion Coefficient (ADC), and Fluid Attenuated Inversion Recovery (FLAIR) sequences. The data were divided into an 80 % training set and a 20 % testing set, and a hypoperfusion prediction model was developed using machine learning. The model trained on the DWI + FLAIR sequences showed superior performance, with an accuracy of 84.1 %, AUC 0.92, recall 79.5 %, specificity 87.8 %, precision 83.8 %, and F1 score 81.2 %. Statistically significant clinical factors between patients with and without hypoperfusion included the NIHSS score and the size of the lacunar infarction. Combining these two features with the top seven weighted radiomics features from the DWI + FLAIR sequences, a total of nine features were used to develop a new prediction model through machine learning.
On the test set, this model achieved an accuracy of 88.9 %, AUC 0.91, recall 87.5 %, specificity 90.0 %, precision 87.5 %, and F1 score 87.5 %. By applying radiomics techniques to the DWI and FLAIR sequences of MRI from patients with lacunar stroke, it is possible to predict the presence of hypoperfusion and to identify patients who require close monitoring to prevent the deterioration of clinical symptoms. Incorporating stroke volume and NIHSS scores into the prediction model enhances its performance. Future studies of a larger scale are required to validate these findings.
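As a rough illustration of how the metrics reported above relate to one another, the sketch below derives accuracy, recall, specificity, precision, and F1 from a binary confusion matrix. The counts are illustrative (chosen to be consistent with the reported test-set percentages under an assumed 18-case test set, roughly 20 % of 92), not the study's actual data.

```python
def binary_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Standard binary-classification metrics from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    recall = tp / (tp + fn)          # sensitivity
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    f1 = 2 * precision * recall / (precision + recall)
    return {"accuracy": accuracy, "recall": recall,
            "specificity": specificity, "precision": precision, "f1": f1}

# Illustrative counts: 7 hypoperfusion cases detected, 1 missed,
# 9 true negatives, 1 false alarm.
m = binary_metrics(tp=7, fp=1, tn=9, fn=1)
print({k: round(v, 3) for k, v in m.items()})
```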

CZT-based photon-counting-detector CT with deep-learning reconstruction: image quality and diagnostic confidence for lung tumor assessment.

Sasaki T, Kuno H, Nomura K, Muramatsu Y, Aokage K, Samejima J, Taki T, Goto E, Wakabayashi M, Furuya H, Taguchi H, Kobayashi T

pubmed logopapers · Jul 1 2025
This is a preliminary analysis of one of the secondary endpoints in the prospective study cohort. The aim of this study is to assess the image quality and diagnostic confidence for lung cancer of CT images generated using cadmium-zinc-telluride (CZT)-based photon-counting-detector CT (PCD-CT), and to compare these super-high-resolution (SHR) images with conventional normal-resolution (NR) CT images. Twenty-five patients (median age 75 years, interquartile range 66-78 years, 18 men and 7 women) with 29 lung nodules overall (including two patients with 4 and 2 nodules, respectively) were enrolled to undergo PCD-CT. Three types of images were reconstructed: a 512 × 512 matrix with adaptive iterative dose reduction 3D (AIDR 3D) as the NR<sub>AIDR3D</sub> image, a 1024 × 1024 matrix with AIDR 3D as the SHR<sub>AIDR3D</sub> image, and a 1024 × 1024 matrix with deep-learning reconstruction (DLR) as the SHR<sub>DLR</sub> image. For qualitative analysis, two radiologists evaluated the matched reconstructed series twice (NR<sub>AIDR3D</sub> vs. SHR<sub>AIDR3D</sub> and SHR<sub>AIDR3D</sub> vs. SHR<sub>DLR</sub>) and scored the presence of imaging findings, such as spiculation, lobulation, and appearance of ground-glass opacity or air bronchogram, as well as image quality and diagnostic confidence, using a 5-point Likert scale. For quantitative analysis, contrast-to-noise ratios (CNRs) of the three images were compared. In the qualitative analysis, compared to NR<sub>AIDR3D</sub>, SHR<sub>AIDR3D</sub> yielded higher image quality and diagnostic confidence, except for image noise (all P < 0.01). In comparison with SHR<sub>AIDR3D</sub>, SHR<sub>DLR</sub> yielded higher image quality and diagnostic confidence (all P < 0.01). In the quantitative analysis, CNRs in the NR<sub>AIDR3D</sub> and SHR<sub>DLR</sub> groups were higher than those in the SHR<sub>AIDR3D</sub> group (P = 0.003 and P < 0.001, respectively).
In PCD-CT, SHR<sub>DLR</sub> images provided the highest image quality and diagnostic confidence for lung tumor evaluation, followed by SHR<sub>AIDR3D</sub> and NR<sub>AIDR3D</sub> images. DLR demonstrated superior noise reduction compared to other reconstruction methods.
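CNR definitions vary between studies; one common form, sketched below with simulated values, divides the absolute difference of the ROI means by the standard deviation of the background noise. The numbers are illustrative, not from this study.

```python
import numpy as np

def cnr(roi: np.ndarray, background: np.ndarray) -> float:
    """Contrast-to-noise ratio: |mean difference| over background noise SD."""
    return abs(roi.mean() - background.mean()) / background.std(ddof=1)

# Simulated HU samples for a nodule ROI and surrounding lung parenchyma.
rng = np.random.default_rng(0)
lesion = rng.normal(60.0, 10.0, 1000)
lung = rng.normal(-800.0, 20.0, 1000)
print(round(cnr(lesion, lung), 1))
```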

Added value of artificial intelligence for the detection of pelvic and hip fractures.

Jaillat A, Cyteval C, Baron Sarrabere MP, Ghomrani H, Maman Y, Thouvenin Y, Pastor M

pubmed logopapers · Jul 1 2025
To assess the added value of artificial intelligence (AI) for radiologists and emergency physicians in the radiographic detection of pelvic fractures. In this retrospective study, one junior radiologist reviewed 940 X-rays of patients admitted to the emergency department for a fall with suspected pelvic fracture between March 2020 and June 2021. The radiologist analyzed the X-rays alone and then using an AI system (BoneView). In a random sample of 100 exams, the same procedure was repeated alongside five other readers (three radiologists and two emergency physicians with 3-30 years of experience). The reference diagnosis was based on the patient's full set of medical imaging exams and medical records in the months following emergency admission. A total of 633 confirmed pelvic fractures (64.8% from the hip and 35.2% from the pelvic ring) in 940 patients and 68 pelvic fractures (60% from the hip and 40% from the pelvic ring) in the 100-patient sample were included. In the whole dataset, the junior radiologist achieved a significant sensitivity improvement with AI assistance (Se<sub>PELVIC</sub> = 77.25% to 83.73%, p < 0.001; Se<sub>HIP</sub> = 93.24% to 96.49%, p < 0.001; and Se<sub>PELVIC RING</sub> = 54.60% to 64.50%, p < 0.001). However, there was a significant decrease in specificity with AI assistance (Spe<sub>PELVIC</sub> = 95.24% to 93.25%, p = 0.005; and Spe<sub>HIP</sub> = 98.30% to 96.90%, p = 0.005). In the 100-patient sample, the two emergency physicians obtained improvements in fracture detection sensitivity across the pelvic area of +14.70% (p = 0.0011) and +10.29% (p < 0.007), respectively, without a significant decrease in specificity. For hip fractures, E1's sensitivity increased from 59.46% to 70.27% (p = 0.04), and E2's sensitivity increased from 78.38% to 86.49% (p = 0.08). For pelvic ring fractures, E1's sensitivity increased from 12.90% to 32.26% (p = 0.012), and E2's sensitivity increased from 19.35% to 32.26% (p = 0.043).
AI improved the diagnostic performance for emergency physicians and radiologists with limited experience in pelvic fracture screening.
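The abstract does not state which statistical test produced its p-values; for paired reader studies (the same cases read with and without AI), McNemar's test is the usual choice. A minimal sketch of its chi-square form, with illustrative counts:

```python
def mcnemar_chi2(b: int, c: int) -> float:
    """McNemar chi-square statistic for paired binary outcomes.

    b: cases correct with AI assistance only; c: cases correct without AI only.
    Compare against the chi-square critical value 3.84 (1 df) for p < 0.05.
    """
    return (b - c) ** 2 / (b + c)

# Illustrative discordant-pair counts, not the study's data.
chi2 = mcnemar_chi2(b=21, c=5)
print(round(chi2, 2))
```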

TCDE-Net: An unsupervised dual-encoder network for 3D brain medical image registration.

Yang X, Li D, Deng L, Huang S, Wang J

pubmed logopapers · Jul 1 2025
Medical image registration is a critical task in aligning medical images from different time points, modalities, or individuals, and is essential for accurate diagnosis and treatment planning. Despite significant progress in deep learning-based registration methods, current approaches still face considerable challenges, such as insufficient capture of local details, difficulty in effectively modeling global contextual information, and limited robustness in handling complex deformations. These limitations hinder the precision of high-resolution registration, particularly when dealing with medical images with intricate structures. To address these issues, this paper presents a novel registration network (TCDE-Net), an unsupervised medical image registration method based on a dual-encoder architecture. The dual encoders complement each other in feature extraction, enabling the model to effectively handle large-scale nonlinear deformations and capture intricate local details, thereby enhancing registration accuracy. Additionally, a detail-enhancement attention module aids in restoring fine-grained features, improving the network's capability to address complex deformations such as those at gray-white matter boundaries. Experimental results on the OASIS, IXI, and Hammers-n30r95 3D brain MR datasets demonstrate that this method outperforms commonly used registration techniques across multiple evaluation metrics, achieving superior performance and robustness. Our code is available at https://github.com/muzidongxue/TCDE-Net.
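At inference, an unsupervised registration network of this kind predicts a dense displacement field that is applied to the moving image. A minimal sketch of that warping step on a toy 2D image (nearest-neighbour resampling for simplicity; real networks use differentiable bilinear or trilinear sampling):

```python
import numpy as np

def warp_nearest(image: np.ndarray, disp: np.ndarray) -> np.ndarray:
    """Warp a 2D image by a dense displacement field.

    disp has shape (2, H, W): per-pixel (row, col) offsets, as a registration
    network would predict. Out-of-bounds lookups are clamped to the border.
    """
    h, w = image.shape
    rows, cols = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    src_r = np.clip(np.rint(rows + disp[0]).astype(int), 0, h - 1)
    src_c = np.clip(np.rint(cols + disp[1]).astype(int), 0, w - 1)
    return image[src_r, src_c]

img = np.arange(16, dtype=float).reshape(4, 4)
shift = np.zeros((2, 4, 4))
shift[1] = 1.0  # each pixel samples its right-hand neighbour
print(warp_nearest(img, shift))
```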

Interstitial-guided automatic clinical tumor volume segmentation network for cervical cancer brachytherapy.

Tan S, He J, Cui M, Gao Y, Sun D, Xie Y, Cai J, Zaki N, Qin W

pubmed logopapers · Jul 1 2025
Automatic clinical tumor volume (CTV) delineation is pivotal to improving outcomes of interstitial brachytherapy for cervical cancer. However, the prominent differences in gray values caused by the interstitial needles pose great challenges for deep learning-based segmentation models. In this study, we proposed a novel interstitial-guided segmentation network, termed advance reverse guided network (ARGNet), for cervical tumor segmentation with interstitial brachytherapy. Firstly, the location information of the interstitial needles was integrated into the deep learning framework via multi-task learning, using a cross-stitch mechanism to share encoder feature learning. Secondly, a spatial reverse attention mechanism was introduced to mitigate the distracting effect of the needles on tumor segmentation. Furthermore, an uncertainty area module was embedded between the skip connections and the encoder of the tumor segmentation task to enhance the model's capability to discern ambiguous boundaries between the tumor and the surrounding tissue. Comprehensive experiments were conducted retrospectively on 191 CT scans under multi-course interstitial brachytherapy. The experimental results demonstrated that the characteristics of the interstitial needles help enhance segmentation, achieving state-of-the-art performance, which is anticipated to be beneficial in radiotherapy planning.
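The cross-stitch sharing mentioned above refers to the cross-stitch unit of Misra et al., which linearly mixes features from the two tasks with a small learned matrix. A minimal sketch with illustrative values (the paper's exact configuration is not given in the abstract):

```python
import numpy as np

def cross_stitch(feat_a: np.ndarray, feat_b: np.ndarray, alpha: np.ndarray):
    """Mix two tasks' feature maps with a learned 2x2 matrix alpha.

    Row 0 of alpha produces the new task-A features, row 1 the new task-B
    features (here: needle localization and tumor segmentation).
    """
    out_a = alpha[0, 0] * feat_a + alpha[0, 1] * feat_b
    out_b = alpha[1, 0] * feat_a + alpha[1, 1] * feat_b
    return out_a, out_b

fa = np.ones((2, 2))        # toy needle-task features
fb = 2 * np.ones((2, 2))    # toy tumor-task features
alpha = np.array([[0.9, 0.1],
                  [0.2, 0.8]])  # mostly task-specific, slight sharing
oa, ob = cross_stitch(fa, fb, alpha)
print(oa[0, 0], ob[0, 0])
```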

CQENet: A segmentation model for nasopharyngeal carcinoma based on confidence quantitative evaluation.

Qi Y, Wei L, Yang J, Xu J, Wang H, Yu Q, Shen G, Cao Y

pubmed logopapers · Jul 1 2025
Accurate segmentation of the tumor regions of nasopharyngeal carcinoma (NPC) is of significant importance for radiotherapy of NPC. However, the precision of existing automatic segmentation methods for NPC remains inadequate, primarily manifested in the difficulty of tumor localization and the challenges in delineating blurred boundaries. Additionally, the black-box nature of deep learning models leads to insufficient quantification of the confidence in the results, preventing users from directly understanding the model's confidence in its predictions, which severely impacts the clinical application of deep learning models. This paper proposes an automatic segmentation model for NPC based on confidence quantitative evaluation (CQENet). To address the issue of insufficient confidence quantification in NPC segmentation results, we introduce a confidence assessment module (CAM) that enables the model to output not only the segmentation results but also the confidence in those results, aiding users in understanding the uncertainty risks associated with model outputs. To address the difficulty in localizing the position and extent of tumors, we propose a tumor feature adjustment module (FAM) for precise tumor localization and extent determination. To address the challenge of delineating blurred tumor boundaries, we introduce a variance attention mechanism (VAM) to assist in edge delineation during fine segmentation. We conducted experiments on a multicenter NPC dataset, validating that our proposed method is effective and superior to existing state-of-the-art models, possessing considerable clinical application value.
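The abstract does not give CQENet's exact confidence formulation; one common way to attach a confidence score to a softmax output is normalized predictive entropy, sketched here purely as an illustration (0 means maximally uncertain, 1 means fully confident):

```python
import numpy as np

def confidence(probs: np.ndarray) -> float:
    """1 minus normalized entropy of a categorical distribution."""
    p = probs / probs.sum()
    entropy = -np.sum(p * np.log(p + 1e-12))  # epsilon guards log(0)
    return float(1.0 - entropy / np.log(len(p)))

print(round(confidence(np.array([0.98, 0.01, 0.01])), 3))  # peaked: confident
print(round(confidence(np.array([0.34, 0.33, 0.33])), 3))  # flat: uncertain
```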

Liver lesion segmentation in ultrasound: A benchmark and a baseline network.

Li J, Zhu L, Shen G, Zhao B, Hu Y, Zhang H, Wang W, Wang Q

pubmed logopapers · Jul 1 2025
Accurate liver lesion segmentation in ultrasound is a challenging task due to high speckle noise, ambiguous lesion boundaries, and inhomogeneous intensity distribution inside the lesion regions. This work first collected and annotated a dataset for liver lesion segmentation in ultrasound. In this paper, we propose a novel convolutional neural network to learn dual self-attentive transformer features for boosting liver lesion segmentation by leveraging the complementary information among non-local features encoded at different layers of the transformer architecture. To do so, we devise a dual self-attention refinement (DSR) module to synergistically utilize self-attention and reverse self-attention mechanisms to extract complementary lesion characteristics between cascaded multi-layer feature maps, assisting the model to produce more accurate segmentation results. Moreover, we propose a False-Positive-Negative loss to enable our network to further suppress the non-liver-lesion noise at shallow transformer layers and enhance more target liver lesion details into CNN features at deep transformer layers. Experimental results show that our network outperforms state-of-the-art methods quantitatively and qualitatively.
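The reverse self-attention idea the DSR module builds on can be sketched simply: attention weights highlight the predicted lesion, while reverse attention (1 - A) re-weights features toward the regions the current prediction misses, such as ambiguous boundaries. A toy illustration, not the paper's implementation:

```python
import numpy as np

def sigmoid(x: np.ndarray) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-x))

def reverse_attention(features: np.ndarray, coarse_logits: np.ndarray):
    """Split features into lesion-focused and complement-focused streams."""
    attn = sigmoid(coarse_logits)   # lesion-focused weights
    rev = 1.0 - attn                # boundary/background-focused weights
    return features * attn, features * rev

feats = np.ones((3, 3))
logits = np.array([[5.0, 5.0, -5.0],
                   [5.0, 5.0, -5.0],
                   [-5.0, -5.0, -5.0]])  # toy coarse lesion prediction
fg, bg = reverse_attention(feats, logits)
print(np.round(fg, 2))
print(np.round(bg, 2))
```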

The impact of multi-modality fusion and deep learning on adult age estimation based on bone mineral density.

Cao Y, Zhang J, Ma Y, Zhang S, Li C, Liu S, Chen F, Huang P

pubmed logopapers · Jul 1 2025
Age estimation, especially in adults, presents substantial challenges in contexts ranging from forensic to clinical applications. Bone mineral density (BMD), with its distinct age-related variations, has emerged as a critical marker in this domain. This study aims to enhance chronological age estimation accuracy using deep learning (DL) incorporating a multi-modality fusion strategy based on BMD. We conducted a retrospective analysis of 4296 CT scans from a Chinese population, covering August 2015 to November 2022 and encompassing lumbar, femur, and pubis modalities. Our DL approach, integrating multi-modality fusion, was applied to predict chronological age automatically. The model's performance was evaluated using an internal real-world clinical cohort of 644 scans (December 2022 to May 2023) and an external cadaver validation cohort of 351 scans. In single-modality assessments, the lumbar modality excelled. However, multi-modality models demonstrated superior performance, evidenced by lower mean absolute errors (MAEs) and higher Pearson's R² values. The optimal multi-modality model exhibited outstanding R² values of 0.89 overall, 0.88 in females, and 0.90 in males, with MAEs of 4.05 overall, 3.69 in females, and 4.33 in males in the internal validation cohort. In the external cadaver validation, the model maintained favourable R² values (0.84 overall, 0.89 in females, 0.82 in males) and MAEs (5.01 overall, 4.71 in females, 5.09 in males), highlighting its generalizability across diverse scenarios. The integration of multi-modality fusion with DL significantly refines the accuracy of adult age estimation based on BMD. The AI-based system effectively combines multi-modality BMD data, presenting a robust and innovative tool for accurate adult age estimation that is poised to significantly improve both geriatric diagnostics and forensic investigations.
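The two evaluation metrics reported above are straightforward to compute; the sketch below uses illustrative ages. Note that R² here is the coefficient of determination; the abstract's "Pearson's R²" may instead denote the squared Pearson correlation, which can differ when predictions are systematically biased.

```python
import numpy as np

def mae(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Mean absolute error, in years for age estimation."""
    return float(np.mean(np.abs(y_true - y_pred)))

def r_squared(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

# Illustrative chronological ages and model predictions.
ages = np.array([25.0, 40.0, 55.0, 70.0, 85.0])
preds = np.array([28.0, 37.0, 58.0, 66.0, 88.0])
print(mae(ages, preds), round(r_squared(ages, preds), 3))
```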

Estimating Periodontal Stability Using Computer Vision.

Feher B, Werdich AA, Chen CY, Barrow J, Lee SJ, Palmer N, Feres M

pubmed logopapers · Jul 1 2025
Periodontitis is a severe infection affecting oral and systemic health and is traditionally diagnosed through clinical probing-a process that is time-consuming, uncomfortable for patients, and subject to variability based on the operator's skill. We hypothesized that computer vision can be used to estimate periodontal stability from radiographs alone. At the tooth level, we used intraoral radiographs to detect and categorize individual teeth according to their periodontal stability and corresponding treatment needs: healthy (prevention), stable (maintenance), and unstable (active treatment). At the patient level, we assessed full-mouth series and classified patients as stable or unstable by the presence of at least 1 unstable tooth. Our 3-way tooth classification model achieved an area under the receiver operating characteristic curve of 0.71 for healthy teeth, 0.56 for stable, and 0.67 for unstable. The model achieved an F<sub>1</sub> score of 0.45 for healthy teeth, 0.57 for stable, and 0.54 for unstable (recall, 0.70). Saliency maps generated by gradient-weighted class activation mapping primarily showed highly activated areas corresponding to clinically probed regions around teeth. Our binary patient classifier achieved an area under the receiver operating characteristic curve of 0.68 and an F<sub>1</sub> score of 0.74 (recall, 0.70). Taken together, our results suggest that it is feasible to estimate periodontal stability, which traditionally requires clinical and radiographic examination, from radiographic signal alone using computer vision. Variations in model performance across different classes at the tooth level indicate the necessity of further refinement.
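The patient-level rule described above (unstable if at least one tooth is classified unstable, otherwise stable) is simple to state in code; the labels below are illustrative:

```python
def patient_status(tooth_labels: list[str]) -> str:
    """Aggregate tooth-level classes into a binary patient-level status."""
    return "unstable" if "unstable" in tooth_labels else "stable"

print(patient_status(["healthy", "stable", "unstable", "stable"]))
print(patient_status(["healthy", "stable", "stable"]))
```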

Efficient Brain Tumor Detection and Segmentation Using DN-MRCNN With Enhanced Imaging Technique.

N JS, Ayothi S

pubmed logopapers · Jul 1 2025
This article proposes a method called DenseNet 121-Mask R-CNN (DN-MRCNN) for the detection and segmentation of brain tumors. The main objective is to reduce the execution time and accurately locate and segment the tumor, including its subareas. The input images undergo preprocessing techniques such as median filtering and Gaussian filtering to reduce noise and artifacts, as well as improve image quality. Histogram equalization is used to enhance the tumor regions, and image augmentation is employed to improve the model's diversity and robustness. To capture important patterns, a gated axial self-attention layer is added to the DenseNet 121 model, allowing for increased attention during the analysis of the input images. For accurate segmentation, boundary boxes are generated using a Regional Proposal Network with anchor customization. Post-processing techniques, specifically non-maximum suppression, are performed to discard redundant bounding boxes caused by overlapping regions. The Mask R-CNN model is used to accurately detect and segment the whole tumor (WT), tumor core (TC), and enhancing tumor (ET). The proposed model is evaluated using the BraTS 2019 dataset, the UCSF-PDGM dataset, and the UPENN-GBM dataset, which are commonly used for brain tumor detection and segmentation.
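The non-maximum suppression step described above can be sketched as follows: keep the highest-scoring proposal and drop any overlapping box whose intersection-over-union (IoU) with a kept box exceeds a threshold. The boxes below are illustrative, not the paper's implementation:

```python
import numpy as np

def iou(a, b) -> float:
    """Intersection-over-union of two boxes in (x1, y1, x2, y2) form."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes: np.ndarray, scores: np.ndarray, thresh: float = 0.5) -> list:
    """Greedy non-maximum suppression: return indices of kept boxes."""
    order = np.argsort(scores)[::-1]  # highest score first
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= thresh for j in keep):
            keep.append(int(i))
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 10, 10], [20, 20, 30, 30]], dtype=float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))
```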