
CT-Based Machine Learning Radiomics Analysis to Diagnose Dysthyroid Optic Neuropathy.

Ma L, Jiang X, Yang X, Wang M, Hou Z, Zhang J, Li D

PubMed · Jul 1 2025
To develop CT-based machine learning radiomics models for the diagnosis of dysthyroid optic neuropathy (DON). This retrospective study included 57 patients (114 orbits) diagnosed with thyroid-associated ophthalmopathy (TAO) at Beijing Tongren Hospital between December 2019 and June 2023. CT scans, medical history, examination results, and clinical data of the participants were collected. DON was diagnosed based on clinical manifestations and examinations. The DON orbits and non-DON orbits were then divided into a training set and a test set at a ratio of approximately 7:3. 3D Slicer software was used to delineate the volumes of interest (VOIs). Radiomics features were extracted using PyRadiomics and selected by t-test and the least absolute shrinkage and selection operator (LASSO) regression algorithm with 10-fold cross-validation. Machine learning models, including random forest (RF), support vector machine (SVM), and logistic regression (LR) models, were built and validated using receiver operating characteristic (ROC) curves, areas under the curve (AUC), and confusion matrix-derived metrics. The net benefit of the models was assessed by decision curve analysis (DCA). We extracted 107 features from the imaging data, representing various image characteristics of the optic nerve and surrounding orbital tissues. Using the LASSO method, we identified the five most informative features. The AUCs ranged from 0.77 to 0.80 in the training set, and the AUCs of the RF, SVM, and LR models were 0.86, 0.80, and 0.83 in the test set, respectively. The DeLong test showed no significant difference between the three models (RF vs SVM: p = .92; RF vs LR: p = .94; SVM vs LR: p = .98), and the models showed favorable net benefit in the DCA. CT-based machine learning radiomics analysis exhibited excellent ability to diagnose DON and may enhance diagnostic convenience.
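
The abstract gives no preprocessing or hyperparameter details, so the following is only a minimal sketch of the described workflow (feature screening, LASSO selection with 10-fold cross-validation, then RF/SVM/LR evaluated by ROC-AUC) using scikit-learn; the feature matrix, labels, and the ANOVA screening step are placeholders standing in for the PyRadiomics output and the t-test filter.

```python
# Minimal sketch of the radiomics pipeline described above (placeholder data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LassoCV, LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(114, 107))                 # 114 orbits x 107 radiomics features (placeholder)
y = (X[:, :5].sum(axis=1) + rng.normal(size=114) > 0).astype(int)   # 1 = DON (placeholder labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
scaler = StandardScaler().fit(X_tr)
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

# Univariate screening (stand-in for the t-test) followed by LASSO with 10-fold CV.
keep = SelectKBest(f_classif, k=30).fit(X_tr, y_tr).get_support()
lasso = LassoCV(cv=10, random_state=0).fit(X_tr[:, keep], y_tr)
sel = keep.copy()
sel[keep] = lasso.coef_ != 0                    # keep features with non-zero LASSO coefficients
if not sel.any():                               # fall back to the screened set if LASSO zeroes out
    sel = keep

models = {
    "RF": RandomForestClassifier(n_estimators=500, random_state=0),
    "SVM": SVC(kernel="rbf", probability=True, random_state=0),
    "LR": LogisticRegression(max_iter=1000),
}
for name, model in models.items():
    model.fit(X_tr[:, sel], y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te[:, sel])[:, 1])
    print(f"{name}: test AUC = {auc:.2f}")
```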

Measuring kidney stone volume - practical considerations and current evidence from the EAU endourology section.

Grossmann NC, Panthier F, Afferi L, Kallidonis P, Somani BK

PubMed · Jul 1 2025
This narrative review provides an overview of the use, differences, and clinical impact of current methods for kidney stone volume assessment. The different approaches to volume measurement are based on noncontrast computed tomography (NCCT). While volume measurement using formulas is sufficient for smaller stones, it tends to overestimate volume for larger or irregularly shaped calculi. In contrast, software-based segmentation significantly improves accuracy and reproducibility, and artificial intelligence-based volumetry additionally shows excellent agreement with reference standards while reducing observer variability and measurement time. Moreover, specific CT preparation protocols may further enhance image quality and thus improve measurement accuracy. Clinically, compared with linear measurements, stone volume has proven to be a superior predictor of stone-related events during follow-up, of spontaneous stone passage under conservative management, and of stone-free rates after shockwave lithotripsy (SWL) and ureteroscopy (URS). Although manual measurement remains practical, its accuracy diminishes for complex or larger stones. Software-based segmentation and volumetry offer higher precision and efficiency but require established standards and broader access to dedicated software for routine clinical use.
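
As an illustration of the formula-versus-segmentation contrast discussed above, the sketch below compares a formula-based estimate (assuming the commonly used scalene ellipsoid approximation, π/6 × length × width × depth) with voxel-counting volumetry from a binary NCCT segmentation mask; the stone dimensions, mask, and voxel spacing are hypothetical.

```python
# Two ways to express kidney stone volume: formula-based vs segmentation-based.
import numpy as np

def ellipsoid_volume_mm3(length_mm: float, width_mm: float, depth_mm: float) -> float:
    """Formula-based estimate assuming a scalene ellipsoid: pi/6 * L * W * D."""
    return np.pi / 6.0 * length_mm * width_mm * depth_mm

def segmentation_volume_mm3(mask: np.ndarray, spacing_mm: tuple) -> float:
    """Voxel-counting volumetry from a binary segmentation mask and voxel spacing."""
    return float(mask.sum()) * float(np.prod(spacing_mm))

# Hypothetical 12 x 8 x 7 mm stone and a crude block "segmentation" at 0.6 x 0.6 x 1.0 mm spacing.
print(f"formula:      {ellipsoid_volume_mm3(12, 8, 7):.1f} mm^3")
mask = np.zeros((40, 40, 20), dtype=bool)
mask[10:30, 14:27, 5:12] = True
print(f"segmentation: {segmentation_volume_mm3(mask, (0.6, 0.6, 1.0)):.1f} mm^3")
```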

Artificial Intelligence Iterative Reconstruction for Dose Reduction in Pediatric Chest CT: A Clinical Assessment via Below 3 Years Patients With Congenital Heart Disease.

Zhang F, Peng L, Zhang G, Xie R, Sun M, Su T, Ge Y

PubMed · Jul 1 2025
To assess the performance of a newly introduced deep learning-based reconstruction algorithm, the artificial intelligence iterative reconstruction (AIIR), in reducing the dose of pediatric chest CT, using image data from patients below 3 years of age with congenital heart disease (CHD). The lung images available from routine-dose cardiac CT angiography (CTA) in these patients were used as a reference for evaluating the paired low-dose chest CT. A total of 191 subjects were prospectively enrolled; the dose for chest CT was reduced to ~0.1 mSv while the cardiac CTA protocol was kept unchanged. The low-dose chest CT images, obtained with AIIR and with hybrid iterative reconstruction (HIR), were compared in image quality, i.e., overall image quality and lung structure depiction, and in diagnostic performance, i.e., severity assessment of pneumonia and airway stenosis. Compared with the reference, lung image quality on low-dose AIIR images was not significantly different (all P > 0.05), whereas it was clearly inferior with HIR (all P < 0.05). Compared with HIR, low-dose AIIR images also achieved a pneumonia severity index (AIIR 4.32±3.82 vs. Ref 4.37±3.84, P > 0.05; HIR 5.12±4.06 vs. Ref 4.37±3.84, P < 0.05) and airway stenosis grading (consistently graded: AIIR 88.5% vs. HIR 56.5%) closer to the reference. AIIR has the potential to enable large dose reductions in chest CT of patients below 3 years of age while preserving image quality and achieving diagnostic results nearly equivalent to routine-dose scans.
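
The abstract does not state which statistical test was used for the paired comparisons; as a generic illustration of how paired severity indices from the same patients might be compared against the routine-dose reference, the sketch below applies a Wilcoxon signed-rank test to hypothetical paired scores.

```python
# Hypothetical paired comparison of pneumonia severity indices (not the study's data or test).
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(1)
ref = rng.normal(4.4, 3.8, size=191).clip(min=0)     # routine-dose reference scores (placeholder)
aiir = ref + rng.normal(0.0, 0.6, size=191)          # low-dose AIIR: close to the reference
hir = ref + rng.normal(0.8, 0.9, size=191)           # low-dose HIR: biased upward

for name, scores in [("AIIR vs Ref", aiir), ("HIR vs Ref", hir)]:
    stat, p = wilcoxon(scores, ref)                  # paired, non-parametric comparison
    print(f"{name}: W = {stat:.1f}, p = {p:.3g}")
```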

Super-resolution deep learning reconstruction for improved quality of myocardial CT late enhancement.

Takafuji M, Kitagawa K, Mizutani S, Hamaguchi A, Kisou R, Sasaki K, Funaki Y, Iio K, Ichikawa K, Izumi D, Okabe S, Nagata M, Sakuma H

PubMed · Jul 1 2025
Myocardial computed tomography (CT) late enhancement (LE) allows assessment of myocardial scarring. Super-resolution deep learning image reconstruction (SR-DLR) trained on data acquired from ultra-high-resolution CT may improve image quality for CT-LE. Therefore, this study investigated image noise and image quality with SR-DLR compared with conventional DLR (C-DLR) and hybrid iterative reconstruction (hybrid IR). We retrospectively analyzed 30 patients who underwent CT-LE using 320-row CT. The CT protocol comprised stress dynamic CT perfusion, coronary CT angiography, and CT-LE. CT-LE images were reconstructed using three different algorithms: SR-DLR, C-DLR, and hybrid IR. Image noise, signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and qualitative image quality scores (noise reduction, sharpness, visibility of scar and myocardial border, and overall image quality) were compared. Inter-observer differences in myocardial scar sizing in CT-LE with the three algorithms were also compared. SR-DLR significantly decreased image noise by 35% compared to C-DLR (median 6.2 HU, interquartile range [IQR] 5.6-7.2 HU vs 9.6 HU, IQR 8.4-10.7 HU; p < 0.001) and by 37% compared to hybrid IR (9.8 HU, IQR 8.5-12.0 HU; p < 0.001). The SNR and CNR of CT-LE reconstructed with SR-DLR were significantly higher than with C-DLR (both p < 0.001) and hybrid IR (both p < 0.05). All qualitative image quality scores were higher with SR-DLR than with C-DLR and hybrid IR (all p < 0.001). Inter-observer differences in scar sizing were smaller with SR-DLR and C-DLR than with hybrid IR (both p = 0.02). SR-DLR reduces image noise and improves image quality of myocardial CT-LE compared with C-DLR and hybrid IR, and improves inter-observer reproducibility of scar sizing compared with hybrid IR. The SR-DLR approach has the potential to improve the assessment of myocardial scar by CT late enhancement.
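
SNR and CNR are conventionally derived from ROI means and an image-noise estimate (the standard deviation of a homogeneous ROI); the exact ROI definitions are not given in the abstract, so the values below are hypothetical and serve only to show the arithmetic.

```python
# Hypothetical ROI-based SNR/CNR calculation for CT-LE (values are illustrative only).
def snr(mean_roi_hu: float, noise_sd_hu: float) -> float:
    """Signal-to-noise ratio: ROI mean divided by image noise (SD of a homogeneous ROI)."""
    return mean_roi_hu / noise_sd_hu

def cnr(mean_a_hu: float, mean_b_hu: float, noise_sd_hu: float) -> float:
    """Contrast-to-noise ratio between two ROIs, e.g. enhanced scar vs remote myocardium."""
    return abs(mean_a_hu - mean_b_hu) / noise_sd_hu

scar_mean, remote_mean, noise_sd = 120.0, 60.0, 6.2   # noise on the order reported for SR-DLR
print("SNR:", round(snr(scar_mean, noise_sd), 1))
print("CNR:", round(cnr(scar_mean, remote_mean, noise_sd), 1))
```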

Segmentation of the nasopalatine canal and detection of canal furcation status with artificial intelligence on cone-beam computed tomography images.

Deniz HA, Bayrakdar İŞ, Nalçacı R, Orhan K

PubMed · Jul 1 2025
The nasopalatine canal (NPC) is an anatomical structure with varying morphology. The NPC can be visualized using cone-beam computed tomography (CBCT), and CBCT has been used in many studies on artificial intelligence (AI). "You Only Look Once" (YOLO) is an AI framework that stands out for its speed. This study compared an observer and AI with regard to NPC segmentation and assessment of NPC furcation status on CBCT images. Axial sections of 200 CBCT images were used. These images were labeled and evaluated for the absence or presence of NPC furcation, and were then divided into three subsets: 160 images for training, 20 for validation, and 20 for testing. Training was performed for 800 epochs using the YOLOv5x-seg model. Sensitivity, precision, F1 score, IoU, mAP, and AUC values were determined for NPC detection, segmentation, and classification with the YOLOv5x-seg model. The values were 0.9680, 0.9953, 0.9815, 0.9636, 0.7930, and 0.8841, respectively, for the group without NPC furcation, and 0.9827, 0.9975, 0.9900, 0.9803, 0.9637, and 0.9510 for the group with NPC furcation. Our results showed that the YOLOv5x-seg model achieves sufficient prediction accuracy for NPC furcation even when trained on a relatively small dataset. The segmentation feature of the YOLOv5 algorithm, which is built on an object detection framework, achieved highly successful results despite its recent development.
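
The reported segmentation metrics (sensitivity, precision, F1 score, IoU) all follow from the overlap between predicted and reference masks; below is a minimal sketch on synthetic binary masks, with the masks and their sizes invented purely for illustration.

```python
# Overlap-based segmentation metrics computed from binary masks (synthetic example).
import numpy as np

def overlap_metrics(pred: np.ndarray, truth: np.ndarray) -> dict:
    """Sensitivity, precision, F1 and IoU from boolean prediction/reference masks."""
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    sensitivity = tp / (tp + fn)
    precision = tp / (tp + fp)
    return {
        "sensitivity": sensitivity,
        "precision": precision,
        "f1": 2 * precision * sensitivity / (precision + sensitivity),
        "iou": tp / (tp + fp + fn),
    }

# Toy axial slice: reference NPC mask vs a slightly shifted prediction.
truth = np.zeros((64, 64), dtype=bool); truth[20:40, 25:45] = True
pred = np.zeros((64, 64), dtype=bool);  pred[22:42, 25:45] = True
print(overlap_metrics(pred, truth))
```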

The impact of multi-modality fusion and deep learning on adult age estimation based on bone mineral density.

Cao Y, Zhang J, Ma Y, Zhang S, Li C, Liu S, Chen F, Huang P

PubMed · Jul 1 2025
Age estimation, especially in adults, presents substantial challenges in contexts ranging from forensic to clinical applications. Bone mineral density (BMD), with its distinct age-related variations, has emerged as a critical marker in this domain. This study aims to enhance the accuracy of chronological age estimation using deep learning (DL) with a multi-modality fusion strategy based on BMD. We conducted a retrospective analysis of 4296 CT scans from a Chinese population, acquired between August 2015 and November 2022 and encompassing lumbar, femur, and pubis modalities. Our DL approach, integrating multi-modality fusion, was applied to predict chronological age automatically. The model's performance was evaluated on an internal real-world clinical cohort of 644 scans (December 2022 to May 2023) and an external cadaver validation cohort of 351 scans. In single-modality assessments, the lumbar modality performed best. However, multi-modality models demonstrated superior performance, evidenced by lower mean absolute errors (MAEs) and higher Pearson R² values. The optimal multi-modality model achieved R² values of 0.89 overall, 0.88 in females, and 0.90 in males, with MAEs of 4.05 overall, 3.69 in females, and 4.33 in males in the internal validation cohort. In the external cadaver validation, the model maintained favourable R² values (0.84 overall, 0.89 in females, 0.82 in males) and MAEs (5.01 overall, 4.71 in females, 5.09 in males), highlighting its generalizability across diverse scenarios. The integration of multi-modality fusion with DL significantly refines the accuracy of adult age estimation (AAE) based on BMD. The AI-based system effectively combines multi-modality BMD data and presents a robust, innovative tool for accurate AAE, poised to significantly improve both geriatric diagnostics and forensic investigations.
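
The fusion architecture is not detailed in the abstract; the sketch below shows one plausible realization of feature-level fusion, concatenating per-modality (lumbar, femur, pubis) feature vectors into a small regression head in PyTorch, with hypothetical feature dimensions and dummy inputs.

```python
# One possible concatenation-based multi-modality fusion head for age regression (assumed design).
import torch
import torch.nn as nn

class FusionAgeRegressor(nn.Module):
    def __init__(self, feat_dim: int = 256, n_modalities: int = 3):
        super().__init__()
        # One small encoder per modality; real image backbones are assumed upstream.
        self.encoders = nn.ModuleList(
            nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU()) for _ in range(n_modalities)
        )
        self.head = nn.Sequential(nn.Linear(128 * n_modalities, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, lumbar, femur, pubis):
        fused = torch.cat([enc(x) for enc, x in zip(self.encoders, (lumbar, femur, pubis))], dim=1)
        return self.head(fused).squeeze(1)            # predicted age in years

# Dummy forward pass and MAE against placeholder ages.
model = FusionAgeRegressor()
pred = model(torch.randn(4, 256), torch.randn(4, 256), torch.randn(4, 256))
mae = torch.mean(torch.abs(pred - torch.tensor([55.0, 62.0, 47.0, 70.0])))
print(pred.shape, float(mae))
```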

CQENet: A segmentation model for nasopharyngeal carcinoma based on confidence quantitative evaluation.

Qi Y, Wei L, Yang J, Xu J, Wang H, Yu Q, Shen G, Cao Y

PubMed · Jul 1 2025
Accurate segmentation of the tumor regions of nasopharyngeal carcinoma (NPC) is of significant importance for radiotherapy of NPC. However, the precision of existing automatic segmentation methods for NPC remains inadequate, primarily manifested in the difficulty of localizing tumors and delineating blurred boundaries. Additionally, the black-box nature of deep learning models leads to insufficient quantification of the confidence in their results, preventing users from directly understanding how certain the model is about its predictions, which severely limits the clinical application of deep learning models. This paper proposes an automatic segmentation model for NPC based on confidence quantitative evaluation (CQENet). To address the insufficient confidence quantification of NPC segmentation results, we introduce a confidence assessment module (CAM) that enables the model to output not only the segmentation results but also the confidence in those results, helping users understand the uncertainty risks associated with model outputs. To address the difficulty of localizing the position and extent of tumors, we propose a tumor feature adjustment module (FAM) for precise tumor localization and extent determination. To address the challenge of delineating blurred tumor boundaries, we introduce a variance attention mechanism (VAM) to assist edge delineation during fine segmentation. We conducted experiments on a multicenter NPC dataset, validating that our proposed method is effective and superior to existing state-of-the-art models, and possesses considerable clinical application value.
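
The abstract does not specify how the confidence assessment module (CAM) computes its scores; one common generic way to expose segmentation confidence is to convert per-voxel softmax probabilities into a confidence map via normalized predictive entropy, sketched below with toy two-class logits (this is an illustration, not CQENet's actual module).

```python
# Generic per-voxel confidence map from segmentation logits (illustrative, not CQENet's CAM).
import torch
import torch.nn.functional as F

def confidence_from_logits(logits: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Map logits of shape (B, C, D, H, W) to per-voxel confidence in [0, 1]."""
    probs = F.softmax(logits, dim=1)
    entropy = -(probs * torch.log(probs + eps)).sum(dim=1)        # (B, D, H, W)
    max_entropy = torch.log(torch.tensor(float(logits.shape[1]))) # entropy of a uniform distribution
    return 1.0 - entropy / max_entropy                            # 1 = fully certain voxel

logits = torch.randn(1, 2, 8, 32, 32)       # toy 2-class NPC segmentation logits
conf = confidence_from_logits(logits)
print(conf.shape, float(conf.mean()))
```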

Interstitial-guided automatic clinical tumor volume segmentation network for cervical cancer brachytherapy.

Tan S, He J, Cui M, Gao Y, Sun D, Xie Y, Cai J, Zaki N, Qin W

PubMed · Jul 1 2025
Automatic clinical tumor volume (CTV) delineation is pivotal to improving outcomes of interstitial brachytherapy for cervical cancer. However, the prominent gray-value differences introduced by the interstitial needles pose great challenges for deep learning-based segmentation models. In this study, we proposed a novel interstitial-guided segmentation network termed advance reverse guided network (ARGNet) for cervical tumor segmentation with interstitial brachytherapy. First, the location information of the interstitial needles was integrated into the deep learning framework via multi-task learning, using a cross-stitch approach to share encoder feature learning. Second, a spatial reverse attention mechanism was introduced to mitigate the distracting effect of the needles on tumor segmentation. Furthermore, an uncertainty area module was embedded between the skip connections and the encoder of the tumor segmentation task to enhance the model's capability to discern ambiguous boundaries between the tumor and the surrounding tissue. Comprehensive experiments were conducted retrospectively on 191 CT scans acquired under multi-course interstitial brachytherapy. The experimental results demonstrated that the interstitial needle information enhances segmentation, achieving state-of-the-art performance, which is anticipated to benefit radiotherapy planning.
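
The cross-stitch feature sharing mentioned above can be illustrated with a minimal cross-stitch unit, i.e. a learned 2×2 linear mixing of the two task branches' feature maps (in the spirit of cross-stitch networks); the feature shapes and initialization below are assumptions for illustration, not the paper's configuration.

```python
# Minimal cross-stitch unit mixing tumor-segmentation and needle-localization features (assumed).
import torch
import torch.nn as nn

class CrossStitchUnit(nn.Module):
    def __init__(self):
        super().__init__()
        # 2x2 mixing matrix, initialized near identity so each task mostly keeps its own features.
        self.alpha = nn.Parameter(torch.tensor([[0.9, 0.1],
                                                [0.1, 0.9]]))

    def forward(self, feat_tumor: torch.Tensor, feat_needle: torch.Tensor):
        mixed_tumor = self.alpha[0, 0] * feat_tumor + self.alpha[0, 1] * feat_needle
        mixed_needle = self.alpha[1, 0] * feat_tumor + self.alpha[1, 1] * feat_needle
        return mixed_tumor, mixed_needle

# Toy encoder feature maps from the two task branches (B, C, D, H, W).
unit = CrossStitchUnit()
t, n = unit(torch.randn(1, 32, 8, 16, 16), torch.randn(1, 32, 8, 16, 16))
print(t.shape, n.shape)
```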

CZT-based photon-counting-detector CT with deep-learning reconstruction: image quality and diagnostic confidence for lung tumor assessment.

Sasaki T, Kuno H, Nomura K, Muramatsu Y, Aokage K, Samejima J, Taki T, Goto E, Wakabayashi M, Furuya H, Taguchi H, Kobayashi T

PubMed · Jul 1 2025
This is a preliminary analysis of one of the secondary endpoints in the prospective study cohort. The aim of this study is to assess the image quality and diagnostic confidence for lung cancer of CT images generated using a cadmium-zinc-telluride (CZT)-based photon-counting-detector CT (PCD-CT), comparing these super-high-resolution (SHR) images with conventional normal-resolution (NR) CT images. Twenty-five patients (median age 75 years, interquartile range 66-78 years; 18 men and 7 women) with 29 lung nodules overall (including two patients with 4 and 2 nodules, respectively) were enrolled to undergo PCD-CT. Three types of images were reconstructed: a 512 × 512 matrix with adaptive iterative dose reduction 3D (AIDR 3D) as the NR-AIDR3D image, a 1024 × 1024 matrix with AIDR 3D as the SHR-AIDR3D image, and a 1024 × 1024 matrix with deep-learning reconstruction (DLR) as the SHR-DLR image. For the qualitative analysis, two radiologists evaluated the matched reconstructed series twice (NR-AIDR3D vs. SHR-AIDR3D and SHR-AIDR3D vs. SHR-DLR) and scored the presence of imaging findings, such as spiculation, lobulation, appearance of ground-glass opacity or air bronchiologram, image quality, and diagnostic confidence, using a 5-point Likert scale. For the quantitative analysis, contrast-to-noise ratios (CNRs) of the three image types were compared. In the qualitative analysis, compared to NR-AIDR3D, SHR-AIDR3D yielded higher image quality and diagnostic confidence, except for image noise (all P < 0.01). In comparison with SHR-AIDR3D, SHR-DLR yielded higher image quality and diagnostic confidence (all P < 0.01). In the quantitative analysis, CNRs in the modified NR-AIDR3D and SHR-DLR groups were higher than those in the SHR-AIDR3D group (P = 0.003 and P < 0.001, respectively). In PCD-CT, SHR-DLR images provided the highest image quality and diagnostic confidence for lung tumor evaluation, followed by SHR-AIDR3D and NR-AIDR3D images. DLR demonstrated superior noise reduction compared to the other reconstruction methods.

Deep Guess acceleration for explainable image reconstruction in sparse-view CT.

Loli Piccolomini E, Evangelista D, Morotti E

PubMed · Jul 1 2025
Sparse-view computed tomography (CT) is an emerging protocol designed to reduce X-ray radiation dose in medical imaging. Reconstructions based on the traditional Filtered Back Projection algorithm suffer from severe artifacts due to sparse data. In contrast, Model-Based Iterative Reconstruction (MBIR) algorithms, though better at mitigating noise through regularization, are too computationally costly for clinical use. This paper introduces a novel technique, denoted as the Deep Guess acceleration scheme, that uses a trained neural network both to accelerate regularized MBIR and to enhance reconstruction accuracy. We integrate state-of-the-art deep learning tools to produce an informed starting guess for a proximal algorithm solving a non-convex model, thereby computing a (mathematically) interpretable solution image in a few iterations. Experimental results on real and synthetic CT images demonstrate the effectiveness of Deep Guess in (very) sparse tomographic protocols, where it outperforms its purely variational counterpart and many state-of-the-art data-driven approaches. We also consider a ground-truth-free implementation and test the robustness of the proposed framework to noise.
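
To make the warm-start idea concrete, the toy sketch below runs plain proximal-gradient (ISTA-style) iterations on an ℓ1-regularized least-squares surrogate, once from a zero initialization and once from a "deep guess"; here the guess is simply a perturbed ground truth standing in for a trained network's output, and the paper's actual non-convex model and CT forward operator are not reproduced.

```python
# Toy illustration of a network-provided warm start for proximal-gradient reconstruction.
import numpy as np

def soft_threshold(x: np.ndarray, tau: float) -> np.ndarray:
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ista(A: np.ndarray, b: np.ndarray, x0: np.ndarray, lam: float, n_iter: int) -> np.ndarray:
    """Proximal-gradient iterations for 0.5*||Ax - b||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1 / Lipschitz constant of the smooth part
    x = x0.copy()
    for _ in range(n_iter):
        x = soft_threshold(x - step * A.T @ (A @ x - b), step * lam)
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(60, 100))                    # toy underdetermined forward operator
x_true = np.zeros(100); x_true[rng.choice(100, 8, replace=False)] = 1.0
b = A @ x_true + 0.01 * rng.normal(size=60)

deep_guess = x_true + 0.1 * rng.normal(size=100)  # placeholder for the trained network's output
for name, x0 in [("cold start", np.zeros(100)), ("Deep Guess start", deep_guess)]:
    x = ista(A, b, x0, lam=0.05, n_iter=20)
    print(f"{name}: relative error = {np.linalg.norm(x - x_true) / np.linalg.norm(x_true):.3f}")
```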