
A Mixed-attention Network for Automated Interventricular Septum Segmentation in Bright-blood Myocardial T2* MRI Relaxometry in Thalassemia.

Wu X, Wang H, Chen Z, Sun S, Lian Z, Zhang X, Peng P, Feng Y

PubMed · May 30, 2025
This study develops a deep-learning method for automatic segmentation of the interventricular septum (IS) in MR images to measure myocardial T2* and estimate cardiac iron deposition in patients with thalassemia. This retrospective study used multiple-gradient-echo cardiac MR scans from 419 thalassemia patients to develop and evaluate the segmentation network. The network was trained on 1.5 T images from Center 1 and evaluated on unseen 3.0 T images from Center 1, all data from Center 2, and the CHMMOTv1 dataset. Model performance was assessed using five metrics, and T2* values were obtained by fitting the network output. Bland-Altman analysis, the coefficient of variation (CoV), and regression analysis were used to evaluate the consistency between the automatic and manual methods. MA-BBIsegNet achieved a Dice of 0.90 on the internal test set, 0.85 on the external test set, and 0.81 on the CHMMOTv1 dataset. Bland-Altman analysis showed mean differences of 0.08 ms (95% LoA: -2.79 to 2.63) for the internal set, 0.29 ms (95% LoA: -4.12 to 3.54) for the external set, and 0.19 ms (95% LoA: -3.50 to 3.88) for CHMMOTv1, with CoVs of 8.9%, 6.8%, and 9.3%, respectively. Regression analysis yielded r values of 0.98 for the internal and CHMMOTv1 datasets and 0.99 for the external dataset (p < 0.05). The IS segmentation network based on multiple-gradient-echo bright-blood images yielded T2* values in strong agreement with manual measurements, highlighting its potential for efficient, non-invasive monitoring of myocardial iron deposition in patients with thalassemia.
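T2* values in work like this are typically obtained by fitting a monoexponential decay S(TE) = S0 · exp(−TE / T2*) to the multi-echo signal within the segmented septum. A minimal log-linear sketch with synthetic echo times and noise-free signals (real pipelines use nonlinear fitting with noise-floor handling; all values below are illustrative):

```python
import math

def fit_t2star(echo_times_ms, signals):
    """Log-linear least-squares fit of S(TE) = S0 * exp(-TE / T2*).

    Returns (s0, t2star_ms). Assumes positive, noise-free signals;
    clinical fitting adds noise-floor and truncation corrections.
    """
    ys = [math.log(s) for s in signals]
    n = len(echo_times_ms)
    mx = sum(echo_times_ms) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(echo_times_ms, ys))
             / sum((x - mx) ** 2 for x in echo_times_ms))
    intercept = my - slope * mx
    return math.exp(intercept), -1.0 / slope

# Synthetic 8-echo acquisition with a true T2* of 20 ms
tes = [1.0 + 2.0 * i for i in range(8)]
sig = [100.0 * math.exp(-te / 20.0) for te in tes]
s0, t2 = fit_t2star(tes, sig)
```

On clean data the log-linear fit recovers the simulated S0 and T2* exactly, which makes it a convenient first check before moving to nonlinear solvers.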

Deep learning reconstruction improves computer-aided pulmonary nodule detection and measurement accuracy for ultra-low-dose chest CT.

Wang J, Zhu Z, Pan Z, Tan W, Han W, Zhou Z, Hu G, Ma Z, Xu Y, Ying Z, Sui X, Jin Z, Song L, Song W

PubMed · May 30, 2025
To compare the image quality, pulmonary nodule detectability, and measurement accuracy of deep learning reconstruction (DLR) and hybrid iterative reconstruction (HIR) for chest ultra-low-dose CT (ULDCT). Participants who underwent chest standard-dose CT (SDCT) followed by ULDCT from October 2020 to January 2022 were prospectively included. ULDCT images reconstructed with HIR and DLR were compared with SDCT images to evaluate image quality, nodule detection rate, and measurement accuracy using a commercially available deep learning-based nodule evaluation system. The Wilcoxon signed-rank test was used to evaluate the percentage errors of nodule size and nodule volume between HIR and DLR images. Eighty-four participants (54 ± 13 years; 26 men) were enrolled. The effective radiation doses of ULDCT and SDCT were 0.16 ± 0.02 mSv and 1.77 ± 0.67 mSv, respectively (P < 0.001). The mean ± standard deviation of lung tissue noise was 61.4 ± 3.0 HU for SDCT, and 61.5 ± 2.8 HU and 55.1 ± 3.4 HU for ULDCT reconstructed with the HIR-Strong setting (HIR-Str) and the DLR-Strong setting (DLR-Str), respectively (P < 0.001). A total of 535 nodules were detected. The nodule detection rates of ULDCT HIR-Str and ULDCT DLR-Str were 74.0% and 83.4%, respectively (P < 0.001). The absolute percentage error in nodule volume relative to SDCT was 19.5% for ULDCT HIR-Str versus 17.9% for ULDCT DLR-Str (P < 0.001). Compared with HIR, DLR reduced image noise, increased the nodule detection rate, and improved the measurement accuracy of nodule volume at chest ULDCT. Not applicable.
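The volume comparison above rests on absolute percentage error against the SDCT reference. A minimal sketch, with hypothetical single-nodule volumes chosen only to mirror the reported mean errors:

```python
def volume_abs_pct_error(vol_test, vol_ref):
    """Absolute percentage error of a measured nodule volume vs. a reference."""
    return abs(vol_test - vol_ref) / vol_ref * 100.0

# Hypothetical volumes (mm^3): SDCT reference vs. HIR and DLR reconstructions
ref_vol, hir_vol, dlr_vol = 120.0, 143.4, 141.5
err_hir = volume_abs_pct_error(hir_vol, ref_vol)  # ~19.5%
err_dlr = volume_abs_pct_error(dlr_vol, ref_vol)  # ~17.9%
```

In the study these paired per-nodule errors were then compared between HIR and DLR with the Wilcoxon signed-rank test.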

Deep learning-driven modality imputation and subregion segmentation to enhance high-grade glioma grading.

Yu J, Liu Q, Xu C, Zhou Q, Xu J, Zhu L, Chen C, Zhou Y, Xiao B, Zheng L, Zhou X, Zhang F, Ye Y, Mi H, Zhang D, Yang L, Wu Z, Wang J, Chen M, Zhou Z, Wang H, Wang VY, Wang E, Xu D

PubMed · May 30, 2025
This study aims to develop a deep learning framework that leverages modality imputation and subregion segmentation to improve grading accuracy in high-grade gliomas. A retrospective analysis was conducted using data from 1,251 patients in the BraTS2021 dataset as the main cohort and 181 clinical cases collected from a medical center between April 2013 and June 2018 (51 years ± 17; 104 males) as the external test set. We propose a PatchGAN-based modality imputation network with an Aggregated Residual Transformer (ART) module that combines Transformer self-attention and CNN feature extraction via residual links, paired with a U-Net variant for segmentation. Generative accuracy was assessed with PSNR and SSIM for modality conversions, while segmentation performance was measured with DSC and HD95 across the necrotic core (NCR), edema (ED), and enhancing tumor (ET) regions. Senior radiologists conducted a comprehensive Likert-based assessment, with diagnostic accuracy evaluated by AUC. Statistical analysis was performed using the Wilcoxon signed-rank test and the DeLong test. The best source-target modality pairs for imputation were T1 to T1ce and T1ce to T2 (p < 0.001). In subregion segmentation, the overall DSC was 0.878 and HD95 was 19.491, with the ET region showing the highest segmentation accuracy (DSC: 0.877, HD95: 12.149). Clinical validation revealed an improvement in grading accuracy by the senior radiologist, with the AUC increasing from 0.718 to 0.913 (p < 0.001) when the combined imputation and segmentation models were used. The proposed deep learning framework improves high-grade glioma grading through modality imputation and segmentation, aiding the senior radiologist and offering potential to advance clinical decision-making.
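Segmentation quality here is reported as DSC (Dice similarity coefficient). A minimal sketch over flat binary masks — toy data only; the study evaluates 3D label volumes per subregion:

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|) for binary masks."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    return 2.0 * inter / (sum(mask_a) + sum(mask_b))

# Toy masks: the prediction overlaps 2 of the 3 reference voxels
pred = [1, 1, 1, 0]
ref = [1, 1, 0, 0]
score = dice(pred, ref)  # 2*2 / (3+2) = 0.8
```

A DSC of 1.0 means perfect overlap; the abstract's overall DSC of 0.878 sits on this same 0-to-1 scale.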

Imaging-based machine learning to evaluate the severity of ischemic stroke in the middle cerebral artery territory.

Xie G, Gao J, Liu J, Zhou X, Zhao Z, Tang W, Zhang Y, Zhang L, Li K

PubMed · May 30, 2025
This study aims to develop an imaging-based machine learning model for evaluating the severity of ischemic stroke in the middle cerebral artery (MCA) territory. This retrospective study included 173 patients diagnosed with acute ischemic stroke (AIS) in the MCA territory from two centers, with 114 in the training set and 59 in the test set. In the training set, the Spearman correlation coefficient and multiple linear regression were used to analyze the correlation between patients' pretreatment CT imaging features and the National Institutes of Health Stroke Scale (NIHSS) score. Subsequently, an optimal machine learning algorithm was determined by comparing seven different algorithms. This algorithm was then used to construct an imaging-based prediction model for stroke severity (severe and non-severe). Finally, the model was validated in the test set. Correlation analysis found that CT imaging features such as infarction side, basal ganglia involvement, the dense MCA sign, and infarction volume were independently associated with NIHSS score (P < 0.05). The logistic regression algorithm was determined to be the optimal method for constructing the prediction model for stroke severity. The areas under the receiver operating characteristic curve of the model in the training and test sets were 0.815 (95% CI: 0.736-0.893) and 0.780 (95% CI: 0.646-0.914), respectively, with accuracies of 0.772 and 0.814. An imaging-based machine learning model can effectively evaluate the severity (severe or non-severe) of ischemic stroke in the MCA territory. Not applicable.
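The chosen severity model is a logistic regression over the four imaging features named above. A conceptual sketch with hypothetical weights — the paper's fitted coefficients are not given in the abstract, so everything numeric here is illustrative:

```python
import math

def p_severe(features, weights, bias):
    """Logistic model: p(severe) = sigmoid(w · x + b)."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical feature vector: [infarction side (0/1), basal ganglia involved (0/1),
# dense MCA sign (0/1), infarction volume (standardized)]
weights = [0.4, 0.8, 0.9, 1.2]  # illustrative only, not the fitted model
bias = -1.5
p_small = p_severe([0, 0, 0, -0.5], weights, bias)
p_large = p_severe([1, 1, 1, 1.5], weights, bias)
```

Thresholding this probability (e.g., at 0.5) yields the severe/non-severe call that the ROC analysis sweeps across.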

Multi-spatial-attention U-Net: a novel framework for automated gallbladder segmentation on CT images.

Lou H, Wen X, Lin F, Peng Z, Wang Q, Ren R, Xu J, Fan J, Song H, Ji X, Wang H, Sun X, Dong Y

PubMed · May 30, 2025
This study aimed to construct a novel model, the Multi-Spatial Attention U-Net (MSAU-Net), by incorporating our proposed Multi-Spatial Attention (MSA) block into the U-Net for automated segmentation of the gallbladder on CT images. The gallbladder dataset consists of CT images from 152 retrospectively collected liver cancer patients and corresponding ground truth delineated by experienced physicians. Our proposed MSAU-Net model was implemented in two versions: V1 (with one Multi-Scale Feature Extraction and Fusion (MSFEF) module in each MSA block) and V2 (with two parallel MSFEF modules in each MSA block). The performance of V1 and V2 was evaluated and compared with four other U-Net derivatives or state-of-the-art models, quantitatively using seven commonly used metrics and qualitatively by comparison against experienced physicians' assessments. The MSAU-Net V1 and V2 models both outperformed the comparative models across most quantitative metrics, with better segmentation accuracy and boundary delineation. The optimal number of MSA blocks was three for V1 and two for V2. Qualitative evaluations confirmed that they produced results closer to physicians' annotations. External validation revealed that MSAU-Net V2 exhibited better generalization capability. MSAU-Net V1 and V2 both exhibited outstanding performance in gallbladder segmentation, demonstrating strong potential for clinical application. The MSA block enhances spatial information capture, improving the model's ability to segment small and complex structures with greater precision. These advantages position MSAU-Net V1 and V2 as valuable tools for broader clinical adoption.
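At its core, spatial attention assigns a weight to each spatial position and reweights the feature map by those weights. A deliberately tiny, framework-free sketch of that idea — the actual MSA block combines multi-scale convolutional feature extraction and is far richer than this:

```python
import math

def spatial_softmax_gate(feature_map):
    """Toy spatial attention: softmax over positions, then elementwise reweighting.

    feature_map is a flat list of activations, one per spatial position.
    """
    exps = [math.exp(v) for v in feature_map]
    total = sum(exps)
    weights = [e / total for e in exps]
    gated = [v * w for v, w in zip(feature_map, weights)]
    return gated, weights

gated, w = spatial_softmax_gate([0.0, 1.0, 2.0])
```

Positions with stronger activations receive larger weights, which is the mechanism that lets attention emphasize small structures such as the gallbladder boundary.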

Radiomics-based differentiation of upper urinary tract urothelial and renal cell carcinoma in preoperative computed tomography datasets.

Marcon J, Weinhold P, Rzany M, Fabritius MP, Winkelmann M, Buchner A, Eismann L, Jokisch JF, Casuscelli J, Schulz GB, Knösel T, Ingrisch M, Ricke J, Stief CG, Rodler S, Kazmierczak PM

PubMed · May 30, 2025
To investigate a non-invasive radiomics-based machine learning algorithm to differentiate upper urinary tract urothelial carcinoma (UTUC) from renal cell carcinoma (RCC) prior to surgical intervention. Preoperative computed tomography venous-phase datasets from patients who underwent procedures for histopathologically confirmed UTUC or RCC were retrospectively analyzed. Tumor segmentation was performed manually, and radiomic features were extracted according to the International Image Biomarker Standardization Initiative. Features were normalized using z-scores, and a predictive model was developed using the least absolute shrinkage and selection operator (LASSO). The dataset was split into a training cohort (70%) and a test cohort (30%). A total of 236 patients [30.5% female, median age 70.5 years (IQR: 59.5-77), median tumor size 5.8 cm (range: 4.1-8.2 cm)] were included. For differentiating UTUC from RCC, the model achieved a sensitivity of 88.4% and specificity of 81% (AUC: 0.93, radiomics score cutoff: 0.467) in the training cohort. In the validation cohort, the sensitivity was 80.6% and specificity 80% (AUC: 0.87, radiomics score cutoff: 0.601). Subgroup analysis of the validation cohort demonstrated robust performance, particularly in distinguishing clear cell RCC from high-grade UTUC (sensitivity: 84%, specificity: 73.1%, AUC: 0.84) and high-grade from low-grade UTUC (sensitivity: 57.7%, specificity: 88.9%, AUC: 0.68). Limitations include the need for independent validation in future randomized controlled trials (RCTs). Machine learning-based radiomics models can reliably differentiate between RCC and UTUC in preoperative CT imaging. With a suggested performance benefit compared to conventional imaging, this technology might be added to the current preoperative diagnostic workflow. Local ethics committee no. 20-179.
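The z-score normalization step applied to the radiomic features before LASSO is straightforward to sketch. This version uses the population standard deviation; some toolkits divide by the sample SD instead, so treat the convention as an assumption:

```python
def zscore(values):
    """Standardize a feature vector to zero mean and unit (population) SD."""
    n = len(values)
    mean = sum(values) / n
    sd = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    return [(v - mean) / sd for v in values]

# Toy radiomic feature values across four patients
z = zscore([2.0, 4.0, 6.0, 8.0])
```

After standardization every feature contributes on the same scale, which matters because LASSO's penalty is applied uniformly across coefficients.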

A conditional point cloud diffusion model for deformable liver motion tracking via a single arbitrarily-angled x-ray projection.

Xie J, Shao HC, Li Y, Yan S, Shen C, Wang J, Zhang Y

PubMed · May 30, 2025
Deformable liver motion tracking using a single X-ray projection enables real-time motion monitoring and treatment intervention. We introduce a conditional point cloud diffusion model-based framework for accurate and robust liver motion tracking from arbitrarily angled single X-ray projections. We propose a conditional point cloud diffusion model for liver motion tracking (PCD-Liver), which estimates volumetric liver motion by solving deformation vector fields (DVFs) of a prior liver surface point cloud based on a single X-ray image. It is a patient-specific model with two main components: a rigid alignment model that estimates the liver's overall shifts, and a conditional point cloud diffusion model that further corrects for deformation of the liver surface. Conditioned on motion-encoded features extracted from a single X-ray projection by a geometry-informed feature pooling layer, the diffusion model iteratively solves detailed liver surface DVFs in a projection angle-agnostic fashion. The liver surface motion solved by PCD-Liver is subsequently fed as the boundary condition into a UNet-based biomechanical model to infer the liver's internal motion and localize liver tumors. A dataset of 10 liver cancer patients was used for evaluation. We used the root mean square error (RMSE) and 95th-percentile Hausdorff distance (HD95) metrics to examine liver point cloud motion estimation accuracy, and the center-of-mass error (COME) to quantify liver tumor localization error. The mean (±s.d.) RMSE, HD95, and COME of the prior liver or tumor before motion estimation were 8.82 mm (±3.58 mm), 10.84 mm (±4.55 mm), and 9.72 mm (±4.34 mm), respectively. After PCD-Liver's motion estimation, the corresponding values were 3.63 mm (±1.88 mm), 4.29 mm (±1.75 mm), and 3.46 mm (±2.15 mm). Under highly noisy conditions, PCD-Liver maintained stable performance. This study presents an accurate and robust framework for liver deformable motion estimation and tumor localization for image-guided radiotherapy.
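Two of the reported metrics — RMSE over corresponding surface points and the tumor center-of-mass error (COME) — can be sketched directly on toy point clouds (HD95 requires a full pairwise-distance percentile computation and is omitted here):

```python
import math

def rmse(pts_a, pts_b):
    """Root mean square error over corresponding 3D points."""
    sq = [sum((a - b) ** 2 for a, b in zip(pa, pb))
          for pa, pb in zip(pts_a, pts_b)]
    return math.sqrt(sum(sq) / len(sq))

def center_of_mass(pts):
    n = len(pts)
    return tuple(sum(c) / n for c in zip(*pts))

def come(pts_a, pts_b):
    """Center-of-mass error between two 3D point sets."""
    return math.dist(center_of_mass(pts_a), center_of_mass(pts_b))

# Toy liver surface: a uniform 3 mm shift along x
ref = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
shifted = [(x + 3.0, y, z) for x, y, z in ref]
```

For a pure rigid shift both metrics equal the shift magnitude; real deformations make RMSE and COME diverge, which is why the study reports both.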

Deep learning based motion correction in ultrasound microvessel imaging approach improves thyroid nodule classification.

Saini M, Larson NB, Fatemi M, Alizad A

PubMed · May 30, 2025
To address inter-frame motion artifacts in ultrasound quantitative high-definition microvasculature imaging (qHDMI), we introduced a novel deep learning-based motion correction technique. This approach enables the derivation of more accurate quantitative biomarkers from motion-corrected HDMI images, improving the classification of thyroid nodules. Inter-frame motion, often caused by carotid artery pulsation near the thyroid, can degrade image quality and compromise biomarker reliability, potentially leading to misdiagnosis. Our proposed technique compensates for these motion-induced artifacts, preserving the fine vascular structures critical for accurate biomarker extraction. In this study, we used the motion-corrected images obtained through this framework to derive quantitative biomarkers and evaluated their effectiveness in thyroid nodule classification. We segregated the dataset into low- and high-motion cases based on inter-frame correlation values and performed thyroid nodule classification for the high-motion cases and for the full dataset. A comprehensive analysis of the biomarker distributions obtained from the motion-corrected images demonstrated significant differences between benign and malignant nodule biomarker characteristics compared with the original motion-containing images. Specifically, the bifurcation angle values derived from qHDMI became more consistent with the expected trend after motion correction. The classification results demonstrated that sensitivity remained unchanged for the low-motion group, while it improved by 9.2% for the high-motion group. These findings highlight that motion correction helps derive more accurate biomarkers, which improves overall classification performance.
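The low-/high-motion split above is driven by inter-frame correlation. A minimal Pearson-correlation sketch on flattened frames — the 0.9 threshold below is purely illustrative, not the study's value:

```python
def pearson(x, y):
    """Pearson correlation between two equal-length signals (flattened frames)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

frame1 = [1.0, 2.0, 3.0, 4.0]
frame2 = [1.1, 2.0, 2.9, 4.1]  # nearly identical frame: low motion
frame3 = [4.0, 1.0, 3.0, 2.0]  # decorrelated frame: high motion
is_high_motion = pearson(frame1, frame3) < 0.9  # illustrative threshold
```

Consecutive frames with correlation near 1 indicate little tissue displacement; a drop in correlation flags the pulsation-induced motion that the correction network then compensates.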

Machine Learning Models of Voxel-Level [<sup>18</sup>F] Fluorodeoxyglucose Positron Emission Tomography Data Excel at Predicting Progressive Supranuclear Palsy Pathology.

Braun AS, Satoh R, Pham NTT, Singh-Reilly N, Ali F, Dickson DW, Lowe VJ, Whitwell JL, Josephs KA

PubMed · May 30, 2025
To determine whether a machine learning model of voxel-level [<sup>18</sup>F]fluorodeoxyglucose positron emission tomography (PET) data could predict progressive supranuclear palsy (PSP) pathology, as well as outperform currently available biomarkers. One hundred and thirty-seven autopsied patients with PSP (n = 42) and other neurodegenerative diseases (n = 95) who underwent antemortem [<sup>18</sup>F]fluorodeoxyglucose PET and 3.0 Tesla magnetic resonance imaging (MRI) scans were analyzed. A linear support vector machine was applied to differentiate the pathological groups, with sensitivity analyses performed to assess the influence of voxel size and region removal. A radial basis function model was also prepared to create a secondary model using the most important voxels. The models were optimized on the main dataset (n = 104), and their performance was compared with the magnetic resonance parkinsonism index measured on MRI in the independent test dataset (n = 33). The model had the highest accuracy (0.91) and F-score (0.86) when the voxel size was 6 mm. In this optimized model, important voxels for differentiating the groups were observed in the thalamus, midbrain, and cerebellar dentate. The secondary models found the combination of thalamus and dentate to have the highest accuracy (0.89) and F-score (0.81). The optimized secondary model showed the highest accuracy (0.91) and F-score (0.86) in the test dataset and outperformed the magnetic resonance parkinsonism index (0.81 and 0.70, respectively). The results suggest that glucose hypometabolism in the thalamus and cerebellar dentate has the highest potential for predicting PSP pathology. Our optimized machine learning model outperformed the best currently available biomarker for predicting PSP pathology. ANN NEUROL 2025.
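Model quality above is summarized by accuracy and F-score. A minimal F-score sketch from confusion-matrix counts (toy counts, not the paper's data):

```python
def f_score(tp, fp, fn):
    """F1 score from true-positive, false-positive, and false-negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Toy counts: 9 PSP cases correctly flagged, 1 false alarm, 2 missed
f = f_score(tp=9, fp=1, fn=2)  # precision 0.90, recall ~0.82
```

Because F-score is the harmonic mean of precision and recall, it penalizes a classifier that trades one heavily for the other, which matters in a cohort where PSP cases (n = 42) are the minority class.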

HVAngleEst: A Dataset for End-to-end Automated Hallux Valgus Angle Measurement from X-Ray Images.

Wang Q, Ji D, Wang J, Liu L, Yang X, Zhang Y, Liang J, Liu P, Zhao H

PubMed · May 30, 2025
Accurate measurement of the hallux valgus angle (HVA) and intermetatarsal angle (IMA) is essential for diagnosing hallux valgus and determining appropriate treatment strategies. Traditional manual measurement methods, while standardized, are time-consuming, labor-intensive, and subject to evaluator bias. Recent advancements in deep learning have been applied to hallux valgus angle estimation, but the development of effective algorithms requires large, well-annotated datasets. Existing X-ray datasets are typically limited to cropped foot-region images, and only one dataset, containing very few samples, is publicly available. To address these challenges, we introduce HVAngleEst, the first large-scale, open-access dataset specifically designed for hallux valgus angle estimation. HVAngleEst comprises 1,382 X-ray images from 1,150 patients and includes comprehensive annotations, such as foot localization, hallux valgus angles, and line segments for each phalanx. This dataset enables fully automated, end-to-end hallux valgus angle estimation, reducing manual labor and eliminating evaluator bias.
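Angles like the HVA and IMA are defined between bone-axis lines, i.e., between two 2D direction vectors derived from the annotated line segments. A minimal sketch with toy vectors (real measurement fits the axis lines from the dataset's phalanx segments):

```python
import math

def angle_between_deg(u, v):
    """Angle in degrees between two 2D direction vectors."""
    dot = u[0] * v[0] + u[1] * v[1]
    cos_theta = dot / (math.hypot(*u) * math.hypot(*v))
    # Clamp to guard against floating-point drift just outside [-1, 1]
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_theta))))

# Toy axes: a vertical metatarsal axis vs. a 45-degree phalanx axis
hva = angle_between_deg((0.0, 1.0), (1.0, 1.0))  # 45 degrees
```

With the foot localization and per-phalanx segments that HVAngleEst annotates, this vector-angle step is the final, trivial stage of an otherwise end-to-end learned pipeline.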