
Deep learning-driven modality imputation and subregion segmentation to enhance high-grade glioma grading.

Yu J, Liu Q, Xu C, Zhou Q, Xu J, Zhu L, Chen C, Zhou Y, Xiao B, Zheng L, Zhou X, Zhang F, Ye Y, Mi H, Zhang D, Yang L, Wu Z, Wang J, Chen M, Zhou Z, Wang H, Wang VY, Wang E, Xu D

PubMed · May 30, 2025
This study aims to develop a deep learning framework that leverages modality imputation and subregion segmentation to improve grading accuracy in high-grade gliomas. A retrospective analysis was conducted using data from 1,251 patients in the BraTS2021 dataset as the main cohort and 181 clinical cases collected at a medical center between April 2013 and June 2018 (age 51 years ± 17; 104 males) as the external test set. We propose a PatchGAN-based modality imputation network with an Aggregated Residual Transformer (ART) module, which combines Transformer self-attention and CNN feature extraction via residual links, paired with a U-Net variant for segmentation. Generative accuracy was evaluated with PSNR and SSIM for modality conversions, while segmentation performance was measured with DSC and HD95 across the necrotic core (NCR), edema (ED), and enhancing tumor (ET) regions. Senior radiologists conducted a comprehensive Likert-based assessment, with diagnostic accuracy evaluated by AUC. Statistical analysis was performed using the Wilcoxon signed-rank test and the DeLong test. The best source-target modality pairs for imputation were T1 to T1ce and T1ce to T2 (P < 0.001). In subregion segmentation, the overall DSC was 0.878 and HD95 was 19.491, with the ET region showing the highest segmentation accuracy (DSC: 0.877, HD95: 12.149). Clinical validation showed improved grading accuracy for the senior radiologist, with AUC increasing from 0.718 to 0.913 (P < 0.001) when the combined imputation and segmentation models were used. The proposed deep learning framework improves high-grade glioma grading through modality imputation and subregion segmentation, aiding the senior radiologist and offering potential to advance clinical decision-making.
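
For reference, the PSNR and SSIM metrics used above for generative accuracy can be computed with scikit-image. A minimal sketch follows, with illustrative stand-in arrays rather than the study's data:

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Illustrative stand-ins: a ground-truth T1ce slice and a synthesized one,
# both normalized to [0, 1]; a real evaluation would loop over the test set.
rng = np.random.default_rng(0)
t1ce_true = rng.random((240, 240)).astype(np.float32)
t1ce_synth = np.clip(t1ce_true + 0.05 * rng.standard_normal((240, 240)), 0.0, 1.0)

psnr = peak_signal_noise_ratio(t1ce_true, t1ce_synth, data_range=1.0)
ssim = structural_similarity(t1ce_true, t1ce_synth, data_range=1.0)
print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.4f}")
```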

Imaging-based machine learning to evaluate the severity of ischemic stroke in the middle cerebral artery territory.

Xie G, Gao J, Liu J, Zhou X, Zhao Z, Tang W, Zhang Y, Zhang L, Li K

PubMed · May 30, 2025
This study aims to develop an imaging-based machine learning model for evaluating the severity of ischemic stroke in the middle cerebral artery (MCA) territory. This retrospective study included 173 patients diagnosed with acute ischemic stroke (AIS) in the MCA territory at two centers, with 114 in the training set and 59 in the test set. In the training set, the Spearman correlation coefficient and multiple linear regression were used to analyze the correlation between patients' pretreatment CT imaging features and the National Institutes of Health Stroke Scale (NIHSS) score. The optimal machine learning algorithm was then determined by comparing seven different algorithms and used to construct an imaging-based prediction model for stroke severity (severe vs. non-severe). Finally, the model was validated in the test set. Correlation analysis found the CT imaging features of infarction side, basal ganglia involvement, dense MCA sign, and infarction volume to be independently associated with NIHSS score (P < 0.05). Logistic regression was determined to be the optimal algorithm for constructing the prediction model. The areas under the receiver operating characteristic curve of the model in the training and test sets were 0.815 (95% CI: 0.736-0.893) and 0.780 (95% CI: 0.646-0.914), respectively, with accuracies of 0.772 and 0.814. An imaging-based machine learning model can effectively evaluate the severity (severe or non-severe) of ischemic stroke in the MCA territory. Trial registration: not applicable.
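
As orientation for the modeling step, a sketch of a logistic-regression severity classifier evaluated by AUC and accuracy follows; the feature matrix here is random stand-in data for the four reported CT features, not the study's cohort:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, accuracy_score

# Random stand-ins for the four CT features the authors report (infarction
# side, basal ganglia involvement, dense MCA sign, infarct volume).
rng = np.random.default_rng(42)
X_train, X_test = rng.random((114, 4)), rng.random((59, 4))
y_train, y_test = rng.integers(0, 2, 114), rng.integers(0, 2, 59)

model = LogisticRegression().fit(X_train, y_train)
proba = model.predict_proba(X_test)[:, 1]  # probability of "severe"
print("AUC:", roc_auc_score(y_test, proba))
print("Accuracy:", accuracy_score(y_test, model.predict(X_test)))
```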

Multi-spatial-attention U-Net: a novel framework for automated gallbladder segmentation on CT images.

Lou H, Wen X, Lin F, Peng Z, Wang Q, Ren R, Xu J, Fan J, Song H, Ji X, Wang H, Sun X, Dong Y

PubMed · May 30, 2025
This study aimed to construct a novel model, Multi-Spatial Attention U-Net (MSAU-Net), by incorporating our proposed Multi-Spatial Attention (MSA) block into the U-Net for automated segmentation of the gallbladder on CT images. The gallbladder dataset consists of CT images from 152 retrospectively collected liver cancer patients, with corresponding ground truth delineated by experienced physicians. The proposed MSAU-Net was implemented in two versions: V1, with one Multi-Scale Feature Extraction and Fusion (MSFEF) module in each MSA block, and V2, with two parallel MSFEF modules in each MSA block. The performance of V1 and V2 was evaluated against four other U-Net derivatives or state-of-the-art models, quantitatively using seven commonly used metrics and qualitatively by comparison against experienced physicians' assessments. Both MSAU-Net V1 and V2 outperformed the comparative models across most quantitative metrics, with better segmentation accuracy and boundary delineation. The optimal number of MSA blocks was three for V1 and two for V2. Qualitative evaluations confirmed that both versions produced results closer to the physicians' annotations. External validation revealed that MSAU-Net V2 exhibited better generalization capability. MSAU-Net V1 and V2 both exhibited outstanding performance in gallbladder segmentation, demonstrating strong potential for clinical application. The MSA block enhances spatial information capture, improving the model's ability to segment small and complex structures with greater precision. These advantages position MSAU-Net V1 and V2 as valuable tools for broader clinical adoption.
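
The abstract does not detail the MSA block's internals, so the following PyTorch sketch shows only a generic spatial-attention gate of the kind commonly added to U-Net variants; the module structure and names are assumptions, not the authors' design:

```python
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Generic spatial attention gate: weights each pixel by a learned mask."""
    def __init__(self, channels: int):
        super().__init__()
        self.mask = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),  # per-pixel weight in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.mask(x)  # reweight features spatially

x = torch.randn(1, 64, 128, 128)
print(SpatialAttention(64)(x).shape)  # torch.Size([1, 64, 128, 128])
```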

Radiomics-based differentiation of upper urinary tract urothelial and renal cell carcinoma in preoperative computed tomography datasets.

Marcon J, Weinhold P, Rzany M, Fabritius MP, Winkelmann M, Buchner A, Eismann L, Jokisch JF, Casuscelli J, Schulz GB, Knösel T, Ingrisch M, Ricke J, Stief CG, Rodler S, Kazmierczak PM

PubMed · May 30, 2025
To investigate a non-invasive radiomics-based machine learning algorithm for differentiating upper urinary tract urothelial carcinoma (UTUC) from renal cell carcinoma (RCC) prior to surgical intervention. Preoperative venous-phase computed tomography datasets from patients who underwent procedures for histopathologically confirmed UTUC or RCC were retrospectively analyzed. Tumor segmentation was performed manually, and radiomic features were extracted according to the Image Biomarker Standardization Initiative (IBSI). Features were normalized using z-scores, and a predictive model was developed using the least absolute shrinkage and selection operator (LASSO). The dataset was split into a training cohort (70%) and a test cohort (30%). A total of 236 patients [30.5% female, median age 70.5 years (IQR: 59.5-77), median tumor size 5.8 cm (range: 4.1-8.2 cm)] were included. For differentiating UTUC from RCC, the model achieved a sensitivity of 88.4% and a specificity of 81% (AUC: 0.93, radiomics score cutoff: 0.467) in the training cohort. In the validation cohort, sensitivity was 80.6% and specificity 80% (AUC: 0.87, radiomics score cutoff: 0.601). Subgroup analysis of the validation cohort demonstrated robust performance, particularly in distinguishing clear cell RCC from high-grade UTUC (sensitivity: 84%, specificity: 73.1%, AUC: 0.84) and high-grade from low-grade UTUC (sensitivity: 57.7%, specificity: 88.9%, AUC: 0.68). Limitations include the need for independent validation in future randomized controlled trials (RCTs). Machine learning-based radiomics models can reliably differentiate between RCC and UTUC on preoperative CT imaging. With a suggested performance benefit over conventional imaging, this technology might be added to the current preoperative diagnostic workflow. Local ethics committee no. 20-179.
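
The normalization and feature-selection steps described above follow a common radiomics pattern; below is a minimal scikit-learn sketch with random stand-in features, using an L1-penalized logistic regression as the classification analogue of LASSO:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression  # L1 penalty => LASSO-style selection
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Illustrative stand-ins: 236 patients x 100 radiomic features, binary UTUC/RCC label.
rng = np.random.default_rng(0)
X = rng.standard_normal((236, 100))
y = rng.integers(0, 2, 236)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
scaler = StandardScaler().fit(X_tr)  # z-score normalization, fit on training only
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
model.fit(scaler.transform(X_tr), y_tr)

selected = np.flatnonzero(model.coef_)  # features surviving the L1 penalty
score = model.decision_function(scaler.transform(X_te))  # a "radiomics score"
print(len(selected), "features selected; test AUC:", roc_auc_score(y_te, score))
```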

A conditional point cloud diffusion model for deformable liver motion tracking via a single arbitrarily-angled x-ray projection.

Xie J, Shao HC, Li Y, Yan S, Shen C, Wang J, Zhang Y

PubMed · May 30, 2025
Deformable liver motion tracking using a single X-ray projection enables real-time motion monitoring and treatment intervention. We introduce a conditional point cloud diffusion model-based framework for accurate and robust liver motion tracking from arbitrarily angled single X-ray projections. The proposed model (PCD-Liver) estimates volumetric liver motion by solving the deformation vector fields (DVFs) of a prior liver surface point cloud from a single X-ray image. It is a patient-specific model with two main components: a rigid alignment model that estimates the liver's overall shifts, and a conditional point cloud diffusion model that further corrects for deformation of the liver surface. Conditioned on motion-encoded features extracted from a single X-ray projection by a geometry-informed feature pooling layer, the diffusion model iteratively solves detailed liver surface DVFs in a projection angle-agnostic fashion. The liver surface motion solved by PCD-Liver is then fed as the boundary condition into a U-Net-based biomechanical model that infers the liver's internal motion to localize liver tumors. A dataset of 10 liver cancer patients was used for evaluation. We used root mean square error (RMSE) and 95th-percentile Hausdorff distance (HD95) to examine the accuracy of liver point cloud motion estimation, and the center-of-mass error (COME) to quantify the liver tumor localization error. The mean (±s.d.) RMSE, HD95, and COME of the prior liver or tumor before motion estimation were 8.82 mm (±3.58 mm), 10.84 mm (±4.55 mm), and 9.72 mm (±4.34 mm), respectively; after PCD-Liver's motion estimation, the corresponding values were 3.63 mm (±1.88 mm), 4.29 mm (±1.75 mm), and 3.46 mm (±2.15 mm). PCD-Liver maintained stable performance under highly noisy conditions. This study presents an accurate and robust framework for deformable liver motion estimation and tumor localization in image-guided radiotherapy.
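
For the point cloud accuracy metrics, RMSE and HD95 can be computed as in the following minimal sketch, which uses synthetic point clouds, assumes point correspondence for RMSE, and uses nearest-neighbor distances for HD95:

```python
import numpy as np
from scipy.spatial import cKDTree

def hd95(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric 95th-percentile Hausdorff distance between (N,3)/(M,3) point clouds."""
    d_ab, _ = cKDTree(b).query(a)  # nearest-neighbor distance from each a-point to b
    d_ba, _ = cKDTree(a).query(b)
    return max(np.percentile(d_ab, 95), np.percentile(d_ba, 95))

rng = np.random.default_rng(1)
prior = rng.random((2000, 3)) * 100            # illustrative prior liver surface (mm)
deformed = prior + rng.normal(0, 2, prior.shape)  # illustrative deformed surface

rmse = np.sqrt(np.mean(np.sum((prior - deformed) ** 2, axis=1)))
print(f"RMSE: {rmse:.2f} mm, HD95: {hd95(prior, deformed):.2f} mm")
```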

Deep learning based motion correction in ultrasound microvessel imaging approach improves thyroid nodule classification.

Saini M, Larson NB, Fatemi M, Alizad A

PubMed · May 30, 2025
To address inter-frame motion artifacts in ultrasound quantitative high-definition microvasculature imaging (qHDMI), we introduce a novel deep learning-based motion correction technique. This approach enables more accurate quantitative biomarkers to be derived from motion-corrected HDMI images, improving thyroid nodule classification. Inter-frame motion, often caused by carotid artery pulsation near the thyroid, can degrade image quality and compromise biomarker reliability, potentially leading to misdiagnosis. The proposed technique compensates for these motion-induced artifacts, preserving the fine vascular structures critical for accurate biomarker extraction. In this study, we used the motion-corrected images obtained through this framework to derive quantitative biomarkers and evaluated their effectiveness in thyroid nodule classification. Based on inter-frame correlation values, we segregated the dataset into low- and high-motion cases and performed thyroid nodule classification on the high-motion cases and on the full dataset. Analysis of the biomarker distributions obtained from the motion-corrected images showed significant differences between benign and malignant nodule characteristics compared with the original motion-containing images. In particular, the bifurcation angle values derived from qHDMI became more consistent with the expected trend after motion correction. Classification sensitivity remained unchanged for the low-motion group but improved by 9.2% for the high-motion group. These findings highlight that motion correction yields more accurate biomarkers, improving overall classification performance.
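
The low/high-motion split described above keys on inter-frame correlation; a minimal sketch of that criterion follows (the 0.9 threshold and the frame stack are illustrative, not the study's values):

```python
import numpy as np

def interframe_correlation(frames: np.ndarray) -> np.ndarray:
    """Pearson correlation between each consecutive pair of frames in a (T, H, W) stack."""
    flat = frames.reshape(frames.shape[0], -1)
    return np.array([np.corrcoef(flat[i], flat[i + 1])[0, 1]
                     for i in range(len(flat) - 1)])

rng = np.random.default_rng(0)
base = rng.random((64, 64))
# Illustrative frame stack: additive noise mimics motion-induced decorrelation.
frames = np.stack([base + 0.1 * rng.random((64, 64)) for _ in range(10)])

corr = interframe_correlation(frames)
label = "high-motion" if corr.mean() < 0.9 else "low-motion"  # illustrative threshold
print(f"mean inter-frame correlation: {corr.mean():.3f} -> {label} case")
```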

Machine Learning Models of Voxel-Level [<sup>18</sup>F] Fluorodeoxyglucose Positron Emission Tomography Data Excel at Predicting Progressive Supranuclear Palsy Pathology.

Braun AS, Satoh R, Pham NTT, Singh-Reilly N, Ali F, Dickson DW, Lowe VJ, Whitwell JL, Josephs KA

PubMed · May 30, 2025
To determine whether a machine learning model of voxel-level [<sup>18</sup>F]fluorodeoxyglucose positron emission tomography (PET) data could predict progressive supranuclear palsy (PSP) pathology and outperform currently available biomarkers. One hundred thirty-seven autopsied patients with PSP (n = 42) and other neurodegenerative diseases (n = 95) who underwent antemortem [<sup>18</sup>F]fluorodeoxyglucose PET and 3.0 Tesla magnetic resonance imaging (MRI) were analyzed. A linear support vector machine was applied to differentiate the pathological groups, with sensitivity analyses performed to assess the influence of voxel size and region removal. A radial basis function model was also trained on the most important voxels to create a secondary model. The models were optimized on the main dataset (n = 104), and their performance was compared with the magnetic resonance parkinsonism index measured on MRI in an independent test dataset (n = 33). The model had the highest accuracy (0.91) and F-score (0.86) at a voxel size of 6 mm. In this optimized model, the voxels most important for differentiating the groups were in the thalamus, midbrain, and cerebellar dentate. Among the secondary models, the combination of thalamus and dentate achieved the highest accuracy (0.89) and F-score (0.81). The optimized secondary model showed the highest accuracy (0.91) and F-score (0.86) in the test dataset and outperformed the magnetic resonance parkinsonism index (0.81 and 0.70, respectively). These results suggest that glucose hypometabolism in the thalamus and cerebellar dentate has the highest potential for predicting PSP pathology. Our optimized machine learning model outperformed the best currently available biomarker for predicting PSP pathology.
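
A linear SVM of this kind exposes voxel importance directly through its weight vector; a minimal scikit-learn sketch with random stand-in PET features follows:

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score, f1_score

# Illustrative stand-ins: flattened voxel-level FDG-PET features;
# label 1 = PSP pathology, 0 = other neurodegenerative disease.
rng = np.random.default_rng(0)
X_train, y_train = rng.standard_normal((104, 500)), rng.integers(0, 2, 104)
X_test, y_test = rng.standard_normal((33, 500)), rng.integers(0, 2, 33)

clf = LinearSVC(C=1.0, max_iter=10000).fit(X_train, y_train)
pred = clf.predict(X_test)
print("accuracy:", accuracy_score(y_test, pred), "F-score:", f1_score(y_test, pred))

# For a linear SVM, voxel importance is read off the weight vector.
top_voxels = np.argsort(np.abs(clf.coef_[0]))[::-1][:10]
print("most influential voxel indices:", top_voxels)
```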

HVAngleEst: A Dataset for End-to-end Automated Hallux Valgus Angle Measurement from X-Ray Images.

Wang Q, Ji D, Wang J, Liu L, Yang X, Zhang Y, Liang J, Liu P, Zhao H

PubMed · May 30, 2025
Accurate measurement of the hallux valgus angle (HVA) and intermetatarsal angle (IMA) is essential for diagnosing hallux valgus and determining appropriate treatment strategies. Traditional manual measurement methods, while standardized, are time-consuming, labor-intensive, and subject to evaluator bias. Recent advances in deep learning have been applied to hallux valgus angle estimation, but developing effective algorithms requires large, well-annotated datasets. Existing X-ray datasets are typically limited to cropped foot-region images, and only one dataset, containing very few samples, is publicly available. To address these challenges, we introduce HVAngleEst, the first large-scale, open-access dataset specifically designed for hallux valgus angle estimation. HVAngleEst comprises 1,382 X-ray images from 1,150 patients and includes comprehensive annotations, such as foot localization, hallux valgus angles, and line segments for each phalanx. This dataset enables fully automated, end-to-end hallux valgus angle estimation, reducing manual labor and eliminating evaluator bias.
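
Given per-bone line segments of the kind HVAngleEst annotates, HVA reduces to the angle between two axes; a minimal sketch follows (the endpoint coordinates are hypothetical):

```python
import numpy as np

def segment_angle_deg(p1, p2, q1, q2) -> float:
    """Angle in degrees between two line segments given by endpoint pairs (x, y)."""
    u = np.asarray(p2, float) - np.asarray(p1, float)
    v = np.asarray(q2, float) - np.asarray(q1, float)
    cos = abs(np.dot(u, v)) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# Hypothetical axis endpoints (pixels): proximal phalanx vs. first metatarsal.
hva = segment_angle_deg((100, 50), (110, 150), (112, 150), (162, 250))
print(f"HVA: {hva:.1f} degrees")  # > 15 degrees is a common hallux valgus threshold
```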

Artificial Intelligence for Assessment of Digital Mammography Positioning Reveals Persistent Challenges.

Margolies LR, Spear GG, Payne JI, Iles SE, Abdolell M

PubMed · May 30, 2025
Mammographic breast cancer detection depends on high-quality positioning, which is traditionally assessed and monitored subjectively. This study used artificial intelligence (AI) to evaluate positioning on digital screening mammograms to identify and quantify unmet mammography positioning quality (MPQ) criteria. Data were collected within an IRB-approved collaboration. In total, 126,367 digital mammography studies (553,339 images) were processed. Unmet MPQ criteria, including exaggeration, portion cutoff, missing posterior tissue, nipple not in profile, breast positioned too high on the image receptor, inadequate pectoralis length, sagging, and posterior nipple line (PNL) length difference, were evaluated using MPQ AI algorithms. The occurrence and rank order of unmet MPQ criteria were compared between the two health systems. Altogether, 163,759 and 219,785 unmet MPQ criteria were identified at the two health systems, respectively. Neither the rank order nor the probability distribution of the unmet MPQ criteria differed significantly between health systems (P = .844 and P = .92, respectively). The three most common unmet MPQ criteria were short PNL length on the craniocaudal (CC) view, inadequate pectoralis muscle, and excessive exaggeration on the CC view. The percentages of unmet positioning criteria out of the total potential unmet criteria were 8.4% (163,759/1,949,922) at health system 1 and 7.3% (219,785/3,030,129) at health system 2. AI identified a similar distribution of unmet MPQ criteria in the two health systems' daily work. Knowledge of commonly unmet MPQ criteria can facilitate improvement of mammography quality through tailored education strategies.
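
One plausible way to compare the rank order and distribution of unmet criteria between two systems, as reported above, is a Spearman rank correlation plus a chi-square test of homogeneity; the counts below are hypothetical and the test choices are assumptions, not necessarily the authors':

```python
import numpy as np
from scipy.stats import chi2_contingency, spearmanr

# Hypothetical counts of eight unmet-positioning criteria at two health systems.
system1 = np.array([52000, 38000, 25000, 18000, 12000, 9000, 6000, 3759])
system2 = np.array([69000, 52000, 34000, 24000, 17000, 12000, 8000, 3785])

chi2, p, _, _ = chi2_contingency(np.vstack([system1, system2]))
rho, p_rank = spearmanr(system1, system2)
print(f"distribution difference p = {p:.3f}")
print(f"rank-order correlation rho = {rho:.2f} (p = {p_rank:.3f})")
```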

Deploying a novel deep learning framework for segmentation of specific anatomical structures on cone-beam CT.

Yuce F, Buyuk C, Bilgir E, Çelik Ö, Bayrakdar İŞ

PubMed · May 30, 2025
Cone-beam computed tomography (CBCT) plays a crucial role in dentistry, and automatic prediction of anatomical structures on CBCT images could enhance diagnostic and planning procedures. This study aims to automatically segment anatomical structures on CBCT images using a deep learning algorithm. CBCT images from 70 patients were analyzed. Anatomical structures were annotated by two dentomaxillofacial radiologists using a regional segmentation tool within annotation software. Each volumetric dataset comprised 405 slices, with the relevant anatomical structures marked in each slice. The 70 DICOM images were converted to NIfTI format; seven were reserved for testing and the remaining 63 were used for training. Training used nnUNetv2 with an initial learning rate of 0.01, decreasing by 0.00001 at each epoch, for 1000 epochs. Evaluation metrics included accuracy, Dice score, precision, and recall. The segmentation model achieved an accuracy of 0.99 for the nasal fossa, maxillary sinus, nasopalatine canal, mandibular canal, mental foramen, and mandibular foramen, with corresponding Dice scores of 0.85, 0.98, 0.79, 0.73, 0.78, and 0.74, respectively. Precision values ranged from 0.73 to 0.98. Maxillary sinus segmentation exhibited the highest performance, while mandibular canal segmentation showed the lowest. The results demonstrate high accuracy and precision across most structures, with varying Dice scores reflecting the consistency of segmentation. Overall, our segmentation model exhibits robust performance in delineating anatomical features in CBCT images, with promising applications in dental diagnostics and treatment planning.
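
The learning-rate schedule as described is a linear decay from 0.01 by 0.00001 per epoch, reaching zero at epoch 1000; a minimal sketch of that schedule (note nnU-Net's stock default is a polynomial decay, so this follows the abstract's description rather than the framework default):

```python
def lr_at_epoch(epoch: int, initial_lr: float = 0.01, decrement: float = 1e-5) -> float:
    """Linear decay as described: the learning rate drops by `decrement` each epoch."""
    return max(initial_lr - decrement * epoch, 0.0)

for epoch in (0, 250, 500, 1000):
    print(epoch, f"{lr_at_epoch(epoch):.5f}")
# 0 -> 0.01000, 250 -> 0.00750, 500 -> 0.00500, 1000 -> 0.00000
```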
