Page 107 of 141 (1410 results)

Research on ischemic stroke risk assessment based on CTA radiomics and machine learning.

Li ZL, Yang HY, Lv XX, Zhang YK, Zhu XY, Zhang YR, Guo L

PubMed · Jun 5 2025
The study explores the value of a model constructed by integrating CTA-based carotid plaque radiomic features, clinical risk factors, and plaque imaging characteristics for prognosticating the risk of ischemic stroke. Data from 123 patients with carotid atherosclerosis were analyzed and divided into stroke and asymptomatic groups based on DWI findings. Clinical information was collected, and plaque imaging characteristics were assessed to construct a traditional model. Radiomic features of carotid plaques were extracted using 3D-Slicer software to build a radiomics model. Logistic regression was applied in the training set to establish the traditional model, the radiomics model, and a combined model, which were then tested in the validation set. The prognostic ability of the three models for ischemic stroke was evaluated using ROC curves, while calibration curves, decision curve analysis, and clinical impact curves were used to assess their clinical utility. Differences in AUC values between models were compared using the DeLong test. Hypertension, diabetes, elevated homocysteine (Hcy) concentration, and plaque burden were identified as independent risk factors for ischemic stroke and were used to establish the traditional model. Through LASSO regression, nine optimal features were selected to construct the radiomics model. ROC curve analysis showed that the AUC values of the three logistic regression models were 0.766, 0.766, and 0.878 in the training set, and 0.798, 0.801, and 0.847 in the validation set. Calibration curves and decision curve analysis showed that the radiomics model and the combined model had higher accuracy and better fit in predicting the risk of ischemic stroke. The radiomics model is slightly better than the traditional model at evaluating ischemic stroke risk, while the combined model has the best prognostic performance.
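The three models above are compared by AUC. As a minimal illustration (with made-up labels and risk scores, not the study's data), the AUC can be computed directly from its rank-based definition:

```python
def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney identity: the
    probability that a randomly chosen positive case outscores a
    randomly chosen negative case, counting ties as 0.5."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical risk scores from a "traditional" vs. a "combined" model
y     = [1, 1, 1, 0, 0, 0, 0]
trad  = [0.9, 0.4, 0.6, 0.5, 0.3, 0.2, 0.7]
combo = [0.9, 0.7, 0.6, 0.5, 0.3, 0.2, 0.4]
print(auc(y, trad))   # 0.75
print(auc(y, combo))  # 1.0 -- better discrimination
```

The DeLong test used in the study for comparing AUCs builds on this same rank statistic.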

Deep learning based rapid X-ray fluorescence signal extraction and image reconstruction for preclinical benchtop X-ray fluorescence computed tomography applications.

Kaphle A, Jayarathna S, Cho SH

PubMed · Jun 4 2025
Recent research advances have resulted in an experimental benchtop X-ray fluorescence computed tomography (XFCT) system that likely meets the imaging dose/scan time constraints for benchtop XFCT imaging of live mice injected with gold nanoparticles (GNPs). For routine in vivo benchtop XFCT imaging, however, additional challenges, most notably the need for rapid/near-real-time handling of X-ray fluorescence (XRF) signal extraction and XFCT image reconstruction, must be successfully addressed. Here we propose a novel end-to-end deep learning (DL) framework that integrates a one-dimensional convolutional neural network (1D CNN) for rapid XRF signal extraction with a U-Net model for XFCT image reconstruction. We trained the models using a comprehensive dataset including experimentally acquired and augmented XRF/scatter photon spectra from various GNP concentrations and imaging scenarios, including phantom and synthetic mouse models. The DL framework demonstrated exceptional performance in both tasks. The 1D CNN achieved a high coefficient of determination (R² > 0.9885) and a low mean absolute error (MAE < 0.6248) in XRF signal extraction. The U-Net model achieved an average structural similarity index measure (SSIM) of 0.9791 and a peak signal-to-noise ratio (PSNR) of 39.11 in XFCT image reconstruction, closely matching ground truth images. Notably, the DL approach (vs. the conventional approach) reduced the total post-processing time per slice from approximately 6 min to just 1.25 s.
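For readers reproducing the reconstruction metrics, PSNR is a one-line formula; a minimal sketch on toy pixel values (not the study's images) is:

```python
import math

def psnr(ref, img, data_range=1.0):
    """Peak signal-to-noise ratio: 10 * log10(peak^2 / MSE)."""
    mse = sum((a - b) ** 2 for a, b in zip(ref, img)) / len(ref)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(data_range ** 2 / mse)

ref = [0.0, 0.5, 1.0, 0.25]   # toy "ground truth" pixels
rec = [0.01, 0.48, 0.99, 0.26]  # toy "reconstruction"
print(psnr(ref, rec))  # ~37.57
```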

Diffusion Transformer-based Universal Dose Denoising for Pencil Beam Scanning Proton Therapy

Yuzhen Ding, Jason Holmes, Hongying Feng, Martin Bues, Lisa A. McGee, Jean-Claude M. Rwigema, Nathan Y. Yu, Terence S. Sio, Sameer R. Keole, William W. Wong, Steven E. Schild, Jonathan B. Ashman, Sujay A. Vora, Daniel J. Ma, Samir H. Patel, Wei Liu

arXiv preprint · Jun 4 2025
Purpose: Intensity-modulated proton therapy (IMPT) offers precise tumor coverage while sparing organs at risk (OARs) in head and neck (H&N) cancer. However, its sensitivity to anatomical changes requires frequent adaptation through online adaptive radiation therapy (oART), which depends on fast, accurate dose calculation via Monte Carlo (MC) simulations. Reducing particle count accelerates MC but degrades accuracy. To address this, denoising low-statistics MC dose maps is proposed to enable fast, high-quality dose generation. Methods: We developed a diffusion transformer-based denoising framework. IMPT plans and 3D CT images from 80 H&N patients were used to generate noisy and high-statistics dose maps using MCsquare (1 min and 10 min per plan, respectively). Data were standardized into uniform chunks with zero-padding, normalized, and transformed into quasi-Gaussian distributions. Testing was done on 10 H&N, 10 lung, 10 breast, and 10 prostate cancer cases, preprocessed identically. The model was trained with noisy dose maps and CT images as input and high-statistics dose maps as ground truth, using a combined loss of mean squared error (MSE), residual loss, and regional mean absolute error (MAE; focusing on the top/bottom 10% of dose voxels). Performance was assessed via MAE, 3D Gamma passing rate, and DVH indices. Results: The model achieved MAEs of 0.195 (H&N), 0.120 (lung), 0.172 (breast), and 0.376 Gy[RBE] (prostate). 3D Gamma passing rates exceeded 92% (3%/2mm) across all sites. DVH indices for clinical target volumes (CTVs) and OARs closely matched the ground truth. Conclusion: A diffusion transformer-based denoising framework was developed and, though trained only on H&N data, generalizes well across multiple disease sites.
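The 3%/2mm Gamma criterion combines a dose tolerance with a distance-to-agreement. A simplified 1D global version (toy dose profiles, not clinical data; real gamma analysis is 3D and interpolated) can be sketched as:

```python
def gamma_pass_rate(ref, eval_, spacing_mm, dose_tol=0.03, dist_mm=2.0):
    """Simplified 1D global gamma analysis: an evaluated point passes
    if some reference point keeps the combined dose-difference /
    distance metric <= 1. dose_tol is a fraction of the reference max."""
    d_max = max(ref)
    passed = 0
    for i, de in enumerate(eval_):
        best = float("inf")
        for j, dr in enumerate(ref):
            dd = (de - dr) / (dose_tol * d_max)        # dose axis
            dx = (i - j) * spacing_mm / dist_mm        # distance axis
            best = min(best, (dd * dd + dx * dx) ** 0.5)
        passed += best <= 1.0
    return passed / len(eval_)

ref = [0.0, 0.5, 1.0, 0.5, 0.0]          # toy reference profile
ev  = [0.01, 0.52, 0.98, 0.49, 0.0]      # small dose deviations: all pass
bad = [0.0, 0.5, 0.9, 0.5, 0.0]          # 10% error at the peak: fails there
print(gamma_pass_rate(ref, ev, 1.0))     # 1.0
print(gamma_pass_rate(ref, bad, 1.0))    # 0.8
```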

Validation study comparing Artificial intelligence for fully automatic aortic aneurysms Segmentation and diameter Measurements On contrast and non-contrast enhanced computed Tomography (ASMOT).

Gatinot A, Caradu C, Stephan L, Foret T, Rinckenbach S

PubMed · Jun 4 2025
Accurate aortic diameter measurements are essential for diagnosis, surveillance, and procedural planning in aortic disease. Semi-automatic methods remain widely used but require manual corrections, which can be time-consuming and operator-dependent. Artificial intelligence (AI)-driven fully automatic methods may offer improved efficiency and measurement accuracy. This study aims to validate a fully automatic method against a semi-automatic approach using computed tomography angiography (CTA) and non-contrast CT scans. A monocentric retrospective comparative study was conducted on patients who underwent endovascular aortic repair (EVAR) for infrarenal, juxta-renal, or thoracic aneurysms and a control group. Maximum aortic wall-to-wall diameters were measured before and after repair using fully automatic software (PRAEVAorta2®, Nurea, Bordeaux, France) and compared to measurements performed by two vascular surgeons using a semi-automatic approach on CTA and non-contrast CT scans. Correlation coefficients (Pearson's R) and absolute differences were calculated to assess agreement. A total of 120 CT scans (60 CTA and 60 non-contrast CT) were included, comprising 23 EVAR, 4 thoracic EVAR, 1 fenestrated EVAR, and 4 control cases. Strong correlations were observed between the fully automatic and semi-automatic measurements in both CTA and non-contrast CT. For CTA, correlation coefficients ranged from 0.94 to 0.96 (R² = 0.88-0.92), while for non-contrast CT, they ranged from 0.87 to 0.89 (R² = 0.76-0.79). Median absolute differences in aortic diameter measurements varied between 1.1 mm and 4.2 mm across the different anatomical locations. The fully automatic method demonstrated a significantly faster processing time, with a median execution time of 73 seconds (IQR: 57-91) compared to 700 seconds (IQR: 613-800) for the semi-automatic method (p < 0.001).
The fully automatic method demonstrated strong agreement with semi-automatic measurements for both CTA and non-contrast CT, before and after endovascular repair in different aortic locations, with significantly reduced analysis time. This method could improve workflow efficiency in clinical practice and research applications.
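Agreement between the two measurement pipelines is reported as Pearson's R. A self-contained sketch on hypothetical paired diameters (illustrative numbers only, not the study's measurements) is:

```python
def pearson_r(x, y):
    """Pearson correlation coefficient between two paired samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical paired aortic diameters (mm): semi-automatic vs fully automatic
semi = [52.1, 48.3, 61.0, 55.7, 44.9]
auto = [53.0, 47.8, 62.1, 56.5, 45.2]
print(pearson_r(semi, auto))  # close to 1: strong agreement
```

R² as quoted in the abstract is simply the square of this coefficient.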

Enhanced risk stratification for stage II colorectal cancer using deep learning-based CT classifier and pathological markers to optimize adjuvant therapy decision.

Huang YQ, Chen XB, Cui YF, Yang F, Huang SX, Li ZH, Ying YJ, Li SY, Li MH, Gao P, Wu ZQ, Wen G, Wang ZS, Wang HX, Hong MP, Diao WJ, Chen XY, Hou KQ, Zhang R, Hou J, Fang Z, Wang ZN, Mao Y, Wee L, Liu ZY

PubMed · Jun 4 2025
Current risk stratification for stage II colorectal cancer (CRC) has limited accuracy in identifying patients who would benefit from adjuvant chemotherapy, leading to potential over- or under-treatment. We aimed to develop a more precise risk stratification system by integrating artificial intelligence-based imaging analysis with pathological markers. We analyzed 2,992 stage II CRC patients from 12 centers. A deep learning classifier (Swin Transformer Assisted Risk-stratification for CRC, STAR-CRC) was developed using multi-planar CT images from 1,587 patients (training:internal validation = 7:3) and validated in 1,405 patients from 8 independent centers; it stratified patients into low-, uncertain-, and high-risk groups. To further refine the uncertain-risk group, a composite score based on pathological markers (pT4 stage, number of lymph nodes sampled, perineural invasion, and lymphovascular invasion) was applied, forming the intelligent risk integration system for stage II CRC (IRIS-CRC). IRIS-CRC was compared against the guideline-based risk stratification system (GRSS-CRC) for prediction performance and evaluated in the validation dataset. IRIS-CRC stratified patients into four prognostic groups with distinct 3-year disease-free survival rates (≥95%, 95-75%, 75-55%, ≤55%). Upon external validation, compared to GRSS-CRC, IRIS-CRC downstaged 27.1% of high-risk patients into the Favorable group, while upstaging 6.5% of low-risk patients into the Very Poor prognosis group, who might require more aggressive treatment. In the GRSS-CRC intermediate-risk group of the external validation dataset, IRIS-CRC reclassified 40.1% as Favorable prognosis and 7.0% as Very Poor prognosis. IRIS-CRC's performance generalized to both chemotherapy and non-chemotherapy cohorts.
IRIS-CRC offers a more precise and personalized risk assessment than current guideline-based risk factors, potentially sparing low-risk patients from unnecessary adjuvant chemotherapy while identifying high-risk individuals for more aggressive treatment. This novel approach holds promise for improving clinical decision-making and outcomes in stage II CRC.
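The two-stage logic (deep learning classifier first, composite pathological score only for the uncertain-risk group) can be sketched as follows. The weights and thresholds below are hypothetical, since the abstract does not publish the actual IRIS-CRC scoring:

```python
def composite_score(pT4, ln_sampled, perineural, lymphovascular):
    """Hypothetical composite of the pathological markers named in the
    abstract; the true IRIS-CRC weights are not given there."""
    score = 0
    score += 2 if pT4 else 0
    score += 1 if ln_sampled < 12 else 0   # inadequate nodal sampling
    score += 1 if perineural else 0
    score += 1 if lymphovascular else 0
    return score

def stratify(dl_group, score):
    """Refine only the classifier's uncertain-risk group with the score."""
    if dl_group == "low":
        return "Favorable"
    if dl_group == "high":
        return "Very Poor"
    if score == 0:
        return "Favorable"
    return "Very Poor" if score >= 3 else "Intermediate"

print(stratify("uncertain", composite_score(True, 8, True, False)))
```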

Digital removal of dermal denticle layer using geometric AI from 3D CT scans of shark craniofacial structures enhances anatomical precision.

Kim SW, Yuen AHL, Kim HW, Lee S, Lee SB, Lee YM, Jung WJ, Poon CTC, Park D, Kim S, Kim SG, Kang JW, Kwon J, Jo SJ, Giri SS, Park H, Seo JP, Kim DS, Kim BY, Park SC

PubMed · Jun 4 2025
Craniofacial morphometrics in sharks provide crucial insights into evolutionary history, geographical variation, sexual dimorphism, and developmental patterns. However, the fragile cartilaginous nature of the shark craniofacial skeleton poses significant challenges for traditional specimen preparation, often resulting in damaged cranial landmarks and compromised measurement accuracy. While computed tomography (CT) offers a non-invasive alternative for anatomical observation, the high electron density of dermal denticles in sharks creates a unique challenge, obstructing clear visualization of internal structures in three-dimensional volume-rendered images (3DVRIs). This study presents an artificial intelligence (AI)-based solution using machine-learning algorithms for digitally removing the dermal denticle layer from CT scans of the shark craniofacial skeleton. We developed geometric AI-driven software (SKINPEELER) that selectively removes high-intensity voxels corresponding to the dermal denticle layer while preserving the underlying anatomical structures. We evaluated this approach using CT scans from 20 sharks (16 Carcharhinus brachyurus, 2 Alopias vulpinus, 1 Sphyrna lewini, and 1 Prionace glauca), applying our AI-driven software to process the Digital Imaging and Communications in Medicine (DICOM) images. The processed scans were reconstructed using bone reconstruction algorithms to enable precise craniofacial measurements. We assessed the accuracy of our method by comparing measurements from the processed 3DVRIs with traditional manual measurements. The AI-assisted approach demonstrated high accuracy (86.16-98.52%) relative to manual measurements. Additionally, we evaluated reproducibility and repeatability using intraclass correlation coefficients (ICCs), finding high reproducibility (ICC: 0.456-0.998) and repeatability (ICC: 0.985-1.000 for operator 1 and 0.882-0.999 for operator 2).
Our results indicate that this AI-enhanced digital denticle removal technique, combined with 3D CT reconstruction, provides a reliable and non-destructive alternative to traditional specimen preparation methods for investigating shark craniofacial morphology. This novel approach enhances measurement precision while preserving specimen integrity, potentially advancing various aspects of shark research including evolutionary studies, conservation efforts, and anatomical investigations.
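The core denticle-removal idea (suppressing high-intensity voxels so the surface layer no longer occludes the cartilage in a bone-window rendering) can be caricatured with a global threshold on a tiny toy volume. The real SKINPEELER software applies a learned, geometry-aware criterion rather than a fixed cutoff, so this is only a sketch:

```python
def peel_high_intensity(volume, threshold):
    """Zero out voxels whose intensity exceeds the threshold, leaving
    lower-density structures (e.g. cartilage) untouched."""
    return [[[0 if v > threshold else v for v in row]
             for row in slc] for slc in volume]

# Toy 2x2x2 volume: values > 2000 stand in for dense denticle voxels
vol = [[[100, 2400], [900, 150]],
       [[2600, 80], [120, 2900]]]
peeled = peel_high_intensity(vol, 2000)
print(peeled)
```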

Latent space reconstruction for missing data problems in CT.

Kabelac A, Eulig E, Maier J, Hammermann M, Knaup M, Kachelrieß M

PubMed · Jun 4 2025
The reconstruction of a computed tomography (CT) image can be compromised by artifacts, which, in many cases, reduce the diagnostic value of the image. These artifacts often result from missing or corrupt regions in the projection data, for example, by truncation, metal, or limited angle acquisitions. In this work, we introduce a novel deep learning-based framework, latent space reconstruction (LSR), which enables correction of various types of artifacts arising from missing or corrupted data. First, we train a generative neural network on uncorrupted CT images. After training, we iteratively search for the point in the latent space of this network that best matches the compromised projection data we measured. Once an optimal point is found, forward-projection of the generated CT image can be used to inpaint the corrupted or incomplete regions of the measured raw data. We used LSR to correct for truncation and metal artifacts. For the truncation artifact correction, images corrected by LSR show effective artifact suppression within the field of measurement (FOM), alongside a substantial high-quality extension of the FOM compared to other methods. For the metal artifact correction, images corrected by LSR demonstrate effective artifact reduction, providing a clearer view of the surrounding tissues and anatomical details. The results indicate that LSR is effective in correcting metal and truncation artifacts. Furthermore, the versatility of LSR allows its application to various other types of artifacts resulting from missing or corrupt data.
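The LSR loop — search the latent space for the point whose forward projection best matches the uncorrupted measurements, then inpaint the missing data from the generated image — can be sketched with a toy one-parameter generator and a toy projector (a brute-force grid search stands in for the paper's iterative optimization):

```python
def forward_project(image):
    """Toy 'projection': row sums of a 2D image."""
    return [sum(row) for row in image]

def generator(z):
    """Stand-in for a trained generative network: scales a fixed
    anatomical template by the latent code z."""
    template = [[1.0, 2.0], [3.0, 4.0]]
    return [[z * v for v in row] for row in template]

def lsr(measured, valid):
    """Find the latent code whose generated image best matches the
    *uncorrupted* projection entries, then return the full forward
    projection to inpaint the corrupted ones."""
    def loss(z):
        p = forward_project(generator(z))
        return sum((p[i] - measured[i]) ** 2 for i in valid)
    best_z = min((loss(k / 100.0), k / 100.0) for k in range(301))[1]
    return forward_project(generator(best_z))

# Ground truth z = 2.0 gives projections [6.0, 14.0]; entry 1 is corrupt.
completed = lsr(measured=[6.0, -999.0], valid=[0])
print(completed)  # [6.0, 14.0] -- corrupted entry inpainted
```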

AI-powered segmentation of bifid mandibular canals using CBCT.

Gumussoy I, Demirezer K, Duman SB, Haylaz E, Bayrakdar IS, Celik O, Syed AZ

PubMed · Jun 4 2025
Accurate segmentation of the mandibular and bifid canals is crucial in dental implant planning to ensure safety during implant placement, third molar extractions, and other surgical interventions. The objective of this study is to develop and validate an innovative artificial intelligence tool for the efficient and accurate segmentation of the mandibular and bifid canals on CBCT. CBCT data were screened to identify patients with clearly visible bifid canal variations, and their DICOM files were extracted. These DICOM files were then imported into the 3D Slicer® open-source software, where bifid canals and mandibular canals were annotated. The annotated data, along with the raw DICOM files, were processed using the nnU-Netv2 training model by the CranioCatch AI software team. In total, 69 anonymized CBCT volumes in DICOM format were converted to NIfTI file format. The method, utilizing nnU-Net v2, accurately predicted the voxels associated with the mandibular canal, achieving an intersection of over 50% in nearly all samples. The accuracy, Dice score, precision, and recall scores for the mandibular canal/bifid canal were determined to be 0.99/0.99, 0.82/0.46, 0.85/0.70, and 0.80/0.42, respectively. Although the bifid canal segmentation did not meet the expected level of success, the findings indicate that the proposed method shows promise and has the potential to be utilized as a supplementary tool for mandibular canal segmentation. Given the importance of accurately evaluating the mandibular canal before surgery, artificial intelligence could help reduce the burden on practitioners by automating the complicated and time-consuming process of tracing and segmenting this structure. Being able to distinguish bifid canals with artificial intelligence will help prevent neurovascular problems that may occur before or after surgery.
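The Dice, precision, and recall figures above are voxel-wise overlap statistics between the predicted and ground-truth masks; a minimal sketch on toy binary masks (flattened voxel arrays, not CBCT data) is:

```python
def seg_metrics(truth, pred):
    """Voxel-wise Dice, precision, and recall for binary masks."""
    tp = sum(1 for t, p in zip(truth, pred) if t and p)
    fp = sum(1 for t, p in zip(truth, pred) if not t and p)
    fn = sum(1 for t, p in zip(truth, pred) if t and not p)
    dice = 2 * tp / (2 * tp + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return dice, precision, recall

truth = [1, 1, 1, 1, 0, 0, 0, 0]   # toy ground-truth canal mask
pred  = [1, 1, 1, 0, 1, 0, 0, 0]   # toy prediction: one miss, one spill
print(seg_metrics(truth, pred))  # (0.75, 0.75, 0.75)
```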

Deep learning-based cone-beam CT motion compensation with single-view temporal resolution.

Maier J, Sawall S, Arheit M, Paysan P, Kachelrieß M

PubMed · Jun 4 2025
Cone-beam CT (CBCT) scans that are affected by motion often require motion compensation to reduce artifacts or to reconstruct 4D (3D+time) representations of the patient. To do so, most existing strategies rely on some sort of gating that sorts the acquired projections into motion bins. Subsequently, these bins can be reconstructed individually before further post-processing may be applied to improve image quality. While this concept is useful for periodic motion patterns, it fails in the case of non-periodic motion as observed, for example, in irregularly breathing patients. To address this issue and to increase temporal resolution, we propose deep single angle-based motion compensation (SAMoCo). To avoid gating and its downsides, the deep SAMoCo trains a U-net-like network to predict displacement vector fields (DVFs) representing the motion that occurred between any two given time points of the scan. To do so, 4D clinical CT scans are used to simulate 4D CBCT scans as well as the corresponding ground truth DVFs that map between the different motion states of the scan. The network is then trained to predict these DVFs as a function of the respective projection views and an initial 3D reconstruction. Once the network is trained, an arbitrary motion state corresponding to a certain projection view of the scan can be recovered by estimating DVFs from any other state or view and by considering them during reconstruction. Applied to 4D CBCT simulations of breathing patients, the deep SAMoCo provides high-quality reconstructions for periodic and non-periodic motion. Here, the deviations with respect to the ground truth are less than 27 HU on average, while respiratory motion, or the diaphragm position, can be resolved with an accuracy of about 0.75 mm. Similar results were obtained for real measurements, where a high correlation with external motion monitoring signals could be observed, even in patients with highly irregular respiration.
The ability to estimate DVFs as a function of two arbitrary projection views and an initial 3D reconstruction makes deep SAMoCo applicable to arbitrary motion patterns with single-view temporal resolution. Therefore, the deep SAMoCo is particularly useful for cases with unsteady breathing, compensation of residual motion during a breath-hold scan, or scans with fast gantry rotation times in which the data acquisition only covers a very limited number of breathing cycles. Furthermore, not requiring gating signals may simplify the clinical workflow and reduces the time needed for patient preparation.
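Applying a predicted DVF during reconstruction amounts to warping image content between motion states. A 1D nearest-neighbour toy version (a stand-in for the 3D interpolating warps used in practice) is:

```python
def warp_1d(signal, dvf):
    """Pull-back warp: output voxel i samples the input at i + dvf[i]
    (nearest neighbour, clamped at the edges). A 3D version of this is
    what a motion-compensated reconstruction applies per projection."""
    n = len(signal)
    out = []
    for i, d in enumerate(dvf):
        j = min(max(int(round(i + d)), 0), n - 1)
        out.append(signal[j])
    return out

# A toy 'diaphragm edge' displaced by a uniform DVF of +2 voxels
profile = [0, 0, 0, 1, 1, 1]
shifted = warp_1d(profile, [2] * 6)
print(shifted)  # [0, 1, 1, 1, 1, 1] -- edge moved 2 voxels left
```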

Computed tomography-based radiomics model for predicting station 4 lymph node metastasis in non-small cell lung cancer.

Kang Y, Li M, Xing X, Qian K, Liu H, Qi Y, Liu Y, Cui Y, Zhang H

PubMed · Jun 4 2025
This study aimed to develop and validate machine learning models for preoperative identification of metastasis to station 4 mediastinal lymph nodes (MLNM) in non-small cell lung cancer (NSCLC) patients at pathological N0-N2 (pN0-pN2) stage, thereby enhancing the precision of clinical decision-making. We included a total of 356 NSCLC patients at pN0-pN2 stage, divided into training (n = 207), internal test (n = 90), and independent test (n = 59) sets. Station 4 mediastinal lymph node (LN) regions of interest (ROIs) were semi-automatically segmented on venous-phase computed tomography (CT) images for radiomics feature extraction. Least absolute shrinkage and selection operator (LASSO) regression was used to select features with non-zero coefficients. Four machine learning algorithms, namely decision tree (DT), logistic regression (LR), random forest (RF), and support vector machine (SVM), were employed to construct radiomics models. Clinical predictors were identified through univariate and multivariate logistic regression and subsequently integrated with radiomics features to develop combined models. Model performance was evaluated using receiver operating characteristic (ROC) analysis, calibration curves, decision curve analysis (DCA), and DeLong's test. Out of 1,721 radiomics features, eight were selected using LASSO regression. The RF-based combined model exhibited the strongest discriminative power, with an area under the curve (AUC) of 0.934 for the training set and 0.889 for the internal test set. The calibration curve and DCA further indicated the superior performance of the RF-based combined model. The independent test set further verified the model's robustness. The combined model based on RF, integrating radiomics and clinical features, effectively and non-invasively identifies metastasis to the station 4 mediastinal LNs in NSCLC patients at pN0-pN2 stage.
This model serves as an effective auxiliary tool for clinical decision-making and has the potential to optimize treatment strategies and improve prognostic assessment for pN0-pN2 patients.
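LASSO's feature selection comes from the soft-thresholding (L1 proximal) operator at the heart of its coordinate-descent solver: coefficients are shrunk toward zero and small ones are zeroed out exactly, which is how 1,721 features collapse to eight. A minimal sketch on hypothetical coefficients:

```python
def soft_threshold(x, lam):
    """Proximal operator of the L1 penalty: shrink x toward zero by
    lam, returning exactly 0 when |x| <= lam. This is the update
    applied per coordinate in LASSO coordinate descent."""
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

# Hypothetical raw coefficients; small ones are eliminated entirely
coeffs = [0.9, -0.05, 0.3, -0.6, 0.02]
sparse = [soft_threshold(c, 0.1) for c in coeffs]
print(sparse)  # small coefficients become exactly 0.0
```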