
[Albumin-myosteatosis gauge assisted by an artificial intelligence tool as a prognostic factor in patients with metastatic colorectal cancer].

de Luis Román D, Primo D, Izaola Jáuregui O, Sánchez Lite I, López Gómez JJ

PubMed | Jun 6, 2025
To evaluate the prognostic role of the albumin-myosteatosis marker (MAM) in Caucasian patients with metastatic colorectal cancer, this study enrolled 55 consecutive Caucasian patients diagnosed with metastatic colorectal cancer. CT scans at the L3 vertebral level were analyzed to determine skeletal muscle cross-sectional area, skeletal muscle index (SMI), and skeletal muscle density (SMD). Bioelectrical impedance analysis (BIA) provided phase angle, reactance, resistance, and SMI-BIA. Serum albumin and prealbumin were measured. The albumin-myosteatosis marker was calculated as MAM = serum albumin (g/dL) × skeletal muscle density (SMD, in Hounsfield units, HU). Survival was estimated using the Kaplan-Meier method, and comparisons between groups were performed using the log-rank test. The mean age was 68.1 ± 9.1 years. Patients were divided into two groups based on the sex-specific median MAM (129.1 AU for women and 156.3 AU for men). Patients in the low-MAM group had significantly reduced phase angle and reactance, as well as older age. These patients also had higher rates of malnutrition by GLIM criteria (odds ratio: 3.8; 95% CI = 1.2-12.9), low muscle mass diagnosed by CT (odds ratio: 3.6; 95% CI = 1.2-10.9), and mortality (odds ratio: 9.82; 95% CI = 1.2-10.9). Kaplan-Meier analysis demonstrated significant differences in 5-year survival between the low- and high-MAM groups (HR: 6.2; 95% CI = 1.10-37.5). The albumin-myosteatosis marker (MAM) may serve as a prognostic marker of survival in Caucasian patients with metastatic CRC.
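The marker itself is a simple product of two routine measurements, and the survival comparison hinges on a sex-specific median split. A minimal Python sketch of both steps (the patient-record field names are hypothetical):

```python
from statistics import median

def mam(albumin_g_dl: float, smd_hu: float) -> float:
    """Albumin-myosteatosis marker: serum albumin (g/dL) x SMD (HU)."""
    return albumin_g_dl * smd_hu

def split_by_sex_specific_median(patients):
    """Label each patient 'low' or 'high' using the median MAM computed
    separately per sex, mirroring the grouping described in the study."""
    medians = {}
    for sex in {p["sex"] for p in patients}:
        values = [mam(p["albumin"], p["smd"]) for p in patients if p["sex"] == sex]
        medians[sex] = median(values)
    return [
        {**p, "group": "low" if mam(p["albumin"], p["smd"]) < medians[p["sex"]] else "high"}
        for p in patients
    ]
```

The survival analysis itself (Kaplan-Meier curves and the log-rank test) would then be run on the two resulting groups.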

Research on ischemic stroke risk assessment based on CTA radiomics and machine learning.

Li ZL, Yang HY, Lv XX, Zhang YK, Zhu XY, Zhang YR, Guo L

PubMed | Jun 5, 2025
The study explores the value of a model that integrates CTA-based carotid plaque radiomic features, clinical risk factors, and plaque imaging characteristics for predicting the risk of ischemic stroke. Data from 123 patients with carotid atherosclerosis were analyzed and divided into stroke and asymptomatic groups based on DWI findings. Clinical information was collected, and plaque imaging characteristics were assessed to construct a traditional model. Radiomic features of carotid plaques were extracted using 3D-Slicer software to build a radiomics model. Logistic regression was applied in the training set to establish the traditional model, the radiomics model, and a combined model, which were then tested in the validation set. The predictive ability of the three models for ischemic stroke was evaluated using ROC curves, while calibration curves, decision curve analysis, and clinical impact curves were used to assess their clinical utility. Differences in AUC values between models were compared using the DeLong test. Hypertension, diabetes, elevated homocysteine (Hcy) concentration, and plaque burden were identified as independent risk factors for ischemic stroke and were used to establish the traditional model. Through LASSO regression, nine optimal features were selected to construct the radiomics model. ROC curve analysis showed that the AUC values of the three logistic regression models were 0.766, 0.766, and 0.878 in the training set, and 0.798, 0.801, and 0.847 in the validation set. Calibration curves and decision curve analysis showed that the radiomics model and the combined model had higher accuracy and better fit in predicting the risk of ischemic stroke. The radiomics model performed slightly better than the traditional model, while the combined model showed the best predictive performance.
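The models above are ranked by area under the ROC curve. The AUC can be computed directly from the Mann-Whitney rank statistic without tracing the full curve; a minimal pure-Python sketch:

```python
def roc_auc(labels, scores):
    """ROC AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive case is scored higher than a randomly chosen
    negative one, with ties counting one half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 0.5 corresponds to chance-level discrimination; the combined model's 0.878/0.847 indicates substantially better-than-chance separation of stroke from asymptomatic patients.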

Quantitative and automatic plan-of-the-day assessment to facilitate adaptive radiotherapy in cervical cancer.

Mason SA, Wang L, Alexander SE, Lalondrelle S, McNair HA, Harris EJ

PubMed | Jun 5, 2025
To facilitate implementation of plan-of-the-day (POTD) selection for treating locally advanced cervical cancer (LACC), we developed a POTD assessment tool for CBCT-guided radiotherapy (RT). A female pelvis segmentation model (U-Seg3) is combined with a quantitative standard operating procedure (qSOP) to identify optimal and acceptable plans. 

Approach: The planning CT[i], corresponding structure set[ii], and manually contoured CBCTs[iii] (n=226) from 39 LACC patients treated with POTD (n=11) or non-adaptive RT (n=28) were used to develop U-Seg3, an algorithm incorporating deep-learning and deformable image registration techniques to segment the low-risk clinical target volume (LR-CTV), high-risk CTV (HR-CTV), bladder, rectum, and bowel bag. A single-channel input model (iii only, U-Seg1) was also developed. Contoured CBCTs from the POTD patients were (a) reserved for U-Seg3 validation/testing, (b) audited to determine optimal and acceptable plans, and (c) used to empirically derive a qSOP that maximised classification accuracy. 

Main Results: The median [interquartile range] DSC between manual and U-Seg3 contours was 0.83 [0.80], 0.78 [0.13], 0.94 [0.05], 0.86 [0.09], and 0.90 [0.05] for the LR-CTV, HR-CTV, bladder, rectum, and bowel bag, respectively. These were significantly higher than U-Seg1 in all structures but the bladder. The qSOP classified plans as acceptable if they met target coverage thresholds (LR-CTV ≥ 99%, HR-CTV ≥ 99.8%), with lower LR-CTV coverage (≥ 95%) sometimes allowed. The acceptable plan minimising bowel irradiation was considered optimal unless substantial bladder sparing could be achieved. With U-Seg3 embedded in the qSOP, optimal and acceptable plans were identified in 46/60 and 57/60 cases.

Significance: U-Seg3 outperforms U-Seg1 and all known CBCT-based female pelvis segmentation models. The tool combining U-Seg3 and the qSOP identifies optimal plans with accuracy equivalent to that of two observers. In an implementation strategy whereby this tool serves as the second observer, plan-selection confidence and decision-making time could be improved whilst simultaneously reducing the required number of POTD-trained radiographers by 50%.
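The qSOP reduces to a pair of coverage checks plus a bowel-dose minimisation. A minimal sketch of that decision rule (the dictionary field names and the relaxation flag are illustrative, and the audited bladder-sparing override is omitted):

```python
def classify_plan(lr_ctv_cov, hr_ctv_cov, allow_relaxed_lr=False):
    """Acceptable per the qSOP thresholds quoted above: LR-CTV >= 99% and
    HR-CTV >= 99.8%, with a relaxed LR-CTV threshold of 95% sometimes
    allowed (the conditions for relaxation are defined in the qSOP audit)."""
    lr_threshold = 95.0 if allow_relaxed_lr else 99.0
    return lr_ctv_cov >= lr_threshold and hr_ctv_cov >= 99.8

def pick_optimal(plans):
    """Among acceptable plans, pick the one with the least bowel
    irradiation; returns None if no plan passes the coverage checks."""
    acceptable = [p for p in plans if classify_plan(p["lr"], p["hr"])]
    return min(acceptable, key=lambda p: p["bowel_dose"]) if acceptable else None
```

In the study this rule, fed by U-Seg3 contours, reproduced the audited optimal plan in 46/60 cases.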


High-definition motion-resolved MRI using 3D radial kooshball acquisition and deep learning spatial-temporal 4D reconstruction.

Murray V, Wu C, Otazo R

PubMed | Jun 5, 2025
To develop motion-resolved volumetric MRI with 1.1 mm isotropic resolution and scan times <5 minutes using a combination of 3D radial kooshball acquisition and spatial-temporal deep learning 4D reconstruction for free-breathing high-definition lung MRI.

Approach: Free-breathing lung MRI was conducted on eight healthy volunteers and ten patients with lung tumors on a 3T MRI scanner using a 3D radial kooshball sequence with half-spoke (ultrashort echo time, UTE, TE=0.12ms) and full-spoke (T1-weighted, TE=1.55ms) acquisitions. Data were motion-sorted using amplitude binning on a respiratory motion signal. Two high-definition Movienet (HD-Movienet) deep learning models were proposed to reconstruct 3D radial kooshball data: slice-by-slice reconstruction in the coronal orientation using 2D convolutional kernels (2D-based HD-Movienet) and reconstruction on blocks of eight coronal slices using 3D convolutional kernels (3D-based HD-Movienet). Two applications were considered: (a) anatomical imaging at expiration and inspiration with four motion states and a scan time of 2 minutes, and (b) dynamic motion imaging with 10 motion states and a scan time of 4 minutes. The training was performed using XD-GRASP 4D images reconstructed from 4.5-minute and 6.5-minute acquisitions as references.

Main Results: 2D-based HD-Movienet achieved a reconstruction time of <6 seconds, significantly faster than the iterative XD-GRASP reconstruction (>10 minutes with GPU optimization), while maintaining image quality comparable to XD-GRASP with two extra minutes of scan time. 3D-based HD-Movienet improved reconstruction quality at the expense of longer reconstruction times (<11 seconds).

Significance: HD-Movienet demonstrates the feasibility of motion-resolved 4D MRI with isotropic 1.1 mm resolution and scan times of only 2 minutes for four motion states and 4 minutes for 10 motion states, marking a significant advancement in clinical free-breathing lung MRI.
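Amplitude binning, the motion-sorting step above, assigns each acquired spoke to a motion state by the amplitude of the respiratory signal at acquisition time. A dependency-free sketch of one common variant, equal-count binning (the exact binning scheme used in the paper is not specified beyond "amplitude-binning"):

```python
def amplitude_bin(signal, n_states):
    """Sort sample indices by respiratory-signal amplitude and split them
    into n_states bins with (near-)equal counts, so each motion state
    receives a similar amount of k-space data."""
    order = sorted(range(len(signal)), key=lambda i: signal[i])
    base, extra = divmod(len(signal), n_states)
    bins, start = [], 0
    for state in range(n_states):
        size = base + (1 if state < extra else 0)
        bins.append(order[start:start + size])
        start += size
    return bins
```

Each returned bin holds the indices of the spokes reconstructed together as one motion state (four states for the anatomical protocol, ten for the dynamic one).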

StrokeNeXt: an automated stroke classification model using computed tomography and magnetic resonance images.

Ekingen E, Yildirim F, Bayar O, Akbal E, Sercek I, Hafeez-Baig A, Dogan S, Tuncer T

PubMed | Jun 5, 2025
Stroke ranks among the leading causes of disability and death worldwide, and timely detection can reduce its impact. Machine learning delivers powerful tools for image-based diagnosis. This study introduces StrokeNeXt, a lightweight convolutional neural network (CNN) for computed tomography (CT) and magnetic resonance (MR) scans, and couples it with deep feature engineering (DFE) to improve accuracy and facilitate clinical deployment. We assembled a multimodal dataset of CT and MR images, each labeled as stroke or control. StrokeNeXt employs a ConvNeXt-inspired block and a squeeze-and-excitation (SE) unit across four stages: stem, StrokeNeXt block, downsampling, and output. In the DFE pipeline, StrokeNeXt extracts features from fixed-size patches, iterative neighborhood component analysis (INCA) selects the top features, and a t algorithm-based k-nearest neighbors (tkNN) classifier performs the final classification. StrokeNeXt alone achieved 93.67% test accuracy on the assembled dataset; integrating DFE raised accuracy to 97.06%. The combined approach outperformed StrokeNeXt alone and reduced classification time. StrokeNeXt paired with DFE offers an effective solution for stroke detection on CT and MR images. Its high accuracy and small number of learnable parameters make it lightweight and suitable for integration into clinical workflows. This research lays a foundation for real-time decision support in emergency and radiology settings.
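The squeeze-and-excitation unit mentioned above gates each channel of a feature map by a learned scalar. A dependency-free sketch on per-channel feature lists, not the authors' implementation: the weight matrices w1 and w2 stand in for the two learned fully connected layers.

```python
import math

def squeeze_excitation(feature_maps, w1, w2):
    """SE unit sketch. Squeeze: global-average-pool each channel to one
    number. Excitation: two tiny FC layers (ReLU, then sigmoid) produce a
    gate in (0, 1) per channel. Scale: multiply each channel by its gate."""
    squeezed = [sum(ch) / len(ch) for ch in feature_maps]                 # squeeze
    hidden = [max(0.0, sum(w * s for w, s in zip(row, squeezed)))         # FC + ReLU
              for row in w1]
    gates = [1 / (1 + math.exp(-sum(w * h for w, h in zip(row, hidden)))) # FC + sigmoid
             for row in w2]
    return [[g * v for v in ch] for g, ch in zip(gates, feature_maps)]    # scale
```

With zero weights in the second layer every gate is sigmoid(0) = 0.5, i.e. all channels are uniformly halved; training moves the gates apart so informative channels are emphasized.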

Association between age and lung cancer risk: evidence from lung lobar radiomics.

Li Y, Lin C, Cui L, Huang C, Shi L, Huang S, Yu Y, Zhou X, Zhou Q, Chen K, Shi L

PubMed | Jun 5, 2025
Previous studies have highlighted the prominent role of age in lung cancer risk, with signs of lung aging visible on computed tomography (CT) imaging. This study aims to characterize lung aging using quantitative radiomic features extracted from five delineated lung lobes and to explore how age contributes to lung cancer development through these features. We analyzed baseline CT scans from the Wenling lung cancer screening cohort, consisting of 29,810 participants. A deep learning-based segmentation method was used to delineate the lung lobes, and 1,470 features were extracted from each lobe. The minimum redundancy maximum relevance algorithm was applied to identify the top 10 age-related radiomic features among 13,137 never-smokers. Multiple regression analyses were used to adjust for confounders in the association of age, lung lobar radiomic features, and lung cancer. Linear, Cox proportional hazards, and parametric accelerated failure time models were applied as appropriate. Mediation analyses were conducted to evaluate whether lobar radiomic features mediate the relationship between age and lung cancer risk. Age was significantly associated with increased lung cancer risk, particularly among current smokers (hazard ratio = 1.07, P = 2.81 × 10^-13). Age-related radiomic features exhibited distinct effects across lung lobes. Specifically, the first-order mean (mean attenuation value) filtered by wavelet in the right upper lobe increased with age (β = 0.019, P = 2.41 × 10^-276), whereas it decreased in the right lower lobe (β = -0.028, P = 7.83 × 10^-277). Three features, namely wavelet_HL_firstorder_Mean of the right upper lobe, wavelet_LH_firstorder_Mean of the right lower lobe, and original_shape_MinorAxisLength of the left upper lobe, were independently associated with lung cancer risk at a Bonferroni-adjusted P value threshold. Mediation analyses revealed that density and shape features partially mediated the relationship between age and lung cancer risk, while a suppression effect was observed in the wavelet first-order mean of the right upper lobe. The study reveals lobe-specific heterogeneity in lung aging patterns through radiomics and their associations with lung cancer risk. These findings may help identify new approaches for early intervention in lung cancer related to aging.
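The mediation analyses decompose the age-cancer association into a direct effect and a part carried by the radiomic features; the proportion mediated is (total − direct)/total, and a negative value corresponds to the suppression pattern reported for the right-upper-lobe wavelet mean. A toy sketch of the underlying arithmetic (a real analysis would use multivariable models with confounder adjustment):

```python
def slope(xs, ys):
    """Ordinary least-squares slope of y on x, the building block of the
    regression coefficients (beta values) quoted above."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

def proportion_mediated(total_effect, direct_effect):
    """Share of the exposure-outcome effect carried by the mediator.
    Negative values indicate suppression: the mediator masks part of the
    direct effect rather than transmitting it."""
    return (total_effect - direct_effect) / total_effect
```

For example, a total effect of 0.10 with a direct effect of 0.07 means 30% of the age effect flows through the feature, while a direct effect larger than the total yields a negative proportion, i.e. suppression.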

Artificial intelligence-based detection of dens invaginatus in panoramic radiographs.

Sarı AH, Sarı H, Magat G

PubMed | Jun 5, 2025
The aim of this study was to automatically detect teeth with dens invaginatus (DI) in panoramic radiographs using deep learning algorithms and to compare the algorithms' performance. For this purpose, 400 panoramic radiographs with DI were collected from the faculty database and split into 60% training, 20% validation, and 20% test images. The training and validation images were labeled by oral, dental and maxillofacial radiologists and augmented with various augmentation methods; the trained models were then evaluated on the images allocated to the test phase, and the results were assessed using performance measures including precision, sensitivity, F1 score, and mean detection time. According to the test results, YOLOv8 achieved a precision, sensitivity, and F1 score of 0.904 each and was the fastest detection model, with an average detection time of 0.041 s. The Faster R-CNN model achieved 0.912 precision, 0.904 sensitivity, and 0.907 F1 score, with an average detection time of 0.1 s. The YOLOv9 algorithm showed the most successful performance, with 0.946 precision, 0.930 sensitivity, and 0.937 F1 score, and an average detection time per image of 0.158 s. All models achieved over 90% on these measures; YOLOv8 was relatively more successful in detection speed and YOLOv9 in the other performance criteria, while Faster R-CNN ranked second in all criteria.
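The three reported metrics derive directly from the detection counts. A minimal sketch of how precision, sensitivity, and F1 relate to true positives (TP), false positives (FP), and false negatives (FN):

```python
def detection_metrics(tp, fp, fn):
    """Precision = TP/(TP+FP), sensitivity (recall) = TP/(TP+FN), and F1
    is their harmonic mean, as reported for each detection model."""
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return precision, sensitivity, f1
```

Because F1 is the harmonic mean, it equals precision and sensitivity when the two agree, which is why YOLOv8's three values coincide at 0.904.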

Automatic cervical tumors segmentation in PET/MRI by parallel encoder U-net.

Liu S, Tan Z, Gong T, Tang X, Sun H, Shang F

PubMed | Jun 5, 2025
Automatic segmentation of cervical tumors is important for quantitative analysis and radiotherapy planning. A parallel encoder U-Net (PEU-Net) integrating the multi-modality information of PET/MRI was proposed to segment cervical tumors; it consists of two parallel encoders with the same structure for the PET and MR images. The features of the two modalities were extracted separately and fused at each layer of the decoder. A Res2Net module on the skip connections aggregated features at various scales and refined the segmentation. PET/MRI images of 165 patients with cervical cancer were included in this study. U-Net, TransUNet, and nnU-Net with single- or multi-modality (PET or/and T2WI) input were used for comparison. The Dice similarity coefficient (DSC) on volumetric data (DSC3d), and the DSC and 95th percentile of the Hausdorff distance (HD95) on tumor slices (DSC2d), were calculated to evaluate performance. The proposed PEU-Net exhibited the best performance (DSC3d: 0.726 ± 0.204, HD95: 4.603 ± 4.579 mm), and its DSC2d (0.871 ± 0.113) was comparable to the best result of TransUNet with PET/MRI (0.873 ± 0.125). The networks with multi-modality input outperformed those with single-modality input. The results show that the proposed PEU-Net uses multi-modality information more effectively through its redesigned structure and achieves competitive performance.
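The headline metric here, the Dice similarity coefficient, measures overlap between a predicted and a reference segmentation. A minimal sketch on masks represented as sets of voxel coordinates:

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks given as sets
    of voxel coordinates: 2|A ∩ B| / (|A| + |B|), ranging from 0 (no
    overlap) to 1 (identical masks)."""
    a, b = set(mask_a), set(mask_b)
    return 2 * len(a & b) / (len(a) + len(b))
```

Computing it per volume versus per tumor slice yields the DSC3d and DSC2d variants reported above; HD95 additionally measures boundary distance rather than overlap.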

Multitask deep learning model based on multimodal data for predicting prognosis of rectal cancer: a multicenter retrospective study.

Ma Q, Meng R, Li R, Dai L, Shen F, Yuan J, Sun D, Li M, Fu C, Li R, Feng F, Li Y, Tong T, Gu Y, Sun Y, Shen D

PubMed | Jun 5, 2025
Prognostic prediction is crucial to guide individual treatment for patients with rectal cancer. We aimed to develop and validate a multitask deep learning model for predicting prognosis in rectal cancer patients. This retrospective study enrolled 321 rectal cancer patients (training set: 212; internal testing set: 53; external testing set: 56) who underwent upfront total mesorectal excision at five hospitals between March 2014 and April 2021. A multitask deep learning model was developed to simultaneously predict recurrence/metastasis and disease-free survival (DFS). The model integrated clinicopathologic data and multiparametric magnetic resonance imaging (MRI), including diffusion kurtosis imaging (DKI), without requiring tumor segmentation. The receiver operating characteristic (ROC) curve and Harrell's concordance index (C-index) were used to evaluate predictive performance. The model achieved good discrimination of recurrence/metastasis, with area under the curve (AUC) values of 0.885, 0.846, and 0.797 in the training, internal testing, and external testing sets, respectively. Furthermore, the model successfully predicted DFS in the training set (C-index: 0.812), internal testing set (C-index: 0.794), and external testing set (C-index: 0.733), and classified patients into significantly distinct high- and low-risk groups (p < 0.05). The multitask deep learning model, incorporating clinicopathologic data and multiparametric MRI, effectively predicted both recurrence/metastasis and survival for patients with rectal cancer. It has the potential to be an essential tool for risk stratification and to assist individualized treatment decisions.
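Harrell's C-index, used above to score the DFS predictions, is the fraction of comparable patient pairs in which the model assigns the higher risk to the patient who fails earlier. A minimal sketch ignoring censoring subtleties beyond the standard comparability rule:

```python
def c_index(times, events, risks):
    """Harrell's concordance index. A pair (i, j) is comparable when
    times[i] < times[j] and patient i's event was observed (events[i]==1);
    it is concordant when risks[i] > risks[j], with ties counting 0.5."""
    concordant = comparable = 0.0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if times[i] < times[j] and events[i] == 1:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / comparable
```

A C-index of 0.5 is chance-level ranking, so the reported 0.733 in the external testing set indicates the model orders patients by DFS clearly better than chance.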

Development and validation of a predictive nomogram for bilateral posterior condylar displacement using cone-beam computed tomography and machine-learning algorithms: a retrospective observational study.

Sui H, Xiao M, Jiang X, Li J, Qiao F, Yin B, Wang Y, Wu L

PubMed | Jun 5, 2025
Temporomandibular disorders (TMDs) are frequently associated with posterior condylar displacement; however, early prediction of this displacement remains a significant challenge. Therefore, in this study, we aimed to develop and evaluate a predictive model for bilateral posterior condylar displacement. In this retrospective observational study, 166 cone-beam computed tomography images were examined and categorized into two groups based on condyle positions as observed in the sagittal images of the joint space: those with bilateral posterior condylar displacement and those without. Three machine-learning algorithms, namely Random Forest, Least Absolute Shrinkage and Selection Operator (LASSO) regression, and Extreme Gradient Boosting (XGBoost), were used to identify risk factors and establish a risk assessment model. Calibration curves, receiver operating characteristic curves, and decision curve analyses were employed to evaluate the accuracy of the predictions, the differentiation, and the clinical usefulness of the models, respectively. Articular eminence inclination (AEI) and age were identified as significant risk factors for bilateral posterior condylar displacement. The area under the curve values for the LASSO and Random Forest models were both > 0.7, indicating satisfactory discriminative ability of the nomogram. No significant differences were observed in the differentiation and calibration performance of the three models. Clinical utility analysis revealed that the LASSO regression model, which incorporated age, AEI, A point-nasion-B point (ANB) angle, and facial height ratio (S-Go/N-Me), demonstrated superior net benefit compared to the other models when the probability threshold exceeded 45%. Patients with a steeper AEI, insufficient posterior vertical distance (S-Go/N-Me), an ANB angle ≥ 4.7°, and older age are more likely to experience bilateral posterior condylar displacement. The prognostic nomogram developed and validated in this study may assist clinicians in assessing the risk of bilateral posterior condylar displacement.
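The "superior net benefit above a 45% threshold" claim comes from decision curve analysis, where net benefit at a probability threshold pt weighs true positives against false positives. A minimal sketch of the standard formula (the counts are illustrative inputs, not study data):

```python
def net_benefit(tp, fp, n, threshold):
    """Net benefit at probability threshold pt, as plotted in decision
    curve analysis: TP/N - FP/N * pt/(1 - pt). Higher is better; the
    pt/(1-pt) odds factor encodes how costly a false positive is deemed
    relative to a missed case at that threshold."""
    return tp / n - fp / n * threshold / (1 - threshold)
```

Comparing each model's net benefit curve against "treat all" and "treat none" across thresholds is what showed the LASSO nomogram winning once the threshold exceeded 45%.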