Page 22 of 47469 results

Estimation of tumor coverage after RF ablation of hepatocellular carcinoma using single 2D image slices.

Varble N, Li M, Saccenti L, Borde T, Arrichiello A, Christou A, Lee K, Hazen L, Xu S, Lencioni R, Wood BJ

PubMed · Jun 7, 2025
To assess the technical success of radiofrequency ablation (RFA) in patients with hepatocellular carcinoma (HCC), an artificial intelligence (AI) model was developed to estimate tumor coverage without the need for segmentation or registration tools. A secondary retrospective analysis of 550 patients in the multicenter and multinational OPTIMA trial (3-7 cm solitary HCC lesions, randomized to RFA or RFA + LTLD) identified 182 patients with well-defined pre-RFA tumors and 1-month post-RFA devascularized ablation zones on enhanced CT. The ground truth (percent tumor coverage) was determined from semi-automatic 3D tumor and ablation zone segmentation and elastic registration. The isocenter of the tumor and ablation zone was isolated on 2D axial CT images. Feature extraction was performed, and classification and linear regression models were built. Images were augmented, and 728 image pairs were used for training and testing. The percent tumor coverage estimated by the models was compared to ground truth. Validation was performed on eight patient cases from a separate institution, where RFA was performed and pre- and post-ablation images were collected. In the testing cohorts, the best classification performance was achieved with moderate data augmentation (AUC = 0.86, TPR = 0.59, TNR = 0.89, accuracy = 69%), and the best regression performance with a random forest (RMSE = 12.6%, MAE = 9.8%). Validation at the separate institution did not achieve accuracy greater than random estimation. Visual review of training cases suggests that poor tumor coverage may result from atypical ablation zone shrinkage 1 month post-RFA, which may not be reflected in clinical utilization. An AI model that uses 2D images at the center of the tumor, pre-RFA and 1 month post-ablation, can accurately estimate ablation tumor coverage, although translation to separate validation cohorts could be challenging.
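The ground-truth label here is the fraction of tumor volume overlapped by the registered ablation zone. As a minimal illustration (not the authors' code; the function name is ours), percent coverage from two aligned binary masks can be computed as:

```python
import numpy as np

def percent_tumor_coverage(tumor_mask: np.ndarray, ablation_mask: np.ndarray) -> float:
    """Fraction of tumor voxels covered by the ablation zone, as a percentage.

    Both inputs are boolean arrays on the same (already registered) voxel grid.
    """
    tumor = tumor_mask.astype(bool)
    ablation = ablation_mask.astype(bool)
    tumor_voxels = tumor.sum()
    if tumor_voxels == 0:
        raise ValueError("empty tumor mask")
    return 100.0 * np.logical_and(tumor, ablation).sum() / tumor_voxels
```

A tumor of 8 voxels with 6 inside the ablation zone yields 75% coverage; the study's models try to estimate this number from 2D slices alone, without the 3D masks.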

[Albumin-myosteatosis gauge assisted by an artificial intelligence tool as a prognostic factor in patients with metastatic colorectal cancer].

de Luis Román D, Primo D, Izaola Jáuregui O, Sánchez Lite I, López Gómez JJ

PubMed · Jun 6, 2025
To evaluate the prognostic role of the albumin-myosteatosis marker (MAM) in Caucasian patients with metastatic colorectal cancer, this study enrolled 55 consecutive Caucasian patients diagnosed with metastatic colorectal cancer. CT scans at the L3 vertebral level were analyzed to determine skeletal muscle cross-sectional area, skeletal muscle index (SMI), and skeletal muscle density (SMD). Bioelectrical impedance analysis (BIA) provided phase angle, reactance, resistance, and SMI-BIA. Albumin and prealbumin were measured. The marker was calculated as MAM = serum albumin (g/dL) × SMD (Hounsfield units, HU). Survival was estimated using the Kaplan-Meier method, and comparisons between groups were performed using the log-rank test. The median age was 68.1 ± 9.1 years. Patients were divided into two groups based on the sex-specific median MAM (129.1 AU for women and 156.3 AU for men). Patients in the low MAM group had significantly lower phase angle and reactance values, as well as older age. These patients also had higher rates of malnutrition by GLIM criteria (odds ratio: 3.8; 95% CI = 1.2-12.9), low muscle mass diagnosed by CT (odds ratio: 3.6; 95% CI = 1.2-10.9), and mortality (odds ratio: 9.82; 95% CI = 1.2-10.9). Kaplan-Meier analysis demonstrated significant differences in 5-year survival between the low and high median MAM groups (HR: 6.2; 95% CI = 1.10-37.5). MAM may serve as a prognostic marker of survival in Caucasian patients with metastatic CRC.
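The marker itself is a simple product, and the sex-specific median split described above is straightforward to reproduce. A minimal sketch (the `patients` schema is ours, purely illustrative, not the study's data format):

```python
import statistics

def mam(albumin_g_dl: float, smd_hu: float) -> float:
    """Albumin-myosteatosis marker: serum albumin (g/dL) x SMD (HU)."""
    return albumin_g_dl * smd_hu

def low_mam_group(patients):
    """Return patients below the sex-specific median MAM.

    `patients` is a list of dicts with 'sex', 'albumin', 'smd' keys
    (an illustrative schema we invented for this sketch).
    """
    scores = [(p, mam(p["albumin"], p["smd"])) for p in patients]
    medians = {
        sex: statistics.median(s for p, s in scores if p["sex"] == sex)
        for sex in {p["sex"] for p in patients}
    }
    return [p for p, s in scores if s < medians[p["sex"]]]
```

Splitting by sex-specific medians (rather than one pooled cutoff) mirrors the study's use of separate thresholds for women and men.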

Automatic cervical tumor segmentation in PET/MRI by a parallel encoder U-Net.

Liu S, Tan Z, Gong T, Tang X, Sun H, Shang F

PubMed · Jun 5, 2025
Automatic segmentation of cervical tumors is important for quantitative analysis and radiotherapy planning. A parallel encoder U-Net (PEU-Net) integrating multi-modality PET/MRI information was proposed to segment cervical tumors; it consists of two parallel encoders with the same structure for PET and MR images. The features of the two modalities were extracted separately and fused at each layer of the decoder. A Res2Net module on the skip connections aggregated features at various scales and refined the segmentation. PET/MRI images of 165 patients with cervical cancer were included in this study. U-Net, TransUNet, and nnU-Net with single- or multi-modality input (PET and/or T2WI) were used for comparison. The Dice similarity coefficient (DSC) on volumes, and DSC and the 95th percentile of the Hausdorff distance (HD95) on tumor slices, were calculated to evaluate performance. The proposed PEU-Net exhibited the best performance (DSC<sub>3d</sub>: 0.726 ± 0.204, HD<sub>95</sub>: 4.603 ± 4.579 mm); its DSC<sub>2d</sub> (0.871 ± 0.113) was comparable to the best result of TransUNet with PET/MRI (0.873 ± 0.125). Networks with multi-modality input outperformed those with single-modality input. The results show that the proposed PEU-Net uses multi-modality information more effectively through its redesigned structure and achieves competitive performance.
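The two reported metrics are standard and easy to state precisely. A small numpy sketch of DSC and a brute-force HD95 (not the authors' implementation; production code would typically use surface points and a distance transform for speed):

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hd95(a: np.ndarray, b: np.ndarray, spacing: float = 1.0) -> float:
    """95th-percentile symmetric Hausdorff distance (in mm if `spacing` is mm).

    Brute force over all foreground voxels: fine for small 2D slices,
    but quadratic in voxel count, so it does not scale to dense 3D masks.
    """
    pa = np.argwhere(a) * spacing
    pb = np.argwhere(b) * spacing
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    return max(np.percentile(d.min(axis=1), 95), np.percentile(d.min(axis=0), 95))
```

DSC rewards overlap volume, while HD95 penalizes outlying boundary errors, which is why papers such as this one report both.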

Multitask deep learning model based on multimodal data for predicting prognosis of rectal cancer: a multicenter retrospective study.

Ma Q, Meng R, Li R, Dai L, Shen F, Yuan J, Sun D, Li M, Fu C, Li R, Feng F, Li Y, Tong T, Gu Y, Sun Y, Shen D

PubMed · Jun 5, 2025
Prognostic prediction is crucial to guide individual treatment for patients with rectal cancer. We aimed to develop and validate a multitask deep learning model for predicting prognosis in rectal cancer patients. This retrospective study enrolled 321 rectal cancer patients (training set: 212; internal testing set: 53; external testing set: 56) who directly received total mesorectal excision at five hospitals between March 2014 and April 2021. A multitask deep learning model was developed to simultaneously predict recurrence/metastasis and disease-free survival (DFS). The model integrated clinicopathologic data and multiparametric magnetic resonance imaging (MRI), including diffusion kurtosis imaging (DKI), without performing tumor segmentation. The receiver operating characteristic (ROC) curve and Harrell's concordance index (C-index) were used to evaluate predictive performance. The deep learning model achieved good discrimination of recurrence/metastasis, with area under the curve (AUC) values of 0.885, 0.846, and 0.797 in the training, internal testing, and external testing sets, respectively. Furthermore, the model successfully predicted DFS in the training set (C-index: 0.812), internal testing set (C-index: 0.794), and external testing set (C-index: 0.733), and classified patients into significantly distinct high- and low-risk groups (p < 0.05). The multitask deep learning model, incorporating clinicopathologic data and multiparametric MRI, effectively predicted both recurrence/metastasis and survival for patients with rectal cancer. It has the potential to be an essential tool for risk stratification and to assist in individualized treatment decisions.
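Harrell's C-index used above measures how often, among comparable patient pairs, the patient who fails earlier has the higher predicted risk. A didactic sketch with simplified tie handling (pairs with tied event times are skipped; library implementations handle ties more carefully):

```python
from itertools import combinations

def concordance_index(times, events, risk_scores):
    """Harrell's C-index for right-censored survival data.

    events: 1 = event observed, 0 = censored. A pair is comparable when the
    earlier time corresponds to an observed event; tied risk scores count 0.5.
    """
    concordant = comparable = 0.0
    for i, j in combinations(range(len(times)), 2):
        if times[i] == times[j]:
            continue  # simplest handling: skip tied event times
        if times[j] < times[i]:
            i, j = j, i  # order so that i has the earlier time
        if not events[i]:
            continue  # earlier time censored -> pair not comparable
        comparable += 1
        if risk_scores[i] > risk_scores[j]:
            concordant += 1
        elif risk_scores[i] == risk_scores[j]:
            concordant += 0.5
    return concordant / comparable
```

A C-index of 0.5 corresponds to random ranking and 1.0 to perfect ranking, so the reported 0.733 on external testing indicates useful but imperfect discrimination.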

Preliminary analysis of AI-based thyroid nodule evaluation in a non-subspecialist endocrinology setting.

Fernández Velasco P, Estévez Asensio L, Torres B, Ortolá A, Gómez Hoyos E, Delgado E, de Luís D, Díaz Soto G

PubMed · Jun 5, 2025
Thyroid nodules are commonly evaluated using ultrasound-based risk stratification systems, which rely on subjective descriptors. Artificial intelligence (AI) may improve assessment, but its effectiveness in non-subspecialist settings is unclear. This study evaluated the impact of an AI-based decision support system (AI-DSS) on thyroid nodule ultrasound assessments by general endocrinologists (GE) without subspecialty thyroid imaging training. A prospective cohort study was conducted on 80 patients undergoing thyroid ultrasound in GE outpatient clinics. Thyroid ultrasound was performed based on clinical judgment as part of routine care by GE. Images were retrospectively analyzed using an AI-DSS (Koios DS), independently of clinician assessments. AI-DSS results were compared with initial GE evaluations and, when patients were referred, with expert evaluations at a subspecialized thyroid nodule clinic (TNC). Agreement was assessed for ultrasound features, risk classification by the American College of Radiology Thyroid Imaging Reporting and Data System (ACR TI-RADS) and American Thyroid Association guidelines, and referral recommendations. AI-DSS differed notably from GE, particularly in assessing nodule composition (solid: 80% vs. 36%, p < 0.01), echogenicity (hypoechoic: 52% vs. 16%, p < 0.01), and echogenic foci (microcalcifications: 10.7% vs. 1.3%, p < 0.05). AI-DSS classification led to a higher referral rate than GE (37.3% vs. 30.7%, not statistically significant). Agreement between AI-DSS and GE in ACR TI-RADS scoring was moderate (r = 0.337; p < 0.001), but improved when GE assessments were compared with those of the AI-DSS and the TNC subspecialist (r = 0.465; p < 0.05 and r = 0.607; p < 0.05, respectively). In a non-subspecialist setting, non-adjunct use of the AI-DSS did not significantly improve risk stratification or reduce hypothetical referrals. The system tended to overestimate risk, potentially leading to unnecessary procedures. Further optimization is required for AI to function effectively in low-prevalence environments.
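For readers unfamiliar with ACR TI-RADS, the scoring on which the clinicians and the AI-DSS disagreed is a points system over five feature categories. A hedged sketch based on the 2017 ACR scheme (verify against the ACR white paper before any use; the real system also sums points when multiple echogenic foci types coexist, which this toy mapping omits):

```python
# Point values per ACR TI-RADS (2017), as commonly tabulated; confirm against
# the official ACR chart before relying on them.
POINTS = {
    "composition": {"cystic": 0, "spongiform": 0, "mixed": 1, "solid": 2},
    "echogenicity": {"anechoic": 0, "hyperechoic": 1, "isoechoic": 1,
                     "hypoechoic": 2, "very_hypoechoic": 3},
    "shape": {"wider_than_tall": 0, "taller_than_wide": 3},
    "margin": {"smooth": 0, "ill_defined": 0, "lobulated_irregular": 2,
               "extrathyroidal": 3},
    "echogenic_foci": {"none": 0, "comet_tail": 0, "macrocalcifications": 1,
                       "peripheral": 2, "punctate": 3},
}

def tirads_level(features: dict) -> str:
    """Map one descriptor per category to a TR level (toy, single-focus only)."""
    total = sum(POINTS[category][value] for category, value in features.items())
    if total == 0:
        return "TR1"
    if total <= 2:
        return "TR2"
    if total == 3:
        return "TR3"
    if total <= 6:
        return "TR4"
    return "TR5"
```

The study's composition and echogenicity disagreements matter because those two categories alone can contribute up to 5 of the points that separate TR2 from TR5.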

A radiogenomics study on <sup>18</sup>F-FDG PET/CT in endometrial cancer by a novel deep learning segmentation algorithm.

Li X, Shi W, Zhang Q, Lin X, Sun H

PubMed · Jun 5, 2025
To create an automated PET/CT segmentation method and radiomics model to forecast mismatch repair (MMR) and TP53 gene expression in endometrial cancer patients, and to examine the effect of gene expression variability on image texture features, we generated two datasets in this retrospective and exploratory study. The first, with 123 histopathologically confirmed patient cases, was used to develop an endometrial cancer segmentation model. The second dataset, including 249 patients for MMR and 179 for TP53 mutation prediction, was derived from PET/CT exams and immunohistochemical analysis. A PET-based Attention U-Net was used for segmentation, followed by region growing with co-registered PET and CT images. Feature models were constructed using PET, CT, and combined data, with model selection based on performance comparison. Our segmentation model achieved 99.99% training accuracy and a Dice coefficient of 97.35%, with 99.93% validation accuracy and a Dice coefficient of 84.81%. The combined PET + CT model demonstrated superior predictive power for both genes, with AUCs of 0.8146 and 0.8102 for MMR, and 0.8833 and 0.8150 for TP53, in the training and test sets, respectively. MMR-related protein heterogeneity and TP53 expression differences were predominantly seen on PET images. An efficient deep learning algorithm for endometrial cancer segmentation has been established, highlighting the enhanced predictive power of integrated PET and CT radiomics for MMR and TP53 expression. The study underscores the distinct influences of MMR and TP53 gene expression on tumor characteristics.
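The reported AUCs can be read through the pairwise (Mann-Whitney) formulation: the probability that a randomly chosen mutated case is ranked above a randomly chosen wild-type case by the model. A minimal sketch of that formulation:

```python
def auc(labels, scores):
    """Area under the ROC curve via the pairwise (Mann-Whitney) formulation.

    labels: 1 = positive (e.g. mutated), 0 = negative; ties count 0.5.
    """
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

Under this reading, the combined model's test AUC of 0.8150 for TP53 means roughly 82% of mutated/wild-type pairs are ranked correctly.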

Quantitative and automatic plan-of-the-day assessment to facilitate adaptive radiotherapy in cervical cancer.

Mason SA, Wang L, Alexander SE, Lalondrelle S, McNair HA, Harris EJ

PubMed · Jun 5, 2025
To facilitate implementation of plan-of-the-day (POTD) selection for treating locally advanced cervical cancer (LACC), we developed a POTD assessment tool for CBCT-guided radiotherapy (RT). A female pelvis segmentation model (U-Seg3) is combined with a quantitative standard operating procedure (qSOP) to identify optimal and acceptable plans.

Approach: The planning CT (i), corresponding structure set (ii), and manually contoured CBCTs (iii) (n=226) from 39 LACC patients treated with POTD (n=11) or non-adaptive RT (n=28) were used to develop U-Seg3, an algorithm incorporating deep learning and deformable image registration techniques to segment the low-risk clinical target volume (LR-CTV), high-risk CTV (HR-CTV), bladder, rectum, and bowel bag. A single-channel input model (iii only, U-Seg1) was also developed. Contoured CBCTs from the POTD patients were (a) reserved for U-Seg3 validation/testing, (b) audited to determine optimal and acceptable plans, and (c) used to empirically derive a qSOP that maximised classification accuracy.

Main results: The median [interquartile range] DSC between manual and U-Seg3 contours was 0.83 [0.80], 0.78 [0.13], 0.94 [0.05], 0.86 [0.09], and 0.90 [0.05] for the LR-CTV, HR-CTV, bladder, rectum, and bowel bag, respectively. These were significantly higher than U-Seg1 in all structures but the bladder. The qSOP classified plans as acceptable if they met target coverage thresholds (LR-CTV ≥ 99%, HR-CTV ≥ 99.8%), with lower LR-CTV coverage (≥ 95%) sometimes allowed. The acceptable plan minimising bowel irradiation was considered optimal unless substantial bladder sparing could be achieved. With U-Seg3 embedded in the qSOP, optimal and acceptable plans were identified in 46/60 and 57/60 cases, respectively.

Significance: U-Seg3 outperforms U-Seg1 and all known CBCT-based female pelvis segmentation models. The tool combining U-Seg3 and the qSOP identifies optimal plans with accuracy equivalent to that of two observers. In an implementation strategy whereby this tool serves as the second observer, plan-selection confidence and decision-making time could be improved while reducing the required number of POTD-trained radiographers by 50%.
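The qSOP coverage rules quoted above can be encoded directly. A toy sketch using only the thresholds given in the abstract (the real SOP also weighs bladder sparing against bowel dose; the field names and the `relaxed_lr_ok` flag are ours):

```python
def classify_plan(lr_ctv_cov: float, hr_ctv_cov: float,
                  relaxed_lr_ok: bool = False) -> bool:
    """Acceptable when LR-CTV coverage >= 99% and HR-CTV coverage >= 99.8%;
    a relaxed LR-CTV threshold of 95% is sometimes allowed (per the abstract).
    """
    lr_threshold = 95.0 if relaxed_lr_ok else 99.0
    return lr_ctv_cov >= lr_threshold and hr_ctv_cov >= 99.8

def pick_optimal(plans):
    """Among acceptable plans, pick the one minimising bowel irradiation.

    `plans` is a list of dicts with 'lr', 'hr', 'bowel_dose' keys
    (an illustrative schema). Returns None if no plan is acceptable.
    """
    acceptable = [p for p in plans if classify_plan(p["lr"], p["hr"])]
    return min(acceptable, key=lambda p: p["bowel_dose"]) if acceptable else None
```

Encoding the rule explicitly is what makes the qSOP "quantitative": given segmented coverage numbers, plan classification needs no observer judgment.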

Enhancing pancreatic cancer detection in CT images through secretary wolf bird optimization and deep learning.

Mekala S, S PK

PubMed · Jun 5, 2025
The pancreas is an abdominal gland that helps produce hormones and digest food; the irregular development of tissues in the pancreas is termed pancreatic cancer. Early identification of pancreatic tumors is important for improving survival and providing appropriate treatment. Thus, an efficient Secretary Wolf Bird Optimization (SeWBO)_Efficient DenseNet is presented for pancreatic tumor detection using computed tomography (CT) scans. First, the input pancreatic CT image is taken from a database and preprocessed with a bilateral filter. The lesion is then segmented using a Parallel Reverse Attention Network (PraNet), whose hyperparameters are tuned by the proposed SeWBO. The SeWBO is designed by combining Wolf Bird Optimization (WBO) and the Secretary Bird Optimization Algorithm (SBOA). Then, features such as Complete Local Binary Pattern (CLBP) with Discrete Wavelet Transformation (DWT), statistical features, and Shape Local Binary Texture (SLBT) are extracted. Finally, pancreatic tumor detection is performed by the SeWBO_Efficient DenseNet, where Efficient DenseNet is developed by combining EfficientNet and DenseNet. The proposed SeWBO_Efficient DenseNet achieves a True Negative Rate (TNR) of 93.596%, accuracy of 94.635%, and True Positive Rate (TPR) of 92.579%.
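CLBP builds on the classical local binary pattern operator. A minimal numpy sketch of the basic 8-neighbour sign-component LBP (CLBP additionally encodes magnitude and centre components, which are omitted here):

```python
import numpy as np

def lbp_8(image: np.ndarray) -> np.ndarray:
    """Basic 8-neighbour local binary pattern.

    Each interior pixel gets a byte whose bits record which neighbours are
    >= the centre value; texture descriptors are then built from the
    histogram of these codes.
    """
    img = image.astype(float)
    center = img[1:-1, 1:-1]
    # clockwise neighbour offsets starting at the top-left corner
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(center, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= center).astype(np.uint8) << bit
    return code
```

A flat region yields the all-ones code (255), while a bright isolated pixel yields 0, illustrating how the operator encodes local contrast rather than absolute intensity.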

CT-based radiogenomic analysis to predict high-risk colon cancer (ATTRACT): a multicentric trial.

Caruso D, Polici M, Zerunian M, Monterubbiano A, Tarallo M, Pilozzi E, Belloni L, Scafetta G, Valanzuolo D, Pugliese D, De Santis D, Vecchione A, Mercantini P, Iannicelli E, Fiori E, Laghi A

PubMed · Jun 5, 2025
Clinical staging on CT has several biases, and a radiogenomics approach has been proposed as an alternative. This study aimed to test the performance of a radiogenomics approach in identifying high-risk colon cancer. ATTRACT is a multicentric trial registered in ClinicalTrials.gov (NCT06108310). Three hundred non-metastatic colon cancer patients were retrospectively enrolled and divided into two groups, high-risk and no-risk, according to pathological staging. Radiological evaluations were performed by two abdominal radiologists. Genomic data were available for 151 patients. Baseline CT scans were used for radiological assessment and 3D cancer segmentation: one expert radiologist performed volumetric cancer segmentation on baseline portal-phase CT scans using open-source software (3DSlicer v4.10.2). The classical LASSO, implemented with a machine-learning library, was used to select the optimal features to build Model 1 (clinical-radiological plus radiomic features, 300 patients) and Model 2 (Model 1 plus genomics, 151 patients). The performance of clinical-radiological interpretation was assessed in terms of area under the curve (AUC), sensitivity, specificity, and accuracy, and the average performance of Models 1 and 2 was also calculated. In total, 262/300 patients were classified as high-risk and 38/300 as no-risk. Clinical-radiological interpretation by the two radiologists achieved AUCs of 0.58-0.82 (95% CI: 0.52-0.63 and 0.76-0.85, p < 0.001, respectively), sensitivity of 67.9-93.8%, specificity of 47.4-68.4%, and accuracy of 65.3-90.7%. Model 1 yielded an AUC of 0.74 (95% CI: 0.61-0.88, p < 0.005), sensitivity of 86%, specificity of 48%, and accuracy of 81%. Model 2 reached an AUC of 0.84 (95% CI: 0.68-0.99, p < 0.005), sensitivity of 88%, specificity of 63%, and accuracy of 84%. The radiogenomics model outperformed radiological interpretation in identifying high-risk colon cancer.
Question: Can this radiogenomic model identify high-risk stage II and III colon cancer in a preoperative clinical setting? Findings: The radiogenomics model outperformed both the radiomics model and radiological interpretation, reducing the risk of improper staging and incorrect treatment choices. Clinical relevance: The radiogenomics model proved superior to radiological interpretation and radiomics in identifying high-risk colon cancer, and could therefore be promising for stratifying high-risk and low-risk patients.
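LASSO feature selection works by driving the coefficients of uninformative features exactly to zero, leaving a sparse feature set for the downstream model. A didactic coordinate-descent sketch (a stand-in for the library implementation the authors used, not their code):

```python
import numpy as np

def lasso_cd(X: np.ndarray, y: np.ndarray, alpha: float, n_iter: int = 200) -> np.ndarray:
    """Lasso via cyclic coordinate descent with soft-thresholding.

    Minimises (1/2n) * ||y - Xw||^2 + alpha * ||w||_1. Features whose
    coefficient ends up exactly zero are discarded from the model.
    """
    n, p = X.shape
    w = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n  # per-feature curvature terms
    for _ in range(n_iter):
        for j in range(p):
            # partial residual: remove feature j's current contribution
            residual = y - X @ w + X[:, j] * w[j]
            rho = X[:, j] @ residual / n
            # soft-thresholding: small correlations are zeroed out
            w[j] = np.sign(rho) * max(abs(rho) - alpha, 0.0) / col_sq[j]
    return w
```

The soft-thresholding step is what performs selection: any feature whose (partial) correlation with the residual falls below `alpha` gets a zero coefficient, which is why LASSO suits high-dimensional radiomic feature sets.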

Current State of Artificial Intelligence Model Development in Obstetrics.

Devoe LD, Muhanna M, Maher J, Evans MI, Klein-Seetharaman J

PubMed · Jun 5, 2025
Publications on artificial intelligence (AI) applications have dramatically increased for most medical specialties, including obstetrics. Here, we review the most recent pertinent publications on AI programs in obstetrics, describe trends in AI applications for specific obstetric problems, and assess AI's possible effects on obstetric care. Searches were performed in PubMed (MeSH), MEDLINE, Ovid, ClinicalTrials.gov, Google Scholar, and Web of Science using a combination of keywords and text words related to "obstetrics," "pregnancy," "artificial intelligence," "machine learning," "deep learning," and "neural networks," for articles published between June 1, 2019, and May 31, 2024. A total of 1,768 articles met at least one search criterion. After eliminating reviews, duplicates, retractions, inactive research protocols, unspecified AI programs, and non-English-language articles, 207 publications remained for further review. Most studies were conducted outside of the United States, were published in nonobstetric journals, and focused on risk prediction. Study population sizes ranged widely from 10 to 953,909, and model performance abilities also varied widely. Evidence quality was assessed by the description of model construction, predictive accuracy, and whether validation had been performed. Most studies had patient groups differing considerably from U.S. populations, rendering their generalizability to U.S. patients uncertain. Artificial intelligence ultrasound applications focused on imaging issues are those most likely to influence current obstetric care. Other promising AI models include early risk screening for spontaneous preterm birth, preeclampsia, and gestational diabetes mellitus. The rate at which AI studies are being performed virtually guarantees that numerous applications will eventually be introduced into future U.S. obstetric practice. 
Very few of the models have been deployed in obstetric practice, and more high-quality studies are needed with high predictive accuracy and generalizability. Assuming these conditions are met, there will be an urgent need to educate medical students, postgraduate trainees and practicing physicians to understand how to effectively and safely implement this technology.
