Automatic MRI segmentation of masticatory muscles using deep learning enables large-scale muscle parameter analysis.

Ten Brink RSA, Merema BJ, den Otter ME, Jensma ML, Witjes MJH, Kraeima J

PubMed · Jun 7 2025
Mandibular reconstruction to restore mandibular continuity often relies on patient-specific implants and virtual surgical planning, but current implant designs rarely consider individual biomechanical demands, which are critical for preventing complications such as stress shielding, screw loosening, and implant failure. The inclusion of patient-specific masticatory muscle parameters such as cross-sectional area, vectors, and volume could improve implant success, but manual segmentation of these parameters is time-consuming, limiting large-scale analyses. In this study, a deep learning model was trained for automatic segmentation of eight masticatory muscles on MRI images. Forty T1-weighted MRI scans were segmented manually or via pseudo-labelling for training. Training employed 5-fold cross-validation over 1000 epochs per fold and testing was done on 10 manually segmented scans. The model achieved a mean Dice similarity coefficient (DSC) of 0.88, intersection over union (IoU) of 0.79, precision of 0.87, and recall of 0.89, demonstrating high segmentation accuracy. These results indicate the feasibility of large-scale, reproducible analyses of muscle volumes, directions, and estimated forces. By integrating these parameters into implant design and surgical planning, this method offers a step forward in developing personalized surgical strategies that could improve postoperative outcomes in mandibular reconstruction. This brings the field closer to truly individualized patient care.
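The overlap metrics reported above (DSC, IoU, precision, recall) follow directly from voxel-wise counts of true positives, false positives, and false negatives. The sketch below is not the authors' code, only a minimal NumPy illustration of how these metrics can be computed from a pair of binary segmentation masks.

```python
# Minimal sketch (not the authors' code): overlap metrics from binary masks.
import numpy as np

def overlap_metrics(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8):
    """Dice, IoU, precision, and recall for boolean masks of equal shape."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()   # voxels labelled muscle by both
    fp = np.logical_and(pred, ~truth).sum()  # predicted muscle, actually background
    fn = np.logical_and(~pred, truth).sum()  # missed muscle voxels
    dice = 2 * tp / (2 * tp + fp + fn + eps)
    iou = tp / (tp + fp + fn + eps)
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    return dice, iou, precision, recall

# Toy 3D example
pred = np.zeros((4, 4, 4), dtype=bool); pred[1:3, 1:3, 1:3] = True
truth = np.zeros((4, 4, 4), dtype=bool); truth[1:3, 1:3, :3] = True
print(overlap_metrics(pred, truth))
```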

Diagnostic accuracy of radiomics in risk stratification of gastrointestinal stromal tumors: A systematic review and meta-analysis.

Salimi M, Mohammadi H, Ghahramani S, Nemati M, Ashari A, Imani A, Imani MH

PubMed · Jun 7 2025
This systematic review and meta-analysis aimed to assess the diagnostic accuracy of radiomics in risk stratification of gastrointestinal stromal tumors (GISTs). It focused on evaluating radiomic models as a non-invasive tool in clinical practice. A comprehensive search was conducted across PubMed, Web of Science, EMBASE, Scopus, and Cochrane Library up to May 17, 2025. Studies involving preoperative imaging and radiomics-based risk stratification of GISTs were included. Quality was assessed using the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tool and the Radiomics Quality Score (RQS). Pooled sensitivity, specificity, and area under the curve (AUC) were calculated using bivariate random-effects models. Meta-regression and subgroup analyses were performed to explore heterogeneity. A total of 29 studies were included: 22 (76%) based on computed tomography, 2 (7%) on endoscopic ultrasound, 3 (10%) on magnetic resonance imaging, and 2 (7%) on ultrasound. Of these, 18 studies provided sufficient data for meta-analysis. Pooled sensitivity, specificity, and AUC for radiomics-based GIST risk stratification were 0.84, 0.86, and 0.90 for training cohorts, and 0.84, 0.80, and 0.89 for validation cohorts. QUADAS-2 indicated some risk of bias because thresholds were not pre-specified. The mean RQS was 13.14 ± 3.19. Radiomics holds promise for non-invasive GIST risk stratification, particularly with advanced imaging techniques. However, radiomic models are still in the early stages of clinical adoption. Further research is needed to improve diagnostic accuracy and validate their role alongside conventional methods such as biopsy or surgery.
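For illustration only, the sketch below pools logit-transformed sensitivities with a univariate DerSimonian-Laird random-effects model. This is a simplified stand-in for the bivariate random-effects model the review actually used, and the per-study counts are hypothetical.

```python
# Simplified illustration (not the review's bivariate model): univariate
# DerSimonian-Laird random-effects pooling of logit-transformed sensitivities.
import numpy as np

def pool_sensitivity(tp, fn):
    tp = np.asarray(tp, float); fn = np.asarray(fn, float)
    tp_c, fn_c = tp + 0.5, fn + 0.5                 # continuity correction
    logit = np.log(tp_c / fn_c)                     # logit(sensitivity) per study
    var = 1.0 / tp_c + 1.0 / fn_c                   # within-study variance
    w = 1.0 / var                                   # fixed-effect weights
    fixed = np.sum(w * logit) / np.sum(w)
    q = np.sum(w * (logit - fixed) ** 2)            # Cochran's Q
    tau2 = max(0.0, (q - (len(tp) - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    w_re = 1.0 / (var + tau2)                       # random-effects weights
    pooled_logit = np.sum(w_re * logit) / np.sum(w_re)
    return 1.0 / (1.0 + np.exp(-pooled_logit))      # back-transform to sensitivity

print(pool_sensitivity(tp=[45, 30, 60], fn=[8, 6, 12]))  # hypothetical counts
```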

[Albumin-myosteatosis gauge assisted by an artificial intelligence tool as a prognostic factor in patients with metastatic colorectal cancer].

de Luis Román D, Primo D, Izaola Jáuregui O, Sánchez Lite I, López Gómez JJ

PubMed · Jun 6 2025
To evaluate the prognostic role of the albumin-myosteatosis marker (MAM) in Caucasian patients with metastatic colorectal cancer. This study involved 55 consecutive Caucasian patients diagnosed with metastatic colorectal cancer. CT scans at the L3 vertebral level were analyzed to determine skeletal muscle cross-sectional area, skeletal muscle index (SMI), and skeletal muscle density (SMD). Bioelectrical impedance analysis (BIA) was used to obtain phase angle, reactance, resistance, and SMI-BIA. Albumin and prealbumin were measured. The albumin-myosteatosis marker was calculated as MAM = serum albumin (g/dL) × skeletal muscle density (SMD, in Hounsfield units, HU). Survival was estimated using the Kaplan-Meier method, and comparisons between groups were performed using the log-rank test. The median age was 68.1 ± 9.1 years. Patients were divided into two groups based on the sex-specific median MAM (129.1 AU for women and 156.3 AU for men). Patients in the low-MAM group had significantly lower phase angle and reactance values and were older. These patients also had higher rates of malnutrition by GLIM criteria (odds ratio: 3.8; 95% CI: 1.2-12.9), low muscle mass diagnosed on CT (odds ratio: 3.6; 95% CI: 1.2-10.9), and mortality (odds ratio: 9.82; 95% CI: 1.2-10.9). Kaplan-Meier analysis demonstrated significant differences in 5-year survival between the low- and high-MAM groups (HR: 6.2; 95% CI: 1.10-37.5). The albumin-myosteatosis marker (MAM) may serve as a prognostic marker of survival in Caucasian patients with metastatic colorectal cancer.
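As a small illustration of the marker defined above, the sketch below computes MAM as serum albumin (g/dL) × SMD (HU) and splits patients at the sex-specific median, as the study did; all values and column names are hypothetical.

```python
# Hypothetical data: compute MAM and split at the sex-specific median.
import pandas as pd

df = pd.DataFrame({
    "sex": ["F", "F", "M", "M", "M"],
    "albumin_g_dl": [3.8, 4.1, 3.5, 4.3, 3.9],
    "smd_hu": [32.0, 41.5, 38.2, 45.0, 29.7],
})
df["mam"] = df["albumin_g_dl"] * df["smd_hu"]   # MAM = albumin x SMD
# Low/high MAM group relative to the median within each sex
df["mam_group"] = df.groupby("sex")["mam"].transform(
    lambda s: (s >= s.median()).map({True: "high", False: "low"})
)
print(df)
```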

Quantitative and automatic plan-of-the-day assessment to facilitate adaptive radiotherapy in cervical cancer.

Mason SA, Wang L, Alexander SE, Lalondrelle S, McNair HA, Harris EJ

PubMed · Jun 5 2025
To facilitate implementation of plan-of-the-day (POTD) selection for treating locally advanced cervical cancer (LACC), we developed a POTD assessment tool for CBCT-guided radiotherapy (RT). A female pelvis segmentation model (U-Seg3) is combined with a quantitative standard operating procedure (qSOP) to identify optimal and acceptable plans. 

Approach: The planning CT[i], corresponding structure set[ii], and manually contoured CBCTs[iii] (n=226) from 39 LACC patients treated with POTD (n=11) or non-adaptive RT (n=28) were used to develop U-Seg3, an algorithm incorporating deep-learning and deformable image registration techniques to segment the low-risk clinical target volume (LR-CTV), high-risk CTV (HR-CTV), bladder, rectum, and bowel bag. A single-channel input model (iii only, U-Seg1) was also developed. Contoured CBCTs from the POTD patients were (a) reserved for U-Seg3 validation/testing, (b) audited to determine optimal and acceptable plans, and (c) used to empirically derive a qSOP that maximised classification accuracy. 

Main Results: The median [interquartile range] DSC between manual and U-Seg3 contours was 0.83 [0.80], 0.78 [0.13], 0.94 [0.05], 0.86 [0.09], and 0.90 [0.05] for the LR-CTV, HR-CTV, bladder, rectum, and bowel bag, respectively. These were significantly higher than U-Seg1 in all structures but the bladder. The qSOP classified plans as acceptable if they met target coverage thresholds (LR-CTV ≥ 99%, HR-CTV ≥ 99.8%), with lower LR-CTV coverage (≥ 95%) sometimes allowed. The acceptable plan minimising bowel irradiation was considered optimal unless substantial bladder sparing could be achieved. With U-Seg3 embedded in the qSOP, optimal and acceptable plans were identified in 46/60 and 57/60 cases, respectively.
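A hedged sketch of the kind of decision rule the qSOP encodes is shown below. The coverage thresholds follow the description above, while the relaxed LR-CTV criterion, the dose surrogates, and the bladder-sparing trade-off are simplified assumptions, not the published procedure.

```python
# Simplified sketch of a qSOP-style plan classification (assumed thresholds above).
from dataclasses import dataclass

@dataclass
class PlanMetrics:
    lr_ctv_coverage: float   # % of LR-CTV receiving the prescription dose
    hr_ctv_coverage: float   # % of HR-CTV receiving the prescription dose
    bowel_dose: float        # surrogate bowel-irradiation metric (assumption)
    bladder_dose: float      # surrogate bladder metric (assumption)

def is_acceptable(p: PlanMetrics, allow_relaxed_lr_ctv: bool = False) -> bool:
    lr_threshold = 95.0 if allow_relaxed_lr_ctv else 99.0
    return p.lr_ctv_coverage >= lr_threshold and p.hr_ctv_coverage >= 99.8

def select_optimal(plans: list[PlanMetrics], bladder_sparing_margin: float = 0.0):
    """Pick the acceptable plan minimising bowel dose, unless another acceptable
    plan spares the bladder by more than `bladder_sparing_margin` (simplified)."""
    acceptable = [p for p in plans if is_acceptable(p)]
    if not acceptable:
        return None
    best_bowel = min(acceptable, key=lambda p: p.bowel_dose)
    best_bladder = min(acceptable, key=lambda p: p.bladder_dose)
    if best_bladder is not best_bowel and \
            best_bowel.bladder_dose - best_bladder.bladder_dose > bladder_sparing_margin:
        return best_bladder
    return best_bowel
```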

Significance: U-Seg3 outperforms U-Seg1 and all known CBCT-based female pelvis segmentation models. The tool combining U-Seg3 and the qSOP identifies optimal plans with accuracy equivalent to that of two observers. In an implementation strategy in which this tool serves as the second observer, plan-selection confidence and decision-making time could be improved while reducing the required number of POTD-trained radiographers by 50%.


Automatic cervical tumors segmentation in PET/MRI by parallel encoder U-net.

Liu S, Tan Z, Gong T, Tang X, Sun H, Shang F

PubMed · Jun 5 2025
Automatic segmentation of cervical tumors is important for quantitative analysis and radiotherapy planning. A parallel encoder U-Net (PEU-Net) integrating the multi-modality information of PET/MRI was proposed to segment cervical tumors; it consists of two parallel encoders with the same structure for the PET and MR images. The features of the two modalities were extracted separately and fused at each layer of the decoder. A Res2Net module on the skip connections aggregated features at various scales and refined the segmentation. PET/MRI images of 165 patients with cervical cancer were included in this study. U-Net, TransUNet, and nnU-Net with single- or multi-modality input (PET and/or T2WI) were used for comparison. The Dice similarity coefficient (DSC) on tumor volumes (DSC3d), and the DSC and 95th percentile Hausdorff distance (HD95) on tumor slices (DSC2d), were calculated to evaluate performance. The proposed PEU-Net exhibited the best performance (DSC3d: 0.726 ± 0.204, HD95: 4.603 ± 4.579 mm), and its DSC2d (0.871 ± 0.113) was comparable to the best result of TransUNet with PET/MRI (0.873 ± 0.125). Networks with multi-modality input outperformed those with single-modality input. The results show that the proposed PEU-Net uses multi-modality information more effectively through its redesigned structure and achieves competitive performance.
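The parallel-encoder idea can be sketched in a few lines of PyTorch: two structurally identical encoders process the PET and MR inputs, and their per-level features are concatenated and fused in the decoder. This is a minimal illustration with illustrative layer sizes and depth, not the published PEU-Net; the Res2Net skip-connection module is omitted.

```python
# Minimal sketch of a two-encoder U-Net-style fusion (illustrative sizes only).
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )

class Encoder(nn.Module):
    def __init__(self, ch=(1, 16, 32)):
        super().__init__()
        self.blocks = nn.ModuleList([conv_block(ch[i], ch[i + 1]) for i in range(len(ch) - 1)])
        self.pool = nn.MaxPool2d(2)

    def forward(self, x):
        feats = []
        for block in self.blocks:
            x = block(x)
            feats.append(x)        # keep per-level features for fusion
            x = self.pool(x)
        return feats

class ParallelEncoderUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc_pet, self.enc_mr = Encoder(), Encoder()
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.dec2 = conv_block(32 + 32, 32)          # fuse deepest PET + MR features
        self.dec1 = conv_block(32 + 16 + 16, 16)     # fuse upsampled features with level-1 features
        self.head = nn.Conv2d(16, 1, 1)

    def forward(self, pet, mr):
        p1, p2 = self.enc_pet(pet)
        m1, m2 = self.enc_mr(mr)
        x = self.dec2(torch.cat([p2, m2], dim=1))
        x = self.dec1(torch.cat([self.up(x), p1, m1], dim=1))
        return torch.sigmoid(self.head(x))           # tumor probability map

# pet, mr = torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)
# print(ParallelEncoderUNet()(pet, mr).shape)        # -> torch.Size([1, 1, 64, 64])
```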

Multitask deep learning model based on multimodal data for predicting prognosis of rectal cancer: a multicenter retrospective study.

Ma Q, Meng R, Li R, Dai L, Shen F, Yuan J, Sun D, Li M, Fu C, Li R, Feng F, Li Y, Tong T, Gu Y, Sun Y, Shen D

PubMed · Jun 5 2025
Prognostic prediction is crucial for guiding individualized treatment of patients with rectal cancer. We aimed to develop and validate a multitask deep learning model for predicting prognosis in rectal cancer patients. This retrospective study enrolled 321 rectal cancer patients (training set: 212; internal testing set: 53; external testing set: 56) who directly underwent total mesorectal excision at five hospitals between March 2014 and April 2021. A multitask deep learning model was developed to simultaneously predict recurrence/metastasis and disease-free survival (DFS). The model integrated clinicopathologic data and multiparametric magnetic resonance imaging (MRI), including diffusion kurtosis imaging (DKI), without requiring tumor segmentation. The receiver operating characteristic (ROC) curve and Harrell's concordance index (C-index) were used to evaluate predictive performance. The deep learning model achieved good discrimination of recurrence/metastasis, with area under the curve (AUC) values of 0.885, 0.846, and 0.797 in the training, internal testing, and external testing sets, respectively. Furthermore, the model successfully predicted DFS in the training set (C-index: 0.812), internal testing set (C-index: 0.794), and external testing set (C-index: 0.733), and classified patients into significantly distinct high- and low-risk groups (p < 0.05). The multitask deep learning model, incorporating clinicopathologic data and multiparametric MRI, effectively predicted both recurrence/metastasis and survival for patients with rectal cancer. It has the potential to serve as a tool for risk stratification and to assist individualized treatment decisions.
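Harrell's C-index, used here to evaluate the DFS predictions, is the fraction of comparable patient pairs in which the patient with the earlier event also received the higher predicted risk. The sketch below is not the authors' code; it shows a straightforward implementation on hypothetical follow-up data.

```python
# Minimal sketch of Harrell's concordance index (hypothetical data).
import numpy as np

def harrell_c_index(time, event, risk):
    """time: follow-up times; event: 1 if a DFS event was observed, 0 if censored;
    risk: model-predicted risk scores (higher = earlier expected event)."""
    time, event, risk = map(np.asarray, (time, event, risk))
    concordant, comparable = 0.0, 0
    for i in range(len(time)):
        if not event[i]:
            continue                      # pairs are anchored on an observed event
        for j in range(len(time)):
            if time[j] > time[i]:         # patient j survived longer than patient i
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1
                elif risk[i] == risk[j]:
                    concordant += 0.5     # ties count as half-concordant
    return concordant / comparable if comparable else float("nan")

print(harrell_c_index(time=[12, 30, 24, 40], event=[1, 0, 1, 0], risk=[0.9, 0.2, 0.6, 0.1]))
```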

CT-based radiogenomic analysis to predict high-risk colon cancer (ATTRACT): a multicentric trial.

Caruso D, Polici M, Zerunian M, Monterubbiano A, Tarallo M, Pilozzi E, Belloni L, Scafetta G, Valanzuolo D, Pugliese D, De Santis D, Vecchione A, Mercantini P, Iannicelli E, Fiori E, Laghi A

PubMed · Jun 5 2025
Clinical staging of colon cancer on CT is subject to several biases, and a radiogenomics approach has been proposed as an alternative. The study aimed to test the performance of a radiogenomics approach in identifying high-risk colon cancer. ATTRACT is a multicentric trial registered in ClinicalTrials.gov (NCT06108310). Three hundred non-metastatic colon cancer patients were retrospectively enrolled and divided into two groups, high-risk and no-risk, according to pathological staging. Radiological evaluations were performed by two abdominal radiologists. Genomic data were available for 151 patients. The baseline CT scans were used for the radiological assessment and for 3D cancer segmentation. One expert radiologist performed volumetric cancer segmentation on the baseline portal-phase CT scans using open-source software (3DSlicer v4.10.2). The classical LASSO, implemented with a machine-learning library, was used to select the optimal features and build Model 1 (clinical-radiological plus radiomic features, 300 patients) and Model 2 (Model 1 plus genomics, 151 patients). The performance of the clinical-radiological interpretation was assessed in terms of area under the curve (AUC), sensitivity, specificity, and accuracy. The average performance of Models 1 and 2 was also calculated. In total, 262/300 patients were classified as high-risk and 38/300 as no-risk. Clinical-radiological interpretation by the two radiologists achieved AUCs of 0.58-0.82 (95% CI: 0.52-0.63 and 0.76-0.85, p < 0.001, respectively), sensitivity of 67.9-93.8%, specificity of 47.4-68.4%, and accuracy of 65.3-90.7%. Model 1 yielded an AUC of 0.74 (95% CI: 0.61-0.88, p < 0.005), sensitivity of 86%, specificity of 48%, and accuracy of 81%. Model 2 reached an AUC of 0.84 (95% CI: 0.68-0.99, p < 0.005), sensitivity of 88%, specificity of 63%, and accuracy of 84%. The radiogenomics model outperformed radiological interpretation in identifying high-risk colon cancer. Question: Can this radiogenomic model identify high-risk stage II and III colon cancer in a preoperative clinical setting? Findings: The radiogenomics model outperformed both the radiomics model and radiological interpretation, reducing the risk of improper staging and incorrect treatment options. Clinical relevance: The radiogenomics model was superior to radiological interpretation and radiomics in identifying high-risk colon cancer and could therefore be promising for stratifying high-risk and low-risk patients.
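A hedged sketch of LASSO-style feature selection of the kind described above is shown below, using an L1-penalised logistic regression from scikit-learn on synthetic stand-in data; the regularisation strength, feature counts, and labels are illustrative only, not the trial's pipeline.

```python
# Illustrative sketch: L1-penalised feature selection on synthetic radiomic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 100))            # stand-in for clinical-radiological + radiomic features
y = rng.integers(0, 2, size=300)           # stand-in for high-risk (1) vs no-risk (0) labels

# The L1 penalty drives uninformative coefficients to exactly zero.
model = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="l1", solver="liblinear", C=0.1, max_iter=1000),
)
model.fit(X, y)
coefs = model.named_steps["logisticregression"].coef_.ravel()
selected = np.flatnonzero(coefs != 0)      # indices of retained features
print(f"{selected.size} features retained out of {X.shape[1]}")
```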

A radiogenomics study on ¹⁸F-FDG PET/CT in endometrial cancer by a novel deep learning segmentation algorithm.

Li X, Shi W, Zhang Q, Lin X, Sun H

PubMed · Jun 5 2025
To develop an automated PET/CT segmentation method and a radiomics model to predict mismatch repair (MMR) and TP53 gene expression in endometrial cancer patients, and to examine the effect of gene expression variability on image texture features. Two datasets were generated in this retrospective, exploratory study. The first, with 123 histopathologically confirmed cases, was used to develop an endometrial cancer segmentation model. The second, including 249 patients for MMR prediction and 179 for TP53 mutation prediction, was derived from PET/CT examinations and immunohistochemical analysis. A PET-based Attention U-Net was used for segmentation, followed by region growing on the co-registered PET and CT images. Feature models were constructed from PET, CT, and combined data, with model selection based on performance comparison. The segmentation model achieved 99.99% training accuracy with a Dice coefficient of 97.35%, and 99.93% validation accuracy with a Dice coefficient of 84.81%. The combined PET + CT model demonstrated superior predictive power for both genes, with AUCs of 0.8146 and 0.8102 for MMR, and 0.8833 and 0.8150 for TP53, in the training and test sets, respectively. MMR-related protein heterogeneity and TP53 expression differences were predominantly reflected in the PET images. An efficient deep learning algorithm for endometrial cancer segmentation has been established, highlighting the enhanced predictive power of integrated PET and CT radiomics for MMR and TP53 expression. The study underscores the distinct influences of MMR and TP53 gene expression on tumor characteristics.
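Region growing, applied here after the network segmentation, iteratively adds neighbouring voxels whose intensity is close to a seed value. The 2D NumPy sketch below is a simplified illustration with an arbitrary seed and threshold, not the study's implementation on co-registered PET/CT volumes.

```python
# Simplified 2D region-growing sketch (illustrative seed and threshold).
import numpy as np
from collections import deque

def region_grow(image: np.ndarray, seed: tuple, threshold: float) -> np.ndarray:
    """Grow a region from `seed`, adding 4-connected neighbours whose intensity
    differs from the seed value by less than `threshold`."""
    mask = np.zeros(image.shape, dtype=bool)
    seed_val = image[seed]
    queue = deque([seed])
    mask[seed] = True
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < image.shape[0] and 0 <= nc < image.shape[1]
                    and not mask[nr, nc]
                    and abs(image[nr, nc] - seed_val) < threshold):
                mask[nr, nc] = True
                queue.append((nr, nc))
    return mask

# Toy PET-like slice: a bright "lesion" on a dim background
img = np.full((32, 32), 1.0)
img[10:20, 12:22] = 8.0
print(region_grow(img, seed=(15, 16), threshold=2.0).sum())  # -> 100 pixels
```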

Current State of Artificial Intelligence Model Development in Obstetrics.

Devoe LD, Muhanna M, Maher J, Evans MI, Klein-Seetharaman J

PubMed · Jun 5 2025
Publications on artificial intelligence (AI) applications have dramatically increased for most medical specialties, including obstetrics. Here, we review the most recent pertinent publications on AI programs in obstetrics, describe trends in AI applications for specific obstetric problems, and assess AI's possible effects on obstetric care. Searches were performed in PubMed (MeSH), MEDLINE, Ovid, ClinicalTrials.gov, Google Scholar, and Web of Science using a combination of keywords and text words related to "obstetrics," "pregnancy," "artificial intelligence," "machine learning," "deep learning," and "neural networks," for articles published between June 1, 2019, and May 31, 2024. A total of 1,768 articles met at least one search criterion. After eliminating reviews, duplicates, retractions, inactive research protocols, unspecified AI programs, and non-English-language articles, 207 publications remained for further review. Most studies were conducted outside of the United States, were published in nonobstetric journals, and focused on risk prediction. Study population sizes ranged widely from 10 to 953,909, and model performance abilities also varied widely. Evidence quality was assessed by the description of model construction, predictive accuracy, and whether validation had been performed. Most studies had patient groups differing considerably from U.S. populations, rendering their generalizability to U.S. patients uncertain. Artificial intelligence ultrasound applications focused on imaging issues are those most likely to influence current obstetric care. Other promising AI models include early risk screening for spontaneous preterm birth, preeclampsia, and gestational diabetes mellitus. The rate at which AI studies are being performed virtually guarantees that numerous applications will eventually be introduced into future U.S. obstetric practice. Very few of the models have been deployed in obstetric practice, and more high-quality studies are needed with high predictive accuracy and generalizability. Assuming these conditions are met, there will be an urgent need to educate medical students, postgraduate trainees and practicing physicians to understand how to effectively and safely implement this technology.