
First experiences with an adaptive pelvic radiotherapy system: Analysis of treatment times and learning curve.

Benzaquen D, Taussky D, Fave V, Bouveret J, Lamine F, Letenneur G, Halley A, Solmaz Y, Champion A

PubMed · Jun 16, 2025
The Varian Ethos system allows not only plan adaptation on the treatment table but also automated contouring with the aid of artificial intelligence. This study evaluates the initial clinical implementation of an adaptive pelvic radiotherapy system, focusing on treatment times and the associated learning curve. We analyzed data from 903 consecutive treatments, covering most urogenital cancers, at our center. Treatment time was measured from the first cone-beam computed tomography (CBCT) scan used for replanning until the end of treatment. To assess whether treatments became shorter over time, we grouped the dates of the first treatment into 3-month quartiles; differences between groups were tested with t-tests. The mean time from the first CBCT scan to the end of treatment was 25.9 min (standard deviation: 6.9 min). Treatment time depended on the number of planning target volumes (PTVs) and on whether the pelvic lymph nodes were treated: it was 37% longer when the pelvic lymph nodes were treated and 26% longer when there were more than two PTVs. There was a learning curve: in linear regression analysis, both the treatment-date quartile (odds ratio [OR]: 1.3, 95% confidence interval [CI]: 0.70-1.8, P<0.001) and the number of PTVs (OR: 3.0, 95% CI: 2.6-3.4, P<0.001) were predictive of treatment time. Approximately two-thirds of treatments were delivered within 33 min. Treatment time depended strongly on the number of separate PTVs, and there was a continuous learning curve.
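
The regression described above can be sketched as follows. This is a minimal illustration with synthetic data: the column names (`quartile`, `n_ptv`, `time_min`) and the generated values are assumptions standing in for the study's actual dataset, which is not available here.

```python
# Sketch of a treatment-time regression on date quartile and PTV count.
# Synthetic data only; variable names are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 903
df = pd.DataFrame({
    # 3-month quartile of the first treatment date (1 = earliest)
    "quartile": rng.integers(1, 5, size=n),
    # number of planning target volumes (PTVs)
    "n_ptv": rng.integers(1, 4, size=n),
})
# Synthetic outcome: later quartiles faster, more PTVs slower
df["time_min"] = (30 - 1.5 * df["quartile"] + 4.0 * (df["n_ptv"] - 1)
                  + rng.normal(0, 6, size=n))

# Ordinary least squares: treatment time vs. quartile and PTV count
model = smf.ols("time_min ~ quartile + n_ptv", data=df).fit()
print(model.summary().tables[1])
```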

An innovative machine learning-based algorithm for diagnosing pediatric ovarian torsion.

Boztas AE, Sencan E, Payza AD, Sencan A

PubMed · Jun 16, 2025
We aimed to develop a machine-learning (ML) algorithm based on physical examination, sonographic findings, and laboratory markers. We retrospectively analyzed data from 70 patients with confirmed ovarian torsion followed and treated at our clinic, and from 73 control patients who presented to the emergency department with similar complaints between 2013 and 2023 but in whom ovarian torsion was not detected on ultrasound. Sonographic findings, laboratory values, and clinical status were examined and fed into three supervised ML systems to identify and develop viable decision algorithms. The presence of nausea/vomiting and the duration of symptoms were statistically significant for ovarian torsion (p<0.05), whereas abdominal pain and a palpable mass on physical examination were not (p>0.05). White blood cell count (WBC), neutrophil/lymphocyte ratio (NLR), systemic immune-inflammation index (SII), systemic inflammation response index (SIRI), and high C-reactive protein values were highly significant predictors of torsion (p<0.001, p<0.05). Ovarian size ratio, medialization, the follicular ring sign, and free fluid in the pelvis on ultrasound were significant in the torsion group (p<0.001). We used supervised ML algorithms, including decision trees, random forests, and LightGBM, to classify patients as either controls or torsion cases. Evaluated with 5-fold cross-validation, the decision tree model achieved an average F1-score of 98%, an accuracy of 98%, and a specificity of 100% across folds. This study represents the first ML algorithm integrating clinical, laboratory, and ultrasonographic findings for diagnosing pediatric ovarian torsion, with over 98% accuracy.
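
The evaluation reported above (decision tree, 5-fold cross-validation, F1/accuracy/specificity) could be sketched as below. The features and labels are random placeholders for the clinical, laboratory, and sonographic variables; only the scoring setup reflects the described method.

```python
# Decision tree scored with 5-fold cross-validation on synthetic data.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_validate
from sklearn.metrics import make_scorer, recall_score

rng = np.random.default_rng(42)
X = rng.normal(size=(143, 8))      # placeholders for WBC, NLR, SII, SIRI, CRP...
y = rng.integers(0, 2, size=143)   # 1 = torsion, 0 = control

scoring = {
    "f1": "f1",
    "accuracy": "accuracy",
    # specificity = recall of the negative (control) class
    "specificity": make_scorer(recall_score, pos_label=0),
}
cv = cross_validate(DecisionTreeClassifier(random_state=0), X, y,
                    cv=5, scoring=scoring)
for name in scoring:
    print(name, cv[f"test_{name}"].mean().round(3))
```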

Three-dimensional multimodal imaging for predicting early recurrence of hepatocellular carcinoma after surgical resection.

Peng J, Wang J, Zhu H, Jiang P, Xia J, Cui H, Hong C, Zeng L, Li R, Li Y, Liang S, Deng Q, Deng H, Xu H, Dong H, Xiao L, Liu L

PubMed · Jun 16, 2025
High tumor recurrence after surgery remains a significant challenge in managing hepatocellular carcinoma (HCC). We aimed to construct a multimodal model to forecast early recurrence of HCC after surgical resection and to explore the associated biological mechanisms. Overall, 519 patients with HCC were included from three medical centers: 433 patients from Nanfang Hospital formed the training cohort, and 86 patients from the other two hospitals comprised the validation cohort. Radiomics and deep learning (DL) models were developed using contrast-enhanced computed tomography images. Radiomics feature visualization and gradient-weighted class activation mapping were applied to improve interpretability. A multimodal model (MM-RDLM) was constructed by integrating the radiomics and DL models. Associations between MM-RDLM and recurrence-free survival (RFS) and overall survival were analyzed. Gene set enrichment analysis (GSEA) and multiplex immunohistochemistry (mIHC) were used to investigate the biological mechanisms. Models based on hepatic arterial phase images showed the best predictive performance, with the radiomics and DL models achieving areas under the curve (AUCs) of 0.770 (95% confidence interval [CI]: 0.725-0.815) and 0.846 (95% CI: 0.807-0.886), respectively, in the training cohort. MM-RDLM achieved an AUC of 0.955 (95% CI: 0.937-0.972) in the training cohort and 0.930 (95% CI: 0.876-0.984) in the validation cohort. MM-RDLM (high vs. low) was notably linked to RFS in both the training (hazard ratio [HR] = 7.80 [5.74-10.61], P < 0.001) and validation (HR = 10.46 [4.96-22.68], P < 0.001) cohorts. GSEA revealed enrichment of the natural killer cell-mediated cytotoxicity pathway in the MM-RDLM-low cohort, and mIHC showed significantly higher percentages of CD3-, CD56-, and CD8-positive cells in the MM-RDLM-low group. The MM-RDLM model demonstrated strong predictive performance for early postoperative recurrence of HCC. These findings help identify patients at high risk of early recurrence and provide insights into the potential underlying biological mechanisms.
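
The abstract does not specify how the radiomics and DL models are integrated; one common approach is late fusion, where each model's predicted probability becomes a feature of a small combiner. The sketch below assumes that design, with synthetic probabilities, purely for illustration.

```python
# Hypothetical late fusion of a radiomics model and a DL model via
# logistic regression; synthetic data, not the study's method or results.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 433                                   # training-cohort size reported above
y = rng.integers(0, 2, size=n)            # 1 = early recurrence
p_radiomics = np.clip(y * 0.3 + rng.random(n) * 0.7, 0, 1)
p_deep = np.clip(y * 0.4 + rng.random(n) * 0.6, 0, 1)

X = np.column_stack([p_radiomics, p_deep])  # two per-model probabilities
fusion = LogisticRegression().fit(X, y)
p_fused = fusion.predict_proba(X)[:, 1]
print("fused AUC:", round(roc_auc_score(y, p_fused), 3))
```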

Improving Prostate Gland Segmentation Using Transformer-Based Architectures

Shatha Abudalou

arXiv preprint · Jun 16, 2025
Inter-reader variability and cross-site domain shift challenge the automatic segmentation of prostate anatomy on T2-weighted MRI. This study investigates whether transformer models can retain precision amid such heterogeneity. We compare the performance of UNETR and SwinUNETR in prostate gland segmentation against our previous 3D UNet model [1], based on 546 T2-weighted MRI volumes annotated by two independent experts. Three training strategies were analyzed: a single-cohort dataset, a 5-fold cross-validated mixed cohort, and a gland-size-based dataset. Hyperparameters were tuned with Optuna. The test set, drawn from an independent population of readers, served as the evaluation endpoint (Dice Similarity Coefficient). In single-reader training, SwinUNETR achieved average Dice scores of 0.816 for Reader #1 and 0.860 for Reader #2, while UNETR scored 0.800 and 0.833, respectively, compared with the baseline UNet's 0.825 for Reader #1 and 0.851 for Reader #2. In cross-validated mixed training, SwinUNETR achieved average Dice scores of 0.858 for Reader #1 and 0.867 for Reader #2. For the gland-size-based dataset, SwinUNETR achieved average Dice scores of 0.902 for the Reader #1 subset and 0.894 for the Reader #2 subset using the five-fold mixed training strategy (Reader #1, n=53; Reader #2, n=87) on the larger gland-size subsets, where UNETR performed poorly. Our findings demonstrate that global and shifted-window self-attention effectively reduce sensitivity to label noise and class imbalance, improving the Dice score over CNNs by up to five points while maintaining computational efficiency. This supports the robustness of SwinUNETR for clinical deployment.
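
The evaluation endpoint here is the Dice Similarity Coefficient, DSC = 2|A ∩ B| / (|A| + |B|) for a predicted mask A and a reference mask B. A minimal NumPy implementation (the toy volumes are placeholders for real prostate masks):

```python
# Dice Similarity Coefficient for boolean segmentation masks.
import numpy as np

def dice(pred: np.ndarray, ref: np.ndarray) -> float:
    pred, ref = pred.astype(bool), ref.astype(bool)
    denom = pred.sum() + ref.sum()
    if denom == 0:        # both masks empty: define DSC as 1
        return 1.0
    return 2.0 * np.logical_and(pred, ref).sum() / denom

# Toy 3D volumes standing in for prostate gland masks
rng = np.random.default_rng(0)
a = rng.random((32, 32, 16)) > 0.5
b = rng.random((32, 32, 16)) > 0.5
print(round(dice(a, b), 3))   # ~0.5 for independent random 50% masks
```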

AI-based automatic measurement of split renal function in [<sup>18</sup>F]PSMA-1007 PET/CT.

Valind K, Ulén J, Gålne A, Jögi J, Minarik D, Trägårdh E

PubMed · Jun 16, 2025
Prostate-specific membrane antigen (PSMA) is an important target for positron emission tomography (PET)/computed tomography (CT) in prostate cancer. In addition to being overexpressed in prostate cancer cells, PSMA is expressed in healthy cells of the proximal tubules of the kidneys. Consequently, PSMA PET is being explored for renal functional imaging. Left and right renal uptake of PSMA-targeted radiopharmaceuticals has shown strong correlations with split renal function (SRF) as determined by other methods. Manual segmentation of the kidneys in PET images is, however, time-consuming, making this method of measuring SRF impractical. In this study, we designed, trained, and validated an artificial intelligence (AI) model for automatic renal segmentation and measurement of SRF in [<sup>18</sup>F]PSMA-1007 PET images. Kidneys were segmented in 135 [<sup>18</sup>F]PSMA-1007 PET/CT studies used to train the AI model, which was then evaluated in 40 test studies. Left renal function percentage (LRF%) measurements ranged from 40% to 67%. Spearman correlation coefficients for LRF% ranged between 0.98 and 0.99 when comparing segmentations made by three human readers and the AI model, and the largest LRF% difference between any two measurements in a single case was 3 percentage points. The AI model produced measurements similar to those of human readers, showing that automatic measurement of SRF in PSMA PET is feasible. A potential use could be to provide additional data when investigating renal functional impairment in patients treated for prostate cancer.
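
Once the kidneys are segmented, the split-function calculation itself is straightforward: sum the uptake inside each kidney mask and express the left kidney's share as a percentage. The sketch below assumes that definition; the array shapes, masks, and variable names are illustrative, not the paper's pipeline.

```python
# LRF% = left-kidney uptake / (left + right uptake) * 100,
# given a PET volume and two boolean kidney masks. Toy data only.
import numpy as np

def left_renal_function_pct(pet: np.ndarray,
                            left_mask: np.ndarray,
                            right_mask: np.ndarray) -> float:
    left = pet[left_mask.astype(bool)].sum()
    right = pet[right_mask.astype(bool)].sum()
    return 100.0 * left / (left + right)

rng = np.random.default_rng(0)
pet = rng.random((64, 64, 32))             # stand-in for a PET volume
left = np.zeros(pet.shape, dtype=bool);  left[10:20, 10:20, 10:20] = True
right = np.zeros(pet.shape, dtype=bool); right[40:50, 10:20, 10:20] = True
print(f"LRF%: {left_renal_function_pct(pet, left, right):.1f}")
```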

Artificial intelligence (AI) and CT in abdominal imaging: image reconstruction and beyond.

Pisuchpen N, Srinivas Rao S, Noda Y, Kongboonvijit S, Rezaei A, Kambadakone A

PubMed · Jun 16, 2025
Computed tomography (CT) is a cornerstone of abdominal imaging, playing a vital role in accurate diagnosis, treatment planning, and disease monitoring. The evolution of artificial intelligence (AI) in imaging has introduced deep learning-based reconstruction (DLR) techniques that enhance image quality, reduce radiation dose, and improve workflow efficiency. Traditional image reconstruction methods, including filtered back projection (FBP) and iterative reconstruction (IR), have limitations such as high noise levels and artificial image texture. DLR overcomes these challenges by leveraging convolutional neural networks to generate high-fidelity images while preserving anatomical detail. Recent vendor-specific and vendor-agnostic DLR algorithms, such as TrueFidelity, AiCE, and Precise Image, have demonstrated significant improvements in contrast-to-noise ratio, lesion detection, and diagnostic confidence across abdominal organs, including the liver, pancreas, and kidneys. Furthermore, AI extends beyond image reconstruction to applications such as low-contrast lesion detection, quantitative imaging, and workflow optimization, augmenting radiologists' efficiency and diagnostic accuracy. However, challenges remain in clinical validation, standardization, and widespread adoption. This review explores the principles, advances, and future directions of AI-driven CT image reconstruction and its expanding role in abdominal imaging.
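
One of the image-quality metrics cited above, the contrast-to-noise ratio (CNR), is commonly defined as |mean(lesion ROI) − mean(background ROI)| / std(background ROI); definitions vary between studies. The synthetic image and ROI placement below are assumptions for illustration.

```python
# CNR on a synthetic "CT slice" with a planted lesion.
import numpy as np

def cnr(image: np.ndarray, lesion_roi: np.ndarray, bg_roi: np.ndarray) -> float:
    lesion, bg = image[lesion_roi], image[bg_roi]
    return abs(lesion.mean() - bg.mean()) / bg.std()

rng = np.random.default_rng(0)
img = rng.normal(40.0, 10.0, size=(256, 256))   # noisy background (~HU)
img[100:120, 100:120] += 30.0                   # synthetic lesion
lesion = np.zeros(img.shape, bool); lesion[100:120, 100:120] = True
bg = np.zeros(img.shape, bool);     bg[180:220, 180:220] = True
print(f"CNR: {cnr(img, lesion, bg):.2f}")       # higher = better conspicuity
```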

Predicting mucosal healing in Crohn's disease: development of a deep-learning model based on intestinal ultrasound images.

Ma L, Chen Y, Fu X, Qin J, Luo Y, Gao Y, Li W, Xiao M, Cao Z, Shi J, Zhu Q, Guo C, Wu J

PubMed · Jun 16, 2025
Predicting treatment response in Crohn's disease (CD) is essential for choosing an optimal therapeutic regimen, but relevant models are lacking. This study aimed to develop a deep learning model based on baseline intestinal ultrasound (IUS) images and clinical information to predict mucosal healing. Consecutive CD patients who underwent pretreatment IUS at a tertiary hospital were retrospectively recruited. A total of 1548 longitudinal IUS images of diseased bowel segments were collected and divided into a training cohort and a test cohort. A convolutional neural network was developed to predict mucosal healing after one year of standardized treatment; its efficacy was validated with five-fold internal cross-validation and further tested in the test cohort. A total of 190 patients (68.9% men; mean age 32.3 ± 14.1 years) were enrolled, contributing 1038 IUS images with mucosal healing and 510 images without. The mean area under the curve in the test cohort was 0.73 (95% CI: 0.68-0.78), with a mean sensitivity of 68.1% (95% CI: 60.5-77.4%), specificity of 69.5% (95% CI: 60.1-77.2%), positive predictive value of 80.0% (95% CI: 74.5-84.9%), and negative predictive value of 54.8% (95% CI: 48.0-63.7%). Heat maps of the model's decision-making process revealed that it relied mainly on information from the bowel wall, serosal surface, and surrounding mesentery. In summary, we developed a deep learning model based on IUS images that predicts mucosal healing in CD with notable accuracy; further validation and refinement with multi-center, real-world data are needed. Response to medication is highly variable among patients with CD, and high-resolution IUS images of the intestinal wall may conceal characteristics significant for treatment response; the model generated here predicts that response from pretreatment IUS images and clinical information with an AUC of 0.73.
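
The four operating metrics reported above all derive from a 2×2 confusion matrix, as the short sketch below shows; the label vectors are invented, not the study's data.

```python
# Sensitivity, specificity, PPV, and NPV from a confusion matrix.
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([1, 1, 1, 0, 0, 1, 0, 1, 0, 1])  # 1 = mucosal healing
y_pred = np.array([1, 0, 1, 0, 1, 1, 0, 1, 0, 0])

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("sensitivity:", tp / (tp + fn))   # true-positive rate
print("specificity:", tn / (tn + fp))   # true-negative rate
print("PPV:", tp / (tp + fp))           # positive predictive value
print("NPV:", tn / (tn + fn))           # negative predictive value
```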

Interpretable deep fuzzy network-aided detection of central lymph node metastasis status in papillary thyroid carcinoma.

Wang W, Ning Z, Zhang J, Zhang Y, Wang W

PubMed · Jun 16, 2025
Non-invasive assessment of central lymph node metastasis (CLNM) in patients with papillary thyroid carcinoma (PTC) plays a crucial role in guiding treatment decisions and prognosis planning. This study uses an interpretable deep fuzzy network guided by expert knowledge to predict CLNM status from ultrasound images in patients with PTC. A total of 1019 PTC patients were enrolled, comprising 465 patients with CLNM and 554 without; pathological diagnosis served as the gold standard for metastasis status. Clinical and morphological features of the thyroid were collected as expert knowledge to guide the deep fuzzy network. The network consists of a region of interest (ROI) segmentation module, a knowledge-aware feature extraction module, and a fuzzy prediction module, and was trained on 652 patients, validated on 163, and tested on 204. The model showed promising performance, achieving an area under the receiver operating characteristic curve (AUC) of 0.786 (95% CI 0.720-0.846), accuracy of 0.745 (95% CI 0.681-0.799), precision of 0.727 (95% CI 0.636-0.819), sensitivity of 0.696 (95% CI 0.594-0.789), and specificity of 0.786 (95% CI 0.712-0.864). In addition, the rules of the model's fuzzy system are easy to understand and explain, giving it good interpretability. The expert-knowledge-guided deep fuzzy network predicted the CLNM status of PTC patients with high accuracy and good interpretability, and may serve as an effective tool to guide preoperative clinical decision-making.
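
To make "fuzzy rules" concrete, here is a deliberately simplified, hand-written single-rule example in the spirit of fuzzy prediction. The membership functions, the two features, and the rule itself are invented for illustration; the paper's network learns its rules from images and expert knowledge rather than hard-coding them.

```python
# One hand-crafted fuzzy rule: IF nodule is large AND margin is
# irregular THEN CLNM is likely (AND realized as the product t-norm).
import numpy as np

def triangular(x, a, b, c):
    """Triangular fuzzy membership function rising from a, peaking at b,
    falling to c."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def predict_clnm(nodule_size_mm: float, margin_irregularity: float) -> float:
    large = triangular(nodule_size_mm, 5.0, 20.0, 35.0)       # "large nodule"
    irregular = triangular(margin_irregularity, 0.3, 0.8, 1.0)  # "irregular margin"
    return float(large * irregular)   # rule firing strength in [0, 1]

print(round(predict_clnm(18.0, 0.7), 3))
```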

Imaging-Based AI for Predicting Lymphovascular Space Invasion in Cervical Cancer: Systematic Review and Meta-Analysis.

She L, Li Y, Wang H, Zhang J, Zhao Y, Cui J, Qiu L

PubMed · Jun 16, 2025
The role of artificial intelligence (AI) in enhancing the accuracy of lymphovascular space invasion (LVSI) detection in cervical cancer remains debated. This meta-analysis aimed to evaluate the diagnostic accuracy of imaging-based AI for predicting LVSI in cervical cancer. We conducted a comprehensive literature search across multiple databases, including PubMed, Embase, and Web of Science, identifying studies published up to November 9, 2024. Studies were included if they evaluated the diagnostic performance of imaging-based AI models in detecting LVSI in cervical cancer. We used a bivariate random-effects model to calculate pooled sensitivity and specificity with corresponding 95% confidence intervals, and assessed study heterogeneity using the I² statistic. Of 403 studies identified, 16 (2514 patients) were included. For internal validation sets, the pooled sensitivity, specificity, and area under the curve (AUC) for detecting LVSI were 0.84 (95% CI 0.79-0.87), 0.78 (95% CI 0.75-0.81), and 0.87 (95% CI 0.84-0.90); for external validation sets, they were 0.79 (95% CI 0.70-0.86), 0.76 (95% CI 0.67-0.83), and 0.84 (95% CI 0.81-0.87). In subgroup analysis using the likelihood ratio test, deep learning demonstrated significantly higher sensitivity than machine learning (P=.01), and AI models based on positron emission tomography/computed tomography showed superior sensitivity to those based on magnetic resonance imaging (P=.01). Imaging-based AI, particularly deep learning algorithms, demonstrates promising diagnostic performance for predicting LVSI in cervical cancer. However, the limited external validation datasets and the retrospective nature of the included studies may introduce bias. These findings underscore AI's potential as an auxiliary diagnostic tool, pending further large-scale prospective validation.
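
As a rough illustration of random-effects pooling, the sketch below applies DerSimonian-Laird pooling to logit-transformed sensitivities. Note this is a deliberate simplification: the paper's bivariate model pools sensitivity and specificity jointly, whereas this pools one margin at a time, and the per-study TP/FN counts are invented.

```python
# Univariate DerSimonian-Laird pooling of logit sensitivity (simplified
# stand-in for the bivariate random-effects model; synthetic counts).
import numpy as np

tp = np.array([40, 55, 33, 61])   # true positives per study (assumed)
fn = np.array([ 8,  9, 10,  7])   # false negatives per study (assumed)

logit = np.log((tp + 0.5) / (fn + 0.5))        # logit sensitivity (+0.5 correction)
var = 1.0 / (tp + 0.5) + 1.0 / (fn + 0.5)      # within-study variance
w = 1.0 / var
ybar = np.sum(w * logit) / w.sum()             # fixed-effect weighted mean
q = np.sum(w * (logit - ybar) ** 2)            # Cochran's Q
tau2 = max(0.0, (q - (len(tp) - 1)) /
           (w.sum() - np.sum(w**2) / w.sum())) # between-study variance
w_star = 1.0 / (var + tau2)                    # random-effects weights
pooled = np.sum(w_star * logit) / w_star.sum()
print("pooled sensitivity:", round(1 / (1 + np.exp(-pooled)), 3))
```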

A multimodal deep learning model for detecting endoscopic images of near-infrared fluorescence capsules.

Wang J, Zhou C, Wang W, Zhang H, Zhang A, Cui D

PubMed · Jun 15, 2025
Early screening for gastrointestinal (GI) diseases is critical for preventing cancer development. With the rapid advancement of deep learning, artificial intelligence (AI) has become increasingly prominent in the early detection of GI diseases. Capsule endoscopy is a non-invasive medical imaging technique for examining the gastrointestinal tract. In previous work, we developed a near-infrared fluorescence capsule endoscope (NIRF-CE) capable of exciting and capturing near-infrared (NIR) fluorescence images to identify subtle mucosal microlesions and submucosal abnormalities, while simultaneously capturing conventional white-light images to detect lesions with pronounced morphological changes. However, limitations such as low camera resolution and poor lighting within the gastrointestinal tract may lead to misdiagnosis, and manually reviewing and interpreting large volumes of capsule endoscopy images is time-consuming and error-prone. Deep learning models have shown potential for automatically detecting abnormalities in NIRF-CE images. This study focuses on Retinex-Attention-YOLO (RAY), an improved deep learning model built on the YOLO family of object detectors and based on single-modality image data, which enhances the accuracy and efficiency of anomaly detection, especially under low-light conditions. To further improve detection performance, we also propose a multimodal model, Multimodal-Retinex-Attention-YOLO (MRAY), which combines white-light and fluorescence image data. The dataset consists of images of pig stomachs captured by our NIRF-CE system, simulating the human GI tract; a targeted fluorescent probe accumulates at lesion sites and releases fluorescent signals for imaging, so a bright spot indicates a lesion. The MRAY model achieved a precision of 96.3%, outperforming similar object detection models. Ablation experiments and comparisons with publicly available datasets further validated the model's performance. MRAY shows great promise for the automated detection of GI cancers, ulcers, inflammation, and other conditions in clinical practice.
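
The "Retinex" in RAY/MRAY points to Retinex-style illumination normalization for low-light frames. A classic single-scale Retinex is sketched below as an assumed stand-in; the paper's actual enhancement module is not specified here, and the dark test frame is synthetic.

```python
# Single-scale Retinex: R = log(I) - log(Gaussian_sigma * I), a common
# low-light enhancement; illustrative stand-in for RAY's Retinex stage.
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(img: np.ndarray, sigma: float = 30.0) -> np.ndarray:
    img = img.astype(np.float64) + 1.0            # avoid log(0)
    illumination = gaussian_filter(img, sigma=sigma)
    r = np.log(img) - np.log(illumination)
    r = (r - r.min()) / (r.max() - r.min() + 1e-12)  # rescale for display
    return (255 * r).astype(np.uint8)

frame = np.random.randint(0, 40, size=(240, 240), dtype=np.uint8)  # dark frame
enhanced = single_scale_retinex(frame)
print(enhanced.min(), enhanced.max())
```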