Page 51 of 63630 results

Strategies for Treatment De-escalation in Metastatic Renal Cell Carcinoma.

Gulati S, Nardo L, Lara PN

pubmed · May 30, 2025
Immune checkpoint inhibitors (ICIs) and targeted therapies have revolutionized the management of metastatic renal cell carcinoma (mRCC). Currently, the frontline standard of care for patients with mRCC involves the provision of systemic ICI-based combination therapy with no clear guidelines on holding or de-escalating treatment, even with a complete or partial radiological response. Treatments usually continue until disease progression or unacceptable toxicity, frequently leading to overtreatment, which can elevate the risk of toxicity without providing a corresponding increase in therapeutic efficacy. In addition, the ongoing use of expensive antineoplastic drugs increases the financial burden on the already overstretched health care systems and on patients and their families. De-escalation strategies could be designed by integrating contemporary technologies, such as circulating tumor DNA, and advanced imaging techniques, such as computed tomography (CT) scans, positron emission tomography CT, magnetic resonance imaging, and machine learning models. Treatment de-escalation, when appropriate, can minimize treatment-related toxicities, reduce health care costs, and optimize the patients' quality of life while maintaining effective cancer control. This paper discusses the advantages, challenges, and clinical implications of de-escalation strategies in the management of mRCC. PATIENT SUMMARY: In this report, we describe the burden of overtreatment in patients who are never able to stop treatments for metastatic kidney cancer. We discuss the application of the latest technology that can help in making de-escalation decisions.

A conditional point cloud diffusion model for deformable liver motion tracking via a single arbitrarily-angled x-ray projection.

Xie J, Shao HC, Li Y, Yan S, Shen C, Wang J, Zhang Y

pubmed · May 30, 2025
Deformable liver motion tracking using a single X-ray projection enables real-time motion monitoring and treatment intervention. We introduce a conditional point cloud diffusion model-based framework for accurate and robust liver motion tracking from arbitrarily angled single X-ray projections. We propose a conditional point cloud diffusion model for liver motion tracking (PCD-Liver), which estimates volumetric liver motion by solving deformable vector fields (DVFs) of a prior liver surface point cloud, based on a single X-ray image. It is a patient-specific model with two main components: a rigid alignment model to estimate the liver's overall shifts, and a conditional point cloud diffusion model that further corrects for the liver surface's deformation. Conditioned on the motion-encoded features extracted from a single X-ray projection by a geometry-informed feature pooling layer, the diffusion model iteratively solves detailed liver surface DVFs in a projection angle-agnostic fashion. The liver surface motion solved by PCD-Liver is subsequently fed as the boundary condition into a UNet-based biomechanical model to infer the liver's internal motion and localize liver tumors. A dataset of 10 liver cancer patients was used for evaluation. We used the root mean square error (RMSE) and 95th-percentile Hausdorff distance (HD95) metrics to examine the liver point cloud motion estimation accuracy, and the center-of-mass error (COME) to quantify the liver tumor localization error. The mean (±s.d.) RMSE, HD95, and COME of the prior liver or tumor before motion estimation were 8.82 mm (±3.58 mm), 10.84 mm (±4.55 mm), and 9.72 mm (±4.34 mm), respectively. After PCD-Liver's motion estimation, the corresponding values were 3.63 mm (±1.88 mm), 4.29 mm (±1.75 mm), and 3.46 mm (±2.15 mm). Under highly noisy conditions, PCD-Liver maintained stable performance.
This study presents an accurate and robust framework for liver deformable motion estimation and tumor localization for image-guided radiotherapy.
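The RMSE and 95th-percentile Hausdorff distance used above to score point cloud motion estimates can be computed with plain NumPy. The sketch below is illustrative only, not the authors' code; the toy point cloud and its 2 mm shift are invented for the example:

```python
import numpy as np

def rmse(pred, ref):
    """Root mean square error between corresponding points, shape (N, 3)."""
    return float(np.sqrt(np.mean(np.sum((pred - ref) ** 2, axis=1))))

def hd95(a, b):
    """Symmetric 95th-percentile Hausdorff distance between two point sets."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)  # pairwise distances
    d_ab = d.min(axis=1)  # each point in a to its nearest neighbor in b
    d_ba = d.min(axis=0)  # each point in b to its nearest neighbor in a
    return float(max(np.percentile(d_ab, 95), np.percentile(d_ba, 95)))

# toy example: a "surface" rigidly shifted by 2 mm along x
ref = np.random.default_rng(0).uniform(0, 50, size=(200, 3))
pred = ref + np.array([2.0, 0.0, 0.0])
print(rmse(pred, ref))  # 2.0
print(hd95(pred, ref))
```

For corresponding-point errors RMSE is exact; HD95 uses nearest neighbors, so it can only be smaller than the true per-point displacement.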

Radiomics-based differentiation of upper urinary tract urothelial and renal cell carcinoma in preoperative computed tomography datasets.

Marcon J, Weinhold P, Rzany M, Fabritius MP, Winkelmann M, Buchner A, Eismann L, Jokisch JF, Casuscelli J, Schulz GB, Knösel T, Ingrisch M, Ricke J, Stief CG, Rodler S, Kazmierczak PM

pubmed · May 30, 2025
To investigate a non-invasive radiomics-based machine learning algorithm to differentiate upper urinary tract urothelial carcinoma (UTUC) from renal cell carcinoma (RCC) prior to surgical intervention. Preoperative venous-phase computed tomography datasets from patients who underwent procedures for histopathologically confirmed UTUC or RCC were retrospectively analyzed. Tumor segmentation was performed manually, and radiomic features were extracted according to the International Image Biomarker Standardization Initiative. Features were normalized using z-scores, and a predictive model was developed using the least absolute shrinkage and selection operator (LASSO). The dataset was split into a training cohort (70%) and a test cohort (30%). A total of 236 patients [30.5% female, median age 70.5 years (IQR: 59.5-77), median tumor size 5.8 cm (range: 4.1-8.2 cm)] were included. For differentiating UTUC from RCC, the model achieved a sensitivity of 88.4% and specificity of 81% (AUC: 0.93, radiomics score cutoff: 0.467) in the training cohort. In the validation cohort, the sensitivity was 80.6% and specificity 80% (AUC: 0.87, radiomics score cutoff: 0.601). Subgroup analysis of the validation cohort demonstrated robust performance, particularly in distinguishing clear cell RCC from high-grade UTUC (sensitivity: 84%, specificity: 73.1%, AUC: 0.84) and high-grade from low-grade UTUC (sensitivity: 57.7%, specificity: 88.9%, AUC: 0.68). Limitations include the need for independent validation in future randomized controlled trials (RCTs). Machine learning-based radiomics models can reliably differentiate between RCC and UTUC in preoperative CT imaging. With a suggested performance benefit compared to conventional imaging, this technology might be added to the current preoperative diagnostic workflow. Local ethics committee no. 20-179.
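The pipeline described above (z-score normalization, LASSO-type feature selection, 70/30 split, a "radiomics score" thresholded at a cutoff) can be sketched with scikit-learn. Everything below is a stand-in: the feature matrix is synthetic, and an L1-penalized logistic model is used as the usual classification analogue of LASSO:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# stand-in for the extracted radiomic features (the study's matrix is not public)
X, y = make_classification(n_samples=236, n_features=100, n_informative=10,
                           random_state=0)

# 70/30 split, as in the study
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y,
                                          random_state=0)

# z-score normalization, fit on the training cohort only
scaler = StandardScaler().fit(X_tr)

# L1 (LASSO-type) penalty drives most feature weights to exactly zero
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
clf.fit(scaler.transform(X_tr), y_tr)

scores = clf.predict_proba(scaler.transform(X_te))[:, 1]  # "radiomics score"
print(f"AUC: {roc_auc_score(y_te, scores):.2f}")
print(f"selected features: {np.count_nonzero(clf.coef_)} / 100")
```

Fitting the scaler on the training cohort only avoids leaking test-set statistics into the model, which would inflate the reported AUC.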

Multi-spatial-attention U-Net: a novel framework for automated gallbladder segmentation on CT images.

Lou H, Wen X, Lin F, Peng Z, Wang Q, Ren R, Xu J, Fan J, Song H, Ji X, Wang H, Sun X, Dong Y

pubmed · May 30, 2025
This study aimed to construct a novel model, Multi-Spatial Attention U-Net (MSAU-Net), by incorporating our proposed Multi-Spatial Attention (MSA) block into the U-Net for automated segmentation of the gallbladder on CT images. The gallbladder dataset consists of CT images of 152 retrospectively collected liver cancer patients and the corresponding ground truth delineated by experienced physicians. Our proposed MSAU-Net model was implemented in two versions: V1 (with one Multi-Scale Feature Extraction and Fusion (MSFEF) module in each MSA block) and V2 (with two parallel MSFEF modules in each MSA block). The performance of V1 and V2 was evaluated and compared with four other derivatives of U-Net or state-of-the-art models, quantitatively using seven commonly used metrics and qualitatively by comparison against experienced physicians' assessment. The MSAU-Net V1 and V2 models both outperformed the comparative models across most quantitative metrics, with better segmentation accuracy and boundary delineation. The optimal number of MSA blocks was three for V1 and two for V2. Qualitative evaluations confirmed that they produced results closer to physicians' annotations. External validation revealed that MSAU-Net V2 exhibited better generalization capability. MSAU-Net V1 and V2 both exhibited outstanding performance in gallbladder segmentation, demonstrating strong potential for clinical application. The MSA block enhances spatial information capture, improving the model's ability to segment small and complex structures with greater precision. These advantages position MSAU-Net V1 and V2 as valuable tools for broader clinical adoption.

Predicting abnormal fetal growth using deep learning.

Mikołaj KW, Christensen AN, Taksøe-Vester CA, Feragen A, Petersen OB, Lin M, Nielsen M, Svendsen MBS, Tolsgaard MG

pubmed · May 29, 2025
Ultrasound assessment of fetal size and growth is the mainstay of monitoring fetal well-being during pregnancy, as being small for gestational age (SGA) or large for gestational age (LGA) poses significant risks for both the fetus and the mother. This study aimed to enhance the prediction accuracy of abnormal fetal growth. We developed a deep learning model, trained on a dataset of 433,096 ultrasound images derived from 94,538 examinations conducted on 65,752 patients. The deep learning model detected both SGA (58% vs 70%) and LGA (41% vs 55%) significantly better than the current clinical standard, the Hadlock formula (p < 0.001). Additionally, the model's estimates were significantly less biased across all demographic and technical variables than the Hadlock formula's. Incorporating key anatomical features such as cortical structures, liver texture, and skin thickness was likely responsible for the improved prediction accuracy observed.

Prediction of clinical stages of cervical cancer via machine learning integrated with clinical features and ultrasound-based radiomics.

Zhang M, Zhang Q, Wang X, Peng X, Chen J, Yang H

pubmed · May 29, 2025
To investigate the predictive performance of a model constructed by combining machine learning (ML) with clinical features and ultrasound radiomics for the clinical staging of cervical cancer. General clinical and ultrasound data of 227 patients with cervical cancer who underwent transvaginal ultrasonography were retrospectively analyzed. Region of interest (ROI) radiomic profiles of the original and derived images were extracted and screened. The selected profiles were used to construct the radiomics model and the Radscore formula. Prediction models were developed in Python using several ML algorithms on an integrated dataset of clinical features and ultrasound radiomics. Model performance was evaluated via AUC, and calibration curves and clinical decision curves were used to assess model efficacy. The model developed with a support vector machine (SVM) emerged as the superior model. Integrating clinical characteristics with ultrasound radiomics, it showed notable performance in both the training and validation datasets. Specifically, in the training set, the model obtained an AUC of 0.88 (95% confidence interval (CI): 0.83-0.93), alongside an accuracy of 0.84, sensitivity of 0.68, and specificity of 0.91. When validated, the model maintained an AUC of 0.77 (95% CI: 0.63-0.88), with an accuracy of 0.77, sensitivity of 0.62, and specificity of 0.83. The calibration curve aligned closely with the perfect calibration line, and clinical decision curve analysis showed that the model offers clinical utility over a wide range of threshold probabilities. The clinical- and radiomics-based SVM model provides a noninvasive tool for predicting cervical cancer stage, integrating ultrasound radiomics and key clinical factors (age, abortion history) to improve risk stratification. This approach could guide personalized treatment (surgery vs. chemoradiation) and optimize staging accuracy, particularly in resource-limited settings where advanced imaging is scarce.
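An SVM workflow of the kind described (probability outputs for AUC, then sensitivity and specificity at a cutoff) might look like this in scikit-learn. The synthetic feature table and the 0.5 cutoff are illustrative stand-ins, not the study's data or operating point:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# stand-in for a combined clinical + radiomic feature table (227 patients in the paper)
X, y = make_classification(n_samples=227, n_features=20, n_informative=8,
                           random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y,
                                          random_state=1)

# SVM with probability outputs so an ROC curve / AUC can be computed
model = make_pipeline(StandardScaler(), SVC(probability=True, random_state=1))
model.fit(X_tr, y_tr)

prob = model.predict_proba(X_te)[:, 1]
auc = roc_auc_score(y_te, prob)

# sensitivity / specificity at a fixed probability cutoff (0.5 here, illustrative)
pred = (prob >= 0.5).astype(int)
tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"AUC={auc:.2f} sens={sensitivity:.2f} spec={specificity:.2f}")
```

Wrapping the scaler and SVM in one pipeline keeps normalization parameters tied to the training fold, which matters when the same object is reused in cross-validation.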

Ultrasound image-based contrastive fusion non-invasive liver fibrosis staging algorithm.

Dong X, Tan Q, Xu S, Zhang J, Zhou M

pubmed · May 29, 2025
The diagnosis of liver fibrosis is usually based on histopathological examination of liver biopsy specimens. Although liver biopsy is accurate, it carries invasive risks and high economic costs, which are difficult for some patients to accept. Therefore, this study uses deep learning to build a liver fibrosis diagnosis model that achieves non-invasive staging of liver fibrosis, avoids complications, and reduces costs. This study uses ultrasound examination to obtain image sections of pure liver parenchyma. With patient consent, the degree of liver fibrosis indicated by the ultrasound data is graded against the results of percutaneous liver biopsy. Our method introduces the concept of a Fibrosis Contrast Layer (FCL), which helps the model capture more keenly the significant differences between the features of liver fibrosis at various grades. Finally, through label fusion (LF), the features of liver specimens at the same fibrosis stage are abstracted and fused to improve the accuracy and stability of the diagnostic model. Experimental evaluation demonstrated that our model achieved an accuracy of 85.6%, outperforming baseline models such as ResNet (81.9%), InceptionNet (80.9%), and VGG (80.8%). Even under a small-sample condition (30% of the data), the model maintained an accuracy of 84.8%, significantly outperforming traditional deep-learning models, whose performance declined sharply. In both the full-sample and 30% small-sample training settings, the FCLLF model's test performance exceeded that of traditional deep-learning models such as VGG, ResNet, and InceptionNet, and its performance was more stable, especially in the small-sample setting. Our proposed FCLLF model effectively improves the accuracy and stability of liver fibrosis staging using non-invasive ultrasound imaging.
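The FCL idea of sharpening differences between fibrosis grades resembles a standard pairwise contrastive objective: pull same-stage feature embeddings together, push different-stage embeddings at least a margin apart. The paper does not publish its formulation, so the sketch below is a generic margin-based contrastive loss on toy embeddings, assumed purely for illustration:

```python
import numpy as np

def contrastive_loss(f1, f2, same_stage, margin=1.0):
    """Pairwise contrastive loss: pull same-stage embeddings together,
    push different-stage embeddings at least `margin` apart."""
    d = np.linalg.norm(f1 - f2)
    if same_stage:
        return 0.5 * d ** 2
    return 0.5 * max(0.0, margin - d) ** 2

# toy 2-D embeddings of three ultrasound patches (invented values)
a = np.array([0.20, 0.80])
b = np.array([0.25, 0.75])  # same fibrosis stage as a
c = np.array([0.90, 0.10])  # different stage

print(contrastive_loss(a, b, True))   # small: already close
print(contrastive_loss(a, c, False))  # penalized only if closer than margin
```

A loss of this shape is what gives a "contrast layer" its gradient signal: identical embeddings of the same stage incur zero loss, while well-separated different-stage embeddings incur none either.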

Dharma: A novel machine learning framework for pediatric appendicitis--diagnosis, severity assessment and evidence-based clinical decision support.

Thapa, A., Pahari, S., Timilsina, S., Chapagain, B.

medRxiv preprint · May 29, 2025
Background: Acute appendicitis remains a challenging diagnosis in pediatric populations, with high rates of misdiagnosis and negative appendectomies despite advances in imaging modalities. Current diagnostic tools, including clinical scoring systems like the Alvarado and Pediatric Appendicitis Score (PAS), lack sufficient sensitivity and specificity, while reliance on CT scans raises concerns about radiation exposure, contrast hazards, and sedation in children. Moreover, no established tool effectively predicts progression from uncomplicated to complicated appendicitis, creating a critical gap in clinical decision-making. Objective: To develop and evaluate a machine learning model that integrates clinical, laboratory, and radiological findings for accurate diagnosis and complication prediction in pediatric appendicitis, and to deploy this model as an interpretable web-based tool for clinical decision support. Methods: We analyzed data from 780 pediatric patients (ages 0-18) with suspected appendicitis admitted to Children's Hospital St. Hedwig, Regensburg, between 2016 and 2021. For severity prediction, our dataset was augmented with 430 additional cases from the published literature, and only the confirmed cases of acute appendicitis (n = 602) were used. After feature selection using statistical methods and recursive feature elimination, we developed a Random Forest model named Dharma, optimized through hyperparameter tuning and cross-validation. Model performance was evaluated on independent test sets and compared with conventional diagnostic tools. Results: Dharma demonstrated superior diagnostic performance with an AUC-ROC of 0.96 (±0.02 SD) in cross-validation and 0.97-0.98 on independent test sets. At an optimal threshold of 64%, the model achieved specificity of 88%-98%, sensitivity of 89%-95%, and positive predictive value of 93%-99%.
For complication prediction, Dharma attained a sensitivity of 93% (±0.05 SD) in cross-validation and 96% on the test set, with a negative predictive value of 98%. The model maintained strong performance even in cases where the appendix could not be visualized on ultrasonography (AUC-ROC 0.95, sensitivity 89%, specificity 87% at a threshold of 30%). Conclusion: Dharma is a novel, interpretable machine-learning-based clinical decision support tool designed to address the diagnostic challenges of pediatric appendicitis by integrating easily obtainable clinical, laboratory, and radiological data into a unified, real-time predictive framework. Unlike traditional scoring systems and imaging modalities, which may lack specificity or raise safety concerns in children, Dharma demonstrates high accuracy in diagnosing appendicitis and predicting progression from uncomplicated to complicated cases, potentially reducing unnecessary surgeries and CT scans. Its robust performance, even with incomplete imaging data, underscores its utility in resource-limited settings. Delivered through an intuitive, transparent, and interpretable web application, Dharma supports frontline providers, particularly in low- and middle-income settings, in making timely, evidence-based decisions, streamlining patient referrals, and improving clinical outcomes. By bridging critical gaps in current diagnostic and prognostic tools, Dharma offers a practical and accessible 21st-century solution tailored to real-world pediatric surgical care across diverse healthcare contexts. Furthermore, the underlying framework and concepts of Dharma may be adaptable to other clinical challenges beyond pediatric appendicitis, providing a foundation for broader applications of machine learning in healthcare. Author Summary: Accurate diagnosis of pediatric appendicitis remains challenging, with current clinical scores and imaging tests limited by sensitivity, specificity, predictive values, and safety concerns.
We developed Dharma, an interpretable machine learning model that integrates clinical, laboratory, and radiological data to assist in diagnosing appendicitis and predicting its severity in children. Evaluated on a large dataset supplemented by published cases, Dharma demonstrated strong diagnostic and prognostic performance, including in cases with incomplete imaging, making it potentially especially useful in resource-limited settings for early decision-making and streamlined referrals. Available as a web-based tool, it provides real-time support to healthcare providers in making evidence-based decisions that could reduce negative appendectomies while avoiding hazards associated with advanced imaging modalities such as sedation, contrast, or radiation exposure. Furthermore, the open-access concepts and framework underlying Dharma have the potential to address diverse healthcare challenges beyond pediatric appendicitis.
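Operating a random forest at a task-specific probability threshold (64% in the abstract) rather than the default 0.5 is a small but important detail of tools like this. A hedged scikit-learn sketch on synthetic tabular data, not Dharma's actual features or tuning:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split

# stand-in tabular data; the study's clinical/lab/imaging features are not public
X, y = make_classification(n_samples=780, n_features=15, n_informative=8,
                           weights=[0.4, 0.6], random_state=7)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, stratify=y,
                                          random_state=7)

rf = RandomForestClassifier(n_estimators=200, random_state=7).fit(X_tr, y_tr)
prob = rf.predict_proba(X_te)[:, 1]

# classify at a chosen operating threshold instead of the default 0.5
pred = (prob >= 0.64).astype(int)
tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
npv = tn / (tn + fn)  # negative predictive value, the "safe to discharge" metric
print(f"AUC={roc_auc_score(y_te, prob):.2f} NPV={npv:.2f}")
```

Raising the threshold trades sensitivity for specificity; for a rule-out use case the threshold is usually tuned on validation folds to keep NPV high.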

Enhanced Pelvic CT Segmentation via Deep Learning: A Study on Loss Function Effects.

Ghaedi E, Asadi A, Hosseini SA, Arabi H

pubmed · May 29, 2025
Effective radiotherapy planning requires precise delineation of organs at risk (OARs), but the traditional manual method is laborious and subject to variability. This study explores using convolutional neural networks (CNNs) for automating OAR segmentation in pelvic CT images, focusing on the bladder, prostate, rectum, and femoral heads (FHs) as an efficient alternative to manual segmentation. Utilizing the Medical Open Network for AI (MONAI) framework, we implemented and compared U-Net, ResU-Net, SegResNet, and Attention U-Net models and explored different loss functions to enhance segmentation accuracy. Our study involved 240 patients for prostate segmentation and 220 patients for the other organs. The models' performance was evaluated using metrics such as the Dice similarity coefficient (DSC), Jaccard index (JI), and the 95th percentile Hausdorff distance (95thHD), benchmarking the results against expert segmentation masks. SegResNet outperformed all models, achieving DSC values of 0.951 for the bladder, 0.829 for the prostate, 0.860 for the rectum, 0.979 for the left FH, and 0.985 for the right FH (p < 0.05 vs. U-Net and ResU-Net). Attention U-Net also excelled, particularly for bladder and rectum segmentation. Experiments with loss functions on SegResNet showed that Dice loss consistently delivered optimal or equivalent performance across OARs, while DiceCE slightly enhanced prostate segmentation (DSC = 0.845, p = 0.0138). These results indicate that advanced CNNs, especially SegResNet, paired with optimized loss functions, provide a reliable, efficient alternative to manual methods, promising improved precision in radiotherapy planning.
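The Dice similarity coefficient used for evaluation above, and the Dice loss used for training, can be written in a few lines of NumPy. The 8×8 masks are toy examples for illustration, not the study's data:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """DSC between two binary masks: 2|A∩B| / (|A| + |B|)."""
    inter = np.sum(pred * target)
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def soft_dice_loss(prob, target, eps=1e-7):
    """Dice loss on soft (probability) predictions, as used for training."""
    inter = np.sum(prob * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(prob) + np.sum(target) + eps)

# two overlapping 4x4 squares in an 8x8 grid
a = np.zeros((8, 8)); a[2:6, 2:6] = 1.0
b = np.zeros((8, 8)); b[3:7, 3:7] = 1.0
print(dice_coefficient(a, b))  # 2*9 / (16 + 16) = 0.5625
```

The epsilon term keeps the loss defined when both masks are empty; cross-entropy can be added on top (as in the DiceCE variant mentioned above) by summing the two losses.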

A vessel bifurcation landmark pair dataset for abdominal CT deformable image registration (DIR) validation.

Criscuolo ER, Zhang Z, Hao Y, Yang D

pubmed · May 28, 2025
Deformable image registration (DIR) is an enabling technology in many diagnostic and therapeutic tasks. Despite this, DIR algorithms have limited clinical use, largely due to a lack of benchmark datasets for quality assurance during development. DIRs of intra-patient abdominal CTs are among the most challenging registration scenarios due to significant organ deformations and inconsistent image content. To support future algorithm development, here we introduce our first-of-its-kind abdominal CT DIR benchmark dataset, comprising large numbers of highly accurate landmark pairs on matching blood vessel bifurcations. Abdominal CT image pairs of 30 patients were acquired from several publicly available repositories as well as the authors' institution with IRB approval. The two CTs of each pair were originally acquired for the same patient but on different days. An image processing workflow was developed and applied to each CT image pair: (1) Abdominal organs were segmented with a deep learning model, and image intensity within organ masks was overwritten. (2) Matching image patches were manually identified between two CTs of each image pair. (3) Vessel bifurcation landmarks were labeled on one image of each image patch pair. (4) Image patches were deformably registered, and landmarks were projected onto the second image. (5) Landmark pair locations were refined manually or with an automated process. This workflow resulted in 1895 total landmark pairs, or 63 per case on average. Estimates of the landmark pair accuracy using digital phantoms were 0.7 mm ± 1.2 mm. The data are published in Zenodo at https://doi.org/10.5281/zenodo.14362785. Instructions for use can be found at https://github.com/deshanyang/Abdominal-DIR-QA. This dataset is a first-of-its-kind for abdominal DIR validation. The number, accuracy, and distribution of landmark pairs will allow for robust validation of DIR algorithms with precision beyond what is currently available.
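Landmark pairs like these are typically used to compute per-case target registration error (TRE) for a DIR algorithm: apply the algorithm's DVF to the landmarks on one image and measure residual distances to their pairs on the other. A minimal NumPy sketch with an invented rigid shift and noise level (the dataset itself supplies real landmark coordinates):

```python
import numpy as np

def registration_errors(moving, fixed, dvf):
    """Per-landmark residual error (mm) after applying a DVF to moving landmarks."""
    return np.linalg.norm((moving + dvf) - fixed, axis=1)

rng = np.random.default_rng(42)
fixed = rng.uniform(0, 300, size=(63, 3))    # ~63 landmark pairs per case
true_shift = np.array([3.0, -1.5, 2.0])      # toy ground-truth motion (mm)
moving = fixed - true_shift

# a DVF that recovers the true shift up to ~0.5 mm noise per axis
dvf = true_shift + rng.normal(0.0, 0.5, size=(63, 3))
err = registration_errors(moving, fixed, dvf)
print(f"TRE: {err.mean():.2f} mm +/- {err.std():.2f} mm")
```

Reporting TRE as mean ± s.d. over all landmark pairs mirrors how the dataset's own landmark accuracy (0.7 mm ± 1.2 mm) is stated.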