Page 61 of 100991 results

Measuring kidney stone volume - practical considerations and current evidence from the EAU endourology section.

Grossmann NC, Panthier F, Afferi L, Kallidonis P, Somani BK

pubmed · Jul 1 2025
This narrative review provides an overview of the use, differences, and clinical impact of current methods for kidney stone volume assessment. The different approaches to volume measurement are based on noncontrast computed tomography (NCCT). While volume measurement using formulas is sufficient for smaller stones, it tends to overestimate volume for larger or irregularly shaped calculi. In contrast, software-based segmentation significantly improves accuracy and reproducibility, and artificial intelligence-based volumetry additionally shows excellent agreement with reference standards while reducing observer variability and measurement time. Moreover, specific CT preparation protocols may further enhance image quality and thus improve measurement accuracy. Clinically, stone volume has proven to be a superior predictor of stone-related events during follow-up, spontaneous stone passage under conservative management, and stone-free rates after shockwave lithotripsy (SWL) and ureteroscopy (URS) compared to linear measurements. Although manual measurement remains practical, its accuracy diminishes for complex or larger stones. Software-based segmentation and volumetry offer higher precision and efficiency but require established standards and broader access to dedicated software for routine clinical use.
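The formula-based approach the review contrasts with segmentation is typically the scalene-ellipsoid approximation; a minimal sketch (the 10 × 8 × 6 mm stone is an invented example, not a figure from this review):

```python
import math

def ellipsoid_stone_volume(length_mm: float, width_mm: float, depth_mm: float) -> float:
    """Scalene-ellipsoid approximation of stone volume (mm^3): V = pi/6 * l * w * d."""
    return math.pi / 6.0 * length_mm * width_mm * depth_mm

# A hypothetical 10 x 8 x 6 mm stone under the ellipsoid assumption:
print(round(ellipsoid_stone_volume(10, 8, 6), 1))  # -> 251.3
```

For irregular staghorn-type calculi this assumption breaks down, which is why the review reports overestimation relative to voxel-based segmentation.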

Liver lesion segmentation in ultrasound: A benchmark and a baseline network.

Li J, Zhu L, Shen G, Zhao B, Hu Y, Zhang H, Wang W, Wang Q

pubmed · Jul 1 2025
Accurate liver lesion segmentation in ultrasound is a challenging task due to high speckle noise, ambiguous lesion boundaries, and inhomogeneous intensity distribution inside the lesion regions. This work first collected and annotated a dataset for liver lesion segmentation in ultrasound. In this paper, we propose a novel convolutional neural network to learn dual self-attentive transformer features for boosting liver lesion segmentation by leveraging the complementary information among non-local features encoded at different layers of the transformer architecture. To do so, we devise a dual self-attention refinement (DSR) module to synergistically utilize self-attention and reverse self-attention mechanisms to extract complementary lesion characteristics between cascaded multi-layer feature maps, assisting the model in producing more accurate segmentation results. Moreover, we propose a False-Positive-Negative loss to enable our network to further suppress the non-liver-lesion noise at shallow transformer layers and enhance target liver lesion details in CNN features at deep transformer layers. Experimental results show that our network outperforms state-of-the-art methods quantitatively and qualitatively.
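The reverse self-attention idea can be illustrated generically: an attention map highlights likely-lesion regions, while its complement (1 − A) highlights background and boundary cues. This NumPy sketch shows the general mechanism only, not the paper's DSR module:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def attention_and_reverse(features: np.ndarray, logits: np.ndarray):
    """Weight a feature map by an attention map A and its reverse (1 - A).

    features: (H, W, C) feature map; logits: (H, W) raw attention scores.
    The attended branch emphasizes likely-lesion regions; the reverse branch
    highlights the complementary background/boundary regions.
    """
    attn = sigmoid(logits)[..., None]      # (H, W, 1), values in (0, 1)
    attended = features * attn             # lesion-focused features
    reversed_ = features * (1.0 - attn)    # complementary features
    return attended, reversed_

feats = np.random.rand(4, 4, 8)
logits = np.random.randn(4, 4)
a, r = attention_and_reverse(feats, logits)
# The two branches partition the features: a + r reconstructs feats exactly.
print(np.allclose(a + r, feats))  # -> True
```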

Interstitial-guided automatic clinical tumor volume segmentation network for cervical cancer brachytherapy.

Tan S, He J, Cui M, Gao Y, Sun D, Xie Y, Cai J, Zaki N, Qin W

pubmed · Jul 1 2025
Automatic clinical tumor volume (CTV) delineation is pivotal to improving outcomes of interstitial brachytherapy for cervical cancer. However, the prominent differences in gray values caused by the interstitial needles pose great challenges for deep learning-based segmentation models. In this study, we proposed a novel interstitial-guided segmentation network, termed advance reverse guided network (ARGNet), for cervical tumor segmentation with interstitial brachytherapy. Firstly, the location information of the interstitial needles was integrated into the deep learning framework via multi-task learning, using a cross-stitch mechanism to share encoder feature learning. Secondly, a spatial reverse attention mechanism was introduced to mitigate the distracting effect of the needles on tumor segmentation. Furthermore, an uncertainty area module was embedded between the skip connections and the encoder of the tumor segmentation task to enhance the model's capability to discern ambiguous boundaries between the tumor and the surrounding tissue. Comprehensive experiments were conducted retrospectively on 191 CT scans from multi-course interstitial brachytherapy. The results demonstrated that the characteristics of the interstitial needles enhance segmentation, achieving state-of-the-art performance, which is anticipated to be beneficial in radiotherapy planning.
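Cross-stitch sharing, in its original formulation, is a learned linear mix of two tasks' feature maps; a toy NumPy sketch (the 2×2 mixing values are invented, not ARGNet's learned weights):

```python
import numpy as np

def cross_stitch(x_tumor: np.ndarray, x_needle: np.ndarray, alpha: np.ndarray):
    """Linearly mix two tasks' feature maps with a (learnable) 2x2 matrix alpha:

    x_tilde_tumor  = a11 * x_tumor + a12 * x_needle
    x_tilde_needle = a21 * x_tumor + a22 * x_needle
    """
    mixed_tumor = alpha[0, 0] * x_tumor + alpha[0, 1] * x_needle
    mixed_needle = alpha[1, 0] * x_tumor + alpha[1, 1] * x_needle
    return mixed_tumor, mixed_needle

alpha = np.array([[0.9, 0.1],
                  [0.1, 0.9]])  # mostly task-specific, with 10% sharing
xt = np.ones((2, 2))   # stand-in tumor-branch features
xn = np.zeros((2, 2))  # stand-in needle-branch features
mt, mn = cross_stitch(xt, xn, alpha)
print(mt[0, 0], mn[0, 0])  # -> 0.9 0.1
```

During training, alpha would be learned jointly with the encoders, letting the network decide how much needle-location information flows into the tumor branch.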

Foundation Model and Radiomics-Based Quantitative Characterization of Perirenal Fat in Renal Cell Carcinoma Surgery.

Mei H, Chen H, Zheng Q, Yang R, Wang N, Jiao P, Wang X, Chen Z, Liu X

pubmed · Jul 1 2025
To quantitatively characterize the degree of perirenal fat adhesion using artificial intelligence in renal cell carcinoma. This retrospective study analyzed a total of 596 patients from three cohorts, utilizing corticomedullary phase computed tomography urography (CTU) images. The nnUNet v2 network combined with numerical computation was employed to segment the perirenal fat region. Pyradiomics algorithms and a computed tomography foundation model were used to extract features from CTU images separately, creating single-modality predictive models for identifying perirenal fat adhesion. By concatenating the Pyradiomics and foundation model features, an early fusion multimodal predictive signature was developed. The prognostic performance of the single-modality and multimodality models was further validated in two independent cohorts. The nnUNet v2 segmentation model accurately segmented both kidneys. The neural network and thresholding approach effectively delineated the perirenal fat region. Single-modality models based on radiomic and computed tomography foundation features demonstrated a certain degree of accuracy in diagnosing and identifying perirenal fat adhesion, while the early feature fusion diagnostic model outperformed the single-modality models. Also, the perirenal fat adhesion score showed a positive correlation with surgical time and intraoperative blood loss. AI-based radiomics and foundation models can accurately identify the degree of perirenal fat adhesion and have the potential to be used for surgical risk assessment.
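Early feature fusion of the kind described amounts to concatenating per-patient feature vectors before fitting a classifier; a sketch (the 107 radiomics and 512 foundation-embedding dimensions are assumptions, not figures from the paper):

```python
import numpy as np

def early_fusion(radiomics_feats: np.ndarray, foundation_feats: np.ndarray) -> np.ndarray:
    """Concatenate per-patient radiomics and foundation-model feature vectors."""
    return np.concatenate([radiomics_feats, foundation_feats], axis=1)

n_patients = 5
radiomics = np.random.rand(n_patients, 107)   # assumed radiomics feature count
foundation = np.random.rand(n_patients, 512)  # assumed embedding size
fused = early_fusion(radiomics, foundation)
print(fused.shape)  # -> (5, 619)
```

The fused matrix would then feed a single classifier, which is what distinguishes early fusion from combining the two single-modality models' outputs.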

A deep-learning model to predict the completeness of cytoreductive surgery in colorectal cancer with peritoneal metastasis.

Lin Q, Chen C, Li K, Cao W, Wang R, Fichera A, Han S, Zou X, Li T, Zou P, Wang H, Ye Z, Yuan Z

pubmed · Jul 1 2025
Colorectal cancer (CRC) with peritoneal metastasis (PM) is associated with poor prognosis. The Peritoneal Cancer Index (PCI) is used to evaluate the extent of PM and to select patients for cytoreductive surgery (CRS). However, the PCI score is not accurate enough to guide patient selection for CRS. We developed a novel AI framework, decoupling feature alignment and fusion (DeAF), using deep learning to aid the selection of PM patients and predict the completeness of CRS. A total of 186 CRC patients with PM recruited from four tertiary hospitals were enrolled. In the training cohort, the DeAF model was trained with the SimSiam algorithm on contrast-enhanced CT images and then fused with clinicopathological parameters to increase performance. Accuracy, sensitivity, specificity, and the area under the ROC curve (AUC) were evaluated in the internal validation cohort and in three external cohorts. The DeAF model demonstrated robust accuracy in predicting the completeness of CRS, with an AUC of 0.9 (95% CI: 0.793-1.000) in the internal validation cohort. The model can guide the selection of suitable patients and predict the potential benefit of CRS. The high performance in predicting CRS completeness was validated in three external cohorts, with AUC values of 0.906 (95% CI: 0.812-1.000), 0.960 (95% CI: 0.885-1.000), and 0.933 (95% CI: 0.791-1.000), respectively. The novel DeAF framework can aid surgeons in selecting suitable PM patients for CRS and predicting the completeness of CRS. The model can change surgical decision-making and provide potential benefits for PM patients.
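The AUC values reported throughout these abstracts can be computed as the Mann-Whitney probability that a positive case outscores a negative one; a small NumPy sketch with toy labels and scores:

```python
import numpy as np

def roc_auc(labels: np.ndarray, scores: np.ndarray) -> float:
    """ROC AUC as the probability that a random positive outscores a random
    negative (ties count as 0.5), i.e. the Mann-Whitney U statistic, normalized."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

y = np.array([0, 0, 1, 1])
s = np.array([0.1, 0.4, 0.35, 0.8])
print(roc_auc(y, s))  # -> 0.75
```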

Development of Multiparametric Prognostic Models for Stereotactic Magnetic Resonance Guided Radiation Therapy of Pancreatic Cancers.

Michalet M, Valenzuela G, Nougaret S, Tardieu M, Azria D, Riou O

pubmed · Jul 1 2025
Stereotactic magnetic resonance guided adaptive radiation therapy (SMART) is a new option for local treatment of unresectable pancreatic ductal adenocarcinoma, showing interesting survival and local control (LC) results. Despite this, some patients will experience early local and/or metastatic recurrence leading to death. We aimed to develop multiparametric prognostic models for these patients. All patients treated in our institution with SMART for unresectable pancreatic ductal adenocarcinoma between October 21, 2019, and August 5, 2022 were included. Several initial clinical characteristics as well as dosimetric data of SMART were recorded. Radiomics data from 0.35-T simulation magnetic resonance imaging were extracted. All these data were combined to build prognostic models of overall survival (OS) and LC using machine learning algorithms. Eighty-three patients with a median age of 64.9 years were included. A majority of patients had locally advanced pancreatic cancer (77%). The median OS was 21 months after SMART completion and 27 months after chemotherapy initiation. The 6- and 12-month post-SMART OS was 87.8% (95% CI, 78.2%-93.2%) and 70.9% (95% CI, 58.8%-80.0%), respectively. The best model for OS was a Cox proportional hazards survival analysis using clinical data, with an inverse-probability-of-censoring-weighted concordance index of 0.87. Tested on its 12-month OS prediction capacity, this model had good performance (sensitivity 67%, specificity 71%, and area under the curve 0.90). The median LC was not reached. The 6- and 12-month post-SMART LC was 92.4% (95% CI, 83.7%-96.6%) and 76.3% (95% CI, 62.6%-85.5%), respectively. The best model for LC was a component-wise gradient boosting survival analysis using clinical and radiomics data, with an inverse-probability-of-censoring-weighted concordance index of 0.80. Tested on its 9-month LC prediction capacity, this model had good performance (sensitivity 50%, specificity 97%, and area under the curve 0.78).
Combining clinical and radiomics data in multiparametric prognostic models using machine learning algorithms showed good performance for the prediction of OS and LC. External validation of these models will be needed.
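Time-point sensitivity and specificity, as reported for the 12-month OS model above, reduce to confusion-matrix ratios at a chosen risk threshold; a toy sketch (labels, scores, and threshold are invented, not the study's data):

```python
import numpy as np

def sens_spec(labels: np.ndarray, scores: np.ndarray, threshold: float):
    """Sensitivity and specificity of the rule 'predict event if score >= threshold'."""
    pred = scores >= threshold
    tp = np.sum(pred & (labels == 1))
    fn = np.sum(~pred & (labels == 1))
    tn = np.sum(~pred & (labels == 0))
    fp = np.sum(pred & (labels == 0))
    return tp / (tp + fn), tn / (tn + fp)

y = np.array([1, 1, 1, 0, 0, 0, 0])      # event within 12 months (toy)
s = np.array([0.9, 0.8, 0.3, 0.7, 0.2, 0.1, 0.4])  # model risk scores (toy)
sens, spec = sens_spec(y, s, 0.5)
print(round(sens, 2), round(spec, 2))  # -> 0.67 0.75
```

Sweeping the threshold traces out the ROC curve whose area is the reported AUC.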

Transformer-based skeletal muscle deep-learning model for survival prediction in gastric cancer patients after curative resection.

Chen Q, Jian L, Xiao H, Zhang B, Yu X, Lai B, Wu X, You J, Jin Z, Yu L, Zhang S

pubmed · Jul 1 2025
We developed and evaluated a skeletal muscle deep-learning (SMDL) model using skeletal muscle computed tomography (CT) imaging to predict the survival of patients with gastric cancer (GC). This multicenter retrospective study included patients who underwent curative resection of GC between April 2008 and December 2020. Preoperative CT images at the third lumbar vertebra were used to develop a Transformer-based SMDL model for predicting recurrence-free survival (RFS) and disease-specific survival (DSS). The predictive performance of the SMDL model was assessed using the area under the curve (AUC) and benchmarked against both alternative artificial intelligence models and conventional body composition parameters. The association between the model score and survival was assessed using Cox regression analysis. An integrated model combining the SMDL signature with clinical variables was constructed, and its discrimination and fairness were evaluated. A total of 1242, 311, and 94 patients were assigned to the training, internal, and external validation cohorts, respectively. The Transformer-based SMDL model yielded AUCs of 0.791-0.943 for predicting RFS and DSS across all three cohorts and significantly outperformed other models and body composition parameters. The model score was a strong independent prognostic factor for survival. Incorporating the SMDL signature into the clinical model resulted in better prognostic prediction performance. The false-negative and false-positive rates of the integrated model were similar across sex and age subgroups, indicating robust fairness. The Transformer-based SMDL model could accurately predict the survival of patients with GC and identify those at high risk of recurrence or death, thereby assisting clinical decision-making.
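The fairness check described, comparing false-negative and false-positive rates across sex and age subgroups, can be sketched as follows (toy labels and predictions, not the study's data):

```python
import numpy as np

def subgroup_error_rates(labels, preds, groups):
    """Per-subgroup false-negative and false-positive rates for binary predictions."""
    rates = {}
    for g in np.unique(groups):
        m = groups == g
        y, p = labels[m], preds[m]
        fnr = np.sum((y == 1) & (p == 0)) / max(np.sum(y == 1), 1)
        fpr = np.sum((y == 0) & (p == 1)) / max(np.sum(y == 0), 1)
        rates[str(g)] = (fnr, fpr)
    return rates

# Toy cohort: identical error rates across groups indicate fairness on this axis.
labels = np.array([1, 1, 0, 0, 1, 1, 0, 0])
preds  = np.array([1, 0, 0, 1, 1, 0, 0, 1])
groups = np.array(["F", "F", "F", "F", "M", "M", "M", "M"])
rates = subgroup_error_rates(labels, preds, groups)
print(rates["F"] == rates["M"])  # -> True
```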

Development and Validation of an AI Model to Improve the Diagnosis of Deep Infiltrating Endometriosis for Junior Sonologists.

Xu J, Zhang A, Zheng Z, Cao J, Zhang X

pubmed · Jul 1 2025
This study aims to develop and validate an artificial intelligence (AI) model based on ultrasound (US) videos and images to improve the performance of junior sonologists in detecting deep infiltrating endometriosis (DE). In this retrospective study, data were collected from female patients who underwent US examinations and had DE. The US image records were divided into two parts. First, during the model development phase, an AI-DE model was trained employing YOLOv8 to detect pelvic DE nodules. Subsequently, its clinical applicability was evaluated by comparing the diagnostic performance of junior sonologists with and without AI-model assistance. The AI-DE model was trained using 248 images and demonstrated high performance, with a mAP50 (mean Average Precision at IoU threshold 0.5) of 0.9779 on the test set. A total of 147 images were used to evaluate the diagnostic performance. The diagnostic performance of junior sonologists improved with the assistance of the AI-DE model. The area under the receiver operating characteristic curve (AUROC) improved from 0.748 (95% CI, 0.624-0.867) to 0.878 (95% CI, 0.792-0.964; p < 0.0001) for junior sonologist A, and from 0.713 (95% CI, 0.592-0.835) to 0.798 (95% CI, 0.677-0.919; p < 0.0001) for junior sonologist B. Notably, the sensitivity of both sonologists increased significantly, with the largest increase from 77.42% to 94.35%. The AI-DE model based on US images showed good performance in DE detection and significantly improved the diagnostic performance of junior sonologists.
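The mAP50 metric counts a detection as correct when its predicted box overlaps the ground truth with IoU ≥ 0.5; a minimal IoU sketch with invented box coordinates:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A hypothetical predicted nodule box vs. ground truth; at IoU 0.5 this would
# count as a miss for mAP50.
print(round(iou((0, 0, 10, 10), (2, 2, 12, 12)), 3))  # -> 0.471
```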

Self-supervised network predicting neoadjuvant chemoradiotherapy response to locally advanced rectal cancer patients.

Chen Q, Dang J, Wang Y, Li L, Gao H, Li Q, Zhang T, Bai X

pubmed · Jul 1 2025
Radiographic imaging is a non-invasive technique of considerable importance for evaluating tumor treatment response. However, redundancy in CT data and the lack of labeled data make it challenging to accurately assess the response of locally advanced rectal cancer (LARC) patients to neoadjuvant chemoradiotherapy (nCRT) using current imaging indicators. In this study, we propose a novel learning framework to automatically predict the response of LARC patients to nCRT. Specifically, we develop a deep learning network called the Expand Intensive Attention Network (EIA-Net), which enhances the network's feature extraction capability through cascaded 3D convolutions and coordinate attention. Instance-oriented collaborative self-supervised learning (IOC-SSL) is proposed to leverage unlabeled data for training, reducing the reliance on labeled data. In a dataset consisting of 1,575 volumes, the proposed method achieves an AUC score of 0.8562. The dataset includes two distinct parts: the self-supervised dataset containing 1,394 volumes and the supervised dataset comprising 195 volumes. Analysis of the lifetime predictions reveals that patients with pathological complete response (pCR) predicted by EIA-Net exhibit better overall survival (OS) compared to non-pCR patients with LARC. The retrospective study demonstrates that imaging-based pCR prediction for patients with low rectal cancer can assist clinicians in making informed decisions regarding the need for Miles operation, thereby improving the likelihood of anal preservation, with an AUC of 0.8222. These results underscore the potential of our method to enhance clinical decision-making, offering a promising tool for personalized treatment and improved patient outcomes in LARC management.
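One common objective for self-supervised pretraining on unlabeled volumes is SimSiam's negative cosine similarity between two augmented views; shown here only as background for this class of methods, not as the actual IOC-SSL loss:

```python
import numpy as np

def simsiam_loss(p: np.ndarray, z: np.ndarray) -> float:
    """Negative cosine similarity between a predictor output p and a
    stop-gradient target z; minimized at -1 when the two views agree."""
    p = p / np.linalg.norm(p)
    z = z / np.linalg.norm(z)  # z would carry no gradient during training
    return float(-(p * z).sum())

v = np.array([1.0, 2.0, 3.0])
print(round(simsiam_loss(v, v), 6))   # -> -1.0 (identical views)
print(round(simsiam_loss(v, -v), 6))  # -> 1.0  (opposite views)
```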

GUSL: A Novel and Efficient Machine Learning Model for Prostate Segmentation on MRI

Jiaxin Yang, Vasileios Magoulianitis, Catherine Aurelia Christie Alexander, Jintang Xue, Masatomo Kaneko, Giovanni Cacciamani, Andre Abreu, Vinay Duddalwar, C. -C. Jay Kuo, Inderbir S. Gill, Chrysostomos Nikias

arxiv preprint · Jun 30 2025
Prostate and zonal segmentation is a crucial step for clinical diagnosis of prostate cancer (PCa). Computer-aided diagnosis tools for prostate segmentation are based on the deep learning (DL) paradigm. However, deep neural networks are perceived as "black-box" solutions by physicians, thus making them less practical for deployment in the clinical setting. In this paper, we introduce a feed-forward machine learning model, named Green U-shaped Learning (GUSL), suitable for medical image segmentation without backpropagation. GUSL introduces a multi-layer regression scheme for coarse-to-fine segmentation. Its feature extraction is based on a linear model, which enables seamless interpretability during feature extraction. Also, GUSL introduces a mechanism for attention on the prostate boundaries, which is an error-prone region, by employing regression to refine the predictions through residue correction. In addition, a two-step pipeline approach is used to mitigate the class imbalance, an issue inherent in medical imaging problems. After conducting experiments on two publicly available datasets and one private dataset, in both prostate gland and zonal segmentation tasks, GUSL achieves state-of-the-art performance compared with DL-based models. Notably, GUSL features a very energy-efficient pipeline, since it has a model size several times smaller, and lower complexity, than the rest of the solutions. In all datasets, GUSL achieved a Dice Similarity Coefficient (DSC) greater than 0.9 for gland segmentation. Considering also its lightweight model size and transparency in feature extraction, it offers a competitive and practical package for medical imaging applications.
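Residue correction, regression used to refine a coarse prediction by predicting its error, can be sketched with ordinary least squares (a generic illustration on synthetic data, not GUSL's actual regressors):

```python
import numpy as np

def residue_correction(coarse_pred, feats, target):
    """Refine a coarse prediction by regressing its residual on local features.

    A second-stage linear model predicts the coarse stage's error and adds
    the correction back (coarse-to-fine refinement via residue regression).
    """
    residual = target - coarse_pred
    w, *_ = np.linalg.lstsq(feats, residual, rcond=None)  # least-squares fit
    return coarse_pred + feats @ w

rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 5))                          # synthetic features
target = feats @ np.array([1.0, -2.0, 0.5, 0.0, 3.0])      # synthetic ground truth
coarse = target + rng.normal(scale=0.5, size=100)          # noisy first-stage output
refined = residue_correction(coarse, feats, target)
# In-sample, the least-squares correction can only shrink the L2 error.
print(np.linalg.norm(refined - target) <= np.linalg.norm(coarse - target))  # -> True
```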