Page 19 of 73728 results

An end-to-end interpretable machine-learning-based framework for early-stage diagnosis of gallbladder cancer using multi-modality medical data.

Zhao H, Miao C, Zhu Y, Shu Y, Wu X, Yin Z, Deng X, Gong W, Yang Z, Zou W

PubMed | Jul 16, 2025
The accurate early-stage diagnosis of gallbladder cancer (GBC) is regarded as one of the major challenges in oncology. However, few studies have focused on the comprehensive classification of GBC based on multiple modalities. This study aims to develop a comprehensive diagnostic framework for GBC based on both imaging and non-imaging medical data. This retrospective study reviewed 298 patients with gallbladder disease or volunteers, scanned on two devices. A novel end-to-end interpretable diagnostic framework for GBC is proposed to handle multiple medical modalities, including CT imaging, demographics, tumor markers, coagulation function tests, and routine blood tests. To achieve better feature extraction and fusion of the imaging modality, a novel global-hybrid-local network, GHL-Net, has also been developed. An ensemble learning strategy is employed to fuse the multi-modality data and obtain the final classification result. In addition, two interpretable methods are applied to help clinicians understand the model-based decisions. Model performance was evaluated using accuracy, precision, specificity, sensitivity, F1-score, area under the curve (AUC), and Matthews correlation coefficient (MCC). In both binary and multi-class classification scenarios, the proposed method outperformed competing methods on both datasets. In the binary classification scenario in particular, the proposed method achieved the highest accuracy, sensitivity, specificity, precision, F1-score, ROC-AUC, PR-AUC, and MCC of 95.24%, 93.55%, 96.87%, 96.67%, 95.08%, 0.9591, 0.9636, and 0.9051, respectively. Visualizations obtained from the interpretable methods also demonstrated high clinical relevance of the intermediate decision-making processes. Ablation studies then provided an in-depth understanding of the methodology.
The machine-learning-based framework can effectively improve the accuracy of GBC diagnosis and is expected to generalize to other cancer diagnosis scenarios.
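The abstract above does not specify the ensemble fusion rule, so this is a hedged sketch of one plausible reading: soft voting over per-modality-style classifiers. All feature names and data below are synthetic stand-ins, not the study's inputs.

```python
# Hedged sketch only: soft-voting ensemble over two classifiers, averaging
# predicted class probabilities. Data are synthetic stand-ins.
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 200
X_img = rng.normal(size=(n, 16))   # stand-in imaging (GHL-Net-style) features
X_tab = rng.normal(size=(n, 8))    # stand-in tabular features (markers, blood tests)
y = (X_img[:, 0] + X_tab[:, 0] > 0).astype(int)

X = np.hstack([X_img, X_tab])
ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("svm", SVC(probability=True)),
    ],
    voting="soft",  # average predicted class probabilities across models
)
ensemble.fit(X, y)
proba = ensemble.predict_proba(X)[:, 1]  # one fused probability per case
print(proba.shape)
```

Soft voting is only one of several fusion strategies (stacking and weighted averaging are equally plausible readings of "ensemble learning strategy").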

Automated microvascular invasion prediction of hepatocellular carcinoma via deep relation reasoning from dynamic contrast-enhanced ultrasound.

Wang Y, Xie W, Li C, Xu Q, Du Z, Zhong Z, Tang L

PubMed | Jul 16, 2025
Hepatocellular carcinoma (HCC) is a major global health concern, with microvascular invasion (MVI) being a critical prognostic factor linked to early recurrence and poor survival. Preoperative MVI prediction remains challenging, but recent advancements in dynamic contrast-enhanced ultrasound (CEUS) imaging combined with artificial intelligence show promise in improving prediction accuracy. CEUS offers real-time visualization of tumor vascularity, providing unique insights into MVI characteristics. This study proposes a novel deep relation reasoning approach to address the challenges of modeling intricate temporal relationships and extracting complex spatial features from CEUS video frames. Our method integrates CEUS video sequences and introduces a visual graph reasoning framework that correlates intratumoral and peritumoral features across various imaging phases. The system employs dual-path feature extraction, MVI pattern topology construction, Graph Convolutional Network learning, and an MVI pattern discovery module to capture complex features while providing interpretable results. Experimental findings demonstrate that our approach surpasses existing state-of-the-art models in accuracy, sensitivity, specificity, and AUC for MVI prediction. These advancements promise to enhance HCC diagnosis and management, potentially improving patient care. The method's robust performance, even with limited data, underscores its potential for practical clinical application in improving the efficacy and efficiency of HCC diagnosis and treatment planning.
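The Graph Convolutional Network learning step named above is not detailed in the abstract. As a hedged sketch, one standard GCN propagation rule (Kipf-Welling style, H' = ReLU(D^-1/2 (A+I) D^-1/2 H W)) is shown below, with a toy graph and random weights standing in for learned ones; nothing here reproduces the study's CEUS pipeline.

```python
# Hedged sketch of one standard graph-convolution step; toy graph and random
# weights only, not the study's learned model.
import numpy as np

def gcn_layer(A, H, W):
    """One GCN step over adjacency A, node features H, and weight matrix W."""
    A_hat = A + np.eye(A.shape[0])                    # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))     # degree^(-1/2)
    D = np.diag(d_inv_sqrt)
    return np.maximum(0.0, D @ A_hat @ D @ H @ W)     # symmetric norm + ReLU

rng = np.random.default_rng(1)
# 4 nodes standing in for intratumoral/peritumoral regions across phases
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
H = rng.normal(size=(4, 8))   # per-node feature vectors
W = rng.normal(size=(8, 4))   # weights (random here, learned in practice)
H_next = gcn_layer(A, H, W)
print(H_next.shape)
```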

Fully Automated Online Adaptive Radiation Therapy Decision-Making for Cervical Cancer Using Artificial Intelligence.

Sun S, Gong X, Cheng S, Cao R, He S, Liang Y, Yang B, Qiu J, Zhang F, Hu K

PubMed | Jul 15, 2025
Interfraction variations during radiation therapy pose a challenge for patients with cervical cancer, highlighting the benefits of online adaptive radiation therapy (oART). However, adaptation decisions rely on subjective image reviews by physicians, leading to high interobserver variability and inefficiency. This study explores the feasibility of using artificial intelligence for decision-making in oART. A total of 24 patients with cervical cancer who underwent 671 fractions of daily fan-beam computed tomography (FBCT) guided oART were included in this study, with each fraction consisting of a daily FBCT image series and a pair of scheduled and adaptive plans. Dose deviations of scheduled plans exceeding predefined criteria were labeled as "trigger," otherwise as "nontrigger." A data set comprising 588 fractions from 21 patients was used for model development. For the machine learning model (ML), 101 morphologic, gray-level, and dosimetric features were extracted, with feature selection by the least absolute shrinkage and selection operator (LASSO) and classification by support vector machine (SVM). For deep learning, a Siamese network approach was used: the deep learning model of contour (DL_C) used only imaging data and contours, whereas a deep learning model of contour and dose (DL_D) also incorporated dosimetric data. A 5-fold cross-validation strategy was employed for model training and testing, and model performance was evaluated using the area under the curve (AUC), accuracy, precision, and recall. An independent data set comprising 83 fractions from 3 patients was used for model evaluation, with predictions compared against trigger labels assigned by 3 experienced radiation oncologists. Based on dosimetric labels, the 671 fractions were classified into 492 trigger and 179 nontrigger cases. 
The ML model selected 39 key features, primarily reflecting morphologic and gray-level changes in the clinical target volume (CTV) of the uterus (CTV_U), the CTV of the cervix, vagina, and parametrial tissues (CTV_C), and the small intestine. It achieved an AUC of 0.884, with accuracy, precision, and recall of 0.825, 0.824, and 0.827, respectively. The DL_C model demonstrated superior performance with an AUC of 0.917, accuracy of 0.869, precision of 0.860, and recall of 0.881. The DL_D model, which incorporated additional dosimetric data, exhibited a slight decline in performance compared with DL_C. Heatmap analyses indicated that for trigger fractions, the deep learning models focused on regions where the reference CT's CTV_U did not fully encompass the daily FBCT's CTV_U. Evaluation on an independent data set confirmed the robustness of all models. The weighted model's prediction accuracy significantly outperformed the physician consensus (0.855 vs 0.795), with comparable precision (0.917 vs 0.925) but substantially higher recall (0.887 vs 0.790). This study proposes machine learning and deep learning models to identify treatment fractions that may benefit from adaptive replanning in radical radiation therapy for cervical cancer, providing a promising decision-support tool to assist clinicians in determining when to trigger the oART workflow during treatment.
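The LASSO-then-SVM pipeline described for the ML model can be sketched as follows. The features, labels, and hyperparameters below are synthetic stand-ins; the study's 101 morphologic, gray-level, and dosimetric features are not reproduced.

```python
# Hedged sketch of a two-stage pipeline: LASSO feature selection, then an SVM
# classifier on the surviving features. Synthetic data only.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.svm import SVC

rng = np.random.default_rng(2)
n, p = 150, 101
X = rng.normal(size=(n, p))
y = (X[:, 0] - X[:, 1] + 0.1 * rng.normal(size=n) > 0).astype(int)  # toy labels

# Step 1: LASSO keeps only features with nonzero coefficients
lasso = Lasso(alpha=0.05).fit(X, y)
keep = np.flatnonzero(lasso.coef_)

# Step 2: SVM classifier trained on the selected features
svm = SVC(kernel="rbf").fit(X[:, keep], y)
acc = svm.score(X[:, keep], y)   # training accuracy, for illustration only
print(len(keep), round(acc, 3))
```

In practice the `alpha` penalty and SVM kernel would be tuned inside the cross-validation folds, as the study's 5-fold protocol implies.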

Placenta segmentation redefined: review of deep learning integration of magnetic resonance imaging and ultrasound imaging.

Jittou A, Fazazy KE, Riffi J

PubMed | Jul 15, 2025
Placental segmentation is critical for the quantitative analysis of prenatal imaging applications. However, segmenting the placenta on magnetic resonance imaging (MRI) and ultrasound is challenging because of variations in fetal position, dynamic placental development, and image quality. Most segmentation methods define regions of interest with different shapes and intensities, encompassing the entire placenta or specific structures. Recently, deep learning has emerged as a key approach that offers high segmentation performance across diverse datasets. This review focuses on recent advances in deep learning techniques for placental segmentation in medical imaging, specifically the MRI and ultrasound modalities, covering studies from 2019 to 2024. It synthesizes recent research, expands knowledge in this area, and highlights the potential of deep learning approaches to significantly enhance prenatal diagnostics. The findings emphasize the importance of selecting imaging modalities and model architectures tailored to specific clinical scenarios. In addition, integrating MRI and ultrasound can enhance segmentation performance by leveraging complementary information. The review also discusses the challenges associated with the high costs and limited availability of advanced imaging technologies, and provides insights into the current state of placental segmentation techniques and their implications for improving maternal and fetal health outcomes, underscoring the transformative impact of deep learning on prenatal diagnostics.

Automated Whole-Liver Fat Quantification with Magnetic Resonance Imaging-Derived Proton Density Fat Fraction Map: A Prospective Study in Taiwan.

Wu CH, Yen KC, Wang LY, Hsieh PL, Wu WK, Lee PL, Liu CJ

PubMed | Jul 15, 2025
Magnetic resonance imaging (MRI) with a proton density fat fraction (PDFF) sequence is the most accurate noninvasive method for assessing hepatic steatosis. However, manual measurement on the PDFF map is time-consuming. This study aimed to validate automated whole-liver fat quantification for assessing hepatic steatosis with MRI-PDFF. In this prospective study, 80 patients were enrolled from August 2020 to January 2023. Baseline MRI-PDFF and magnetic resonance spectroscopy (MRS) data were collected. The analysis of MRI-PDFF included values from automated whole-liver segmentation (autoPDFF) and the average of measurements taken from eight segments (avePDFF). Twenty patients with autoPDFF values ≥10% who completed 24 weeks of exercise training were also followed for chronologic evaluation. Correlation and concordance coefficients (r and ρ) among the values and their differences were calculated. There were strong correlations between autoPDFF and avePDFF, autoPDFF and MRS, and avePDFF and MRS (r=0.963, r=0.955, and r=0.977, respectively; all p<0.001). The autoPDFF values were also highly concordant with the avePDFF and MRS values (ρ=0.941 and ρ=0.942). The autoPDFF, avePDFF, and MRS values consistently decreased after 24 weeks of exercise. The change in autoPDFF was also highly correlated with the changes in avePDFF and MRS (r=0.961 and r=0.870, respectively; all p<0.001). Automated whole-liver fat quantification might be feasible for clinical trials and practice, yielding values with high correlation and concordance with the time-consuming manual measurements from the PDFF map and with the values from the highly complex processing of MRS (ClinicalTrials.gov identifier: NCT04463667).
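The r and ρ statistics reported above are Pearson correlation and, plausibly, Lin's concordance correlation coefficient (which, unlike r, penalizes systematic bias between methods). A minimal sketch on synthetic stand-in PDFF values, not the study's data:

```python
# Hedged sketch: Pearson r and Lin's concordance correlation coefficient on
# synthetic stand-in PDFF measurements.
import numpy as np

def pearson_r(x, y):
    return np.corrcoef(x, y)[0, 1]

def lin_ccc(x, y):
    """Lin's concordance correlation coefficient (penalizes bias, unlike r)."""
    mx, my = x.mean(), y.mean()
    sxy = ((x - mx) * (y - my)).mean()
    return 2 * sxy / (x.var() + y.var() + (mx - my) ** 2)

rng = np.random.default_rng(3)
auto = rng.uniform(2, 30, size=80)            # stand-in autoPDFF values (%)
manual = auto + rng.normal(0, 1.0, size=80)   # stand-in avePDFF with small noise
print(round(pearson_r(auto, manual), 3), round(lin_ccc(auto, manual), 3))
```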

Non-invasive liver fibrosis screening on CT images using radiomics.

Yoo JJ, Namdar K, Carey S, Fischer SE, McIntosh C, Khalvati F, Rogalla P

PubMed | Jul 15, 2025
To develop a radiomics machine learning model for detecting liver fibrosis on CT images of the liver. With Ethics Board approval, 169 patients (68 women, 101 men; mean age, 51.2 years ± 14.7 [SD]) underwent ultrasound-guided liver biopsy with simultaneous CT acquisitions without and following intravenous contrast material administration. Radiomic features were extracted from two regions of interest (ROIs) on the CT images, one placed at the biopsy site and another distant from it. A development cohort, split further into training and validation cohorts across 100 trials, was used to determine the optimal combinations of contrast, normalization, machine learning model, and radiomic features for liver fibrosis detection based on their area under the receiver operating characteristic curve (AUC) on the validation cohort. The optimal combinations were then used to develop one final liver fibrosis model, which was evaluated on a test cohort. When averaging the AUC across all combinations, non-contrast-enhanced (NC) CT (AUC, 0.6100; 95% CI: 0.5897, 0.6303) outperformed contrast-enhanced CT (AUC, 0.5680; 95% CI: 0.5471, 0.5890). The most effective model was a logistic regression model with input features of maximum, energy, kurtosis, skewness, and small area high gray level emphasis extracted from NC CT normalized using Gamma correction with γ = 1.5 (AUC, 0.7833; 95% CI: 0.7821, 0.7845). The presented radiomics-based logistic regression model holds promise as a non-invasive detection tool for subclinical, asymptomatic liver fibrosis. The model may serve as an opportunistic liver fibrosis screening tool when operated in the background during routine CT examinations covering the liver parenchyma. The final liver fibrosis detection model is publicly available at: https://github.com/IMICSLab/RadiomicsLiverFibrosisDetection
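The final model pairs Gamma-correction normalization (γ = 1.5) with logistic regression on five first-order/texture features. A hedged sketch with a toy image patch and synthetic stand-in features (the study's radiomics extraction is not reproduced here):

```python
# Hedged sketch: min-max normalization followed by power-law (Gamma) correction,
# then logistic regression on five stand-in features. Synthetic data only.
import numpy as np
from sklearn.linear_model import LogisticRegression

def gamma_correct(img, gamma=1.5):
    """Scale intensities to [0, 1], then apply the Gamma transform."""
    lo, hi = float(img.min()), float(img.max())
    return ((img - lo) / (hi - lo + 1e-8)) ** gamma

rng = np.random.default_rng(4)
patch = rng.integers(-100, 200, size=(32, 32)).astype(float)  # toy CT patch (HU-like)
patch_n = gamma_correct(patch)

# Stand-ins for the five selected features (maximum, energy, kurtosis, ...)
X = rng.normal(size=(120, 5))
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)
clf = LogisticRegression().fit(X, y)
print(round(float(patch_n.max()), 3), round(clf.score(X, y), 3))
```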

Comparison of diagnostic performance between manual diagnosis following PROMISE V2 and aPROMISE utilizing ⁶⁸Ga/¹⁸F-PSMA PET/CT.

Enei Y, Yanagisawa T, Okada A, Kuruma H, Okazaki C, Watanabe K, Lenzo NP, Kimura T, Miki K

PubMed | Jul 15, 2025
Automated PROMISE (aPROMISE), an artificial intelligence-supported software for prostate-specific membrane antigen (PSMA) PET/CT based on PROMISE V2, has demonstrated diagnostic utility with better correspondence rates compared to manual diagnosis. However, previous studies have consistently utilized ¹⁸F-PSMA PET/CT. Therefore, we investigated the diagnostic utility of aPROMISE using both ¹⁸F- and ⁶⁸Ga-PSMA PET/CT of Japanese patients with metastatic prostate cancer (mPCa). We retrospectively evaluated 21 PSMA PET/CT images (⁶⁸Ga-PSMA PET/CT: n = 12, ¹⁸F-PSMA PET/CT: n = 9) from 21 patients with mPCa. A single experienced nuclear medicine radiologist performed manual diagnosis following PROMISE V2 and subsequently performed aPROMISE-assisted diagnosis to assess miTNM and details of metastatic sites. We compared the diagnostic time and correspondence rates of miTNM diagnosis between manual and aPROMISE-assisted diagnoses. Additionally, we investigated differences in diagnostic performance between the two radioisotopes. aPROMISE-assisted diagnosis was associated with a significantly shorter median diagnostic time compared to manual diagnosis (427 s [IQR: 370-834] vs. 1,114 s [IQR: 922-1,291], p < 0.001). The time reduction with aPROMISE-assisted diagnosis was particularly notable with ⁶⁸Ga-PSMA PET/CT. aPROMISE had high diagnostic accuracy, with 100% sensitivity for miT, M1a, and M1b stages. Notably, for M1b stage, aPROMISE achieved 100% sensitivity and specificity regardless of the radioisotope used. However, aPROMISE misclassified some lymph nodes and missed five visceral metastases (2 adrenal and 3 liver), resulting in lower sensitivity for miM1c stage (63%). In addition to detecting metastatic sites, aPROMISE successfully provided detailed metrics, including the number of metastatic lesions, total metastatic volume, and SUVmean.
Despite the preliminary nature of the study, aPROMISE-assisted diagnosis significantly reduces diagnostic time and achieves satisfactory accuracy compared to manual diagnosis. While aPROMISE is effective in detecting bone metastases, its limitations in identifying lymph node and visceral metastases must be carefully addressed. This study supports the utility of aPROMISE in Japanese patients with mPCa and underscores the need for further validation in larger cohorts.
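The per-stage sensitivities and specificities quoted above reduce to simple confusion-matrix arithmetic. The counts below are illustrative only, not the study's actual case numbers:

```python
# Hedged sketch of sensitivity/specificity from confusion-matrix counts.
# Counts are illustrative, not taken from the study.
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    sens = tp / (tp + fn) if (tp + fn) else float("nan")
    spec = tn / (tn + fp) if (tn + fp) else float("nan")
    return sens, spec

# e.g. detecting 5 of 8 positive cases gives ~63% sensitivity
sens, spec = sensitivity_specificity(tp=5, fn=3, tn=12, fp=1)
print(round(sens, 3), round(spec, 3))
```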

Ultrafast T2-weighted MR imaging of the urinary bladder using deep learning-accelerated HASTE at 3 Tesla.

Yan L, Tan Q, Kohnert D, Nickel MD, Weiland E, Kubicka F, Jahnke P, Geisel D, Wagner M, Walter-Rittel T

PubMed | Jul 15, 2025
This prospective study aimed to assess the feasibility of a half-Fourier single-shot turbo spin echo (HASTE) sequence with deep learning (DL) reconstruction for ultrafast imaging of the bladder with reduced susceptibility to motion artifacts. Fifty patients underwent pelvic T2w imaging at 3 Tesla using the following MR sequences in sagittal orientation without antiperistaltic premedication: T2-TSE (time of acquisition [TA]: 2.03-4.00 min), standard HASTE (TA: 0.65-1.10 min), and DL-HASTE (TA: 0.25-0.47 min), with a slice thickness of 3 mm and a varying number of slices (25-45). Three radiologists evaluated the image quality of the three sequences quantitatively and qualitatively. Overall image quality of DL-HASTE (average score: 5) was superior to HASTE and T2-TSE (p < .001). DL-HASTE provided the clearest bladder wall delineation, especially in the apical part of the bladder (p < .001). SNR (36.3 ± 6.3) and CNR (50.3 ± 19.7) were highest on DL-HASTE, followed by T2-TSE (33.1 ± 6.3 and 44.3 ± 21.0, respectively; p < .05) and HASTE (21.7 ± 5.4 and 35.8 ± 17.5, respectively; p < .01). A limitation of DL-HASTE and HASTE was their susceptibility to urine flow artifacts within the bladder, which were absent or only minimal on T2-TSE. Diagnostic confidence in assessment of the bladder was highest with the combination of DL-HASTE and T2-TSE (p < .05). DL-HASTE allows ultrafast imaging of the bladder with high image quality and is a promising addition to T2-TSE.
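SNR and CNR definitions vary between studies; the abstract does not give its ROI protocol, so the sketch below uses one common convention (ROI mean, or mean difference, divided by a background noise SD) on synthetic stand-in signal values:

```python
# Hedged sketch: one common SNR/CNR convention on synthetic stand-in values,
# not the study's measurements or its (unstated) ROI protocol.
import numpy as np

def snr(signal_roi, noise_sd):
    """SNR as mean ROI signal over background noise SD."""
    return float(np.mean(signal_roi)) / noise_sd

def cnr(roi_a, roi_b, noise_sd):
    """CNR as absolute mean difference of two ROIs over background noise SD."""
    return abs(float(np.mean(roi_a)) - float(np.mean(roi_b))) / noise_sd

rng = np.random.default_rng(5)
urine = rng.normal(300, 10, size=500)  # bright urine signal on T2w (toy values)
wall = rng.normal(120, 10, size=500)   # bladder wall signal (toy values)
noise_sd = 8.0                         # SD measured in a background ROI
print(round(snr(urine, noise_sd), 1), round(cnr(urine, wall, noise_sd), 1))
```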

¹⁸F-FDG PET-based liver segmentation using deep learning.

Kaneko Y, Miwa K, Yamao T, Miyaji N, Nishii R, Yamazaki K, Nishikawa N, Yusa M, Higashi T

PubMed | Jul 15, 2025
Organ segmentation using ¹⁸F-FDG PET images alone has not been extensively explored. Segmentation methods based on deep learning (DL) have traditionally relied on CT or MRI images, which are vulnerable to alignment issues and artifacts. This study aimed to develop a DL approach for segmenting the entire liver based solely on ¹⁸F-FDG PET images. We analyzed data from 120 patients who were assessed using ¹⁸F-FDG PET. A three-dimensional (3D) U-Net model from nnUNet served as the DL model, with preprocessed PET images as input. The model was trained with 5-fold cross-validation on data from 100 patients, and segmentation accuracy was evaluated on an independent test set of 20 patients. Accuracy was assessed using Intersection over Union (IoU), Dice coefficient, and liver volume. Image quality was evaluated using mean (SUVmean) and maximum (SUVmax) standardized uptake values and signal-to-noise ratio (SNR). The model achieved an average IoU of 0.89 and an average Dice coefficient of 0.94 on the test data from 20 patients, indicating high segmentation accuracy. No significant discrepancies in image quality metrics were identified compared with the ground truth. Liver regions were accurately extracted from ¹⁸F-FDG PET images, allowing rapid and stable evaluation of liver uptake in individual patients without the need for CT or MRI.
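The IoU and Dice metrics cited above are standard overlap measures for binary masks: Dice = 2|A∩B|/(|A|+|B|) and IoU = |A∩B|/|A∪B|. A minimal sketch on toy 1-D masks standing in for 3-D liver masks:

```python
# Minimal sketch of Dice and IoU on binary segmentation masks.
import numpy as np

def dice_iou(pred, gt):
    """Return (Dice, IoU) for two binary masks of the same shape."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return 2 * inter / (pred.sum() + gt.sum()), inter / union

pred = np.array([0, 1, 1, 1, 0, 0])  # toy predicted mask
gt = np.array([0, 1, 1, 0, 0, 1])    # toy ground-truth mask
dice, iou = dice_iou(pred, gt)
print(dice, iou)  # dice = 2*2/6 ≈ 0.667, iou = 2/4 = 0.5
```

Note that the two metrics are related by IoU = Dice/(2 − Dice), so an average Dice of 0.94 and IoU of 0.89 are mutually consistent.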