
Fetal-Net: enhancing Maternal-Fetal ultrasound interpretation through Multi-Scale convolutional neural networks and Transformers.

Islam U, Ali YA, Al-Razgan M, Ullah H, Almaiah MA, Tariq Z, Wazir KM

PubMed · Jul 15, 2025
Ultrasound imaging plays an important role in evaluating fetal growth and maternal-fetal health, but its interpretation is challenging due to the complicated anatomy of the fetus and fluctuations in image quality. Although deep learning methods, including Convolutional Neural Networks (CNNs), have been promising, they have largely been limited to single tasks, such as the segmentation or detection of fetal structures, and thus lack an integrated solution that accounts for the intricate interplay between anatomical structures. To overcome these limitations, Fetal-Net, a new deep learning architecture that integrates multi-scale CNNs and transformer layers, was developed. The model was trained on a large, expertly annotated set of more than 12,000 ultrasound images across different anatomical planes for effective identification of fetal structures and anomaly detection. Fetal-Net achieved excellent performance in anomaly detection, with 96.5% precision, 97.5% accuracy, and 97.8% recall, and showed robustness across various imaging settings, making it a potent means of augmenting prenatal care through refined ultrasound image interpretation.
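
As context for the architecture described above, here is a minimal, illustrative PyTorch sketch of how multi-scale convolutional branches can feed a transformer encoder. The branch count, layer sizes, and pooling are assumptions for illustration only, not the published Fetal-Net design.

```python
import torch
import torch.nn as nn

class MultiScaleCNNTransformer(nn.Module):
    """Illustrative multi-scale CNN + transformer hybrid; layer sizes are
    arbitrary and do NOT reproduce the published Fetal-Net architecture."""

    def __init__(self, num_classes=2, d_model=128):
        super().__init__()
        # Parallel convolutional branches with different receptive fields
        self.branch3 = nn.Conv2d(1, d_model // 2, kernel_size=3, padding=1)
        self.branch5 = nn.Conv2d(1, d_model // 2, kernel_size=5, padding=2)
        self.pool = nn.MaxPool2d(4)
        # Transformer encoder over the flattened spatial tokens
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, num_classes)

    def forward(self, x):                          # x: (B, 1, H, W) grayscale ultrasound
        feats = torch.cat([self.branch3(x), self.branch5(x)], dim=1)
        feats = self.pool(feats)                   # (B, d_model, H/4, W/4)
        tokens = feats.flatten(2).transpose(1, 2)  # (B, N_tokens, d_model)
        tokens = self.encoder(tokens)
        return self.head(tokens.mean(dim=1))       # pool tokens, then classify

model = MultiScaleCNNTransformer()
logits = model(torch.randn(2, 1, 64, 64))          # -> shape (2, 2)
```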

<sup>18</sup>F-FDG PET-based liver segmentation using deep-learning.

Kaneko Y, Miwa K, Yamao T, Miyaji N, Nishii R, Yamazaki K, Nishikawa N, Yusa M, Higashi T

PubMed · Jul 15, 2025
Organ segmentation using <sup>18</sup>F-FDG PET images alone has not been extensively explored. Segmentation methods based on deep learning (DL) have traditionally relied on CT or MRI images, which are vulnerable to alignment issues and artifacts. This study aimed to develop a DL approach for segmenting the entire liver based solely on <sup>18</sup>F-FDG PET images. We analyzed data from 120 patients who were assessed using <sup>18</sup>F-FDG PET. A three-dimensional (3D) U-Net model from the nnU-Net framework served as the DL model, with preprocessed PET images as input. The model was trained with 5-fold cross-validation on data from 100 patients, and segmentation accuracy was evaluated on an independent test set of 20 patients. Accuracy was assessed using Intersection over Union (IoU), the Dice coefficient, and liver volume. Image quality was evaluated using mean (SUVmean) and maximum (SUVmax) standardized uptake values and the signal-to-noise ratio (SNR). The model achieved an average IoU of 0.89 and an average Dice coefficient of 0.94 on the test data from 20 patients, indicating high segmentation accuracy. No significant discrepancies in image quality metrics were identified compared with ground truth. Liver regions were accurately extracted from <sup>18</sup>F-FDG PET images, which allowed rapid and stable evaluation of liver uptake in individual patients without the need for CT or MRI assessments.
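
The IoU and Dice figures reported above are standard overlap metrics. The following is a minimal NumPy sketch of their computation on binary masks; the study's nnU-Net training pipeline itself is not reproduced here.

```python
import numpy as np

def dice_and_iou(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8):
    """Overlap metrics for binary 3D segmentation masks (1 = liver, 0 = background)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    dice = 2.0 * intersection / (pred.sum() + gt.sum() + eps)
    iou = intersection / (union + eps)
    return dice, iou

# Toy check on random masks
rng = np.random.default_rng(0)
pred = rng.random((64, 96, 96)) > 0.5
gt = rng.random((64, 96, 96)) > 0.5
print(dice_and_iou(pred, gt))
```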

Non-invasive liver fibrosis screening on CT images using radiomics.

Yoo JJ, Namdar K, Carey S, Fischer SE, McIntosh C, Khalvati F, Rogalla P

PubMed · Jul 15, 2025
To develop a radiomics machine learning model for detecting liver fibrosis on CT images of the liver. With Ethics Board approval, 169 patients (68 women, 101 men; mean age, 51.2 years ± 14.7 [SD]) underwent ultrasound-guided liver biopsy with simultaneous CT acquisitions without and following intravenous contrast material administration. Radiomic features were extracted from two regions of interest (ROIs) on the CT images, one placed at the biopsy site and another distant from the biopsy site. A development cohort, which was split further into training and validation cohorts across 100 trials, was used to determine the optimal combination of contrast, normalization, machine learning model, and radiomic features for liver fibrosis detection, based on the area under the receiver operating characteristic curve (AUC) on the validation cohort. The optimal combination was then used to develop one final liver fibrosis model, which was evaluated on a test cohort. When averaging the AUC across all combinations, non-contrast-enhanced (NC) CT (AUC, 0.6100; 95% CI: 0.5897, 0.6303) outperformed contrast-enhanced CT (AUC, 0.5680; 95% CI: 0.5471, 0.5890). The most effective model was a logistic regression model with input features of maximum, energy, kurtosis, skewness, and small-area high gray-level emphasis, extracted from NC CT normalized using gamma correction with γ = 1.5 (AUC, 0.7833; 95% CI: 0.7821, 0.7845). The presented radiomics-based logistic regression model holds promise as a non-invasive detection tool for subclinical, asymptomatic liver fibrosis. The model may serve as an opportunistic liver fibrosis screening tool when operated in the background during routine CT examinations covering the liver parenchyma. The final liver fibrosis detection model is publicly available at: https://github.com/IMICSLab/RadiomicsLiverFibrosisDetection
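
The gamma-correction normalization and logistic-regression classifier described above can be sketched as follows with scikit-learn. The feature matrix is a placeholder; extraction of the five named radiomic features (e.g., via pyradiomics) is assumed to happen upstream, and this is not the authors' released model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def gamma_correct(image: np.ndarray, gamma: float = 1.5) -> np.ndarray:
    """Rescale intensities to [0, 1], then apply gamma correction
    (the paper uses gamma = 1.5 on non-contrast CT)."""
    norm = (image - image.min()) / (image.max() - image.min() + 1e-8)
    return norm ** gamma

# Hypothetical feature matrix: one row per patient with the five selected
# radiomic features (maximum, energy, kurtosis, skewness, small-area high
# gray-level emphasis) extracted from the gamma-corrected NC CT ROI.
rng = np.random.default_rng(42)
X = rng.random((169, 5))                     # placeholder features
y = rng.integers(0, 2, 169)                  # placeholder biopsy-proven labels

clf = make_pipeline(StandardScaler(), LogisticRegression())
clf.fit(X, y)
fibrosis_prob = clf.predict_proba(X)[:, 1]   # per-patient fibrosis probability
```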

Comparison of diagnostic performance between manual diagnosis following PROMISE V2 and aPROMISE utilizing Ga/F-PSMA PET/CT.

Enei Y, Yanagisawa T, Okada A, Kuruma H, Okazaki C, Watanabe K, Lenzo NP, Kimura T, Miki K

PubMed · Jul 15, 2025
Automated PROMISE (aPROMISE), an artificial intelligence-supported software for prostate-specific membrane antigen (PSMA) PET/CT based on PROMISE V2, has demonstrated diagnostic utility with better correspondence rates compared to manual diagnosis. However, previous studies have consistently utilized <sup>18</sup>F-PSMA PET/CT. Therefore, we investigated the diagnostic utility of aPROMISE using both <sup>18</sup>F- and <sup>68</sup>Ga-PSMA PET/CT in Japanese patients with metastatic prostate cancer (mPCa). We retrospectively evaluated 21 PSMA PET/CT images (<sup>68</sup>Ga-PSMA PET/CT: n = 12; <sup>18</sup>F-PSMA PET/CT: n = 9) from 21 patients with mPCa. A single, well-experienced nuclear radiologist performed manual diagnosis following PROMISE V2 and subsequently performed aPROMISE-assisted diagnosis to assess miTNM staging and details of metastatic sites. We compared the diagnostic time and correspondence rates of miTNM diagnosis between manual and aPROMISE-assisted diagnoses. Additionally, we investigated differences in diagnostic performance between the two radioisotopes. aPROMISE-assisted diagnosis was associated with a significantly shorter median diagnostic time compared to manual diagnosis (427 s [IQR: 370-834] vs. 1,114 s [IQR: 922-1,291], p < 0.001). The time reduction with aPROMISE-assisted diagnosis was particularly notable with <sup>68</sup>Ga-PSMA PET/CT. aPROMISE had high diagnostic accuracy, with 100% sensitivity for miT, M1a, and M1b stages. Notably, for the M1b stage, aPROMISE achieved 100% sensitivity and specificity, regardless of the radioisotope used. However, aPROMISE misinterpreted lymph nodes in some cases and missed five visceral metastases (2 adrenal and 3 liver), resulting in lower sensitivity for the miM1c stage (63%). In addition to detecting metastatic sites, aPROMISE successfully provided detailed metrics, including the number of metastatic lesions, total metastatic volume, and SUVmean. Despite the preliminary nature of the study, aPROMISE-assisted diagnosis significantly reduces diagnostic time and achieves satisfactory accuracy compared to manual diagnosis. While aPROMISE is effective in detecting bone metastases, its limitations in identifying lymph node and visceral metastases must be carefully addressed. This study supports the utility of aPROMISE in Japanese patients with mPCa and underscores the need for further validation in larger cohorts.

Fully Automated Online Adaptive Radiation Therapy Decision-Making for Cervical Cancer Using Artificial Intelligence.

Sun S, Gong X, Cheng S, Cao R, He S, Liang Y, Yang B, Qiu J, Zhang F, Hu K

PubMed · Jul 15, 2025
Interfraction variations during radiation therapy pose a challenge for patients with cervical cancer, highlighting the benefits of online adaptive radiation therapy (oART). However, adaptation decisions rely on subjective image reviews by physicians, leading to high interobserver variability and inefficiency. This study explores the feasibility of using artificial intelligence for decision-making in oART. A total of 24 patients with cervical cancer who underwent 671 fractions of daily fan-beam computed tomography (FBCT)-guided oART were included in this study, with each fraction consisting of a daily FBCT image series and a pair of scheduled and adaptive plans. Dose deviations of scheduled plans exceeding predefined criteria were labeled as "trigger," otherwise as "nontrigger." A data set comprising 588 fractions from 21 patients was used for model development. For the machine learning (ML) model, 101 morphologic, gray-level, and dosimetric features were extracted, with feature selection by the least absolute shrinkage and selection operator (LASSO) and classification by a support vector machine (SVM). For deep learning, a Siamese network approach was used: the contour-only deep learning model (DL_C) used only imaging data and contours, whereas the contour-and-dose model (DL_D) also incorporated dosimetric data. A 5-fold cross-validation strategy was employed for model training and testing, and model performance was evaluated using the area under the curve (AUC), accuracy, precision, and recall. An independent data set comprising 83 fractions from 3 patients was used for model evaluation, with predictions compared against trigger labels assigned by 3 experienced radiation oncologists. Based on dosimetric labels, the 671 fractions were classified into 492 trigger and 179 nontrigger cases. The ML model selected 39 key features, primarily reflecting morphologic and gray-level changes in the clinical target volume (CTV) of the uterus (CTV_U); the CTV of the cervix, vagina, and parametrial tissues (CTV_C); and the small intestine. It achieved an AUC of 0.884, with accuracy, precision, and recall of 0.825, 0.824, and 0.827, respectively. The DL_C model demonstrated superior performance with an AUC of 0.917, accuracy of 0.869, precision of 0.860, and recall of 0.881. The DL_D model, which incorporated additional dosimetric data, exhibited a slight decline in performance compared with DL_C. Heatmap analyses indicated that for trigger fractions, the deep learning models focused on regions where the reference CT's CTV_U did not fully encompass the daily FBCT's CTV_U. Evaluation on the independent data set confirmed the robustness of all models. The weighted model's prediction accuracy significantly outperformed the physician consensus (0.855 vs 0.795), with comparable precision (0.917 vs 0.925) but substantially higher recall (0.887 vs 0.790). This study proposes machine learning and deep learning models to identify treatment fractions that may benefit from adaptive replanning in radical radiation therapy for cervical cancer, providing a promising decision-support tool to assist clinicians in determining when to trigger the oART workflow during treatment.
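
The ML branch described above (LASSO feature selection followed by an SVM) maps onto a standard scikit-learn pipeline. The sketch below uses placeholder data and reflects the general pattern, not the authors' exact configuration.

```python
import numpy as np
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LassoCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder data: 101 morphologic / gray-level / dosimetric features per
# fraction; label 1 = "trigger" (replanning indicated), 0 = "nontrigger".
rng = np.random.default_rng(0)
X = rng.random((588, 101))
y = (X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.standard_normal(588) > 0.9).astype(int)

pipeline = make_pipeline(
    StandardScaler(),
    SelectFromModel(LassoCV(cv=5)),   # LASSO keeps features with nonzero coefficients
    SVC(probability=True),            # SVM classifies on the selected features
)
pipeline.fit(X, y)
trigger_prob = pipeline.predict_proba(X)[:, 1]
```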

Preoperative prediction value of 2.5D deep learning model based on contrast-enhanced CT for lymphovascular invasion of gastric cancer.

Sun X, Wang P, Ding R, Ma L, Zhang H, Zhu L

PubMed · Jul 15, 2025
To develop and validate artificial intelligence models, based on venous-phase contrast-enhanced CT (CECT) images and using deep learning (DL) and radiomics approaches, to predict lymphovascular invasion (LVI) in gastric cancer prior to surgery. We retrospectively analyzed data from 351 gastric cancer patients, randomly splitting them into two cohorts (training cohort, n = 246; testing cohort, n = 105) in a 7:3 ratio. The tumor region of interest (ROI) was outlined on venous-phase CT images as the input for the development of the radiomics, 2D DL, and 3D DL models (DL2D and DL3D). Of note, by centering the analysis on the tumor's maximum cross-section and incorporating seven adjacent 2D images, we generated stable 2.5D data to establish a multi-instance learning (MIL) model. Meanwhile, clinical and feature-combined models, which integrated traditional CT enhancement parameters (Ratio), radiomics, and MIL features, were also constructed. Model performance was evaluated by the area under the curve (AUC), confusion matrices, and detailed metrics such as sensitivity and specificity. A nomogram based on the combined model was established and applied to clinical practice. Calibration curves were used to evaluate the consistency between each model's predicted LVI and the actual LVI status, and decision curve analysis (DCA) was used to evaluate the net benefit of each model. Among the developed models, the 2.5D MIL and combined models exhibited superior performance compared with the clinical, radiomics, DL2D, and DL3D models, with AUC values of 0.820, 0.822, 0.748, 0.725, 0.786, and 0.711 on the testing set, respectively. Additionally, the 2.5D MIL and combined models showed good calibration for LVI prediction and provided a net clinical benefit when the threshold probability ranged from 0.31 to 0.98 and from 0.28 to 0.84, respectively, indicating their clinical usefulness. The MIL and combined models demonstrate strong performance in predicting preoperative lymphovascular invasion in gastric cancer, offering valuable insights for clinicians in selecting appropriate treatment options for gastric cancer patients.
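
The 2.5D input construction described above (the slice with the maximum tumor cross-section plus adjacent slices) can be sketched in NumPy as follows. Whether the seven adjacent slices are split evenly around the central slice is an assumption; the paper's exact slice selection may differ.

```python
import numpy as np

def extract_25d_stack(volume: np.ndarray, mask: np.ndarray, n_adjacent: int = 7):
    """Build a 2.5D stack: the axial slice with the largest tumor cross-section
    plus n_adjacent neighboring slices (split roughly evenly around it).
    volume, mask: (Z, H, W) arrays; mask is the tumor ROI."""
    areas = mask.sum(axis=(1, 2))            # tumor area per axial slice
    center = int(np.argmax(areas))           # slice of maximum cross-section
    half = n_adjacent // 2
    lo = max(0, center - half)
    hi = min(volume.shape[0], center + (n_adjacent - half) + 1)
    return volume[lo:hi]                     # approx. (n_adjacent + 1, H, W)

# Toy example with a synthetic tumor spanning slices 18-22
vol = np.random.rand(40, 128, 128)
msk = np.zeros_like(vol)
msk[18:23, 40:80, 40:80] = 1
print(extract_25d_stack(vol, msk).shape)
```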

Placenta segmentation redefined: review of deep learning integration of magnetic resonance imaging and ultrasound imaging.

Jittou A, Fazazy KE, Riffi J

PubMed · Jul 15, 2025
Placental segmentation is critical for the quantitative analysis of prenatal imaging applications. However, segmenting the placenta using magnetic resonance imaging (MRI) and ultrasound is challenging because of variations in fetal position, dynamic placental development, and image quality. Most segmentation methods define regions of interest with different shapes and intensities, encompassing the entire placenta or specific structures. Recently, deep learning has emerged as a key approach that offers high segmentation performance across diverse datasets. This review focuses on recent advances in deep learning techniques for placental segmentation in medical imaging, specifically the MRI and ultrasound modalities, and covers studies from 2019 to 2024. It synthesizes recent research, expands knowledge in this innovative area, and highlights the potential of deep learning approaches to significantly enhance prenatal diagnostics. These findings emphasize the importance of selecting appropriate imaging modalities and model architectures tailored to specific clinical scenarios. In addition, integrating both MRI and ultrasound can enhance segmentation performance by leveraging complementary information. This review also discusses the challenges associated with the high costs and limited availability of advanced imaging technologies. It provides insights into the current state of placental segmentation techniques and their implications for improving maternal and fetal health outcomes, underscoring the transformative impact of deep learning on prenatal diagnostics.

LRMR: LLM-Driven Relational Multi-node Ranking for Lymph Node Metastasis Assessment in Rectal Cancer

Yaoxian Dong, Yifan Gao, Haoyue Li, Yanfen Cui, Xin Gao

arXiv preprint · Jul 15, 2025
Accurate preoperative assessment of lymph node (LN) metastasis in rectal cancer guides treatment decisions, yet conventional MRI evaluation based on morphological criteria shows limited diagnostic performance. While some artificial intelligence models have been developed, they often operate as black boxes, lacking the interpretability needed for clinical trust. Moreover, these models typically evaluate nodes in isolation, overlooking the patient-level context. To address these limitations, we introduce LRMR, an LLM-Driven Relational Multi-node Ranking framework. This approach reframes the diagnostic task from a direct classification problem into a structured reasoning and ranking process. The LRMR framework operates in two stages. First, a multimodal large language model (LLM) analyzes a composite montage image of all LNs from a patient, generating a structured report that details ten distinct radiological features. Second, a text-based LLM performs pairwise comparisons of these reports between different patients, establishing a relative risk ranking based on the severity and number of adverse features. We evaluated our method on a retrospective cohort of 117 rectal cancer patients. LRMR achieved an area under the curve (AUC) of 0.7917 and an F1-score of 0.7200, outperforming a range of deep learning baselines, including ResNet50 (AUC 0.7708). Ablation studies confirmed the value of our two main contributions: removing the relational ranking stage or the structured prompting stage led to a significant performance drop, with AUCs falling to 0.6875 and 0.6458, respectively. Our work demonstrates that decoupling visual perception from cognitive reasoning through a two-stage LLM framework offers a powerful, interpretable, and effective new paradigm for assessing lymph node metastasis in rectal cancer.
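
The second, relational stage of LRMR reduces to aggregating pairwise LLM judgments into a global risk ordering. The sketch below stubs out the LLM call and uses simple win counts; the authors' actual prompting and aggregation scheme may differ.

```python
from itertools import combinations

def llm_compare(report_a: str, report_b: str) -> int:
    """Stub for the pairwise LLM call. A real system would prompt a text LLM
    with both structured radiology reports and parse its verdict; here a
    placeholder heuristic stands in so the sketch runs."""
    return 1 if len(report_a) > len(report_b) else -1

def rank_patients(reports: dict) -> list:
    """Aggregate pairwise comparisons into a global risk ranking via win counts."""
    wins = {pid: 0 for pid in reports}
    for a, b in combinations(reports, 2):
        if llm_compare(reports[a], reports[b]) > 0:
            wins[a] += 1
        else:
            wins[b] += 1
    return sorted(wins, key=wins.get, reverse=True)   # most wins = highest risk

ranking = rank_patients({
    "patient_1": "3 nodes; irregular border; heterogeneous signal; ...",
    "patient_2": "1 node; smooth border; homogeneous signal; ...",
})
print(ranking)
```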

Ultrafast T2-weighted MR imaging of the urinary bladder using deep learning-accelerated HASTE at 3 Tesla.

Yan L, Tan Q, Kohnert D, Nickel MD, Weiland E, Kubicka F, Jahnke P, Geisel D, Wagner M, Walter-Rittel T

PubMed · Jul 15, 2025
This prospective study aimed to assess the feasibility of a half-Fourier single-shot turbo spin echo (HASTE) sequence with deep learning (DL) reconstruction for ultrafast imaging of the bladder with reduced susceptibility to motion artifacts. Fifty patients underwent pelvic T2-weighted imaging at 3 Tesla using the following MR sequences in sagittal orientation without antiperistaltic premedication: T2-TSE (time of acquisition [TA]: 2.03-4.00 min), standard HASTE (TA: 0.65-1.10 min), and DL-HASTE (TA: 0.25-0.47 min), with a slice thickness of 3 mm and a varying number of slices (25-45). Three radiologists evaluated the image quality of the three sequences quantitatively and qualitatively. Overall image quality of DL-HASTE (average score: 5) was superior to that of HASTE and T2-TSE (p < .001). DL-HASTE provided the clearest bladder wall delineation, especially in the apical part of the bladder (p < .001). SNR (36.3 ± 6.3) and CNR (50.3 ± 19.7) were highest on DL-HASTE, followed by T2-TSE (33.1 ± 6.3 and 44.3 ± 21.0, respectively; p < .05) and HASTE (21.7 ± 5.4 and 35.8 ± 17.5, respectively; p < .01). A limitation of DL-HASTE and HASTE was their susceptibility to urine-flow artifacts within the bladder, which were absent or only minimal on T2-TSE. Diagnostic confidence in assessment of the bladder was highest with the combination of DL-HASTE and T2-TSE (p < .05). DL-HASTE allows ultrafast imaging of the bladder with high image quality and is a promising addition to T2-TSE.
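
The SNR and CNR values reported above are typically computed from ROI statistics. A minimal sketch under one common definition follows; the study's exact ROI placement and formulas may differ.

```python
import numpy as np

def snr_cnr(signal_roi: np.ndarray, background_roi: np.ndarray):
    """One common definition: SNR = mean(signal) / SD(background);
    CNR = |mean(signal) - mean(background)| / SD(background)."""
    mu_s, mu_b = signal_roi.mean(), background_roi.mean()
    sd_b = background_roi.std()
    return mu_s / sd_b, abs(mu_s - mu_b) / sd_b

# Toy ROIs standing in for a bladder-wall region and adjacent background
rng = np.random.default_rng(7)
sig = rng.normal(200, 10, size=500)
bkg = rng.normal(60, 8, size=500)
print(snr_cnr(sig, bkg))
```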

Explainable AI for Precision Oncology: A Task-Specific Approach Using Imaging, Multi-omics, and Clinical Data

Park, Y., Park, S., Bae, E.

medRxiv preprint · Jul 14, 2025
Despite continued advances in oncology, cancer remains a leading cause of global mortality, highlighting the need for diagnostic and prognostic tools that are both accurate and interpretable. Unimodal approaches often fail to capture the biological and clinical complexity of tumors. In this study, we present a suite of task-specific AI models that leverage CT imaging, multi-omics profiles, and structured clinical data to address distinct challenges in segmentation, classification, and prognosis. We developed three independent models across large public datasets. Task 1 applied a 3D U-Net to segment pancreatic tumors from CT scans, achieving a Dice Similarity Coefficient (DSC) of 0.7062. Task 2 employed a hierarchical ensemble of omics-based classifiers to distinguish tumor from normal tissue and classify six major cancer types with 98.67% accuracy. Task 3 benchmarked classical machine learning models on clinical data for prognosis prediction across three cancers (LIHC, KIRC, STAD), achieving strong performance (e.g., C-index of 0.820 in KIRC, AUC of 0.978 in LIHC). Across all tasks, explainable AI methods such as SHAP and attention-based visualization enabled transparent interpretation of model outputs. These results demonstrate the value of tailored, modality-aware models and underscore the clinical potential of such AI systems for precision oncology.

Technical Foundations:
- Segmentation (Task 1): A custom 3D U-Net was trained using the Task07_Pancreas dataset from the Medical Segmentation Decathlon (MSD). CT images were preprocessed with MONAI-based pipelines, resampled to (64, 96, 96) voxels, and intensity-windowed to HU ranges of -100 to 240.
- Classification (Task 2): Multi-omics data from TCGA (gene expression, methylation, miRNA, CNV, and mutation profiles) were log-transformed and normalized. Five modality-specific LightGBM classifiers generated meta-features for a late-fusion ensemble. Stratified 5-fold cross-validation was used for evaluation.
- Prognosis (Task 3): Clinical variables from TCGA were curated and imputed (median/mode), with high-missing-rate columns removed. Survival models (e.g., Cox-PH, Random Forest, XGBoost) were trained with early stopping. No omics or imaging data were used in this task.
- Interpretability: SHAP values were computed for all tree-based models, and attention-based overlays were used in imaging tasks to visualize salient regions.
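
For the interpretability component, SHAP values for tree ensembles such as the LightGBM classifiers in Task 2 can be computed with SHAP's TreeExplainer. The sketch below uses placeholder data and illustrates the general pattern, not the authors' pipeline.

```python
import lightgbm as lgb
import numpy as np
import shap

# Placeholder omics-style feature matrix and binary tumor/normal labels
rng = np.random.default_rng(1)
X = rng.random((200, 50))
y = rng.integers(0, 2, 200)

model = lgb.LGBMClassifier(n_estimators=100).fit(X, y)

# TreeExplainer computes exact SHAP values for tree ensembles
explainer = shap.TreeExplainer(model)
vals = explainer.shap_values(X)
if isinstance(vals, list):        # older SHAP versions return one array per class
    vals = vals[1]                # keep SHAP values for the positive class

importance = np.abs(vals).mean(axis=0)   # mean |SHAP| per feature = global importance
top5 = np.argsort(importance)[::-1][:5]
print("Top features by SHAP importance:", top5)
```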