
An approach for cancer outcomes modelling using a comprehensive synthetic dataset.

Tu L, Choi HHF, Clark H, Lloyd SAM

PubMed | Jul 24 2025
Limited patient data availability presents a challenge for efficient machine learning (ML) model development. Recent studies have proposed methods to generate synthetic medical images but lack the corresponding prognostic information required for predicting outcomes. We present a cancer outcomes modelling approach that involves generating a comprehensive synthetic dataset which can accurately mimic a real dataset. A real public dataset containing computed tomography-based radiomic features and clinical information for 132 non-small cell lung cancer patients was used. A synthetic dataset of virtual patients was generated using a conditional tabular generative adversarial network. Models to predict two-year overall survival were trained on real or synthetic data using combinations of four feature selection methods (mutual information, ANOVA F-test, recursive feature elimination, random forest (RF) importance weights) and six ML algorithms (RF, k-nearest neighbours, logistic regression, support vector machine, XGBoost, Gaussian Naïve Bayes). Models were tested on withheld real data and externally validated. Real and synthetic datasets were similar, with an average complement of the Kolmogorov-Smirnov test statistic (1 - KS) of 0.871 for continuous features. Chi-square testing confirmed agreement for discrete features (p < 0.001). XGBoost using RF importance-based features performed the most consistently for both datasets, with percent differences in balanced accuracy and area under the precision-recall curve of < 1.3%. Preliminary findings demonstrate the potential application of synthetic radiomic and clinical data augmentation for cancer outcomes modelling, although further validation with larger, more diverse datasets is crucial. While our approach was described in a lung context, it may be applied to other sites or endpoints.
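
The general recipe described here (fit a conditional tabular GAN on the real table, sample virtual patients, then compare per-feature distributions via 1 - KS) can be sketched with the open-source `ctgan` package and SciPy. The file name, column names, and hyperparameters below are illustrative assumptions, not the authors' exact configuration:

```python
# Minimal sketch of the synthetic-data recipe, assuming the `ctgan` package;
# the input table and column names are hypothetical stand-ins.
import pandas as pd
from ctgan import CTGAN
from scipy.stats import ks_2samp

real = pd.read_csv("radiomics_clinical.csv")             # hypothetical input table
discrete_cols = ["stage", "histology", "survival_2yr"]   # assumed discrete columns

# Fit a conditional tabular GAN on the real table and sample virtual patients.
model = CTGAN(epochs=300)
model.fit(real, discrete_cols)
synthetic = model.sample(len(real))

# Per-feature similarity: 1 - KS statistic (1.0 = identical distributions).
continuous_cols = [c for c in real.columns if c not in discrete_cols]
scores = {c: 1 - ks_2samp(real[c], synthetic[c]).statistic for c in continuous_cols}
print("mean similarity:", sum(scores.values()) / len(scores))
```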

SUP-Net: Slow-time Upsampling Network for Aliasing Removal in Doppler Ultrasound.

Nahas H, Yu ACH

PubMed | Jul 24 2025
Doppler ultrasound modalities, which include spectral Doppler and color flow imaging, are frequently used tools for flow diagnostics because of their real-time point-of-care applicability and high temporal resolution. When implemented using pulse-echo sensing and phase-shift estimation principles, these modalities' pulse repetition frequency (PRF) determines the maximum detectable velocity. If the PRF must be set below the Nyquist requirement due to imaging requirements or hardware constraints, aliasing errors or spectral overlap may corrupt the estimated flow data. To solve this issue, we have devised a deep learning-based framework, powered by a custom slow-time upsampling network (SUP-Net), that leverages spatiotemporal characteristics to upsample the received ultrasound signals across pulse echoes acquired using high-frame-rate ultrasound (HiFRUS). Our framework infers high-PRF signals from signals acquired at low PRF, thereby improving Doppler ultrasound's flow estimation quality. SUP-Net was trained and evaluated on in vivo femoral acquisitions from 20 participants and was applied recursively to resolve scenarios with excessive aliasing across a range of PRFs. We report the successful reconstruction of slow-time signals whose frequency content exceeds the Nyquist limit once and even twice over. By operating on the fundamental slow-time signals, our framework can resolve aliasing-related artifacts in several downstream modalities, including color Doppler and pulse wave Doppler.
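
The ambiguity SUP-Net is trained to resolve can be shown with a toy NumPy simulation (not the authors' model): a Doppler frequency above PRF/2 folds back into the sampled slow-time spectrum. The PRF and tone values are assumed for illustration:

```python
# Toy illustration of slow-time aliasing: a tone above PRF/2 wraps around,
# which is exactly what inferring a higher effective PRF would undo.
import numpy as np

prf = 2000.0        # pulse repetition frequency (Hz), assumed value
f_dopp = 1500.0     # true Doppler shift (Hz), above the Nyquist limit prf/2
n = 128             # slow-time ensemble length

t = np.arange(n) / prf
signal = np.exp(2j * np.pi * f_dopp * t)   # sampled slow-time signal

spectrum = np.fft.fftshift(np.fft.fft(signal))
freqs = np.fft.fftshift(np.fft.fftfreq(n, d=1 / prf))
f_est = freqs[np.argmax(np.abs(spectrum))]

print(f"true {f_dopp} Hz -> estimated {f_est:.0f} Hz")
# true 1500.0 Hz -> estimated -500 Hz: the tone has aliased by one full PRF.
```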

Agentic AI in radiology: Emerging Potential and Unresolved Challenges.

Dietrich N

PubMed | Jul 24 2025
This commentary introduces agentic artificial intelligence (AI) as an emerging paradigm in radiology, marking a shift from passive, user-triggered tools to systems capable of autonomous workflow management, task planning, and clinical decision support. Agentic AI models may dynamically prioritize imaging studies, tailor recommendations based on patient history and scan context, and automate administrative follow-up tasks, offering potential gains in efficiency, triage accuracy, and cognitive support. While not yet widely implemented, early pilot studies and proof-of-concept applications highlight promising utility across high-volume and high-acuity settings. Key barriers, including limited clinical validation, evolving regulatory frameworks, and integration challenges, must be addressed to ensure safe, scalable deployment. Agentic AI represents a forward-looking evolution in radiology that warrants careful development and clinician-guided implementation.

Preoperative MRI-based radiomics analysis of intra- and peritumoral regions for predicting CD3 expression in early cervical cancer.

Zhang R, Jiang C, Li F, Li L, Qin X, Yang J, Lv H, Ai T, Deng L, Huang C, Xing H, Wu F

PubMed | Jul 23 2025
This study investigates the correlation between CD3 T-cell expression levels and cervical cancer (CC) while developing a magnetic resonance (MR) imaging-based radiomics model for preoperative prediction of CD3 T-cell expression levels. Prognostic correlations between CD3D, CD3E, and CD3G gene expressions and various cancers were analyzed using the Cancer Genome Atlas (TCGA) database. Protein-protein interaction (PPI) analysis via the STRING database identified associations between these genes and T lymphocyte activity. Gene Set Enrichment Analysis (GSEA) revealed immune pathway enrichment by categorizing genes based on CD3D expression levels. Correlations between immune checkpoint molecules and CD3 complex genes were also assessed. The study retrospectively included 202 patients with pathologically confirmed early-stage CC who underwent preoperative MRI, divided into training and test groups. Radiomic features were extracted from the whole-lesion tumor region of interest (ROI_tumor) and from peritumoral regions with 3 mm and 5 mm margins (ROI_3mm and ROI_5mm, respectively). Various machine learning algorithms, including Support Vector Machine (SVM), Logistic Regression, Random Forest, AdaBoost, and Decision Tree, were used to construct radiomics models based on different ROIs, and diagnostic performances were compared to identify the optimal approach. The best-performing algorithm was combined with intra- and peritumoral features and clinically relevant independent risk factors to develop a comprehensive predictive model. Analysis of the TCGA database demonstrated significant associations between CD3D, CD3E, and CD3G expressions and several cancers, including CC (p < 0.05). PPI analysis highlighted connections between these genes and T lymphocyte function, while GSEA indicated enrichment of immune-related pathways linked to CD3D. Immune checkpoint correlations showed positive associations with CD3 complex genes. Radiomics analysis selected 18 features from ROI_tumor and ROI_3mm across MRI sequences. The SVM algorithm achieved the highest predictive performance for CD3 T-cell expression status, with an area under the curve (AUC) of 0.93 in the training group and 0.92 in the test group. This MR-based radiomics model effectively predicts CD3 expression status in patients with early-stage CC, offering a non-invasive tool for preoperative assessment of CD3 expression, but its clinical utility needs further prospective validation.
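
The final modelling step (an SVM on the selected intra- and peritumoral radiomic features, scored by AUC) follows a standard scikit-learn pattern. A hedged sketch, with random data standing in for the real cohort and feature matrix:

```python
# Sketch of the SVM step only; the 202 x 18 random matrix is a stand-in
# for the selected ROI_tumor + ROI_3mm radiomic features.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(202, 18))          # selected radiomic features
y = rng.integers(0, 2, size=202)        # binary CD3 expression status

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
clf.fit(X_tr, y_tr)
print("test AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```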

BrainCNN: Automated Brain Tumor Grading from Magnetic Resonance Images Using a Convolutional Neural Network-Based Customized Model.

Yang J, Siddique MA, Ullah H, Gilanie G, Por LY, Alshathri S, El-Shafai W, Aldossary H, Gadekallu TR

PubMed | Jul 23 2025
Brain tumors pose a significant risk to human life, making accurate grading essential for effective treatment planning and improved survival rates. Magnetic Resonance Imaging (MRI) plays a crucial role in this process. The objective of this study was to develop an automated brain tumor grading system utilizing deep learning techniques. A dataset comprising 293 MRI scans from patients was obtained from the Department of Radiology at Bahawal Victoria Hospital in Bahawalpur, Pakistan. The proposed approach integrates a specialized Convolutional Neural Network (CNN) with pre-trained models to classify brain tumors into low-grade tumor (LGT) and high-grade tumor (HGT) categories with high accuracy. To assess the model's robustness, experiments were conducted using various methods: (1) raw MRI slices, (2) MRI segments containing only the tumor area, (3) feature-extracted slices derived from the original images through the proposed CNN architecture, and (4) feature-extracted slices from tumor area-only segmented images using the proposed CNN. The MRI slices and the features extracted from them were classified using machine learning models, including Support Vector Machine (SVM) and CNN architectures based on transfer learning, such as MobileNet, Inception V3, and ResNet-50. Additionally, a custom model was specifically developed for this research. The proposed model achieved an impressive peak accuracy of 99.45%, with classification accuracies of 99.56% for low-grade tumors and 99.49% for high-grade tumors, surpassing traditional methods. These results not only enhance the accuracy of brain tumor grading but also improve computational efficiency by reducing processing time and the number of iterations required.
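
A transfer-learning setup of the kind named above (ResNet-50 backbone, new binary LGT/HGT head) looks roughly like the following; PyTorch is an assumption, as the abstract does not state the framework, and the training batch is a stand-in:

```python
# Illustrative transfer learning for binary tumor grading: freeze the
# pre-trained backbone, replace and train only the classification head.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
for p in model.parameters():                     # freeze backbone weights
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)    # new head: LGT vs. HGT

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)

x = torch.randn(4, 3, 224, 224)                  # stand-in batch of MRI slices
loss = criterion(model(x), torch.tensor([0, 1, 1, 0]))
loss.backward()
optimizer.step()
```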

Anatomically Based Multitask Deep Learning Radiomics Nomogram Predicts the Implant Failure Risk in Sinus Floor Elevation.

Zhu Y, Liu Y, Zhao Y, Lu Q, Wang W, Chen Y, Ji P, Chen T

PubMed | Jul 23 2025
To develop and assess the performance of an anatomically based multitask deep learning radiomics nomogram (AMDRN) system to predict implant failure risk before maxillary sinus floor elevation (MSFE) while incorporating automated segmentation of key anatomical structures. We retrospectively collected patients' preoperative cone beam computed tomography (CBCT) images and electronic medical records (EMRs). First, the nn-UNet v2 model was optimized to segment the maxillary sinus (MS), Schneiderian membrane (SM), and residual alveolar bone (RAB). Based on the segmentation masks, a deep learning model (3D-Attention-ResNet) and a radiomics model were developed to extract 3D features from CBCT scans, generating the DL Score and the Rad Score. Significant clinical features were also extracted from EMRs to build a clinical model. These components were then integrated using logistic regression (LR) to create the AMDRN model, which includes a visualization module to support clinical decision-making. Segmentation results for MS, RAB, and SM achieved high Dice coefficients on the test set, with values of 99.50% ± 0.84%, 92.53% ± 3.78%, and 91.58% ± 7.16%, respectively. On an independent test set, the Clinical model, Radiomics model, 3D-DL model, and AMDRN model achieved prediction accuracies of 60%, 76%, 82%, and 90%, respectively, with AMDRN achieving the highest AUC of 93%. The AMDRN system enables efficient preoperative prediction of implant failure risk in MSFE and accurate segmentation of critical anatomical structures, supporting personalized treatment planning and clinical risk management.
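
The fusion stage described here (DL Score, Rad Score, and clinical covariates stacked into a logistic regression) reduces to a few lines. A minimal sketch, with synthetic stand-ins for the real per-patient scores:

```python
# Sketch of the AMDRN fusion stage only; all inputs are random stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
dl_score = rng.uniform(size=n)        # from the 3D-Attention-ResNet branch
rad_score = rng.uniform(size=n)       # from the radiomics branch
clinical = rng.normal(size=(n, 3))    # assumed clinical risk factors
y = rng.integers(0, 2, size=n)        # implant failure (1) vs. success (0)

X = np.column_stack([dl_score, rad_score, clinical])
amdrn = LogisticRegression().fit(X, y)
print("predicted failure risk:", amdrn.predict_proba(X[:1])[0, 1])
```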

Deep Learning-Based Prediction of Microvascular Invasion and Survival Outcomes in Hepatocellular Carcinoma Using Dual-phase CT Imaging of Tumors and Lesser Omental Adipose: A Multicenter Study.

Miao S, Sun M, Li X, Wang M, Jiang Y, Liu Z, Wang Q, Ding X, Wang R

PubMed | Jul 23 2025
Accurate preoperative prediction of microvascular invasion (MVI) in hepatocellular carcinoma (HCC) remains challenging. Current imaging biomarkers show limited predictive performance. To develop a deep learning model based on preoperative multiphase CT images of tumors and lesser omental adipose tissue (LOAT) for predicting MVI status and to analyze associated survival outcomes. This retrospective study included pathologically confirmed HCC patients from two medical centers between 2016 and 2023. A dual-branch feature fusion model based on ResNet18 was constructed, which extracted fused features from dual-phase CT images of both tumors and LOAT. The model's performance was evaluated on both internal and external test sets. Logistic regression was used to identify independent predictors of MVI. Based on MVI status, patients in the training, internal test, and external test cohorts were stratified into high- and low-risk groups, and overall survival differences were analyzed. The model incorporating LOAT features outperformed the tumor-only modality, achieving an AUC of 0.889 (95% CI: [0.882, 0.962], P=0.004) in the internal test set and 0.826 (95% CI: [0.793, 0.872], P=0.006) in the external test set. Both results surpassed the independent diagnoses of three radiologists (average AUC=0.772). Multivariate logistic regression confirmed that maximum tumor diameter and LOAT area were independent predictors of MVI. Further Cox regression analysis showed that MVI-positive patients had significantly increased mortality risks in both the internal test set (Hazard Ratio [HR]=2.246, 95% CI: [1.088, 4.637], P=0.029) and external test set (HR=3.797, 95% CI: [1.262, 11.422], P=0.018). This study is the first to use a deep learning framework integrating LOAT and tumor imaging features, improving preoperative MVI risk stratification accuracy. The independent prognostic value of LOAT was validated in multicenter cohorts, highlighting its potential to guide personalized surgical planning.
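
A dual-branch feature-fusion classifier in the spirit of this design (two ResNet18 encoders, one per region, with concatenated features feeding a shared head) can be sketched as follows; all architectural details beyond what the abstract states are assumptions:

```python
# Sketch of a dual-branch ResNet18 fusion model; input shapes and the
# classification head are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

class DualBranchMVI(nn.Module):
    def __init__(self):
        super().__init__()
        self.tumor_branch = models.resnet18(weights=None)
        self.loat_branch = models.resnet18(weights=None)
        feat = self.tumor_branch.fc.in_features      # 512 features per branch
        self.tumor_branch.fc = nn.Identity()         # strip classifier heads
        self.loat_branch.fc = nn.Identity()
        self.head = nn.Linear(2 * feat, 2)           # MVI-positive vs. negative

    def forward(self, tumor_img, loat_img):
        fused = torch.cat([self.tumor_branch(tumor_img),
                           self.loat_branch(loat_img)], dim=1)
        return self.head(fused)

model = DualBranchMVI()
logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 3, 224, 224))
print(logits.shape)   # torch.Size([2, 2])
```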

Artificial Intelligence for Detecting Pulmonary Embolisms via CT: A Workflow-oriented Implementation.

Abed S, Hergan K, Dörrenberg J, Brandstetter L, Lauschmann M

PubMed | Jul 23 2025
Detecting Pulmonary Embolism (PE) is critical for effective patient care, and Artificial Intelligence (AI) has shown promise in supporting radiologists in this task. Integrating AI into radiology workflows requires not only evaluation of its diagnostic accuracy but also assessment of its acceptance among clinical staff. This study aims to evaluate the performance of an AI algorithm in detecting pulmonary embolisms (PEs) on contrast-enhanced computed tomography pulmonary angiograms (CTPAs) and to assess the level of acceptance of the algorithm among radiology department staff. This retrospective study analyzed anonymized CTPA data from a university clinic. Surveys were conducted at three and nine months after the implementation of a commercially available AI algorithm designed to flag CTPA scans with suspected PE. A thoracic radiologist and a cardiac radiologist served as the reference standard for evaluating the performance of the algorithm. The AI analyzed 59 CTPA cases during the initial evaluation and 46 cases in the follow-up assessment. In the first evaluation, the AI algorithm demonstrated a sensitivity of 84.6% and a specificity of 94.3%. By the second evaluation, its performance had improved, achieving a sensitivity of 90.9% and a specificity of 96.7%. Radiologists' acceptance of the AI tool increased over time. Despite this growing acceptance, however, many radiologists expressed a preference for hiring an additional physician over adopting the AI solution if the costs were comparable. Our study demonstrated high sensitivity and specificity of the AI algorithm, with improved performance over time and a reduced rate of unanalyzed scans. These improvements likely reflect both algorithmic refinement and better data integration. Departmental feedback indicated growing user confidence and trust in the tool. However, many radiologists continued to prefer the addition of a resident over reliance on the algorithm. Overall, the AI showed promise as a supportive "second-look" tool in emergency radiology settings. The AI algorithm demonstrated diagnostic performance comparable to that reported in similar studies for detecting PE on CTPA, with both sensitivity and specificity showing improvement over time. Radiologists' acceptance of the algorithm increased throughout the study period, underscoring its potential as a complementary tool to physician expertise in clinical practice.
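
For readers unfamiliar with the operating metrics reported here, sensitivity and specificity follow directly from a confusion matrix. The counts below are hypothetical stand-ins chosen to be consistent with the second-round rates; the study's actual case mix is not given in the abstract:

```python
# Hypothetical counts reproducing 10/11 sensitivity and 29/30 specificity.
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([1] * 11 + [0] * 30)                    # reference standard
y_pred = np.array([1] * 10 + [0] * 1 + [0] * 29 + [1] * 1)  # AI flags

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"sensitivity = {tp / (tp + fn):.1%}, specificity = {tn / (tn + fp):.1%}")
# sensitivity = 90.9%, specificity = 96.7%
```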

Kissing Spine and Other Imaging Predictors of Postoperative Cement Displacement Following Percutaneous Kyphoplasty: A Machine Learning Approach.

Zhao Y, Bo L, Qian L, Chen X, Wang Y, Cui L, Xin Y, Liu L

PubMed | Jul 23 2025
To investigate the risk factors associated with postoperative cement displacement following percutaneous kyphoplasty (PKP) in patients with osteoporotic vertebral compression fractures (OVCF) and to develop predictive models for clinical risk assessment. This retrospective study included 198 patients with OVCF who underwent PKP. Imaging and clinical variables were collected. Multiple machine learning models, including logistic regression, L1- and L2-regularized logistic regression, support vector machine (SVM), decision tree, gradient boosting, and random forest, were developed to predict cement displacement. L1- and L2-regularized logistic regression models identified four key risk factors: kissing spine (L1: 1.11; L2: 0.91), incomplete anterior cortex (L1: -1.60; L2: -1.62), low vertebral body CT value (L1: -2.38; L2: -1.71), and large Cobb change (L1: 0.89; L2: 0.87). The SVM model achieved the best performance (accuracy: 0.983, precision: 0.875, recall: 1.000, F1-score: 0.933, specificity: 0.981, AUC: 0.997). Other models, including logistic regression, decision tree, gradient boosting, and random forest, also showed high performance but were slightly inferior to SVM. Key predictors of cement displacement were identified, and machine learning models were developed for risk assessment. These findings can assist clinicians in identifying high-risk patients, optimizing treatment strategies, and improving patient outcomes.
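
The coefficient comparison reported above (per-feature weights under L1 versus L2 regularization) is a standard scikit-learn exercise. A sketch with random stand-in data, so the printed weights will not match the paper's values:

```python
# Fit L1- and L2-regularized logistic regression on the same standardized
# predictors and read off per-feature coefficients; data is a stand-in.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

features = ["kissing_spine", "incomplete_anterior_cortex",
            "vertebral_ct_value", "cobb_change"]
rng = np.random.default_rng(0)
X = StandardScaler().fit_transform(rng.normal(size=(198, len(features))))
y = rng.integers(0, 2, size=198)   # cement displacement: yes (1) / no (0)

l1 = LogisticRegression(penalty="l1", solver="liblinear").fit(X, y)
l2 = LogisticRegression(penalty="l2").fit(X, y)
for name, c1, c2 in zip(features, l1.coef_[0], l2.coef_[0]):
    print(f"{name}: L1={c1:+.2f}, L2={c2:+.2f}")
```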

Benchmarking of Deep Learning Methods for Generic MRI Multi-Organ Abdominal Segmentation

Deepa Krishnaswamy, Cosmin Ciausu, Steve Pieper, Ron Kikinis, Benjamin Billot, Andrey Fedorov

arXiv preprint | Jul 23 2025
Recent advances in deep learning have led to robust automated tools for segmentation of abdominal computed tomography (CT). Meanwhile, segmentation of magnetic resonance imaging (MRI) is substantially more challenging due to the inherent signal variability and the increased effort required for annotating training datasets. Hence, existing approaches are trained on limited sets of MRI sequences, which might limit their generalizability. To characterize the landscape of MRI abdominal segmentation tools, we present here a comprehensive benchmark of three state-of-the-art open-source models: MRSegmentator, MRISegmentator-Abdomen, and TotalSegmentator MRI. Since these models are trained using labor-intensive manual annotation cycles, we also introduce and evaluate ABDSynth, a SynthSeg-based model purely trained on widely available CT segmentations (no real images). More generally, we assess accuracy and generalizability by leveraging three public datasets (not seen by any of the evaluated methods during their training), which span all major manufacturers, five MRI sequences, as well as a variety of subject conditions, voxel resolutions, and fields-of-view. Our results reveal that MRSegmentator achieves the best performance and is most generalizable. In contrast, ABDSynth yields slightly less accurate results, but its relaxed requirements in training data make it an alternative when the annotation budget is limited. The evaluation code and datasets are available for future benchmarking at https://github.com/deepakri201/AbdoBench, along with inference code and weights for ABDSynth.
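
Segmentation benchmarks of this kind typically score each organ with the Dice coefficient. A minimal NumPy version of that metric, with synthetic masks standing in for loaded label volumes:

```python
# Dice = 2|A∩B| / (|A| + |B|) for boolean masks (binary case shown;
# per-organ Dice applies this one label at a time).
import numpy as np

def dice(pred: np.ndarray, ref: np.ndarray) -> float:
    pred, ref = pred.astype(bool), ref.astype(bool)
    denom = pred.sum() + ref.sum()
    return 2.0 * np.logical_and(pred, ref).sum() / denom if denom else 1.0

pred = np.zeros((64, 64, 64), bool); pred[10:40, 10:40, 10:40] = True
ref = np.zeros((64, 64, 64), bool);  ref[12:42, 12:42, 12:42] = True
print(f"Dice = {dice(pred, ref):.3f}")
```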